[netperf-talk] ENOBUFS on linux?
Andrew Gallatin
gallatin at cs.duke.edu
Wed May 19 14:09:13 PDT 2010
Rick Jones wrote:
> Is that 5.5 or 5.4? Anyway, on a 2.6.18-164-el5 kernel over a 1GbE
> interface being driven by bnx2 I do not see the "greater than linkrate"
> with a 96K send socket buffer, so there may be a driver component in
> there somewhere. Still, ifconfig shows the txqueuelen as 1000 in this
> case and 96K of 1472 byte sends would be only 67 packets. Perhaps being
> < txqueuelen is a coincidence. If I go to -s 1M I get just a little bit
> of >> linkrate. If I go to -s 2M I get 10X link-rate. Of course, even
> 1MB as 1472 byte sends is < 1000, and even in the 10X faster case I see
> no TX overruns reported, and the ethtool -S stats reported appear clean.
The txqueuelen seems to be the determining factor. It defaults to 100
for the driver I was using (igb). If I double txqueuelen, then 128K
works fine (but 256K results in the silent drops).
I found a thread which seems to indicate that this is the desired
behavior (http://article.gmane.org/gmane.linux.network/136046).
Using the following hack (as suggested in the message above),
I can get -ENOBUFS returned. I had tried this before, but was
passing the wrong level to setsockopt().
--- /nfs/home/gallatin/netperf/netperf-2.4.5.orig/src/netlib.c	2009-05-27 18:27:34.000000000 -0400
+++ src/netlib.c	2010-05-19 17:04:53.000000000 -0400
@@ -2839,6 +2839,15 @@
 	    requested_size);
     fflush(where);
   }
+  if (which == SEND_BUFFER) {
+    int on = 1;
+    if (setsockopt(sd, 0, IP_RECVERR, (char *)&on, sizeof(int)) < 0) {
+      fprintf(where, "netperf: IP_RECVERR failed (%d)\n", errno);
+      fflush(where);
+      exit(2);
+    }
+  }
+
 }
 /* the getsockopt() call that used to be here has been hoisted into
Now I have sanity!
netperf -Hmist13 -C -c -tUDP_STREAM -- -s 1M
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to mist13.sw.myri.com (172.31.171.13) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU     Service
Size    Size     Time         Okay Errors   Throughput   Util    Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS    us/KB

2097152   65507   10.00          24 436290       1.3     13.19   6873.250
129024           10.00          24               1.3      0.54   280.275
How would you feel if I cleaned this up some, and submitted a
patch which applied this to the UDP_STREAM send-side test?
Drew