[netperf-talk] ENOBUFS on linux?

Andrew Gallatin gallatin at cs.duke.edu
Wed May 19 10:54:09 PDT 2010


Hi,

I discovered what seems like a bug in the linux stack that
causes netperf to report bogus performance results. When I
run a netperf -tUDP_STREAM test over a slow interface with a
large socket buffer, the reported send-side throughput is
wildly inflated. In the following example, a 100Mb/s switch
links this host to "mist13":

netperf -Hmist13 -tUDP_STREAM  -- -m 1472 -s 96K
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
mist13.sw.myri.com (172.31.171.13) port 0 AF_INET

Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

196608    1472   10.00     12199906      0    14365.98
129024           10.00       81621             96.11
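
(That send-side number is just arithmetic on the sendto() count:
12199906 messages * 1472 bytes * 8 bits / 10 sec comes out to
roughly 14366 * 10^6 bits/sec, which is obviously impossible
through a 100Mb/s switch. The receiver's line shows the real
rate, about 96 Mb/s, so nearly all of the sends are being
dropped locally.)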


This behavior goes away if I decrease the socket buffer size:

netperf -Hmist13 -tUDP_STREAM -- -m 1472 -s 64K
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
mist13.sw.myri.com (172.31.171.13) port 0 AF_INET
netperf: IP_RECVERR set!
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

131072    1472   10.00       82367      0      96.98
129024           10.00       82367             96.98


It seems obvious that the kernel is silently dropping the
packets because they overflow a queue somewhere. In a BSD
stack, sendto() would return ENOBUFS for each dropped packet,
and netperf would report correct results. I don't think this
is a driver issue, as it happens with three different ethernet
drivers.
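
Just to illustrate what I mean, here is a minimal sketch of a
blast loop (not netperf's actual code) showing how the drops
would be visible to the sender on BSD:

/* Minimal UDP blast loop (a sketch, not netperf's actual code).
 * On a BSD stack, sendto() fails with ENOBUFS when the local
 * queue fills, so the sender can count real drops.  On this
 * linux kernel the same loop "succeeds" and the packets vanish. */
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

long blast(int fd, const struct sockaddr *to, socklen_t tolen,
           const char *buf, size_t len, long count, long *drops)
{
    long ok = 0;
    *drops = 0;
    for (long i = 0; i < count; i++) {
        if (sendto(fd, buf, len, 0, to, tolen) == (ssize_t)len)
            ok++;               /* counted as a message sent OK */
        else if (errno == ENOBUFS)
            (*drops)++;         /* BSD: local queue overflow */
        else
            break;              /* some unrelated error */
    }
    return ok;
}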

Does anybody know how to get linux to return an error in
this situation? Based on some googling, I tried setting
IP_RECVERR on the socket, but with no luck.

(FWIW, this is RHEL 5.5, kernel 2.6.18-164.)

Thanks,

Drew

