[netperf-talk] ENOBUFS on linux?

Rick Jones rick.jones2 at hp.com
Wed May 19 14:42:42 PDT 2010


Andrew Gallatin wrote:
> The txqueuelen seems to be the determining factor.  It defaults to 100
> for the driver I was using (igb).  If I double txqueuelen, then a 128K
> send buffer works fine (but 256K still results in the silent drops).
> 
> I found a thread which seems to indicate that this is
> the desired behavior (http://article.gmane.org/gmane.linux.network/136046)
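
For reference, txqueuelen can be changed from the shell with "ifconfig
<dev> txqueuelen <n>" or "ip link set <dev> txqueuelen <n>", or
programmatically.  Below is a minimal C sketch using the Linux
SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls; "eth0" is just a placeholder
interface name, and setting the value requires CAP_NET_ADMIN:

/* Sketch: read a device's txqueuelen and double it via the Linux
   SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
  struct ifreq ifr;
  int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket works for ioctl */

  if (fd < 0) { perror("socket"); return 1; }
  memset(&ifr, 0, sizeof(ifr));
  strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

  if (ioctl(fd, SIOCGIFTXQLEN, &ifr) < 0) { perror("get txqueuelen"); return 1; }
  printf("txqueuelen was %d\n", ifr.ifr_qlen);

  ifr.ifr_qlen *= 2;                        /* e.g. igb's default 100 -> 200 */
  if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0) { perror("set txqueuelen"); return 1; }

  close(fd);
  return 0;
}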
> 
> 
> Using the following hack (as suggested in the message above),
> I can get -ENOBUFS returned.  I tried this before, but was
> passing the wrong level to setsockopt().
> 
> --- /nfs/home/gallatin/netperf/netperf-2.4.5.orig/src/netlib.c  2009-05-27 18:27:34.000000000 -0400
> +++ src/netlib.c        2010-05-19 17:04:53.000000000 -0400
> @@ -2839,6 +2839,15 @@
>               requested_size);
>        fflush(where);
>      }
> +    if (which == SEND_BUFFER) {
> +      int on = 1;
> +      if (setsockopt(sd, 0, IP_RECVERR, (char *)&on, sizeof(int)) < 0) {
> +        fprintf(where, "netperf: IP_RECVERR failed (%d)\n", errno);
> +        fflush(where);
> +        exit(2);
> +      }
> +    }
> +
>    }
> 
>    /* the getsockopt() call that used to be here has been hoisted into
> 
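
For anyone wanting to reproduce the effect outside netperf, here is a
minimal standalone sketch of the same technique (Linux-specific; the
destination address and port are placeholders).  With IP_RECVERR set, a
send that overflows the local queue fails with errno == ENOBUFS instead
of being silently dropped; per ip(7), asynchronous ICMP errors are also
queued and can be drained with recvmsg(..., MSG_ERRQUEUE):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
  int on = 1;
  char buf[65507];                  /* max UDP payload, as in the test */
  struct sockaddr_in dst;
  int fd = socket(AF_INET, SOCK_DGRAM, 0);

  if (fd < 0) { perror("socket"); return 1; }

  /* IPPROTO_IP == SOL_IP == 0 on Linux, the level the patch passes */
  if (setsockopt(fd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on)) < 0) {
    perror("IP_RECVERR");
    return 1;
  }

  memset(&dst, 0, sizeof(dst));
  dst.sin_family = AF_INET;
  dst.sin_port = htons(9);          /* discard port; placeholder */
  dst.sin_addr.s_addr = inet_addr("192.0.2.1");
  memset(buf, 0, sizeof(buf));

  for (;;) {
    if (sendto(fd, buf, sizeof(buf), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0) {
      if (errno == ENOBUFS) {       /* device queue full: now visible */
        fprintf(stderr, "send queue full: ENOBUFS\n");
        break;                      /* a real sender might poll and retry */
      }
      perror("sendto");
      break;
    }
  }
  close(fd);
  return 0;
}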
> 
> Now I have sanity!
> 
> netperf -Hmist13 -C -c -tUDP_STREAM -- -s 1M
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to mist13.sw.myri.com (172.31.171.13) port 0 AF_INET
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB
> 
> 2097152   65507   10.00          24 436290        1.3     13.19    6873.250
> 129024           10.00          24               1.3     0.54     280.275
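
(Sanity-checking those numbers: 24 messages x 65507 bytes in 10 seconds is
about 1.26 * 10^6 bits/sec, matching the reported 1.3, or roughly 1535 KB
transferred.  If both hosts were, say, 8-way machines (a guess; the CPU
count isn't shown in the output), then 13.19% * 10 sec * 8 CPUs = 10.55
CPU-seconds, about 6873 us/KB, and 0.54% * 10 sec * 8 CPUs over the same
1535 KB is about 281 us/KB, consistent with the reported service demands.)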
> 
> 
> How would you feel if I cleaned this up some, and submitted a
> patch which applied this to the UDP_STREAM send-side test?

If you can show that the effect on achieved performance is epsilon for the
cases where the drops don't occur, I'm OK with that (preferably with
something other than a 100 Mbit/s link, or at least with service demands
reported at a high confidence level).  Bonus points for making sure it
works with the omni tests (although IIRC those already use the
nettest_bsd.c create_data_socket() code anyway).

I would also like to know that it doesn't get in the way of a TCP_CC or
TCP_CRR test; those tests create lots of data sockets, so "extra"
setsockopt() calls are generally to be avoided there.
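
One way to keep it out of the TCP paths, sketched purely hypothetically
(the helper name and placement are made up, not netperf's actual
structure): gate the extra call on the socket type so that only datagram
sockets ever see it.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical helper: request IP_RECVERR only on datagram (UDP)
   send sockets, so the per-connection socket churn of TCP_CC and
   TCP_CRR tests never pays for an extra setsockopt(). */
static void maybe_enable_recverr(int sd, FILE *where)
{
  int so_type;
  socklen_t len = sizeof(so_type);
  int on = 1;

  if (getsockopt(sd, SOL_SOCKET, SO_TYPE, &so_type, &len) < 0 ||
      so_type != SOCK_DGRAM)
    return;                        /* TCP and friends: nothing to do */

  if (setsockopt(sd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on)) < 0) {
    fprintf(where, "netperf: IP_RECVERR failed (%d)\n", errno);
    fflush(where);
    exit(2);
  }
}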

happy benchmarking,

rick jones


