[netperf-talk] netperf2.4.4 on linux, socket size issue ?

mark wagner mwagner at redhat.com
Tue Apr 22 13:31:36 PDT 2008


Thanks, Rick.

I think you have answered the questions from my second email (sent 
from a different desktop, which is why I missed this response :)

I will also try to update to the newer omni version in the near future.

-mark


Rick Jones wrote:
> mark wagner wrote:
>> Hi I'm seeing an apparent drop in performance when specifying socket 
>> size while using netperf 2.4.4 on rhel5.2.  If I use -s to specify 
>> the size of the local socket during a TCP_STREAM test the throughput 
>> goes down by a factor of three for what would appear to be the same 
>> size socket.
>>
>>
>> [root at specclient2 np2.4]# ./netperf -P1 -l 20 -H 192.168.10.10 -- -m 
>> 16K -s 8K
>> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
>> 192.168.10.10 (192.168.10.10) port 0 AF_INET : spin interval : demo
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>  87380  16384  16384    20.00    1389.43
>> [root at specclient2 np2.4]# ./netperf -P1 -l 20 -H 192.168.10.10 -- -m 16K
>> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
>> 192.168.10.10 (192.168.10.10) port 0 AF_INET : spin interval : demo
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>  87380  16384  16384    20.00    4562.66
>> From the output, it would appear that both tests report the same 
>> socket buffer and message sizes, so I would expect similar results.
>>
>> I recall reading in the release notes that the buffer size returned by 
>> getsockopt() under Linux is actually double the size requested, hence 
>> I'm using -s 8K to get a 16K socket in the first run above.  So, is the 
>> drop in performance because I'm really only using an 8K buffer even 
>> though it's being reported as 16K, or is something else going on here 
>> that I'm oblivious to?
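>>
>> For reference, the doubling I mean is just this (a standalone 
>> illustration, not netperf code):
>>
>>     /* Ask Linux for an 8K send buffer and print back what getsockopt()
>>      * reports; the kernel doubles the requested value to account for its
>>      * own bookkeeping overhead, so this typically prints 16384. */
>>     #include <stdio.h>
>>     #include <sys/socket.h>
>>
>>     int main(void)
>>     {
>>         int s = socket(AF_INET, SOCK_STREAM, 0);
>>         int requested = 8 * 1024;
>>         int reported = 0;
>>         socklen_t len = sizeof(reported);
>>
>>         setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
>>         getsockopt(s, SOL_SOCKET, SO_SNDBUF, &reported, &len);
>>         printf("requested %d, getsockopt() reports %d\n", requested, reported);
>>         return 0;
>>     }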
>
> There are a few things happening here.  First, when one doesn't 
> specify the socket buffer size(s), Linux will start to "autotune" 
> them.  By the end of the test, the socket buffer sizes can be 
> significantly larger than they were at the beginning.  Netperf 
> 2.4.4 only takes its snapshot of them at the beginning.  So, while in 
> both cases above netperf reports the same socket buffer sizes, in one 
> case Linux has gone behind netperf's back and ended up using even 
> larger buffers.  This is another in a series of things where Linux 
> wanted to be different from everyone else and, in so doing, 
> changed/violated some basic assumptions about stack behaviour 
> implicit in netperf (and perhaps other benchmarks too).
>
> Further, when one sets the socket buffer sizes explicitly, a separate 
> sysctl limit is enforced that does not apply to the autotuning.
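>
> The difference is easy to see if you snap SO_SNDBUF/SO_RCVBUF around 
> the transfer yourself.  A rough sketch of the "snap at both ends" idea 
> (this is not the omni code, and "fd" is assumed to be an 
> already-connected TCP socket):
>
>     #include <stdio.h>
>     #include <sys/socket.h>
>
>     /* Return the kernel's current idea of the send buffer size.  When the
>      * application never calls setsockopt(), autotuning may grow this over
>      * the life of the connection (bounded by net.ipv4.tcp_wmem); an
>      * explicit setsockopt() is instead clamped by net.core.wmem_max. */
>     static int sndbuf_of(int fd)
>     {
>         int val = 0;
>         socklen_t len = sizeof(val);
>         getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &val, &len);
>         return val;
>     }
>
>     void report_sndbuf_growth(int fd, void (*bulk_transfer)(int))
>     {
>         int before = sndbuf_of(fd);  /* what netperf 2.4.4 reports         */
>         bulk_transfer(fd);           /* ... run the actual data transfer   */
>         int after  = sndbuf_of(fd);  /* what the omni tests also report    */
>         printf("SO_SNDBUF: %d at start, %d at end\n", before, after);
>     }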
>
> Now, in the top-of-trunk netperf, the "omni" tests (./configure 
> --enable-omni) will snap and report the socket buffer sizes at both 
> the beginning and the end of the run.  When one accepts the defaults, 
> it will allow one to see how large the Linux autotuning let things 
> get.  If you then use _that_ value with the -s/-S options, I suspect 
> you will see the same results as with the defaults - assuming that the 
> values of (IIRC) net.core.rmem_max and net.core.wmem_max allow it.
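>
> In other words, something along these lines (the socket sizes are 
> placeholders - plug in whatever the omni end-of-run report shows):
>
>     # build the top-of-trunk bits with the omni tests enabled
>     ./configure --enable-omni && make
>
>     # run once with the defaults, note the send/receive socket sizes
>     # reported at the *end* of that run, then feed them back explicitly:
>     ./netperf -P1 -l 20 -H 192.168.10.10 -- -m 16K -s <end send size> -S <end recv size>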
>
> happy benchmarking,
>
> rick jones
>
> The top-of-trunk omni tests will also try to compile code to pull the 
> OS name and rev, driver name and rev, egress interface name, and even 
> the slot into which the NIC was inserted.  The omni tests will report 
> these as either human-readable (test-specific -O) or CSV 
> (test-specific -o) output, and you can pass a file name containing the 
> output variables to display to either one.  There is more to it than 
> just this, but that should give a flavor.  It is a work in progress.  
> If it works out, the omni tests will replace (most of) the stuff in 
> nettest_bsd.c, nettest_sctp.c, and perhaps nettest_sdp.c.  It should 
> also add DCCP support.
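>
> As a taste of the output selection (the selector names below are from 
> memory and may not be the exact spelling the omni code uses - treat 
> them as placeholders):
>
>     $ cat socket_sizes
>     THROUGHPUT,THROUGHPUT_UNITS
>     LSS_SIZE_END,RSR_SIZE_END
>
>     $ ./netperf -H 192.168.10.10 -- -O socket_sizes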

