[netperf-talk] Thin streams in Netperf

Rick Jones rick.jones2 at hp.com
Wed Feb 8 12:53:26 PST 2006


[As this is useful for anyone using netperf I'm going to take the liberty of 
redirecting the discussion to netperf-talk]

> Hi!
> 
> With thin streams I mean data that is sent in specific intervals. Does 
> the -w parameter define the interval between consecutive bursts, each of 
> size given by the -b parameter? Or, does the -b parameter define the 
> total amount of data you want to send?

The -w option specifies how often a burst of -b sends is made.  That is, -w sets 
the interval between bursts, and -b sets how many sends are made in each burst. 
The size of each send is controlled by the test-specific options (-m for a 
_STREAM test, -r for a _RR test).

> 
> Assuming that Netperf follows the first one (which is the one I want), 
> Netperf did not perform as expected when running with this option.
> Tried with:
> netperf -H 192.168.2.2 -w 100 -b 20000 -l 10 -t 
> TCP_STREAM
> 
> But I got this output from netperf a fraction of a second later:
> 
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    0.11       93.58

The socket buffer sizes suggest you are using Linux.  The early exit suggests it 
took longer than the time specified in -w to send the burst of 20000 sends.

If sending the burst takes longer than the burst interval, the signal from the 
interval timer arrives before we get to the sigsuspend() call.  This means one 
of the send() calls (or recv() in the case of an _RR test) may return with 
EINTR, which is interpreted as the end-of-test SIGALRM having fired.

On HP-UX it is possible to avoid this problem by looking at some siginfo data 
while in the signal handler to see which system call was being interrupted, and 
restarting that system call if it was not sigsuspend().  I, and by extension 
netperf :) do not know how to do that on OSes other than HP-UX.

So, either I need to be educated as to how one can determine which system call 
was interrupted by the signal on OSes other than HP-UX (hint: this is a call 
for a response from the netperf-talk members :) or, on non-HP-UX platforms, one 
has to make sure that sending -b sends takes less than -w units of time.

Or, if running the top-of-trunk netperf bits, and willing to accept the CPU 
cycles that will be consumed, one can configure with --enable-spin; instead of 
using an interval timer to control the sending of the bursts, netperf will sit 
and spin in a gettimeofday() or gethrtime() loop.  (The --enable-spin support is 
presently only in the nettest_bsd.c tests and none of the rest.)

> Tried to increase the burst size to see if I got any changes:
> 
> netperf -H 192.168.2.2 -w 100 -b 20000000000 -l 10 
> -t TCP_STREAM
> 
> But Netperf gave approximately the same result:
> 
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
> 
>  87380  16384  16384    0.11       93.63
> 
> Am I missing something :) ?

The burst size is the number of sends, not the size of a send.  I might see 
about making some of the help text a bit more explanatory.  However, note that 
the send message size remained 16384 bytes as you altered the -b setting, which 
suggests that it was not controlling the size of a send :)

If you'd shrunk it severely - say to 20, or maybe 200 - you'd probably have 
seen the test run for the full ten seconds.


happy benchmarking,

rick jones

