[netperf-talk] Questions on UDP_STREAM test

Rick Jones rick.jones2 at hp.com
Mon Jan 7 17:28:11 PST 2008


Jay Kim wrote:
> Hi, I'm learning netperf now.
> 
> When using the UDP_STREAM test, I have the following questions. Would you please answer them?
> 
> 1) Doesn't netperf provide parameters to control the packet interval? For
> a UDP test, I would like to set a fixed data rate via packet size and
> packet interval parameters, as iperf does. The packet size seems to be
> set using the '-m' option, but I couldn't find how to control packet
> intervals. Am I missing something?

Sadly, that part isn't terribly well documented in:

http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html

You need to configure netperf with either:

./configure --enable-intervals ...

or

./configure --enable-spin ...

and recompile netperf in its entirety.
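
For example, a full rebuild might look like the following - a sketch; 
adjust the configure arguments and install step for your environment:

./configure --enable-intervals
make
make install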

> 2) I thought that the -w option (when compiling with --enable-intervals)
> would be for controlling the interval, but it didn't work. What does
> this 'interval' mean?

Please be explicit about what you mean by "didn't work."  When netperf 
is configured with --enable-intervals or --enable-spin - and recompiled 
- the global -w option sets how often a burst of up to -b sends will be 
made.  However, if it takes longer than the interval specified by -w to 
send that burst, the behaviour is undefined - on many platforms the 
test may end prematurely.
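
To approximate a fixed data rate, choose -b and -w so that (burst size 
x message size x 8) / interval matches your target.  A sketch, assuming 
-w takes the interval in milliseconds (the host name and values here 
are purely illustrative): one burst of one 1472-byte message every 
10 ms works out to 1472 * 8 / 0.010 = roughly 1.18 Mbit/s:

netperf -H demeter -t UDP_STREAM -l 60 -b 1 -w 10 -- -m 1472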

> 3) What can I do if I want to run a UDP overload test, that is,
> sending a UDP packet stream at a rate higher than the actual network
> link capacity? Is that not possible with netperf?

Depends on the stack.  Some stacks have intra-stack flow control which 
will block netperf when it tries to send faster than the link.  Others 
do not.
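
One way to tell which sort of stack you have is to run an unpaced 
UDP_STREAM and watch the sending side's "Errors" column - on stacks 
without intra-stack flow control, sends that outrun the link tend to 
fail (e.g. with ENOBUFS) rather than block, and those failed sends are 
counted there.  For example (the host name is illustrative):

netperf -H demeter -t UDP_STREAM -l 30 -- -m 1472 -s 131072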

> 
> Thanks. Here is what I've got in my environment with the udp_stream_script example:
> 
> $ ./udp_stream_script demeter
> 
> ------------------------------------------------------
> Testing with the following command line:
> /usr/local/bin/netperf -l 60 -H demeter -i 10,2 -I 99,10 -t UDP_STREAM -- -m 64 -s 32768 -S 32768
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to demeter (10.89.1.122) port 0 AF_INET : +/-5.0% @ 99% conf.  : interval
> Socket  Message  Elapsed      Messages                
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
>  32768      64   59.99     2855756      0      24.37
>  32768           59.99     2849044             24.31
> 
> 
> ------------------------------------------------------
> Testing with the following command line:
> /usr/local/bin/netperf -l 60 -H demeter -i 10,2 -I 99,10 -t UDP_STREAM -- -m 1024 -s 32768 -S 32768
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to demeter (10.89.1.122) port 0 AF_INET : +/-5.0% @ 99% conf.  : interval
> Socket  Message  Elapsed      Messages                
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
>  32768    1024   60.00      678009      0      92.58
>  32768           60.00      678009             92.58
> 
> 
> ------------------------------------------------------
> Testing with the following command line:
> /usr/local/bin/netperf -l 60 -H demeter -i 10,2 -I 99,10 -t UDP_STREAM -- -m 1472 -s 32768 -S 32768
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to demeter (10.89.1.122) port 0 AF_INET : +/-5.0% @ 99% conf.  : interval
> Socket  Message  Elapsed      Messages                
> Size    Size     Time         Okay Errors   Throughput
> bytes   bytes    secs            #      #   10^6bits/sec
> 
>  32768    1472   59.99      481993      0      94.61
>  32768           59.99      481993             94.61
> 
> 
> If you wish to submit these results to the netperf database at
> http://www.cup.hp.com/netperf/NetperfPage.html, please submit each
> datapoint individually. Individual datapoints are separated by
> lines of dashes.


It sounds like you are running over a 100 Mbit/s link, saturating the 
CPU when -m is set to 64 and saturating the link when -m is set to 
1024 or 1472.
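
As a rough sanity check, assuming standard Ethernet framing: each 
1472-byte UDP payload picks up 8 bytes of UDP header, 20 of IP, 14 of 
Ethernet, 4 of FCS, 8 of preamble and 12 of inter-frame gap, for 1538 
bytes on the wire.  That puts the payload-rate ceiling on 100BaseT at 
about 1472 / 1538 * 100 = 95.7 Mbit/s, so the 94.61 you measured is 
essentially link saturation.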

happy benchmarking,

rick jones
