[netperf-talk] Considering declaring a 2.6.0 release

Rick Jones rick.jones2 at hp.com
Mon Apr 30 15:51:32 PDT 2012


On 04/30/2012 03:26 PM, Cook, Jonathan wrote:
> The autogen.sh script fixed the ./configure problem and the new
> version of netserver fixed the problem with using -b and -D
> together.

Good.

> I always use the omni tests and I am doing UDP and TCP tests in both
> receive and transmit directions.  The PAD_TIME is always added to the
> elapsed time when performing a UDP receive test (that is, with
> netserver running on Linux and netperf running on Windows).  PAD_TIME
> is never added to the elapsed time for the other tests I am running.
>
> I was hoping there would be a way for netserver to report interim
> results to netperf during a UDP send test so the user would only have
> to monitor the netperf side to see true throughput.  I haven't looked
> at the messaging between netserver and netperf so I don't know if
> that is a practical possibility.

Well, it would have to be a stream of messages on the control 
connection, and to avoid clogging the control connection it would be 
necessary for the test(s) to start watching the control connection *and* 
the data connection at the same time.  That would be a very non-trivial 
change.  A less involved, though not as viscerally satisfying, option 
might be to have netserver emit interim results to its debug log.  That, 
of course, calls for being able to get the netserver debug log off of 
system B.
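To illustrate what "watching both at the same time" amounts to: multiplexing the two sockets rather than blocking on just the data connection.  A minimal Python sketch follows; netperf itself is written in C and none of these names are netperf's, this is purely an illustration of the idea.

```python
import select
import socket

def watch_both(control_sock, data_sock):
    """Illustrative sketch only: drain a data connection while also
    picking up any interim-result messages arriving on the control
    connection, instead of looking at only one socket at a time."""
    bytes_received = 0
    interim_reports = []
    while True:
        readable, _, _ = select.select([control_sock, data_sock], [], [])
        if data_sock in readable:
            buf = data_sock.recv(65536)
            if not buf:                  # data connection closed: test over
                return bytes_received, interim_reports
            bytes_received += len(buf)
        if control_sock in readable:
            msg = control_sock.recv(65536)
            if msg:
                interim_reports.append(msg.decode())
```

The sketch makes the cost visible: every receive loop now pays for a readiness check on two descriptors, which is part of why the change would be non-trivial inside the existing test loops.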

If I had systems A and B and wanted to see what B received while A was 
sending, I would be inclined to run a "UDP_MAERTS" test on B and take 
the interim results there.  Yes, TCP_MAERTS has as part of its genesis 
the recognition that one cannot always log in to B, but still...


> Here is an example of the PAD_TIME being added to the elapsed time.  The test was set to run for 10 seconds, but the elapsed time is shown as 12 seconds.  If you calculate the elapsed time from the local bytes received and final throughput then netperf appears to have used an elapsed time of 16 seconds.
>
> G:\netperf>netperf.exe -H10.8.7.240 -L10.8.34.147 -tOMNI -l 10 -fk -D 1 -- -Tudp -dreceive -m,1400 -s128k -S128k -O PROTOCOL,THROUGHPUT,THROUGHPUT_UNITS,DIRECTION,ELAPSED_TIME,REMOTE_BYTES_RECVD,LOCAL_BYTES_RECVD, -R1
> OMNI Receive TEST from 10.8.34.147 () port 0 AF_INET to 10.8.7.240 () port 0 AF_INET : histogram : demo
> Interim result: 57259.71 10^3bits/s over 1.001 seconds ending at 1335800367.046
> Interim result: 58016.58 10^3bits/s over 1.008 seconds ending at 1335800368.062
> Interim result: 57996.42 10^3bits/s over 1.004 seconds ending at 1335800369.062
> Interim result: 58831.09 10^3bits/s over 1.001 seconds ending at 1335800370.062
> Interim result: 58759.71 10^3bits/s over 1.003 seconds ending at 1335800371.062
> Interim result: 58475.54 10^3bits/s over 1.005 seconds ending at 1335800372.062
> Interim result: 58236.37 10^3bits/s over 1.004 seconds ending at 1335800373.078
> Interim result: 57772.79 10^3bits/s over 1.008 seconds ending at 1335800374.078
> Interim result: 57373.61 10^3bits/s over 1.007 seconds ending at 1335800375.093
> Interim result: 57303.65 10^3bits/s over 1.001 seconds ending at 1335800376.093
> Protocol Throughput Throughput  Direction Elapsed Remote   Local
>                      Units                 Time    Bytes    Bytes
>                                            (sec)   Received Received
>
> UDP      39081.70   10^3bits/s  Receive   12.00   0        78163400

The uncertainty of "how far into the PAD might receives have continued" 
makes me wonder if the only way to deal with it is to add no pad time 
for the "UDP_MAERTS" test and then make the netserver side willing to 
cope with the flow of ICMPs the race will no doubt trigger.
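As a sanity check on the figures quoted above: the final-results line is internally consistent with an elapsed time of about 16 seconds rather than the reported 12.00.  A back-of-the-envelope check (not netperf code):

```python
# Figures taken from the quoted OMNI final-results line.
throughput_kbits_per_sec = 39081.70   # reported in 10^3 bits/s
local_bytes_received = 78163400       # LOCAL_BYTES_RECVD

# Time implied by the reported throughput and byte count.
implied_elapsed = local_bytes_received * 8 / (throughput_kbits_per_sec * 1e3)
print(round(implied_elapsed, 2))      # prints 16.0, not the reported 12.00
```

So netperf appears to have divided the bytes by roughly 16 seconds, i.e. more than the 10-second run plus the 2 seconds of pad shown in the ELAPSED_TIME column.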

rick

