[netperf-talk] global question concerning Netperf test and SMP support

Simon Duboue Simon.Duboue at ces.ch
Fri Apr 27 04:05:02 PDT 2012


Hello,

>These were consecutive (one at a time) right?  Because if they were 
>concurrent (all at once) you really must set max,min iterations for the 
>confidence intervals to the same value, lest one or more of the 
>instances terminate before the others.

They are normally concurrent. I think I just forgot to include the -P 0 
option in what I echo: I execute with -P 0 but echo the command without 
it. The max,min iterations are set to the same values for every 
instance (-i 10,2 -I 99,5), so I don't really understand your remark... 
Could you explain a bit more?
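
To be concrete, my launcher does roughly this (a simplified sketch of 
my script, not the literal code):

# what I print to the log (note: no -P 0 here, my mistake):
echo "netperf -l 60 -H 10.0.17.200 -i 10,2 -I 99,5 -t TCP_STREAM -cC"
# what I actually run (with -P 0, so the banner is suppressed):
netperf -P 0 -l 60 -H 10.0.17.200 -i 10,2 -I 99,5 -t TCP_STREAM -cC &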

>Did you "fix-up" the output?  If they were indeed concurrent (all 
>launched in the background) I would have expected to see all the test 
>banners, then all the test results.  By way of quick-and-dirty example:

This is due to the -P 0 option mistake, I think.

>The output as you have presented it makes it look as though the tests 
>ran one-at-a-time.
>
>Another sanity check would be to compare the service demands between a 
>single (all by itself) test and then with your five-at-once tests. 
>Since each netperf/netserver measures overall CPU utilization, but only 
>what it itself transferred, when you run actually concurrent tests, the 
>service demands reported by the ostensibly-all-at-once tests will be 
>(much) higher (and wrong :).  If you see the same service demand for 
>each of the ostensibly-all-at-once as you do for a single test, you know 
>the ostensibly-all-at-once were not really all-at once.

OK, this is a good complementary check. Here are the results of a first 
service-demand test:
single test:
netperf -l 60 -H 10.0.17.200 -i 10,2 -I 99,5 -t TCP_STREAM -cC
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.01      2075.12   5.19     9.32     0.409   3.027

two-at-once test:
netperf -l 60 -H 10.0.17.200 -i 10,2 -I 99,5 -t TCP_STREAM -cC &
netperf -l 60 -H 10.0.17.200 -i 10,2 -I 99,5 -t TCP_STREAM -cC

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.06      1760.01   9.21     13.11    0.858   4.801

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.06      1750.31   9.21     13.14    0.861   4.730


This looks good, doesn't it?
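
As a quick sanity check on those numbers (my own arithmetic, not 
netperf output): the single test reports a local service demand of 
0.409 us/KB, while each of the two concurrent instances reports about 
0.86 us/KB, i.e. roughly 2x. That is exactly the inflation you describe 
for truly concurrent streams, so the two tests really did overlap.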

Here are the results for a four-instance test:

netperf -P 0 -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 65535 &
netperf -P 0 -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 65535 &
netperf -P 0 -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 65535 &
netperf -P 0 -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 65535

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf.  : demo

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.02      1522.23   17.14    8.14     1.846   3.138

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.03      1516.81   17.15    8.14     1.854   2.918

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.04      1526.86   17.16    8.15     1.841   3.390

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  65536  65536    60.04      1568.20   17.15    8.14     1.793   3.840
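
Two observations on the four-at-once numbers (again my own arithmetic): 
the four throughputs sum to about 6.13 Gbit/s, and the local service 
demand (~1.8 us/KB) is about 4.4x the single-stream value, which again 
says the four instances really were concurrent. To keep the echoed and 
executed commands from diverging again, I will probably launch the 
instances from a small loop, something like this (an untested sketch; 
N is my own variable):

N=4
for i in $(seq 1 $N); do
    netperf -P 0 -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 65535 &
done
wait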


Thank you and have a nice weekend!

Best regards.

Simon Duboué