[netperf-talk] global question concerning Netperf test and SMP support
Rick Jones
rick.jones2 at hp.com
Thu Apr 26 10:47:34 PDT 2012
On 04/26/2012 07:19 AM, Simon Duboue wrote:
> Hello everyone,
>
> Here are some of my latest results:
>
> netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 32768 -s 57344 -S 57344
> netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 32768 -s 57344 -S 57344
> netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 32768 -s 57344 -S 57344
> netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 32768 -s 57344 -S 57344
> netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,2 -I 99,5 -- -m 32768 -s 57344 -S 57344
These were consecutive (one at a time), right? Because if they were
concurrent (all at once) you really must set the max,min iterations for
the confidence intervals to the same value, lest one or more of the
instances terminate before the others.
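For truly concurrent instances, something along these lines should keep
every instance running the same number of confidence iterations (just a
sketch using your host and options; the output file names are mine):

# -i 10,10 forces exactly 10 iterations per instance, so no instance
# finishes its confidence loop before the others do.
for i in `seq 1 5`
do
  netperf -l 60 -H 10.0.17.200 -t TCP_STREAM -cC -i 10,10 -- \
      -m 32768 -s 57344 -S 57344 > stream_$i.out &
done
wait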
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200
> (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf. : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
> 114688 114688  32768    60.00      1471.34   25.67    33.66    2.889   15.081
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200
> (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf. : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
> 114688 114688  32768    60.00      1456.63   25.68    33.67    2.901   15.043
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200
> (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf. : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
> 114688 114688  32768    60.01      1512.94   25.68    33.67    2.816   14.969
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200
> (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf. : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
> 114688 114688  32768    60.03      1465.22   25.67    33.68    2.920   14.895
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200
> (10.0.17.200) port 0 AF_INET : +/-2.5% @ 99% conf. : demo
> Recv   Send    Send                          Utilization       Service Demand
> Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
> Size   Size    Size     Time     Throughput  local    remote   local   remote
> bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
>
> 114688 114688  32768    60.03      1498.52   25.67    33.68    2.819   14.849
>
> Total throughput = 1471.34 + 1456.63 + 1512.94 + 1465.22 + 1498.52 =
> 7404.65 Mbits/s.
Did you "fix-up" the output? If they were indeed concurrent (all
launched in the background) I would have expected to see all the test
banners, then all the test results. By way of quick-and-dirty example:
raj@tardy:~/netperf2_trunk$ for i in `seq 1 5`
> do
> netperf -H 192.168.1.3 -l 10 &
> done
[1] 11289
[2] 11290
[3] 11291
[4] 11292
[5] 11293
raj@tardy:~/netperf2_trunk$ MIGRATED TCP STREAM TEST from 0.0.0.0
(0.0.0.0) port 0 AF_INET to 192.168.1.3 () port 0 AF_INET : demo
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.3 () port 0 AF_INET : demo
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.3 () port 0 AF_INET : demo
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.3 () port 0 AF_INET : demo
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
192.168.1.3 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.39       1.83
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.40       1.85
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.42       1.85
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.43       1.99
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.46       1.93
The output as you have presented it makes it look as though the tests
ran one-at-a-time.
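If the goal is genuinely-all-at-once results that are easy to read back,
one approach (again just a sketch; the file names are mine) is to send
each instance's output to its own file and wait for all of them:

# -P 0 suppresses the banners/headers, so each file holds only the
# classic results line; throughput is then the 5th field of that line.
for i in `seq 1 5`
do
  netperf -P 0 -H 192.168.1.3 -l 10 > result_$i.out &
done
wait
cat result_*.out | awk '{sum += $5} END {print sum, "10^6bits/s aggregate"}'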
Another sanity check would be to compare the service demands between a
single (all by itself) test and your five-at-once tests. Since each
netperf/netserver measures overall CPU utilization, but accounts only
for what it itself transferred, actually-concurrent tests will report
(much) higher (and wrong :) service demands. If you see the same
service demand for each of the ostensibly-all-at-once tests as you do
for a single test, you know the ostensibly-all-at-once tests were not
really all-at-once.
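To put hypothetical numbers on that: say a single stream reports a send
service demand of about 3 us/KB. If five streams then run truly
concurrently at similar per-stream throughput, each instance charges
the CPU time of all five streams against only its own bytes, so each
would report something approaching 5 x 3 = 15 us/KB. Five instances
each reporting ~3 us/KB would instead suggest they ran one at a time.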
> The CPU utilization values seem correct, are very close together, and
> they are not negative! (This seems to happen more with UDP stream
> tests than with TCP ones.)
> I observe that these negative values mainly appear in the first
> results of my test script.
>
> Please let me know if you have any remarks.
Apart from what I've already mentioned, I'll simply repeat my concern
about the negative CPU utilization reported before and leave it at that.
rick