[netperf-talk] global question concerning Netperf test and SMP support

Simon Duboue Simon.Duboue at ces.ch
Wed Apr 4 03:15:29 PDT 2012


Ok,
this is great. I will dig deeper based on these considerations.

My server is based on a Freescale QorIQ P4080 with 8 e500mc cores 
(1.5GHz), each with 32/32kB instruction/data L1 caches and a 128 kB L2 
cache. The chip has dual 1 MB L3 caches, each connected to a 64-bit 
DDR2/DDR3 memory controller.
It is a UMA system.

I have one last question concerning running netperf tests in the background:
for example, if I use the following in a shell script:

netperf -p 17170 -cC -H ip_addr -t UDP_STREAM -- -m 8000&
netperf -p 17170 -cC -H ip_addr -t UDP_STREAM -- -m 8000

results:
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

112640    8000   10.00      770705      0     4932.5     69.02    3.364
108544           10.00      525289            3361.8     84.58    16.487

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Socket  Message  Elapsed      Messages                   CPU      Service
Size    Size     Time         Okay Errors   Throughput   Util     Demand
bytes   bytes    secs            #      #   10^6bits/sec % SS     us/KB

112640    8000   10.00      767345      0     4911.0     69.02    3.388
108544           10.00      521573            3338.0     79.60    15.628


or


netperf -p 17170 -cC -H ip_addr -t TCP_STREAM -- -m 128&
netperf -p 17170 -cC -H ip_addr -t TCP_STREAM -- -m 128

results:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384    128    10.00      1867.39   100.00   46.75    8.774   16.406

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384    128    10.00      1966.14   100.00   46.75    8.333   15.582

Can the total throughput be considered to be the sum of the two results?
If so, what could be limiting the throughput of a single process?
According to top and to the system performance tool, the two processes do
appear to run in parallel.
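
In case it helps, here is a rough sketch of how the concurrent runs could be
launched together and their send-side throughputs summed afterwards. The
stream count, the /tmp output files and the awk field positions are my own
assumptions, derived from the -cC UDP_STREAM output shown above rather than
from anything netperf itself provides:

#!/bin/sh
# Sketch: launch NSTREAMS concurrent UDP_STREAM tests against the same
# netserver, keep each instance's output, then sum the local send-side
# throughput column. "ip_addr" and the port are placeholders from the
# commands above; the awk filter assumes the -cC UDP_STREAM layout shown
# in this mail (8 fields on the local send line, throughput in field 6).
NSTREAMS=2
for i in $(seq 1 "$NSTREAMS"); do
    netperf -p 17170 -cC -H ip_addr -t UDP_STREAM -- -m 8000 \
        > "/tmp/netperf.$i.out" &
done
wait    # let every instance finish before summing

awk '$1 + 0 > 0 && NF == 8 { sum += $6 }
     END { printf "aggregate send throughput: %.1f 10^6bits/sec\n", sum }' \
    /tmp/netperf.*.out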

Have a nice day and thank you.

Best regards.

/Simon