[netperf-talk] global question concerning Netperf test and SMP support
Simon Duboue
Simon.Duboue at ces.ch
Tue Apr 17 01:37:30 PDT 2012
Thank you for the explanations concerning the -m option; I had misinterpreted
it.
I attached the 128-byte results because that is where the TCP throughput is
best, and I am simply looking for the best reachable performance.
Following what you said, I ran a new test without setting the -m option.
Here is the first script. It starts collecting statistics and then launches my
netperf test.
#!/bin/bash
# Collect per-CPU statistics once per second while the test runs.
mpstat -P ALL 1 > mpstat100412_x4 &
./test-netperf-fsl-x4_tcp2.sh 10.0.17.200 > res160412_x4
# Stop the background mpstat once the test is done.
pkill -x mpstat
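As an aside, a slightly more robust variant (a sketch, not what I actually
ran) would keep mpstat's PID so the final kill cannot hit an unrelated
process:

#!/bin/bash
# Start per-CPU statistics collection in the background and remember its PID.
mpstat -P ALL 1 > mpstat100412_x4 &
MPSTAT_PID=$!
./test-netperf-fsl-x4_tcp2.sh 10.0.17.200 > res160412_x4
# Stop exactly the mpstat instance started above.
kill "$MPSTAT_PID"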
Here is the netperf test:
#!/bin/bash
echo "Date: `date`"
echo "=================TCP==================="
echo "============Msg Size : 65535=============="
# Run three TCP_STREAM tests in parallel; the global -cC options report
# local and remote CPU utilization and service demand.
netperf -cC -H $1 -t TCP_STREAM -- -C &
netperf -cC -H $1 -t TCP_STREAM -- -C &
netperf -cC -H $1 -t TCP_STREAM -- -C
wait
echo "Date: `date`"
Could the '>' redirection be limiting?
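One way to check (just a suggestion, not something I have measured) would be
to compare against a run whose output is simply discarded:

./test-netperf-fsl-x4_tcp2.sh 10.0.17.200 > /dev/null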
These are the final results obtained:
Date: Tue Apr 17 08:52:09 CEST 2012
=================TCP===================
============Msg Size : 65535==============
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.01       484.42   8.26     -1617.02 2.794    -2187.620

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.01       493.43   8.22     -1616.32 2.729    -2146.741

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.21       493.68   8.07     -1582.90 2.677    -2101.319
Date: Tue Apr 17 08:52:19 CEST 2012
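The three concurrent streams add up to roughly 1471 Mbit/s
(484.42 + 493.43 + 493.68). For reference, a quick way to sum the aggregate
from the result file (an awk sketch, assuming the data lines are the only
ones beginning with the 87380 receive socket size):

awk '/^ *87380/ { sum += $5 } END { print sum, "10^6bits/s aggregate" }' res160412_x4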
This is what top returns:
top - 06:41:14 up 56 min, 1 user, load average: 0.08, 0.05, 0.05
Tasks: 67 total, 5 running, 62 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 2.1%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 3762812k total, 184724k used, 3578088k free, 928k buffers
Swap: 0k total, 0k used, 0k free, 8640k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3132 root 20 0 3044 536 288 R 35 0.0 0:01.06 netserver
3134 root 20 0 3044 536 288 R 24 0.0 0:00.73 netserver
3133 root 20 0 3044 536 288 R 22 0.0 0:00.67 netserver
3131 root 20 0 2952 1104 908 R 0 0.0 0:00.06 top
1 root 20 0 2000 536 476 S 0 0.0 0:03.65 init
Are those three netserver threads, independent but running on the CPU where
the netserver process is allocated? In fact, there is no synchronization
between them!
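One way to check (a diagnostic of my own, not part of the run above) is to
ask ps for the parent PID and the processor each netserver task last ran on;
distinct PIDs under a common parent would indicate separate forked processes
rather than threads:

# PID, parent PID, last-used CPU and command name of every netserver task
ps -o pid,ppid,psr,comm -C netserver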
The other test uses the CPU binding option as follows:
#!/bin/bash
echo "Date: `date`"
echo "=================TCP==================="
echo "============Msg Size : 65535=============="
# The global -T ,1 option binds only the remote netserver to CPU 1;
# the local netperf side is left unbound.
netperf -cC -H $1 -T,1 -t TCP_STREAM -- -C &
netperf -cC -H $1 -T,1 -t TCP_STREAM -- -C &
netperf -cC -H $1 -T,1 -t TCP_STREAM -- -C
wait
echo "Date: `date`"
The output of the top command looks like the previous one:
top - 06:50:19 up 1:05, 1 user, load average: 0.01, 0.03, 0.05
Tasks: 67 total, 2 running, 65 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.6%sy, 0.0%ni, 99.3%id, 0.0%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 3762812k total, 184184k used, 3578628k free, 928k buffers
Swap: 0k total, 0k used, 0k free, 8640k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3138 root 20 0 3044 536 304 S 32 0.0 0:01.14 netserver
3139 root 20 0 3044 536 304 R 30 0.0 0:01.13 netserver
3140 root 20 0 3044 536 304 S 25 0.0 0:00.94 netserver
3131 root 20 0 2952 1104 908 R 1 0.0 0:00.94 top
853 root 20 0 0 0 0 S 0 0.0 0:00.01 kworker/2:1
1 root 20 0 2000 536 476 S 0 0.0 0:03.66 init
And the results of the netperf test:
Date: Tue Apr 17 09:00:24 CEST 2012
=================TCP===================
============Msg Size : 65535==============
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.02       405.51   19.51    -547.57  7.882    -884.953

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.01       405.01   19.54    -548.15  7.903    -886.995

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384  16384    10.01       394.21   19.54    -548.10  8.119    -911.191
Date: Tue Apr 17 09:00:34 CEST 2012
The performance is a bit lower (roughly 1205 Mbit/s aggregate versus 1471
before). In any case, it is not really what I expected.
The CPU utilization is more realistic than with the 128-byte message size.
This CPU utilization concerns the sender, which is a dual-core Intel Xeon,
not the 8-core Freescale processor; netserver runs on the 8-core machine.
The CPU utilization is very high when I send small packets and decreases as
I get closer to the MTU, both in UDP and TCP.
For example, the 128-byte test was run as:
netperf -p 17170 -cC -H ip_addr -t TCP_STREAM -- -m 128
So in these results, does the send message size correspond to the send buffer
size, according to what you said?
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.17.200 (10.0.17.200) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send     Recv
Size   Size    Size     Time     Throughput  local    remote   local    remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB    us/KB

 87380  16384    128    10.00      1867.39   100.00   46.75    8.774
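To see how the utilization varies with the message size in a single run, a
sweep like this (my own loop, reusing the same options as the command above)
could be used:

# Sweep the send message size from small packets toward the socket buffer size
for m in 128 512 1460 8192 16384; do
    netperf -p 17170 -cC -H 10.0.17.200 -t TCP_STREAM -- -m $m
done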
Have a nice day.
Best regards.
Simon Duboué