[netperf-talk] Testing on Freescale MPC8313ERDB

Dominic Lemire DominicLemire at vtech.ca
Wed May 5 15:33:37 PDT 2010


Thanks a lot Rick and Andrew.

The CPU seems to be the bottleneck (single core, 333 MHz). I get better 
results when I connect two Freescale boards together (see results below). 

I tried the sendfile test with a 10 MB file of random data, but I still see 
the CPU saturated and lower throughput (see the last test below). Is 10 MB 
big enough?

Thanks again,

Dominic

---------- 10Mbit hub ----------
PHY: e0024520:04 - Link is Up - 10/Half
~ # ./netperf -H 10.42.43.2 -c -C -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.42.43.2 (10.42.43.2) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

217088 217088 217088    10.12         7.67   3.45     6.14     36.846  65.596

---------- 10/100Mbit hub ----------
PHY: e0024520:04 - Link is Up - 100/Half
~ # ./netperf -H 10.42.43.2 -c -C -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.42.43.2 (10.42.43.2) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

217088 217088 217088    10.02        79.87   29.32    67.95    30.072  69.699

---------- 10/100Mbit switch ----------
PHY: e0024520:04 - Link is Up - 100/Full
~ # ./netperf -H 10.42.43.2 -c -C -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.42.43.2 (10.42.43.2) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

217088 217088 217088    10.02        94.11   45.09    83.53    39.252  72.709

---------- Two Freescale boards with cross-over cable (1000Mbit, full-duplex) ----------
PHY: e0024520:04 - Link is Up - 1000/Full
~ # ./netperf -H 10.42.43.2 -c -C -- -s 128K -S 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.42.43.2 (10.42.43.2) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

217088 217088 217088    10.01       191.72   99.90    92.01    42.686  39.313

---------- Sendfile test ----------
~ # ./netperf -H 10.42.43.2 -c -C -tTCP_SENDFILE -F /dev/shm/10meg.bin
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.42.43.2 (10.42.43.2) port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  16384    10.00       150.21   99.90    96.50    54.481  52.628
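For reference, a file like the one used above can be generated with dd, and made larger to check the "big enough?" question empirically. A minimal sketch (it writes to /tmp for portability; the test above used /dev/shm, a tmpfs, so sendfile reads never touch disk):

```shell
# Generate a 10 MB file of random bytes; raise count to test a bigger file.
dd if=/dev/urandom of=/tmp/10meg.bin bs=1M count=10 2>/dev/null
stat -c %s /tmp/10meg.bin    # 10 * 1048576 = 10485760 bytes
```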

Andrew Gallatin <gallatin at cs.duke.edu>
2010/05/05 01:08 PM
 
        To:     Dominic Lemire <DominicLemire at vtech.ca>
        cc:     netperf-talk at netperf.org
        Subject:        Re: [netperf-talk] Testing on Freescale MPC8313ERDB


Rick Jones wrote:
> Dominic Lemire wrote:

> and make certain your CPU is not saturated.  If there is any question 
> whatsoever the remote CPU might be bottlenecking, you should check there 
> too - add a "-C" after the "-c"

And if the CPU is saturated on the sender, try using the sendfile
test:

./netperf -H 192.168.1.1 -c -C -tTCP_SENDFILE -F /path/to/a/big/file

This eliminates the copy on the transmit side.

Are there multiple CPUs (cores) on this system? If yes, and your
CPU is still saturated, try the -T CPU-binding option with multiple
copies of netperf. Assuming 2 quad-cores:

netperf -H 192.168.1.1 -T0,0 -P 0 -l 120 &
netperf -H 192.168.1.1 -T1,1 -P 0 -l 120 &
netperf -H 192.168.1.1 -T2,2 -P 0 -l 120 &
netperf -H 192.168.1.1 -T3,3 -P 0 -l 120 &

Note the "-l 120", which runs a longer test and minimizes the
fraction of time when not all four instances are running at once.
To really do a multithreaded test, you need netperf4, uperf, or
even iperf.
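Each of the four "-P 0" instances above prints a single banner-less result line, so a small awk filter (my addition, not part of netperf itself) can sum the throughput column to get the aggregate. Simulated here with two hard-coded result lines so the pipeline runs without a netperf peer:

```shell
# Sum throughput (field 5 of a banner-less netperf result line).
# With real runs, pipe the backgrounded netperf commands into the awk instead.
printf '%s\n' \
  '217088 217088 217088 120.00  95.10' \
  '217088 217088 217088 120.00  94.80' |
awk '{sum += $5} END {printf "%.2f Mbit/s aggregate\n", sum}'
# prints "189.90 Mbit/s aggregate"
```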

Drew
