[netperf-talk] mystifying performance output from netperf on my eth bonded servers
Rahul Nabar
rpnabar at gmail.com
Wed Sep 10 09:58:10 PDT 2008
I just ran some netperf tests on my system (which has a number of Ethernet-bonded nodes) and wanted to make sure that I am running the right tests.
I have netserver running on my "server" and ran netperf from several other nodes, pointing each at the "server" as the remote host via the -H flag, like so:
/opt/netperf2/bin/netperf -t TCP_RR -H 10.0.0.100
The summary of the results (one detailed run attached below) was as
follows. I reproduce the "Trans rate per sec" figures:
node1: mode=4 (802.3ad dynamic link aggregation): 7023.38
node3: mode=6 (alb, adaptive load balancing): 7600.67
node5: took one link down at the switch, effectively simulating a node with a single eth card: 6952.47
This confuses me a bit: all three numbers are essentially the same. So did I get no bandwidth advantage from bonding? Or are these tests not saturating my interfaces, so the outputs are meaningless? Is there a better way of testing this?
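Would a bulk-transfer test be more appropriate for measuring raw bandwidth? I assume the invocation would be something like this (a guess on my part, not yet tried; -l sets the test length in seconds):

/opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -l 60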
Essentially I am interested in two parameters: (1) bonded vs. non-bonded peak performance, and (2) mode=4 vs. mode=6 bonding performance.
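From what I've read, the bonding modes balance traffic per flow rather than per packet, so I suspect a single TCP connection will never exceed the speed of one link. To saturate the bonded link on the server I was thinking of launching several concurrent streams (from one node, or better, from several nodes at once); just a sketch of what I have in mind, untested:

# launch four 30-second bulk-transfer streams in parallel
for i in 1 2 3 4; do
    /opt/netperf2/bin/netperf -t TCP_STREAM -H 10.0.0.100 -l 30 &
done
wait   # collect the four throughput reports, then sum them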
Additionally, a primitive test I ran was simply timing the transfer of a 4 GB file via scp. It took approx. 86 secs. Some number crunching tells me this is about 0.37 Gbps (4 GB * 8 bits/byte = 32 Gbit; 32 Gbit / 86 s ≈ 0.37 Gbps), ignoring all protocol overheads etc.
But the above numbers from netperf are in the ballpark of 7000 bits/sec. That seems off by orders of magnitude! Am I confusing my units here, or is this just some fundamental misunderstanding on my part?
Sorry, I'm a networking newbie! Any leads?
--
Rahul