[netperf-talk] Bug in UDP_STREAM test when built w/ omni tests enabled
Alexander Duyck
alexander.h.duyck at intel.com
Thu Dec 1 11:48:31 PST 2011
Starting with netperf 2.5.0 I have been seeing the UDP_STREAM test emit
garbage values in its results. I also tested the tip of trunk, and the
same issue is present there. Below is an example of what I am seeing
with a standard test run via "netperf -H localhost -t UDP_STREAM -- -m
1472":
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

114688    1472   10.00     1446576      0   -25601878157136826853035381601897889696738077819666360974222349802419681726547261867936797914769925040136444939441029859440038206236524544.00
1083874757       26815615860568076694529673297026416648975986770567292599155085326693968645065144021470572325735123815425202585064184242890315464382272384572711610422919168.00   1076101154   0.00
If I reconfigure with the "--disable-omni" option, I get what I expect:
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost (127.0.0.1) port 0 AF_INET
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

114688    1472   10.00     1460094      0    1719.40
114688           10.00     1402378           1651.44
So far I have only seen this on a 32-bit Fedora 14 system with gcc
4.5.1. The same issue does not currently seem to occur with the TCP
tests.
Thanks,
Alex