[netperf-dev] Are you parsing "classic" or default "omni human" netperf output with scripts?
Rick Jones
rick.jones2 at hp.com
Wed Sep 19 11:43:02 PDT 2012
Folks -
A request has come my way to enhance the TCP_STREAM test to include some
latency measurements. Specifically, the idea is to measure the time it
takes to complete the connect() call on the data connection, and the
time between the shutdown() call and the read() return of zero, and to
use those two figures to demonstrate the degree of bufferbloat during
the test.
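For the curious, here is a minimal sketch of the two measurements,
written as a plain-sockets Python client rather than as netperf code;
the host, port, send size, and test length are all placeholder
assumptions:

import socket
import time

HOST, PORT = "192.0.2.1", 12865    # placeholder endpoint, not a real netserver

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

t0 = time.monotonic()
s.connect((HOST, PORT))
connect_time = time.monotonic() - t0    # time to complete connect()

payload = b"x" * 16384                  # placeholder send size
deadline = time.monotonic() + 10.0      # placeholder test length
while time.monotonic() < deadline:
    s.sendall(payload)

t1 = time.monotonic()
s.shutdown(socket.SHUT_WR)              # our FIN; no more data from us
while s.recv(65536):                    # drain until the zero-byte read
    pass
drain_time = time.monotonic() - t1      # shutdown() to read return of zero

print("connect %.6f s, drain %.6f s" % (connect_time, drain_time))
s.close()

The drain time reflects how much already-sent data was still sitting in
socket buffers and network queues when shutdown() was called, which is
where bufferbloat would show itself.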
The request further asks that this be displayed automagically in the
default netperf output, the idea being to make it more immediately
clear to people whether bufferbloat is a factor in their environments.
It is that latter bit which has me facing a bit of FUD over whether
people are parsing the "classic" or default "omni human" netperf output
of a TCP_STREAM test with a script - something along the lines of the
sketch below - since adding fields to the default output could break
such a script.
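To make the concern concrete, a hypothetical example of the sort of
script that would be affected - it pulls the throughput figure out of
classic TCP_STREAM output by assuming a fixed five-column data line, an
assumption any change to the default output would invalidate:

import re
import subprocess

# placeholder host; -H and -t TCP_STREAM are standard netperf options
out = subprocess.run(["netperf", "-H", "192.0.2.1", "-t", "TCP_STREAM"],
                     capture_output=True, text=True).stdout

for line in out.splitlines():
    # the classic data line is five numeric columns: recv socket,
    # send socket, message size, elapsed time, throughput
    fields = line.split()
    if len(fields) == 5 and all(re.fullmatch(r"[\d.]+", f) for f in fields):
        print("throughput (10^6 bits/sec):", fields[4])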
Thus the question - are you parsing "classic" or default "omni human"
output formats with a script?
happy benchmarking,
rick jones