[netperf-talk] How is throughput and round-trip time measured?

Priya pbhat at acis.ufl.edu
Wed Apr 20 08:30:42 PDT 2011


Dear all,

Some time ago I started developing benchmark tests to verify the performance
of a custom networking application, and I am comparing the results of my
benchmark tests with those obtained from netperf.

In order to make sure that the comparisons are sane, I have to understand
exactly what goes on inside the code of the netperf benchmarks. For example,
I have misgivings about the correct way in which "average throughput" should
be computed. Say we have three samples (observations) of data transmission
over the network, in terms of bits sent over a number of seconds:

   1. 5 bits in 1 sec
   2. 15 bits in 2 sec
   3. 10 bits in 3 sec

In this case, will the average throughput be computed as the ratio of total
bits over total time ((5+15+10) / (1+2+3) = 30/6 = 5 bits/sec)? Or would it
be computed as the average of the three individual throughputs, i.e.
((5/1) + (15/2) + (10/3)) / 3 ≈ 5.28 bits/sec?

While the first method (ratio of totals) is what is generally taught in
class, the second (mean of ratios) falls more in line with the notion of
average = (sum of all observations / number of observations).
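To make the difference concrete, the two candidate definitions can be sketched in a few lines of Python (my own illustration of the arithmetic above, not netperf code):

```python
# Two ways to average the three throughput samples from the example above.
samples = [(5, 1), (15, 2), (10, 3)]  # (bits, seconds) per observation

# Method 1: ratio of totals -- total bits over total time
total_bits = sum(bits for bits, _ in samples)
total_secs = sum(secs for _, secs in samples)
ratio_of_totals = total_bits / total_secs  # 30 / 6 = 5.0 bits/sec

# Method 2: mean of the per-sample throughputs
mean_of_ratios = sum(bits / secs for bits, secs in samples) / len(samples)
# (5.0 + 7.5 + 3.333...) / 3 ~= 5.28 bits/sec

print(ratio_of_totals, mean_of_ratios)
```

Note that the two methods only agree when every sample covers the same amount of time; the ratio of totals effectively time-weights each observation, while the mean of ratios weights each observation equally.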

I want to know how these issues are handled by the netperf code, for
example when reporting the throughput in the TCP_STREAM test.

While the netperf documentation
<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.5/doc/netperf.html>
provides some pointers to this, it does not give me all the necessary
details. Please guide me on how to go about understanding the theory
behind the tests.

Thanks !

-- 
Priya Bhat
