<div dir="ltr"><font color="#000099"><font face="tahoma,sans-serif">Dear all,</font></font><div><font color="#000099"><font face="tahoma,sans-serif"><br></font></font></div><div><font color="#000099"><font face="tahoma,sans-serif">Sometime ago I started developing benchmark tests to verify the performance of a custom networking application, and I am comparing the results of my benchmark tests with those obtained from netperf.</font></font></div>
To make sure the comparisons are sound, I need to understand exactly what the netperf benchmark code does. For example, I am unsure about the correct way to compute "average throughput". Say we have three samples (observations) of data transmitted over the network, each expressed as bits sent over a given number of seconds:
1. 5 bits in 1 sec
2. 15 bits in 2 sec
3. 10 bits in 3 sec

In this case, should the average throughput be computed as the ratio of total bits to total time, i.e. (5+15+10) / (1+2+3) = 30/6 = 5 bits/sec? Or should it be computed as the average of the three individual throughputs, i.e. ((5/1) + (15/2) + (10/3)) / 3 ≈ 5.28 bits/sec?
<div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif"><br></font></div><div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif">While the first method is what is generally taught in class, the second method is what falls more in line with the notion of average = (total of all the observations / number of observations).</font></div>
<div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif"><br></font></div><div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif">I want to know how these issues are handled by the netperf code, for example while reporting the throughput in the TCP_STEAM test. </font></div>
<div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif"><br></font></div><div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif">While the <a href="http://www.netperf.org/svn/netperf2/tags/netperf-2.4.5/doc/netperf.html">netperf documentation</a> provides some pointers to this, it does not give me all the necessary details. Please guide me regarding how to go about understanding the theory behind the tests.</font></div>
<div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif"><br></font></div><div><font class="Apple-style-span" color="#000099" face="tahoma, sans-serif">Thanks !<br clear="all"><br>-- <br>Priya Bhat<br>