[netperf-dev] netperf2 commit notice r519 - in trunk: doc src
raj at netperf.org
Mon Jan 30 15:00:10 PST 2012
Author: raj
Date: 2012-01-30 15:00:10 -0800 (Mon, 30 Jan 2012)
New Revision: 519
Modified:
trunk/doc/netperf.html
trunk/doc/netperf.info
trunk/doc/netperf.texi
trunk/src/nettest_omni.c
Log:
document using enable-demo in aggregate testing and fix a few code nits
Modified: trunk/doc/netperf.html
===================================================================
--- trunk/doc/netperf.html 2012-01-24 00:09:38 UTC (rev 518)
+++ trunk/doc/netperf.html 2012-01-30 23:00:10 UTC (rev 519)
@@ -100,7 +100,8 @@
<ul>
<li><a href="#Issues-in-Running-Concurrent-Tests">7.1.1 Issues in Running Concurrent Tests</a>
</li></ul>
-<li><a href="#Using-_002d_002denable_002dburst">7.2 Using –enable-burst</a>
+<li><a href="#Using-_002d_002denable_002dburst">7.2 Using - -enable-burst</a>
+<li><a href="#Using-_002d_002denable_002ddemo">7.3 Using - -enable-demo</a>
</li></ul>
<li><a name="toc_Using-Netperf-to-Measure-Bidirectional-Transfer" href="#Using-Netperf-to-Measure-Bidirectional-Transfer">8 Using Netperf to Measure Bidirectional Transfer</a>
<ul>
@@ -2639,6 +2640,7 @@
<ul class="menu">
<li><a accesskey="1" href="#Running-Concurrent-Netperf-Tests">Running Concurrent Netperf Tests</a>
<li><a accesskey="2" href="#Using-_002d_002denable_002dburst">Using --enable-burst</a>
+<li><a accesskey="3" href="#Using-_002d_002denable_002ddemo">Using --enable-demo</a>
</ul>
<div class="node">
@@ -2780,17 +2782,31 @@
netperf <b>will be wrong</b>. One has to compute service demands for
concurrent tests by hand
+ <p>Running concurrent tests can also become difficult when there is no
+one “central” node. Running tests between pairs of systems may be
+more difficult, calling for remote shell commands in the for loop
+rather than netperf commands. This introduces more skew error, which
+the confidence intervals may not be able to sufficiently mitigate.
+One possibility is to actually run three consecutive netperf tests on
+each node: the first being a warm-up, the last being a cool-down.
+The idea then is to ensure that the time it takes to get all the
+netperfs started is less than the length of the first netperf command
+in the sequence of three. Similarly, it assumes that all “middle”
+netperfs will complete before the first of the “last” netperfs
+complete.
+
<div class="node">
<a name="Using---enable-burst"></a>
<a name="Using-_002d_002denable_002dburst"></a>
<p><hr>
+Next: <a rel="next" accesskey="n" href="#Using-_002d_002denable_002ddemo">Using --enable-demo</a>,
Previous: <a rel="previous" accesskey="p" href="#Running-Concurrent-Netperf-Tests">Running Concurrent Netperf Tests</a>,
Up: <a rel="up" accesskey="u" href="#Using-Netperf-to-Measure-Aggregate-Performance">Using Netperf to Measure Aggregate Performance</a>
</div>
<!-- node-name, next, previous, up -->
-<h3 class="section">7.2 Using –enable-burst</h3>
+<h3 class="section">7.2 Using - -enable-burst</h3>
<p>Starting in version 2.5.0 <code>--enable-burst=yes</code> is the default,
which means one no longer must:
@@ -2977,6 +2993,120 @@
specified in the test-specific <samp><span class="option">-b</span></samp> option.
<div class="node">
+<a name="Using---enable-demo"></a>
+<a name="Using-_002d_002denable_002ddemo"></a>
+<p><hr>
+Previous: <a rel="previous" accesskey="p" href="#Using-_002d_002denable_002dburst">Using --enable-burst</a>,
+Up: <a rel="up" accesskey="u" href="#Using-Netperf-to-Measure-Aggregate-Performance">Using Netperf to Measure Aggregate Performance</a>
+
+</div>
+
+<h3 class="section">7.3 Using - -enable-demo</h3>
+
+<p>One can
+<pre class="example"> configure --enable-demo
+</pre>
+ <p>and compile netperf so that it emits “interim results” at
+semi-regular intervals. This enables a global <code>-D</code> option which
+takes a reporting interval as an argument. With that specified, the
+output of netperf will then look something like
+
+<pre class="example"> $ src/netperf -D 1.25
+ MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain () port 0 AF_INET : demo
+ Interim result: 25425.52 10^6bits/s over 1.25 seconds ending at 1327962078.405
+ Interim result: 25486.82 10^6bits/s over 1.25 seconds ending at 1327962079.655
+ Interim result: 25474.96 10^6bits/s over 1.25 seconds ending at 1327962080.905
+ Interim result: 25523.49 10^6bits/s over 1.25 seconds ending at 1327962082.155
+ Interim result: 25053.57 10^6bits/s over 1.27 seconds ending at 1327962083.429
+ Interim result: 25349.64 10^6bits/s over 1.25 seconds ending at 1327962084.679
+ Interim result: 25292.84 10^6bits/s over 1.25 seconds ending at 1327962085.932
+ Recv Send Send
+ Socket Socket Message Elapsed
+ Size Size Size Time Throughput
+ bytes bytes bytes secs. 10^6bits/sec
+
+ 87380 16384 16384 10.00 25375.66
+</pre>
+ <p>The units of the “Interim result” lines will follow the units
+selected via the global <code>-f</code> option. If the test-specific
+<code>-o</code> option is specified on the command line, the format will be
+CSV:
+<pre class="example"> ...
+ 2978.81,MBytes/s,1.25,1327962298.035
+ ...
+</pre>
+ <p>If the test-specific <code>-k</code> option is used, the format will
+be keyval, with each keyval being given an index:
+<pre class="example"> ...
+ NETPERF_INTERIM_RESULT[2]=25.00
+ NETPERF_UNITS[2]=10^9bits/s
+ NETPERF_INTERVAL[2]=1.25
+ NETPERF_ENDING[2]=1327962357.249
+ ...
+</pre>
+ <p>The expectation is that it will be easier to utilize the keyvals
+if they have indices.
+
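Because the keyvals are indexed, collecting them into one record per interval is straightforward. A minimal, hypothetical post-processing sketch (the function name and record layout are illustrative, not part of netperf):

```python
import re

def parse_keyval_interims(text):
    """Group indexed NETPERF_* keyvals into one record per interval index."""
    records = {}
    for key, idx, val in re.findall(r"NETPERF_(\w+)\[(\d+)\]=(\S+)", text):
        records.setdefault(int(idx), {})[key] = val
    # return the records ordered by their interval index
    return [records[i] for i in sorted(records)]
```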
+ <p>But how does this help with aggregate tests? Well, what one can do is
+start the netperfs via a script, giving each a Very Long (tm) run
+time. Direct the output to a file per instance. Then, once all the
+netperfs have been started, take a timestamp and wait for some desired
+test interval. Once that interval expires take another timestamp and
+then start terminating the netperfs by sending them a SIGALRM signal
+via the likes of the <code>kill</code> or <code>pkill</code> command. The
+netperfs will terminate and emit the rest of the “usual” output, and
+you can then bring the files to a central location for post-processing
+to find the aggregate performance over the “test interval.”
+
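The steps above can be sketched as follows. This is a minimal illustration, not part of the commit; `run_aggregate` is a hypothetical helper, and in practice each command would be a long-running netperf invocation with its output redirected to a per-instance file:

```python
import signal
import subprocess
import time

def run_aggregate(commands, test_interval):
    """Start all commands, wait for the test interval, then stop them
    with SIGALRM, which netperf treats as the end of a timed test."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    t_start = time.time()            # timestamp once everything is running
    time.sleep(test_interval)
    t_end = time.time()              # timestamp before terminating anything
    for p in procs:
        p.send_signal(signal.SIGALRM)
    for p in procs:
        p.wait()                     # let each emit its "usual" final output
    return t_start, t_end
```

The two returned timestamps bound the measurement window used later in post-processing.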
+ <p>This method has the advantage that it does not require advance
+knowledge of how long it takes to get netperf tests started and/or
+stopped. It does, though, require sufficiently synchronized clocks on
+all the test systems.
+
+ <p>While calls to get the current time can be inexpensive, that has
+not always been, nor is it universally, true. For that reason netperf
+tries to minimize the number of such “timestamping” calls (eg
+<code>gettimeofday</code>) it makes when in demo mode. Rather than
+take a timestamp after each <code>send</code> or <code>recv</code> call
+completes, netperf tries to guess how many units of work will be
+performed over the desired interval. Only once that many units of
+work have been completed will netperf check the time. If the
+reporting interval has passed, netperf will emit an “interim result.”
+If the interval has not passed, netperf will update its estimate for
+units and continue.
+
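The scheme reads roughly like the following sketch (assumed names, simplified from what nettest_omni.c actually does, and in Python rather than C for brevity):

```python
import time

def demo_stream(send_one, report, interval):
    """Send units of work, checking the clock only after an estimated
    number of units rather than after every send/recv call."""
    units_per_check = 1              # initial guess: check after one unit
    units = 0
    last = time.time()
    while True:
        for _ in range(units_per_check):
            if not send_one():       # send_one() returns False at end of test
                return
            units += 1
        now = time.time()
        elapsed = now - last
        if elapsed >= interval:
            report(units, elapsed, now)   # emit an "interim result"
            units, last = 0, now
        else:
            # too early: raise the estimate so the next clock check lands
            # near the end of the reporting interval
            units_per_check = max(1, int(units_per_check * interval
                                         / max(elapsed, 1e-6)))
```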
+ <p>After a bit of thought one can see that if things “speed up”
+netperf will still honor the interval. However, if things “slow down”
+netperf may be late with an “interim result.” Here is an example of
+both of those happening during a test: the interval is honored while
+throughput increases, and then about half-way through, when another
+netperf (not shown) is started, we see things slowing down and netperf
+not hitting the interval as desired.
+<pre class="example"> $ src/netperf -D 2 -H tardy.hpl.hp.com -l 20
+ MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to tardy.hpl.hp.com () port 0 AF_INET : demo
+ Interim result: 36.46 10^6bits/s over 2.01 seconds ending at 1327963880.565
+ Interim result: 59.19 10^6bits/s over 2.00 seconds ending at 1327963882.569
+ Interim result: 73.39 10^6bits/s over 2.01 seconds ending at 1327963884.576
+ Interim result: 84.01 10^6bits/s over 2.03 seconds ending at 1327963886.603
+ Interim result: 75.63 10^6bits/s over 2.21 seconds ending at 1327963888.814
+ Interim result: 55.52 10^6bits/s over 2.72 seconds ending at 1327963891.538
+ Interim result: 70.94 10^6bits/s over 2.11 seconds ending at 1327963893.650
+ Interim result: 80.66 10^6bits/s over 2.13 seconds ending at 1327963895.777
+ Interim result: 86.42 10^6bits/s over 2.12 seconds ending at 1327963897.901
+ Recv Send Send
+ Socket Socket Message Elapsed
+ Size Size Size Time Throughput
+ bytes bytes bytes secs. 10^6bits/sec
+
+ 87380 16384 16384 20.34 68.87
+</pre>
+ <p>So long as your post-processing mechanism can account for that,
+there should be no problem. As time passes there may be changes to
+try to improve netperf's honoring of the interval, but one should not
+ass-u-me it will always do so. Nor should one assume the precision
+will remain fixed - future versions may change it, perhaps going
+beyond tenths of seconds in reporting the interval length.
+
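One hypothetical way to do that accounting: clip each interim interval to the measurement window bounded by the two timestamps taken when starting and stopping the test, weight each rate by its overlap with the window, and divide by the window length. The record layout below is assumed, not something netperf itself emits:

```python
def aggregate_over_window(streams, t_start, t_end):
    """Average aggregate throughput over [t_start, t_end].  Each stream is
    a list of (rate, interval_secs, ending_timestamp) interim results."""
    total = 0.0
    for interims in streams:
        for rate, length, end in interims:
            begin = end - length
            # portion of this interim interval inside the window
            overlap = min(end, t_end) - max(begin, t_start)
            if overlap > 0:
                total += rate * overlap   # rate is the average over the interval
    return total / (t_end - t_start)
```

Because each interim line carries its own (possibly stretched) interval length, late interim results are weighted correctly rather than assumed to cover exactly one reporting interval.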
+<div class="node">
<a name="Using-Netperf-to-Measure-Bidirectional-Transfer"></a>
<p><hr>
Next: <a rel="next" accesskey="n" href="#The-Omni-Tests">The Omni Tests</a>,
Modified: trunk/doc/netperf.info
===================================================================
--- trunk/doc/netperf.info 2012-01-24 00:09:38 UTC (rev 518)
+++ trunk/doc/netperf.info 2012-01-30 23:00:10 UTC (rev 519)
@@ -2305,6 +2305,7 @@
* Running Concurrent Netperf Tests::
* Using --enable-burst::
+* Using --enable-demo::
File: netperf.info, Node: Running Concurrent Netperf Tests, Next: Using --enable-burst, Prev: Using Netperf to Measure Aggregate Performance, Up: Using Netperf to Measure Aggregate Performance
@@ -2431,7 +2432,7 @@
hand
-File: netperf.info, Node: Using --enable-burst, Prev: Running Concurrent Netperf Tests, Up: Using Netperf to Measure Aggregate Performance
+File: netperf.info, Node: Using --enable-burst, Next: Using --enable-demo, Prev: Running Concurrent Netperf Tests, Up: Using Netperf to Measure Aggregate Performance
7.2 Using -enable-burst
=======================
@@ -2616,6 +2617,12 @@
specified in the test-specific `-b' option.
+File: netperf.info, Node: Using --enable-demo, Prev: Using --enable-burst, Up: Using Netperf to Measure Aggregate Performance
+
+7.3 Using -enable-demo
+======================
+
+
File: netperf.info, Node: Using Netperf to Measure Bidirectional Transfer, Next: The Omni Tests, Prev: Using Netperf to Measure Aggregate Performance, Up: Top
8 Using Netperf to Measure Bidirectional Transfer
@@ -4163,25 +4170,26 @@
Node: DLCO_RR105459
Node: SCTP_RR105611
Node: Using Netperf to Measure Aggregate Performance105747
-Node: Running Concurrent Netperf Tests106726
-Node: Issues in Running Concurrent Tests110973
-Node: Using --enable-burst112481
-Node: Using Netperf to Measure Bidirectional Transfer119348
-Node: Bidirectional Transfer with Concurrent Tests120480
-Node: Bidirectional Transfer with TCP_RR122836
-Node: Implications of Concurrent Tests vs Burst Request/Response125220
-Node: The Omni Tests127034
-Node: Native Omni Tests128081
-Node: Migrated Tests133359
-Node: Omni Output Selection135464
-Node: Omni Output Selectors138447
-Node: Other Netperf Tests166155
-Node: CPU rate calibration166590
-Node: UUID Generation168958
-Node: Address Resolution169674
-Node: Enhancing Netperf171650
-Node: Netperf4173145
-Node: Concept Index174050
-Node: Option Index176376
+Node: Running Concurrent Netperf Tests106750
+Node: Issues in Running Concurrent Tests110997
+Node: Using --enable-burst112505
+Node: Using --enable-demo119400
+Node: Using Netperf to Measure Bidirectional Transfer119579
+Node: Bidirectional Transfer with Concurrent Tests120711
+Node: Bidirectional Transfer with TCP_RR123067
+Node: Implications of Concurrent Tests vs Burst Request/Response125451
+Node: The Omni Tests127265
+Node: Native Omni Tests128312
+Node: Migrated Tests133590
+Node: Omni Output Selection135695
+Node: Omni Output Selectors138678
+Node: Other Netperf Tests166386
+Node: CPU rate calibration166821
+Node: UUID Generation169189
+Node: Address Resolution169905
+Node: Enhancing Netperf171881
+Node: Netperf4173376
+Node: Concept Index174281
+Node: Option Index176607
End Tag Table
Modified: trunk/doc/netperf.texi
===================================================================
(Binary files differ)
Modified: trunk/src/nettest_omni.c
===================================================================
--- trunk/src/nettest_omni.c 2012-01-24 00:09:38 UTC (rev 518)
+++ trunk/src/nettest_omni.c 2012-01-30 23:00:10 UTC (rev 519)
@@ -202,7 +202,7 @@
switch (netperf_output_mode) {
case HUMAN:
fprintf(where,
- "Interim result: %7.2f %s/s over %.2f seconds ending at %ld.%.3ld\n",
+ "Interim result: %7.2f %s/s over %.3f seconds ending at %ld.%.3ld\n",
calc_thruput_interval(units_this_tick,
actual_interval/1000000.0),
format_units(),
@@ -212,7 +212,7 @@
break;
case CSV:
fprintf(where,
- "%7.2f,%s/s,%.2f,%ld.%.3ld\n",
+ "%7.2f,%s/s,%.3f,%ld.%.3ld\n",
calc_thruput_interval(units_this_tick,
actual_interval/1000000.0),
format_units(),
@@ -222,9 +222,9 @@
break;
case KEYVAL:
fprintf(where,
- "NETPERF_INTERIM_RESULT[%d]=%7.2f\n"
+ "NETPERF_INTERIM_RESULT[%d]=%.2f\n"
"NETPERF_UNITS[%d]=%s/s\n"
- "NETPERF_INTERVAL[%d]=%.2f\n"
+ "NETPERF_INTERVAL[%d]=%.3f\n"
"NETPERF_ENDING[%d]=%ld.%.3ld\n",
count,
calc_thruput_interval(units_this_tick,
@@ -3064,6 +3064,9 @@
we assume a reliable connection and can do the usual graceful
shutdown thing */
+ /* this needs to be revisited for the netperf receiving case when
+ the test is terminated by a Ctrl-C. raj 2012-01-24 */
+
if (protocol != IPPROTO_UDP) {
if (initiate)
shutdown(data_socket, SHUT_WR);
@@ -3072,15 +3075,15 @@
connection close, or an error. of course, we *may* never
receive anything from the remote which means we probably really
aught to have a select here but until we are once bitten we
- will remain twice bold */
+ will remain twice bold. */
bytes_recvd = recv(data_socket,
buffer,
1,
0);
if (bytes_recvd != 0) {
- /* connection close, call close. we assume that the requisite */
- /* number of bytes have been received */
+ /* connection close, call close. we assume that the requisite
+ number of bytes have been received */
if (SOCKET_EINTR(bytes_recvd))
{
/* We hit the end of a timed test. */