[netperf-dev] netperf2 commit notice r534 - in trunk: doc src

raj at netperf.org raj at netperf.org
Thu Feb 2 16:06:18 PST 2012


Author: raj
Date: 2012-02-02 16:06:18 -0800 (Thu, 02 Feb 2012)
New Revision: 534

Modified:
   trunk/doc/netperf.info
   trunk/src/nettest_omni.c
Log:
call demo_interval_tick with the correct value for an RR test which wants something other than transactions per second as the format units

Modified: trunk/doc/netperf.info
===================================================================
--- trunk/doc/netperf.info	2012-02-02 18:22:48 UTC (rev 533)
+++ trunk/doc/netperf.info	2012-02-03 00:06:18 UTC (rev 534)
@@ -2341,7 +2341,16 @@
       87380  16384  16384    10.03     233.96
 
    We can take the sum of the results and be reasonably confident that
-the aggregate performance was 940 Mbits/s.
+the aggregate performance was 940 Mbits/s.  This method is not
+limited to one system talking to one other system.  It can be
+extended to one system talking to N other systems.  It could be as
+simple as:
+     for host in foo bar baz bing
+     do
+     netperf -t TCP_STREAM -H $host -i 10 -P 0 &
+     done
+   A more sophisticated example can be found in
+`doc/examples/runemomniagg2.sh'.
 
    If you see warnings about netperf not achieving the confidence
 intervals, the best thing to do is to increase the number of iterations
@@ -2395,9 +2404,9 @@
 
    You will notice that the tests completed in an order other than they
 were started from the shell.  This underscores why there is a threat of
-skew error and why netperf4 is the preferred tool for aggregate tests.
-Even if you see the Netperf Contributing Editor acting to the
-contrary!-)
+skew error and why netperf4 will eventually be the preferred tool for
+aggregate tests.  Even if you see the Netperf Contributing Editor
+acting to the contrary!-)
 
 * Menu:
 
@@ -2691,15 +2700,15 @@
 all the test systems.
 
    While calls to get the current time can be inexpensive, that neither
-has nor is universally true.  For that reason netperf tries to minimize
-the number of such "timestamping" calls (eg `gettimeofday') calls it
-makes when in demo mode.  Rather than take a timestamp after each
-`send' or `recv' call completes netperf tries to guess how many units
-of work will be performed over the desired interval.  Only once that
-many units of work have been completed will netperf check the time.  If
-the reporting interval has passed, netperf will emit an "interim
-result."  If the interval has not passed, netperf will update its
-estimate for units and continue.
+has been nor is universally true.  For that reason netperf tries to
+minimize the number of such "timestamping" calls (e.g. `gettimeofday')
+it makes when in demo mode.  Rather than take a timestamp after each
+`send' or `recv' call completes, netperf tries to guess how many units
+of work will be performed over the desired interval.  Only once that
+many units of work have been completed will netperf check the time.
+If the reporting interval has passed, netperf will emit an "interim
+result."  If the interval has not passed, netperf will update its
+estimate for units and continue.
 
    After a bit of thought one can see that if things "speed-up" netperf
 will still honor the interval.  However, if things "slow-down" netperf
@@ -4281,25 +4290,25 @@
 Node: SCTP_RR105611
 Node: Using Netperf to Measure Aggregate Performance105747
 Node: Running Concurrent Netperf Tests106779
-Node: Issues in Running Concurrent Tests111026
-Node: Using --enable-burst113290
-Node: Using --enable-demo120189
-Node: Using Netperf to Measure Bidirectional Transfer125740
-Node: Bidirectional Transfer with Concurrent Tests126872
-Node: Bidirectional Transfer with TCP_RR129228
-Node: Implications of Concurrent Tests vs Burst Request/Response131612
-Node: The Omni Tests133426
-Node: Native Omni Tests134473
-Node: Migrated Tests139751
-Node: Omni Output Selection141856
-Node: Omni Output Selectors144839
-Node: Other Netperf Tests172547
-Node: CPU rate calibration172982
-Node: UUID Generation175350
-Node: Address Resolution176066
-Node: Enhancing Netperf178042
-Node: Netperf4179537
-Node: Concept Index180442
-Node: Option Index182768
+Node: Issues in Running Concurrent Tests111420
+Node: Using --enable-burst113684
+Node: Using --enable-demo120583
+Node: Using Netperf to Measure Bidirectional Transfer126139
+Node: Bidirectional Transfer with Concurrent Tests127271
+Node: Bidirectional Transfer with TCP_RR129627
+Node: Implications of Concurrent Tests vs Burst Request/Response132011
+Node: The Omni Tests133825
+Node: Native Omni Tests134872
+Node: Migrated Tests140150
+Node: Omni Output Selection142255
+Node: Omni Output Selectors145238
+Node: Other Netperf Tests172946
+Node: CPU rate calibration173381
+Node: UUID Generation175749
+Node: Address Resolution176465
+Node: Enhancing Netperf178441
+Node: Netperf4179936
+Node: Concept Index180841
+Node: Option Index183167
 
 End Tag Table

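The demo-mode text above describes the unit-estimation scheme only in
prose.  Below is a minimal, self-contained sketch of that idea for
readers who prefer code.  It is not the code in nettest_omni.c; the
names demo_tick_sketch, units_per_interval and seconds_since are
invented for illustration, and the initial guess is arbitrary.

    #include <stdio.h>
    #include <sys/time.h>

    /* Sketch only: accumulate work units and check the clock only once
       the current guess of units-per-interval has been reached.
       Assumes last_report is set when the test starts. */
    static double units_per_interval = 16.0;   /* arbitrary initial guess */
    static double units_this_tick    = 0.0;
    static struct timeval last_report;

    static double seconds_since(const struct timeval *then)
    {
      struct timeval now;
      gettimeofday(&now, NULL);
      return (now.tv_sec - then->tv_sec) +
             (now.tv_usec - then->tv_usec) / 1.0e6;
    }

    void demo_tick_sketch(double units, double demo_interval)
    {
      units_this_tick += units;
      if (units_this_tick < units_per_interval)
        return;                      /* no "timestamping" call this time */

      double elapsed = seconds_since(&last_report);
      if (elapsed >= demo_interval) {
        /* the reporting interval has passed: emit an interim result */
        printf("interim: %g units/s\n", units_this_tick / elapsed);
        gettimeofday(&last_report, NULL);
        units_this_tick = 0.0;
      }
      /* either way, refine the guess of units per reporting interval */
      if (elapsed > 0.0)
        units_per_interval *= demo_interval / elapsed;
    }

With a scheme like this a `gettimeofday' call is made roughly once per
reporting interval rather than once per send or receive, which is the
motivation given in the manual text above.
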
Modified: trunk/src/nettest_omni.c
===================================================================
--- trunk/src/nettest_omni.c	2012-02-02 18:22:48 UTC (rev 533)
+++ trunk/src/nettest_omni.c	2012-02-03 00:06:18 UTC (rev 534)
@@ -4057,7 +4057,12 @@
 
 #ifdef WANT_DEMO
       if (NETPERF_IS_RR(direction)) {
-	demo_interval_tick(1);
+	if (libfmt == 'x') {
+	  demo_interval_tick(1);
+	}
+	else {
+	  demo_interval_tick(req_size + rsp_size);
+	}
       }
       else if (NETPERF_XMIT_ONLY(direction)) {
 	demo_interval_tick(bytes_to_send);

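The nettest_omni.c hunk above is small, but the reasoning is easier to
see in isolation.  The sketch below is not netperf code;
demo_units_for_transaction is an invented helper name, and 'm' is used
simply as an example of a non-transaction output format.  The point is
that when the output format is transactions per second (libfmt of 'x')
each request/response exchange counts as one demo unit, while for other
formats the demo unit is the number of bytes moved per transaction, so
interim results come out in the units the user asked for.

    #include <stdio.h>

    /* Sketch of the intent of r534, with an invented helper name. */
    static double demo_units_for_transaction(char libfmt,
                                             int req_size,
                                             int rsp_size)
    {
      if (libfmt == 'x')                      /* transactions per second */
        return 1.0;
      return (double)(req_size + rsp_size);   /* bytes moved per transaction */
    }

    int main(void)
    {
      /* hypothetical 128-byte request, 8192-byte response */
      printf("demo units, transaction format: %g\n",
             demo_units_for_transaction('x', 128, 8192));
      printf("demo units, byte-based format:  %g\n",
             demo_units_for_transaction('m', 128, 8192));
      return 0;
    }

Before this change the tick was always 1 per transaction, which is only
the right accounting when the result is being reported in transactions
per second.
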

