[netperf-dev] netperf2 commit notice r397 - trunk/doc
raj at netperf.org
Fri Jun 24 14:21:30 PDT 2011
Author: raj
Date: 2011-06-24 14:21:30 -0700 (Fri, 24 Jun 2011)
New Revision: 397
Modified:
trunk/doc/netperf.html
trunk/doc/netperf.info
trunk/doc/netperf.pdf
trunk/doc/netperf.texi
Log:
document the omni output selectors
Modified: trunk/doc/netperf.html
===================================================================
--- trunk/doc/netperf.html 2011-06-24 00:15:14 UTC (rev 396)
+++ trunk/doc/netperf.html 2011-06-24 21:21:30 UTC (rev 397)
@@ -109,7 +109,10 @@
<li><a href="#Native-Omni-Tests">9.1 Native Omni Tests</a>
<li><a href="#Migrated-Tests">9.2 Migrated Tests</a>
<li><a href="#Omni-Output-Selection">9.3 Omni Output Selection</a>
+<ul>
+<li><a href="#Omni-Output-Selectors">9.3.1 Omni Output Selectors</a>
</li></ul>
+</li></ul>
<li><a name="toc_Other-Netperf-Tests" href="#Other-Netperf-Tests">10 Other Netperf Tests</a>
<ul>
<li><a href="#CPU-rate-calibration">10.1 CPU rate calibration</a>
@@ -434,6 +437,14 @@
omni tests will not be compiled-in and the classic tests will not be
migrated.
+ <p>Starting with version 2.5.0, netperf will include the “burst mode”
+functionality in a default compilation of the bits. If you encounter
+problems with this, please first attempt to obtain help via
+<a href="mailto:netperf-talk at netperf.org">netperf-talk at netperf.org</a> or
+<a href="mailto:netperf-feedback at netperf.org">netperf-feedback at netperf.org</a>. If that is unsuccessful, you
+can add a <code>--enable-burst=no</code> to the configure command and the
+burst mode functionality will not be compiled-in.
+
<p>On some platforms, it may be necessary to precede the configure
command with a CFLAGS and/or LIBS variable as the netperf configure
script is not yet smart enough to set them itself. Whenever possible,
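[Editor's note: a hypothetical sketch of preceding configure with CFLAGS/LIBS as the hunk above describes. The exact flags and libraries are illustrative only; -lsocket/-lnsl are the classic additions for SVR4-derived platforms, not a netperf requirement.]

```shell
# Illustrative only: set CFLAGS/LIBS for a single configure run on a
# platform where netperf's configure script does not pick them itself.
CFLAGS="-O2" LIBS="-lsocket -lnsl" ./configure
```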
@@ -1676,10 +1687,13 @@
no opportunity to reserve space for headers and so a packet will be
contained in two or more buffers.
- <p>The <a href="#Global-Options">global <samp><span class="option">-F</span></samp> option</a> is required for this test and it must
-specify a file of at least the size of the send ring (See <a href="#Global-Options">the global <samp><span class="option">-W</span></samp> option</a>.) multiplied by the send size
-(See <a href="#Options-common-to-TCP-UDP-and-SCTP-tests">the test-specific <samp><span class="option">-m</span></samp> option</a>.). All other TCP-specific options are available
-and optional.
+ <p>As of some time before version 2.5.0, the <a href="#Global-Options">global <samp><span class="option">-F</span></samp> option</a> is no longer required for this test. If it is not
+specified, netperf will create a temporary file, which it will delete
+at the end of the test. If the <samp><span class="option">-F</span></samp> option is specified it
+must reference a file of at least the size of the send ring
+(See <a href="#Global-Options">the global <samp><span class="option">-W</span></samp> option</a>.) multiplied by
+the send size (See <a href="#Options-common-to-TCP-UDP-and-SCTP-tests">the test-specific <samp><span class="option">-m</span></samp> option</a>.). All other TCP-specific options
+remain available and optional.
<p>In this first example:
<pre class="example"> $ netperf -H lag -F ../src/netperf -t TCP_SENDFILE -- -s 128K -S 128K
@@ -1689,7 +1703,7 @@
</pre>
<p>we see what happens when the file is too small. Here:
-<pre class="example"> $ ../src/netperf -H lag -F /boot/vmlinuz-2.6.8-1-686 -t TCP_SENDFILE -- -s 128K -S 128K
+<pre class="example"> $ netperf -H lag -F /boot/vmlinuz-2.6.8-1-686 -t TCP_SENDFILE -- -s 128K -S 128K
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lag.hpl.hp.com (15.4.89.214) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
@@ -3050,7 +3064,7 @@
first line of the test banners will include the word “MIGRATED” at
the beginning as in:
-<pre class="example"> $ ../src/netperf
+<pre class="example"> $ netperf
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
Recv Send Send
Socket Socket Message Elapsed
@@ -3078,13 +3092,13 @@
<samp><span class="option">-k</span></samp> test-specific option with a value of
“MIN_LATENCY,MAX_LATENCY” with a migrated TCP_RR test one will see:
-<pre class="example"> $ ../src/netperf -t tcp_rr -- -k THROUGHPUT,THROUGHPUT_UNITS
+<pre class="example"> $ netperf -t tcp_rr -- -k THROUGHPUT,THROUGHPUT_UNITS
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
THROUGHPUT=60074.74
THROUGHPUT_UNITS=Trans/s
</pre>
<p>rather than:
-<pre class="example"> $ ../src/netperf -t tcp_rr
+<pre class="example"> $ netperf -t tcp_rr
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
Local /Remote
Socket Size Request Resp. Elapsed Trans.
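[Editor's note: the keyval output shown in the hunk above is plain NAME=VALUE lines, one per line, which makes it easy to post-process. A minimal sketch, using a here-string standing in for real netperf output (the banner line can also be suppressed with the global -P 0 option):]

```shell
# Sketch: pull selected values out of netperf omni "-k" keyval output.
# The $output variable mirrors the example output in the text above.
output='MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
THROUGHPUT=60074.74
THROUGHPUT_UNITS=Trans/s'

# Split each line on "=": banner lines never match a selector name in
# field 1, so only real NAME=VALUE lines are selected.
throughput=$(printf '%s\n' "$output" | awk -F= '$1 == "THROUGHPUT" { print $2 }')
units=$(printf '%s\n' "$output" | awk -F= '$1 == "THROUGHPUT_UNITS" { print $2 }')
echo "THROUGHPUT is $throughput $units"
```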
@@ -3140,7 +3154,412 @@
inspired by classic netperf output.
</dl>
+<ul class="menu">
+<li><a accesskey="1" href="#Omni-Output-Selectors">Omni Output Selectors</a>
+</ul>
+
<div class="node">
+<a name="Omni-Output-Selectors"></a>
+<p><hr>
+Previous: <a rel="previous" accesskey="p" href="#Omni-Output-Selection">Omni Output Selection</a>,
+Up: <a rel="up" accesskey="u" href="#Omni-Output-Selection">Omni Output Selection</a>
+
+</div>
+
+<h4 class="subsection">9.3.1 Omni Output Selectors</h4>
+
+<p>As of version 2.5.0 the output selectors are:
+
+ <dl>
+<dt><code>OUTPUT_NONE</code><dd>This is essentially a null output. For <samp><span class="option">-k</span></samp> output it will
+simply add a line that reads “OUTPUT_NONE=” to the output. For
+<samp><span class="option">-o</span></samp> it will cause an empty “column” to be included. For
+<samp><span class="option">-O</span></samp> output it will cause extra spaces to separate “real” output.
+<br><dt><code>SOCKET_TYPE</code><dd>This will cause the socket type (eg SOCK_STREAM, SOCK_DGRAM) for the
+data connection to be output.
+<br><dt><code>PROTOCOL</code><dd>This will cause the protocol used for the data connection to be displayed.
+<br><dt><code>DIRECTION</code><dd>This will display the data flow direction relative to the netperf
+process. Units: Send or Recv for a unidirectional bulk-transfer test,
+or Send|Recv for a request/response test.
+<br><dt><code>ELAPSED_TIME</code><dd>This will display the elapsed time in seconds for the test.
+<br><dt><code>THROUGHPUT</code><dd>This will display the throughput for the test. Units: As requested via
+the global <samp><span class="option">-f</span></samp> option and displayed by the THROUGHPUT_UNITS
+output selector.
+<br><dt><code>THROUGHPUT_UNITS</code><dd>This will display the units for what is displayed by the
+<code>THROUGHPUT</code> output selector.
+<br><dt><code>LSS_SIZE_REQ</code><dd>This will display the local (netperf) send socket buffer size (aka
+SO_SNDBUF) requested via the command line. Units: Bytes.
+<br><dt><code>LSS_SIZE</code><dd>This will display the local (netperf) send socket buffer size
+(SO_SNDBUF) immediately after the data connection socket was created.
+Peculiarities of different networking stacks may lead to this
+differing from the size requested via the command line. Units: Bytes.
+<br><dt><code>LSS_SIZE_END</code><dd>This will display the local (netperf) send socket buffer size
+(SO_SNDBUF) immediately before the data connection socket is closed.
+Peculiarities of different networking stacks may lead this to differ
+from the size requested via the command line and/or the size
+immediately after the data connection socket was created. Units: Bytes.
+<br><dt><code>LSR_SIZE_REQ</code><dd>This will display the local (netperf) receive socket buffer size (aka
+SO_RCVBUF) requested via the command line. Units: Bytes.
+<br><dt><code>LSR_SIZE</code><dd>This will display the local (netperf) receive socket buffer size
+(SO_RCVBUF) immediately after the data connection socket was created.
+Peculiarities of different networking stacks may lead to this
+differing from the size requested via the command line. Units: Bytes.
+<br><dt><code>LSR_SIZE_END</code><dd>This will display the local (netperf) receive socket buffer size
+(SO_RCVBUF) immediately before the data connection socket is closed.
+Peculiarities of different networking stacks may lead this to differ
+from the size requested via the command line and/or the size
+immediately after the data connection socket was created. Units: Bytes.
+<br><dt><code>RSS_SIZE_REQ</code><dd>This will display the remote (netserver) send socket buffer size (aka
+SO_SNDBUF) requested via the command line. Units: Bytes.
+<br><dt><code>RSS_SIZE</code><dd>This will display the remote (netserver) send socket buffer size
+(SO_SNDBUF) immediately after the data connection socket was created.
+Peculiarities of different networking stacks may lead to this
+differing from the size requested via the command line. Units: Bytes.
+<br><dt><code>RSS_SIZE_END</code><dd>This will display the remote (netserver) send socket buffer size
+(SO_SNDBUF) immediately before the data connection socket is closed.
+Peculiarities of different networking stacks may lead this to differ
+from the size requested via the command line and/or the size
+immediately after the data connection socket was created. Units: Bytes.
+<br><dt><code>RSR_SIZE_REQ</code><dd>This will display the remote (netserver) receive socket buffer size (aka
+SO_RCVBUF) requested via the command line. Units: Bytes.
+<br><dt><code>RSR_SIZE</code><dd>This will display the remote (netserver) receive socket buffer size
+(SO_RCVBUF) immediately after the data connection socket was created.
+Peculiarities of different networking stacks may lead to this
+differing from the size requested via the command line. Units: Bytes.
+<br><dt><code>RSR_SIZE_END</code><dd>This will display the remote (netserver) receive socket buffer size
+(SO_RCVBUF) immediately before the data connection socket is closed.
+Peculiarities of different networking stacks may lead this to differ
+from the size requested via the command line and/or the size
+immediately after the data connection socket was created. Units: Bytes.
+<br><dt><code>LOCAL_SEND_SIZE</code><dd>This will display the size of the buffers netperf passed in any
+“send” calls it made on the data connection for a
+non-request/response test. Units: Bytes.
+<br><dt><code>LOCAL_RECV_SIZE</code><dd>This will display the size of the buffers netperf passed in any
+“receive” calls it made on the data connection for a
+non-request/response test. Units: Bytes.
+<br><dt><code>REMOTE_SEND_SIZE</code><dd>This will display the size of the buffers netserver passed in any
+“send” calls it made on the data connection for a
+non-request/response test. Units: Bytes.
+<br><dt><code>REMOTE_RECV_SIZE</code><dd>This will display the size of the buffers netserver passed in any
+“receive” calls it made on the data connection for a
+non-request/response test. Units: Bytes.
+<br><dt><code>REQUEST_SIZE</code><dd>This will display the size of the requests netperf sent in a
+request-response test. Units: Bytes.
+<br><dt><code>RESPONSE_SIZE</code><dd>This will display the size of the responses netserver sent in a
+request-response test. Units: Bytes.
+<br><dt><code>LOCAL_CPU_UTIL</code><dd>This will display the overall CPU utilization during the test as
+measured by netperf. Units: 0 to 100 percent.
+<br><dt><code>LOCAL_CPU_METHOD</code><dd>This will display the method used by netperf to measure CPU
+utilization. Units: single character denoting method.
+<br><dt><code>LOCAL_SD</code><dd>This will display the service demand, or units of CPU consumed per
+unit of work, as measured by netperf. Units: microseconds of CPU
+consumed per either KB (K==1024) of data transferred or request/response
+transaction.
+<br><dt><code>REMOTE_CPU_UTIL</code><dd>This will display the overall CPU utilization during the test as
+measured by netserver. Units: 0 to 100 percent.
+<br><dt><code>REMOTE_CPU_METHOD</code><dd>This will display the method used by netserver to measure CPU
+utilization. Units: single character denoting method.
+<br><dt><code>REMOTE_SD</code><dd>This will display the service demand, or units of CPU consumed per
+unit of work, as measured by netserver. Units: microseconds of CPU
+consumed per either KB (K==1024) of data transferred or
+request/response transaction.
+<br><dt><code>SD_UNITS</code><dd>This will display the units for LOCAL_SD and REMOTE_SD.
+<br><dt><code>CONFIDENCE_LEVEL</code><dd>This will display the confidence level requested by the user either
+explicitly via the global <samp><span class="option">-I</span></samp> option, or implicitly via the
+global <samp><span class="option">-i</span></samp> option. The value will be either 95 or 99 if
+confidence intervals have been requested or 0 if they were not. Units:
+Percent.
+<br><dt><code>CONFIDENCE_INTERVAL</code><dd>This will display the width of the confidence interval requested
+either explicitly via the global <samp><span class="option">-I</span></samp> option or implicitly via
+the global <samp><span class="option">-i</span></samp> option. Units: Width in percent of mean value
+computed. A value of -1.0 means that confidence intervals were not requested.
+<br><dt><code>CONFIDENCE_ITERATION</code><dd>This will display the number of test iterations netperf undertook,
+perhaps while attempting to achieve the requested confidence interval
+and level. If confidence intervals were requested via the command line
+then the value will be between 3 and 30. If confidence intervals were
+not requested the value will be 1. Units: Iterations
+<br><dt><code>THROUGHPUT_CONFID</code><dd>This will display the width of the confidence interval actually
+achieved for throughput during the test. Units: Width of interval as
+percentage of reported throughput value.
+<br><dt><code>LOCAL_CPU_CONFID</code><dd>This will display the width of the confidence interval actually
+achieved for CPU utilization on the system running netperf during the
+test, if CPU utilization measurement was enabled. Units: Width of
+interval as percentage of reported CPU utilization.
+<br><dt><code>REMOTE_CPU_CONFID</code><dd>This will display the width of the confidence interval actually
+achieved for CPU utilization on the system running netserver during the
+test, if CPU utilization measurement was enabled. Units: Width of
+interval as percentage of reported CPU utilization.
+<br><dt><code>TRANSACTION_RATE</code><dd>This will display the transaction rate in transactions per second for
+a request/response test even if the user has requested a throughput
+units in number of bits or bytes per second via the global <samp><span class="option">-f</span></samp>
+option. It is undefined for a non-request/response test. Units:
+Transactions per second.
+<br><dt><code>RT_LATENCY</code><dd>This will display the average round-trip latency for a
+request/response test, accounting for number of transactions in flight
+at one time. It is undefined for a non-request/response test. Units:
+Microseconds per transaction.
+<br><dt><code>BURST_SIZE</code><dd>This will display the “burst size” or added transactions in flight
+in a request/response test as requested via a test-specific
+<samp><span class="option">-b</span></samp> option. The number of transactions in flight at one time
+will be one greater than this value. It is undefined for a
+non-request/response test. Units: added Transactions in flight.
+<br><dt><code>LOCAL_TRANSPORT_RETRANS</code><dd>This will display the number of retransmissions experienced on the
+data connection during the test as determined by netperf. A value of
+-1 means the attempt to determine the number of retransmissions failed
+or the concept was not valid for the given protocol or the mechanism
+is not known for the platform. A value of -2 means it was not
+attempted. As of version 2.5.0 the meanings of these values are in flux and
+subject to change. Units: number of retransmissions.
+<br><dt><code>REMOTE_TRANSPORT_RETRANS</code><dd>This will display the number of retransmissions experienced on the
+data connection during the test as determined by netserver. A value
+of -1 means the attempt to determine the number of retransmissions
+failed or the concept was not valid for the given protocol or the
+mechanism is not known for the platform. A value of -2 means it was
+not attempted. As of version 2.5.0 the meanings of these values are in flux
+and subject to change. Units: number of retransmissions.
+<br><dt><code>TRANSPORT_MSS</code><dd>This will display the Maximum Segment Size (aka MSS) or its equivalent
+for the protocol being used during the test. A value of -1 means
+either the concept of an MSS did not apply to the protocol being used,
+or there was an error in retrieving it. Units: Bytes.
+<br><dt><code>LOCAL_SEND_THROUGHPUT</code><dd>The throughput as measured by netperf for the successful “send”
+calls it made on the data connection. Units: as requested via the
+global <samp><span class="option">-f</span></samp> option and displayed via the THROUGHPUT_UNITS
+output selector.
+<br><dt><code>LOCAL_RECV_THROUGHPUT</code><dd>The throughput as measured by netperf for the successful “receive”
+calls it made on the data connection. Units: as requested via the
+global <samp><span class="option">-f</span></samp> option and displayed via the THROUGHPUT_UNITS
+output selector.
+<br><dt><code>REMOTE_SEND_THROUGHPUT</code><dd>The throughput as measured by netserver for the successful “send”
+calls it made on the data connection. Units: as requested via the
+global <samp><span class="option">-f</span></samp> option and displayed via the THROUGHPUT_UNITS
+output selector.
+<br><dt><code>REMOTE_RECV_THROUGHPUT</code><dd>The throughput as measured by netserver for the successful “receive”
+calls it made on the data connection. Units: as requested via the
+global <samp><span class="option">-f</span></samp> option and displayed via the THROUGHPUT_UNITS
+output selector.
+<br><dt><code>LOCAL_CPU_BIND</code><dd>The CPU to which netperf was bound, if at all, during the test. A
+value of -1 means that netperf was not explicitly bound to a CPU
+during the test. Units: CPU ID
+<br><dt><code>LOCAL_CPU_COUNT</code><dd>The number of CPUs (cores, threads) detected by netperf. Units: CPU count.
+<br><dt><code>LOCAL_CPU_PEAK_UTIL</code><dd>The utilization of the CPU most heavily utilized during the test, as
+measured by netperf. This can be used to see if any one CPU of a
+multi-CPU system was saturated even though the overall CPU utilization
+as reported by LOCAL_CPU_UTIL was low. Units: 0 to 100%
+<br><dt><code>LOCAL_CPU_PEAK_ID</code><dd>The id of the CPU most heavily utilized during the test as determined
+by netperf. Units: CPU ID.
+<br><dt><code>LOCAL_CPU_MODEL</code><dd>Model information for the processor(s) present on the system running
+netperf. Assumes all processors in the system (as perceived by
+netperf) on which netperf is running are the same model. Units: Text
+<br><dt><code>LOCAL_CPU_FREQUENCY</code><dd>The frequency of the processor(s) on the system running netperf, at
+the time netperf made the call. Assumes that all processors present
+in the system running netperf are running at the same
+frequency. Units: MHz
+<br><dt><code>REMOTE_CPU_BIND</code><dd>The CPU to which netserver was bound, if at all, during the test. A
+value of -1 means that netserver was not explicitly bound to a CPU
+during the test. Units: CPU ID
+<br><dt><code>REMOTE_CPU_COUNT</code><dd>The number of CPUs (cores, threads) detected by netserver. Units: CPU
+count.
+<br><dt><code>REMOTE_CPU_PEAK_UTIL</code><dd>The utilization of the CPU most heavily utilized during the test, as
+measured by netserver. This can be used to see if any one CPU of a
+multi-CPU system was saturated even though the overall CPU utilization
+as reported by REMOTE_CPU_UTIL was low. Units: 0 to 100%
+<br><dt><code>REMOTE_CPU_PEAK_ID</code><dd>The id of the CPU most heavily utilized during the test as determined
+by netserver. Units: CPU ID.
+<br><dt><code>REMOTE_CPU_MODEL</code><dd>Model information for the processor(s) present on the system running
+netserver. Assumes all processors in the system (as perceived by
+netserver) on which netserver is running are the same model. Units:
+Text
+<br><dt><code>REMOTE_CPU_FREQUENCY</code><dd>The frequency of the processor(s) on the system running netserver, at
+the time netserver made the call. Assumes that all processors present
+in the system running netserver are running at the same
+frequency. Units: MHz
+<br><dt><code>SOURCE_PORT</code><dd>The port ID/service name to which the data socket created by netperf
+was bound. A value of 0 means the data socket was not explicitly
+bound to a port number. Units: ASCII text.
+<br><dt><code>SOURCE_ADDR</code><dd>The name/address to which the data socket created by netperf was
+bound. A value of 0.0.0.0 means the data socket was not explicitly
+bound to an address. Units: ASCII text.
+<br><dt><code>SOURCE_FAMILY</code><dd>The address family to which the data socket created by netperf was
+bound. A value of 0 means the data socket was not explicitly bound to
+a given address family. Units: ASCII text.
+<br><dt><code>DEST_PORT</code><dd>The port ID to which the data socket created by netserver was bound. A
+value of 0 means the data socket was not explicitly bound to a port
+number. Units: ASCII text.
+<br><dt><code>DEST_ADDR</code><dd>The name/address of the data socket created by netserver. Units:
+ASCII text.
+<br><dt><code>DEST_FAMILY</code><dd>The address family to which the data socket created by netserver was
+bound. A value of 0 means the data socket was not explicitly bound to
+a given address family. Units: ASCII text.
+<br><dt><code>LOCAL_SEND_CALLS</code><dd>The number of successful “send” calls made by netperf against its
+data socket. Units: Calls.
+<br><dt><code>LOCAL_RECV_CALLS</code><dd>The number of successful “receive” calls made by netperf against its
+data socket. Units: Calls.
+<br><dt><code>LOCAL_BYTES_PER_RECV</code><dd>The average number of bytes per “receive” call made by netperf
+against its data socket. Units: Bytes.
+<br><dt><code>LOCAL_BYTES_PER_SEND</code><dd>The average number of bytes per “send” call made by netperf against
+its data socket. Units: Bytes.
+<br><dt><code>LOCAL_BYTES_SENT</code><dd>The number of bytes successfully sent by netperf through its data
+socket. Units: Bytes.
+<br><dt><code>LOCAL_BYTES_RECVD</code><dd>The number of bytes successfully received by netperf through its data
+socket. Units: Bytes.
+<br><dt><code>LOCAL_BYTES_XFERD</code><dd>The sum of bytes sent and received by netperf through its data
+socket. Units: Bytes.
+<br><dt><code>LOCAL_SEND_OFFSET</code><dd>The offset from the alignment of the buffers passed by netperf in its
+“send” calls. Specified via the global <samp><span class="option">-o</span></samp> option and
+defaults to 0. Units: Bytes.
+<br><dt><code>LOCAL_RECV_OFFSET</code><dd>The offset from the alignment of the buffers passed by netperf in its
+“receive” calls. Specified via the global <samp><span class="option">-o</span></samp> option and
+defaults to 0. Units: Bytes.
+<br><dt><code>LOCAL_SEND_ALIGN</code><dd>The alignment of the buffers passed by netperf in its “send” calls
+as specified via the global <samp><span class="option">-a</span></samp> option. Defaults to 8. Units:
+Bytes.
+<br><dt><code>LOCAL_RECV_ALIGN</code><dd>The alignment of the buffers passed by netperf in its “receive”
+calls as specified via the global <samp><span class="option">-a</span></samp> option. Defaults to
+8. Units: Bytes.
+<br><dt><code>LOCAL_SEND_WIDTH</code><dd>The “width” of the ring of buffers through which netperf cycles as
+it makes its “send” calls. Defaults to one more than the local send
+socket buffer size divided by the send size as determined at the time
+the data socket is created. Can be used to make netperf more processor
+data cache unfriendly. Units: number of buffers.
+<br><dt><code>LOCAL_RECV_WIDTH</code><dd>The “width” of the ring of buffers through which netperf cycles as
+it makes its “receive” calls. Defaults to one more than the local
+receive socket buffer size divided by the receive size as determined
+at the time the data socket is created. Can be used to make netperf
+more processor data cache unfriendly. Units: number of buffers.
+<br><dt><code>LOCAL_SEND_DIRTY_COUNT</code><dd>The number of bytes to “dirty” (write to) before netperf makes a
+“send” call. Specified via the global <samp><span class="option">-k</span></samp> option, which
+requires that --enable-dirty=yes was specified with the configure
+command prior to building netperf. Units: Bytes.
+<br><dt><code>LOCAL_RECV_DIRTY_COUNT</code><dd>The number of bytes to “dirty” (write to) before netperf makes a
+“recv” call. Specified via the global <samp><span class="option">-k</span></samp> option which
+requires that --enable-dirty was specified with the configure command
+prior to building netperf. Units: Bytes.
+<br><dt><code>LOCAL_RECV_CLEAN_COUNT</code><dd>The number of bytes netperf should read “cleanly” before making a
+“receive” call. Specified via the global <samp><span class="option">-k</span></samp> option which
+requires that --enable-dirty was specified with the configure command
+prior to building netperf. Clean reads start where dirty writes ended.
+Units: Bytes.
+<br><dt><code>LOCAL_NODELAY</code><dd>Indicates whether or not setting the test protocol-specific “no
+delay” (eg TCP_NODELAY) option on the data socket used by netperf was
+requested by the test-specific <samp><span class="option">-D</span></samp> option and
+successful. Units: 0 means no, 1 means yes.
+<br><dt><code>LOCAL_CORK</code><dd>Indicates whether or not TCP_CORK was set on the data socket used by
+netperf as requested via the test-specific <samp><span class="option">-C</span></samp> option. 1 means
+yes, 0 means no/not applicable.
+<br><dt><code>REMOTE_SEND_CALLS</code><br><dt><code>REMOTE_RECV_CALLS</code><br><dt><code>REMOTE_BYTES_PER_RECV</code><br><dt><code>REMOTE_BYTES_PER_SEND</code><br><dt><code>REMOTE_BYTES_SENT</code><br><dt><code>REMOTE_BYTES_RECVD</code><br><dt><code>REMOTE_BYTES_XFERD</code><br><dt><code>REMOTE_SEND_OFFSET</code><br><dt><code>REMOTE_RECV_OFFSET</code><br><dt><code>REMOTE_SEND_ALIGN</code><br><dt><code>REMOTE_RECV_ALIGN</code><br><dt><code>REMOTE_SEND_WIDTH</code><br><dt><code>REMOTE_RECV_WIDTH</code><br><dt><code>REMOTE_SEND_DIRTY_COUNT</code><br><dt><code>REMOTE_RECV_DIRTY_COUNT</code><br><dt><code>REMOTE_RECV_CLEAN_COUNT</code><br><dt><code>REMOTE_NODELAY</code><br><dt><code>REMOTE_CORK</code><dd>These are all like their “LOCAL_” counterparts only for the
+netserver rather than netperf.
+<br><dt><code>LOCAL_SYSNAME</code><dd>The name of the OS (eg “Linux”) running on the system on which
+netperf was running. Units: ASCII Text
+<br><dt><code>LOCAL_SYSTEM_MODEL</code><dd>The model name of the system on which netperf was running. Units:
+ASCII Text.
+<br><dt><code>LOCAL_RELEASE</code><dd>The release name/number of the OS running on the system on which
+netperf was running. Units: ASCII Text
+<br><dt><code>LOCAL_VERSION</code><dd>The version number of the OS running on the system on which netperf was
+running. Units: ASCII Text
+<br><dt><code>LOCAL_MACHINE</code><dd>The machine architecture of the machine on which netperf was
+running. Units: ASCII Text.
+<br><dt><code>REMOTE_SYSNAME</code><br><dt><code>REMOTE_SYSTEM_MODEL</code><br><dt><code>REMOTE_RELEASE</code><br><dt><code>REMOTE_VERSION</code><br><dt><code>REMOTE_MACHINE</code><dd>These are all like their “LOCAL_” counterparts only for the
+netserver rather than netperf.
+<br><dt><code>LOCAL_INTERFACE_NAME</code><dd>The name of the probable egress interface through which the data
+connection went on the system running netperf. Example: eth0. Units:
+ASCII Text.
+<br><dt><code>LOCAL_INTERFACE_VENDOR</code><dd>The vendor ID of the probable egress interface through which traffic
+on the data connection went on the system running netperf. Units:
+Hexadecimal IDs as might be found in a <samp><span class="file">pci.ids</span></samp> file or at
+<a href="http://pciids.sourceforge.net/">the PCI ID Repository</a>.
+<br><dt><code>LOCAL_INTERFACE_DEVICE</code><dd>The device ID of the probable egress interface through which traffic
+on the data connection went on the system running netperf. Units:
+Hexadecimal IDs as might be found in a <samp><span class="file">pci.ids</span></samp> file or at
+<a href="http://pciids.sourceforge.net/">the PCI ID Repository</a>.
+<br><dt><code>LOCAL_INTERFACE_SUBVENDOR</code><dd>The sub-vendor ID of the probable egress interface through which
+traffic on the data connection went on the system running
+netperf. Units: Hexadecimal IDs as might be found in a <samp><span class="file">pci.ids</span></samp>
+file or at <a href="http://pciids.sourceforge.net/">the PCI ID Repository</a>.
+<br><dt><code>LOCAL_INTERFACE_SUBDEVICE</code><dd>The sub-device ID of the probable egress interface through which
+traffic on the data connection went on the system running
+netperf. Units: Hexadecimal IDs as might be found in a <samp><span class="file">pci.ids</span></samp>
+file or at <a href="http://pciids.sourceforge.net/">the PCI ID Repository</a>.
+<br><dt><code>LOCAL_DRIVER_NAME</code><dd>The name of the driver used for the probable egress interface through
+which traffic on the data connection went on the system running
+netperf. Units: ASCII Text.
+<br><dt><code>LOCAL_DRIVER_VERSION</code><dd>The version string for the driver used for the probable egress
+interface through which traffic on the data connection went on the
+system running netperf. Units: ASCII Text.
+<br><dt><code>LOCAL_DRIVER_FIRMWARE</code><dd>The firmware version for the driver used for the probable egress
+interface through which traffic on the data connection went on the
+system running netperf. Units: ASCII Text.
+<br><dt><code>LOCAL_DRIVER_BUS</code><dd>The bus address of the probable egress interface through which traffic
+on the data connection went on the system running netperf. Units:
+ASCII Text.
+<br><dt><code>LOCAL_INTERFACE_SLOT</code><dd>The slot ID of the probable egress interface through which traffic
+on the data connection went on the system running netperf. Units:
+ASCII Text.
+<br><dt><code>REMOTE_INTERFACE_NAME</code><br><dt><code>REMOTE_INTERFACE_VENDOR</code><br><dt><code>REMOTE_INTERFACE_DEVICE</code><br><dt><code>REMOTE_INTERFACE_SUBVENDOR</code><br><dt><code>REMOTE_INTERFACE_SUBDEVICE</code><br><dt><code>REMOTE_DRIVER_NAME</code><br><dt><code>REMOTE_DRIVER_VERSION</code><br><dt><code>REMOTE_DRIVER_FIRMWARE</code><br><dt><code>REMOTE_DRIVER_BUS</code><br><dt><code>REMOTE_INTERFACE_SLOT</code><dd>These are all like their “LOCAL_” counterparts only for the
+netserver rather than netperf.
+<br><dt><code>LOCAL_INTERVAL_USECS</code><dd>The interval at which bursts of operations (sends, receives,
+transactions) were attempted by netperf. Specified by the
+global <samp><span class="option">-w</span></samp> option which requires --enable-intervals to have
+been specified with the configure command prior to building
+netperf. Units: Microseconds (though specified by default in
+milliseconds on the command line)
+<br><dt><code>LOCAL_INTERVAL_BURST</code><dd>The number of operations (sends, receives, transactions depending on
+the test) which were attempted by netperf each LOCAL_INTERVAL_USECS
+units of time. Specified by the global <samp><span class="option">-b</span></samp> option which
+requires --enable-intervals to have been specified with the configure
+command prior to building netperf. Units: number of operations per burst.
+<br><dt><code>REMOTE_INTERVAL_USECS</code><dd>The interval at which bursts of operations (sends, receives,
+transactions) were attempted by netserver. Specified by the
+global <samp><span class="option">-w</span></samp> option which requires --enable-intervals to have
+been specified with the configure command prior to building
+netperf. Units: Microseconds (though specified by default in
+milliseconds on the command line).
+<br><dt><code>REMOTE_INTERVAL_BURST</code><dd>The number of operations (sends, receives, transactions depending on
+the test) which were attempted by netserver each REMOTE_INTERVAL_USECS
+units of time. Specified by the global <samp><span class="option">-b</span></samp> option which
+requires --enable-intervals to have been specified with the configure
+command prior to building netperf. Units: number of operations per burst.
+<br><dt><code>LOCAL_SECURITY_TYPE_ID</code><br><dt><code>LOCAL_SECURITY_TYPE</code><br><dt><code>LOCAL_SECURITY_ENABLED_NUM</code><br><dt><code>LOCAL_SECURITY_ENABLED</code><br><dt><code>LOCAL_SECURITY_SPECIFIC</code><br><dt><code>REMOTE_SECURITY_TYPE_ID</code><br><dt><code>REMOTE_SECURITY_TYPE</code><br><dt><code>REMOTE_SECURITY_ENABLED_NUM</code><br><dt><code>REMOTE_SECURITY_ENABLED</code><br><dt><code>REMOTE_SECURITY_SPECIFIC</code><dd>Information describing the security mechanisms (eg SELinux), if
+any, that were enabled on the systems during the test.
+<br><dt><code>RESULT_BRAND</code><dd>The string specified by the user with the global <samp><span class="option">-B</span></samp>
+option. Units: ASCII Text.
+<br><dt><code>UUID</code><dd>The universally unique identifier associated with this test, either
+generated automagically by netperf, or passed to netperf via a
+test-specific <samp><span class="option">-u</span></samp> option. Note: Future versions may make this
+a global command-line option. Units: ASCII Text.
+<br><dt><code>MIN_LATENCY</code><dd>The minimum “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>MAX_LATENCY</code><dd>The maximum “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>P50_LATENCY</code><dd>The 50th percentile value of “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>P90_LATENCY</code><dd>The 90th percentile value of “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>P99_LATENCY</code><dd>The 99th percentile value of “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>MEAN_LATENCY</code><dd>The average “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>STDDEV_LATENCY</code><dd>The standard deviation of “latency” or operation time (send, receive or
+request/response exchange depending on the test) as measured on the
+netperf side when the global <samp><span class="option">-j</span></samp> option was specified. Units:
+Microseconds.
+<br><dt><code>COMMAND_LINE</code><dd>The full command line used when invoking netperf. Units: ASCII Text.
+<br><dt><code>OUTPUT_END</code><dd>While emitted with the list of output selectors, it is ignored when
+specified as an output selector.
+</dl>
+
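<p>The selectors above are usually consumed programmatically via the <samp><span class="option">-k</span></samp> keyval output. As an illustrative (not part of netperf itself) sketch, the keyval lines can be parsed in Python; the sample lines below mirror the migrated TCP_RR example shown earlier in this document:

```python
# Illustrative sketch: parse netperf "-k" keyval output into a dict.
# In real use the text would come from e.g.
# subprocess.run(["netperf", "-t", "tcp_rr", "--", "-k", "..."],
#                capture_output=True, text=True).stdout
sample = """\
THROUGHPUT=60074.74
THROUGHPUT_UNITS=Trans/s
"""

def parse_keyval(text):
    """Split KEY=VALUE lines, ignoring banner lines without an '='."""
    results = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            results[key.strip()] = value.strip()
    return results

parsed = parse_keyval(sample)
print(parsed["THROUGHPUT_UNITS"])  # Trans/s
```

Since the test banner contains no <code>=</code> character, it is skipped automatically; all selector values arrive as strings and must be converted by the caller as needed.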
+<div class="node">
<a name="Other-Netperf-Tests"></a>
<p><hr>
Next: <a rel="next" accesskey="n" href="#Address-Resolution">Address Resolution</a>,
Modified: trunk/doc/netperf.info
===================================================================
--- trunk/doc/netperf.info 2011-06-24 00:15:14 UTC (rev 396)
+++ trunk/doc/netperf.info 2011-06-24 21:21:30 UTC (rev 397)
@@ -288,6 +288,13 @@
command and the omni tests will not be compiled-in and the classic
tests will not be migrated.
+ Starting with version 2.5.0, netperf will include the "burst mode"
+functionality in a default compilation of the bits. If you encounter
+problems with this, please first attempt to obtain help via
+<netperf-talk at netperf.org> or <netperf-feedback at netperf.org>. If that
+is unsuccessful, you can add a `--enable-burst=no' to the configure
+command and the burst mode functionality will not be compiled-in.
+
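For reference, disabling burst mode at configure time looks like the
following (assuming a standard autoconf-style build from the top of the
netperf source tree):

```shell
# Illustrative: build netperf >= 2.5.0 without burst-mode support.
./configure --enable-burst=no
make
```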
On some platforms, it may be necessary to precede the configure
command with a CFLAGS and/or LIBS variable as the netperf configure
script is not yet smart enough to set them itself. Whenever possible,
@@ -1503,12 +1510,14 @@
no opportunity to reserve space for headers and so a packet will be
contained in two or more buffers.
- The *note global `-F' option: Global Options. is required for this
-test and it must specify a file of at least the size of the send ring
-(*Note the global `-W' option: Global Options.) multiplied by the send
-size (*Note the test-specific `-m' option: Options common to TCP UDP
-and SCTP tests.). All other TCP-specific options are available and
-optional.
+ As of some time before version 2.5.0, the *note global `-F' option:
+Global Options. is no longer required for this test. If it is not
+specified, netperf will create a temporary file, which it will delete
+at the end of the test. If the `-F' option is specified it must
+reference a file of at least the size of the send ring (*Note the
+global `-W' option: Global Options.) multiplied by the send size (*Note
+the test-specific `-m' option: Options common to TCP UDP and SCTP
+tests.). All other TCP-specific options remain available and optional.
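As a quick check of that sizing rule, the minimum file size is the send
ring width multiplied by the send size; a small illustrative Python
calculation (the ring width of 13 and 128 KB send size are example
settings, not netperf defaults):

```python
# Illustrative: minimum size of the file named with the global -F option,
# per the rule above: send ring width (global -W) times the send size
# (test-specific -m).
def min_sendfile_size(send_width, send_size_bytes):
    return send_width * send_size_bytes

# e.g. a send ring of 13 buffers of 128 KB each:
print(min_sendfile_size(13, 128 * 1024))  # 1703936
```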
In this first example:
$ netperf -H lag -F ../src/netperf -t TCP_SENDFILE -- -s 128K -S 128K
@@ -1518,7 +1527,7 @@
we see what happens when the file is too small. Here:
- $ ../src/netperf -H lag -F /boot/vmlinuz-2.6.8-1-686 -t TCP_SENDFILE -- -s 128K -S 128K
+ $ netperf -H lag -F /boot/vmlinuz-2.6.8-1-686 -t TCP_SENDFILE -- -s 128K -S 128K
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to lag.hpl.hp.com (15.4.89.214) port 0 AF_INET
Recv Send Send
Socket Socket Message Elapsed
@@ -2674,7 +2683,7 @@
the test banners will include the word "MIGRATED" at the beginning as
in:
- $ ../src/netperf
+ $ netperf
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
Recv Send Send
Socket Socket Message Elapsed
@@ -2705,12 +2714,12 @@
test-specific option with a value of "MIN_LATENCY,MAX_LATENCY" with a
migrated TCP_RR test one will see:
- $ ../src/netperf -t tcp_rr -- -k THROUGHPUT,THROUGHPUT_UNITS
+ $ netperf -t tcp_rr -- -k THROUGHPUT,THROUGHPUT_UNITS
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
THROUGHPUT=60074.74
THROUGHPUT_UNITS=Trans/s
rather than:
- $ ../src/netperf -t tcp_rr
+ $ netperf -t tcp_rr
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET : demo
Local /Remote
Socket Size Request Resp. Elapsed Trans.
@@ -2770,7 +2779,738 @@
selects a default set of output selectors inspired by classic
netperf output.
+* Menu:
+
+* Omni Output Selectors::
+
+File: netperf.info, Node: Omni Output Selectors, Prev: Omni Output Selection, Up: Omni Output Selection
+
+9.3.1 Omni Output Selectors
+---------------------------
+
+As of version 2.5.0 the output selectors are:
+
+`OUTPUT_NONE'
+ This is essentially a null output. For `-k' output it will simply
+ add a line that reads "OUTPUT_NONE=" to the output. For `-o' it
+ will cause an empty "column" to be included. For `-O' output it
+ will cause extra spaces to separate "real" output.
+
+`SOCKET_TYPE'
+ This will cause the socket type (eg SOCK_STREAM, SOCK_DGRAM) for
+ the data connection to be output.
+
+`PROTOCOL'
+ This will cause the protocol used for the data connection to be
+ displayed.
+
+`DIRECTION'
+ This will display the data flow direction relative to the netperf
+ process. Units: Send or Recv for a unidirectional bulk-transfer
+ test, or Send|Recv for a request/response test.
+
+`ELAPSED_TIME'
+ This will display the elapsed time in seconds for the test.
+
+`THROUGHPUT'
+     This will display the throughput for the test. Units: As requested
+ via the global `-f' option and displayed by the THROUGHPUT_UNITS
+ output selector.
+
+`THROUGHPUT_UNITS'
+ This will display the units for what is displayed by the
+ `THROUGHPUT' output selector.
+
+`LSS_SIZE_REQ'
+ This will display the local (netperf) send socket buffer size (aka
+ SO_SNDBUF) requested via the command line. Units: Bytes.
+
+`LSS_SIZE'
+ This will display the local (netperf) send socket buffer size
+ (SO_SNDBUF) immediately after the data connection socket was
+ created. Peculiarities of different networking stacks may lead to
+ this differing from the size requested via the command line.
+ Units: Bytes.
+
+`LSS_SIZE_END'
+ This will display the local (netperf) send socket buffer size
+ (SO_SNDBUF) immediately before the data connection socket is
+ closed. Peculiarities of different networking stacks may lead
+ this to differ from the size requested via the command line and/or
+ the size immediately after the data connection socket was created.
+ Units: Bytes.
+
+`LSR_SIZE_REQ'
+ This will display the local (netperf) receive socket buffer size
+ (aka SO_RCVBUF) requested via the command line. Units: Bytes.
+
+`LSR_SIZE'
+ This will display the local (netperf) receive socket buffer size
+ (SO_RCVBUF) immediately after the data connection socket was
+ created. Peculiarities of different networking stacks may lead to
+ this differing from the size requested via the command line.
+ Units: Bytes.
+
+`LSR_SIZE_END'
+ This will display the local (netperf) receive socket buffer size
+ (SO_RCVBUF) immediately before the data connection socket is
+ closed. Peculiarities of different networking stacks may lead
+ this to differ from the size requested via the command line and/or
+ the size immediately after the data connection socket was created.
+ Units: Bytes.
+
+`RSS_SIZE_REQ'
+ This will display the remote (netserver) send socket buffer size
+ (aka SO_SNDBUF) requested via the command line. Units: Bytes.
+
+`RSS_SIZE'
+ This will display the remote (netserver) send socket buffer size
+ (SO_SNDBUF) immediately after the data connection socket was
+ created. Peculiarities of different networking stacks may lead to
+ this differing from the size requested via the command line.
+ Units: Bytes.
+
+`RSS_SIZE_END'
+ This will display the remote (netserver) send socket buffer size
+ (SO_SNDBUF) immediately before the data connection socket is
+ closed. Peculiarities of different networking stacks may lead
+ this to differ from the size requested via the command line and/or
+ the size immediately after the data connection socket was created.
+ Units: Bytes.
+
+`RSR_SIZE_REQ'
+ This will display the remote (netserver) receive socket buffer
+ size (aka SO_RCVBUF) requested via the command line. Units: Bytes.
+
+`RSR_SIZE'
+ This will display the remote (netserver) receive socket buffer size
+ (SO_RCVBUF) immediately after the data connection socket was
+ created. Peculiarities of different networking stacks may lead to
+ this differing from the size requested via the command line.
+ Units: Bytes.
+
+`RSR_SIZE_END'
+ This will display the remote (netserver) receive socket buffer size
+ (SO_RCVBUF) immediately before the data connection socket is
+ closed. Peculiarities of different networking stacks may lead
+ this to differ from the size requested via the command line and/or
+ the size immediately after the data connection socket was created.
+ Units: Bytes.
+
+`LOCAL_SEND_SIZE'
+ This will display the size of the buffers netperf passed in any
+ "send" calls it made on the data connection for a
+ non-request/response test. Units: Bytes.
+
+`LOCAL_RECV_SIZE'
+ This will display the size of the buffers netperf passed in any
+ "receive" calls it made on the data connection for a
+ non-request/response test. Units: Bytes.
+
+`REMOTE_SEND_SIZE'
+ This will display the size of the buffers netserver passed in any
+ "send" calls it made on the data connection for a
+ non-request/response test. Units: Bytes.
+
+`REMOTE_RECV_SIZE'
+ This will display the size of the buffers netserver passed in any
+ "receive" calls it made on the data connection for a
+ non-request/response test. Units: Bytes.
+
+`REQUEST_SIZE'
+ This will display the size of the requests netperf sent in a
+ request-response test. Units: Bytes.
+
+`RESPONSE_SIZE'
+ This will display the size of the responses netserver sent in a
+ request-response test. Units: Bytes.
+
+`LOCAL_CPU_UTIL'
+ This will display the overall CPU utilization during the test as
+ measured by netperf. Units: 0 to 100 percent.
+
+`LOCAL_CPU_METHOD'
+ This will display the method used by netperf to measure CPU
+ utilization. Units: single character denoting method.
+
+`LOCAL_SD'
+ This will display the service demand, or units of CPU consumed per
+     unit of work, as measured by netperf. Units: microseconds of CPU
+ consumed per either KB (K==1024) of data transferred or
+ request/response transaction.
+
+`REMOTE_CPU_UTIL'
+ This will display the overall CPU utilization during the test as
+     measured by netserver. Units: 0 to 100 percent.
+
+`REMOTE_CPU_METHOD'
+ This will display the method used by netserver to measure CPU
+ utilization. Units: single character denoting method.
+
+`REMOTE_SD'
+ This will display the service demand, or units of CPU consumed per
+ unit of work, as measured by netserver. Units: microseconds of CPU
+     consumed per either KB (K==1024) of data transferred or
+ request/response transaction.
+
+`SD_UNITS'
+     This will display the units for LOCAL_SD and REMOTE_SD.
+
+`CONFIDENCE_LEVEL'
+ This will display the confidence level requested by the user either
+ explicitly via the global `-I' option, or implicitly via the
+ global `-i' option. The value will be either 95 or 99 if
+ confidence intervals have been requested or 0 if they were not.
+ Units: Percent
+
+`CONFIDENCE_INTERVAL'
+ This will display the width of the confidence interval requested
+ either explicitly via the global `-I' option or implicitly via the
+ global `-i' option. Units: Width in percent of mean value
+ computed. A value of -1.0 means that confidence intervals were not
+ requested.
+
+`CONFIDENCE_ITERATION'
+ This will display the number of test iterations netperf undertook,
+ perhaps while attempting to achieve the requested confidence
+ interval and level. If confidence intervals were requested via the
+ command line then the value will be between 3 and 30. If
+ confidence intervals were not requested the value will be 1.
+ Units: Iterations
+
+`THROUGHPUT_CONFID'
+ This will display the width of the confidence interval actually
+ achieved for throughput during the test. Units: Width of interval
+ as percentage of reported throughput value.
+
+`LOCAL_CPU_CONFID'
+ This will display the width of the confidence interval actually
+ achieved for CPU utilization on the system running netperf during
+ the test, if CPU utilization measurement was enabled. Units:
+ Width of interval as percentage of reported CPU utilization.
+
+`REMOTE_CPU_CONFID'
+ This will display the width of the confidence interval actually
+     achieved for CPU utilization on the system running netserver during
+ the test, if CPU utilization measurement was enabled. Units: Width
+ of interval as percentage of reported CPU utilization.
+
+`TRANSACTION_RATE'
+ This will display the transaction rate in transactions per second
+     for a request/response test even if the user has requested
+     throughput units of bits or bytes per second via the
+ global `-f' option. It is undefined for a non-request/response
+ test. Units: Transactions per second.
+
+`RT_LATENCY'
+ This will display the average round-trip latency for a
+ request/response test, accounting for number of transactions in
+ flight at one time. It is undefined for a non-request/response
+ test. Units: Microseconds per transaction
+
+`BURST_SIZE'
+ This will display the "burst size" or added transactions in flight
+ in a request/response test as requested via a test-specific `-b'
+ option. The number of transactions in flight at one time will be
+ one greater than this value. It is undefined for a
+ non-request/response test. Units: added Transactions in flight.
+
+`LOCAL_TRANSPORT_RETRANS'
+ This will display the number of retransmissions experienced on the
+ data connection during the test as determined by netperf. A value
+ of -1 means the attempt to determine the number of retransmissions
+ failed or the concept was not valid for the given protocol or the
+ mechanism is not known for the platform. A value of -2 means it
+     was not attempted. As of version 2.5.0 the meaning of these
+     values is in flux and subject to change. Units: number of retransmissions.
+
+`REMOTE_TRANSPORT_RETRANS'
+ This will display the number of retransmissions experienced on the
+ data connection during the test as determined by netserver. A
+ value of -1 means the attempt to determine the number of
+ retransmissions failed or the concept was not valid for the given
+ protocol or the mechanism is not known for the platform. A value
+ of -2 means it was not attempted. As of version 2.5.0 the meaning
+     of these values is in flux and subject to change. Units: number of
+ retransmissions.
+
+`TRANSPORT_MSS'
+ This will display the Maximum Segment Size (aka MSS) or its
+ equivalent for the protocol being used during the test. A value
+ of -1 means either the concept of an MSS did not apply to the
+ protocol being used, or there was an error in retrieving it.
+ Units: Bytes.
+
+`LOCAL_SEND_THROUGHPUT'
+ The throughput as measured by netperf for the successful "send"
+ calls it made on the data connection. Units: as requested via the
+ global `-f' option and displayed via the THROUGHPUT_UNITS output
+ selector.
+
+`LOCAL_RECV_THROUGHPUT'
+ The throughput as measured by netperf for the successful "receive"
+ calls it made on the data connection. Units: as requested via the
+ global `-f' option and displayed via the THROUGHPUT_UNITS output
+ selector.
+
+`REMOTE_SEND_THROUGHPUT'
+ The throughput as measured by netserver for the successful "send"
+ calls it made on the data connection. Units: as requested via the
+ global `-f' option and displayed via the THROUGHPUT_UNITS output
+ selector.
+
+`REMOTE_RECV_THROUGHPUT'
+ The throughput as measured by netserver for the successful
+ "receive" calls it made on the data connection. Units: as
+ requested via the global `-f' option and displayed via the
+ THROUGHPUT_UNITS output selector.
+
+`LOCAL_CPU_BIND'
+ The CPU to which netperf was bound, if at all, during the test. A
+ value of -1 means that netperf was not explicitly bound to a CPU
+ during the test. Units: CPU ID
+
+`LOCAL_CPU_COUNT'
+ The number of CPUs (cores, threads) detected by netperf. Units:
+ CPU count.
+
+`LOCAL_CPU_PEAK_UTIL'
+ The utilization of the CPU most heavily utilized during the test,
+ as measured by netperf. This can be used to see if any one CPU of a
+ multi-CPU system was saturated even though the overall CPU
+ utilization as reported by LOCAL_CPU_UTIL was low. Units: 0 to 100%
+
+`LOCAL_CPU_PEAK_ID'
+ The id of the CPU most heavily utilized during the test as
+ determined by netperf. Units: CPU ID.
+
+`LOCAL_CPU_MODEL'
+ Model information for the processor(s) present on the system
+ running netperf. Assumes all processors in the system (as
+ perceived by netperf) on which netperf is running are the same
+ model. Units: Text
+
+`LOCAL_CPU_FREQUENCY'
+ The frequency of the processor(s) on the system running netperf, at
+ the time netperf made the call. Assumes that all processors
+ present in the system running netperf are running at the same
+ frequency. Units: MHz
+
+`REMOTE_CPU_BIND'
+ The CPU to which netserver was bound, if at all, during the test. A
+     value of -1 means that netserver was not explicitly bound to a CPU
+ during the test. Units: CPU ID
+
+`REMOTE_CPU_COUNT'
+ The number of CPUs (cores, threads) detected by netserver. Units:
+ CPU count.
+
+`REMOTE_CPU_PEAK_UTIL'
+ The utilization of the CPU most heavily utilized during the test,
+ as measured by netserver. This can be used to see if any one CPU
+ of a multi-CPU system was saturated even though the overall CPU
+ utilization as reported by LOCAL_CPU_UTIL was low. Units: 0 to 100%
+
+`REMOTE_CPU_PEAK_ID'
+ The id of the CPU most heavily utilized during the test as
+ determined by netserver. Units: CPU ID.
+
+`REMOTE_CPU_MODEL'
+ Model information for the processor(s) present on the system
+ running netserver. Assumes all processors in the system (as
+ perceived by netserver) on which netserver is running are the same
+ model. Units: Text
+
+`REMOTE_CPU_FREQUENCY'
+ The frequency of the processor(s) on the system running netserver,
+ at the time netserver made the call. Assumes that all processors
+ present in the system running netserver are running at the same
+ frequency. Units: MHz
+
+`SOURCE_PORT'
+ The port ID/service name to which the data socket created by
+ netperf was bound. A value of 0 means the data socket was not
+ explicitly bound to a port number. Units: ASCII text.
+
+`SOURCE_ADDR'
+ The name/address to which the data socket created by netperf was
+ bound. A value of 0.0.0.0 means the data socket was not explicitly
+ bound to an address. Units: ASCII text.
+
+`SOURCE_FAMILY'
+ The address family to which the data socket created by netperf was
+ bound. A value of 0 means the data socket was not explicitly
+ bound to a given address family. Units: ASCII text.
+
+`DEST_PORT'
+ The port ID to which the data socket created by netserver was
+ bound. A value of 0 means the data socket was not explicitly bound
+ to a port number. Units: ASCII text.
+
+`DEST_ADDR'
+ The name/address of the data socket created by netserver. Units:
+ ASCII text.
+
+`DEST_FAMILY'
+ The address family to which the data socket created by netserver
+ was bound. A value of 0 means the data socket was not explicitly
+ bound to a given address family. Units: ASCII text.
+
+`LOCAL_SEND_CALLS'
+ The number of successful "send" calls made by netperf against its
+ data socket. Units: Calls.
+
+`LOCAL_RECV_CALLS'
+ The number of successful "receive" calls made by netperf against
+ its data socket. Units: Calls.
+
+`LOCAL_BYTES_PER_RECV'
+ The average number of bytes per "receive" call made by netperf
+ against its data socket. Units: Bytes.
+
+`LOCAL_BYTES_PER_SEND'
+ The average number of bytes per "send" call made by netperf against
+ its data socket. Units: Bytes.
+
+`LOCAL_BYTES_SENT'
+ The number of bytes successfully sent by netperf through its data
+ socket. Units: Bytes.
+
+`LOCAL_BYTES_RECVD'
+ The number of bytes successfully received by netperf through its
+ data socket. Units: Bytes.
+
+`LOCAL_BYTES_XFERD'
+ The sum of bytes sent and received by netperf through its data
+ socket. Units: Bytes.
+
+`LOCAL_SEND_OFFSET'
+ The offset from the alignment of the buffers passed by netperf in
+ its "send" calls. Specified via the global `-o' option and
+ defaults to 0. Units: Bytes.
+
+`LOCAL_RECV_OFFSET'
+ The offset from the alignment of the buffers passed by netperf in
+ its "receive" calls. Specified via the global `-o' option and
+ defaults to 0. Units: Bytes.
+
+`LOCAL_SEND_ALIGN'
+ The alignment of the buffers passed by netperf in its "send" calls
+ as specified via the global `-a' option. Defaults to 8. Units:
+ Bytes.
+
+`LOCAL_RECV_ALIGN'
+ The alignment of the buffers passed by netperf in its "receive"
+ calls as specified via the global `-a' option. Defaults to 8.
+ Units: Bytes.
+
+`LOCAL_SEND_WIDTH'
+ The "width" of the ring of buffers through which netperf cycles as
+ it makes its "send" calls. Defaults to one more than the local
+ send socket buffer size divided by the send size as determined at
+ the time the data socket is created. Can be used to make netperf
+     more processor data cache unfriendly. Units: number of buffers.
+
+`LOCAL_RECV_WIDTH'
+ The "width" of the ring of buffers through which netperf cycles as
+ it makes its "receive" calls. Defaults to one more than the local
+ receive socket buffer size divided by the receive size as
+ determined at the time the data socket is created. Can be used to
+     make netperf more processor data cache unfriendly. Units: number of
+ buffers.
+
+`LOCAL_SEND_DIRTY_COUNT'
+ The number of bytes to "dirty" (write to) before netperf makes a
+ "send" call. Specified via the global `-k' option, which requires
+     that --enable-dirty=yes was specified with the configure command
+ prior to building netperf. Units: Bytes.
+
+`LOCAL_RECV_DIRTY_COUNT'
+ The number of bytes to "dirty" (write to) before netperf makes a
+ "recv" call. Specified via the global `-k' option which requires
+     that --enable-dirty was specified with the configure command prior
+ to building netperf. Units: Bytes.
+
+`LOCAL_RECV_CLEAN_COUNT'
+ The number of bytes netperf should read "cleanly" before making a
+ "receive" call. Specified via the global `-k' option which
+     requires that --enable-dirty was specified with the configure
+     command prior to building netperf. Clean reads start where dirty
+     writes ended. Units: Bytes.
+
+`LOCAL_NODELAY'
+ Indicates whether or not setting the test protocol-specific "no
+ delay" (eg TCP_NODELAY) option on the data socket used by netperf
+ was requested by the test-specific `-D' option and successful.
+ Units: 0 means no, 1 means yes.
+
+`LOCAL_CORK'
+ Indicates whether or not TCP_CORK was set on the data socket used
+ by netperf as requested via the test-specific `-C' option. 1 means
+ yes, 0 means no/not applicable.
+
+`REMOTE_SEND_CALLS'
+
+`REMOTE_RECV_CALLS'
+
+`REMOTE_BYTES_PER_RECV'
+
+`REMOTE_BYTES_PER_SEND'
+
+`REMOTE_BYTES_SENT'
+
+`REMOTE_BYTES_RECVD'
+
+`REMOTE_BYTES_XFERD'
+
+`REMOTE_SEND_OFFSET'
+
+`REMOTE_RECV_OFFSET'
+
+`REMOTE_SEND_ALIGN'
+
+`REMOTE_RECV_ALIGN'
+
+`REMOTE_SEND_WIDTH'
+
+`REMOTE_RECV_WIDTH'
+
+`REMOTE_SEND_DIRTY_COUNT'
+
+`REMOTE_RECV_DIRTY_COUNT'
+
+`REMOTE_RECV_CLEAN_COUNT'
+
+`REMOTE_NODELAY'
+
+`REMOTE_CORK'
+ These are all like their "LOCAL_" counterparts only for the
+ netserver rather than netperf.
+
+`LOCAL_SYSNAME'
+ The name of the OS (eg "Linux") running on the system on which
+ netperf was running. Units: ASCII Text
+
+`LOCAL_SYSTEM_MODEL'
+ The model name of the system on which netperf was running. Units:
+ ASCII Text.
+
+`LOCAL_RELEASE'
+ The release name/number of the OS running on the system on which
+ netperf was running. Units: ASCII Text
+
+`LOCAL_VERSION'
+     The version number of the OS running on the system on which netperf
+ was running. Units: ASCII Text
+
+`LOCAL_MACHINE'
+ The machine architecture of the machine on which netperf was
+ running. Units: ASCII Text.
+
+`REMOTE_SYSNAME'
+
+`REMOTE_SYSTEM_MODEL'
+
+`REMOTE_RELEASE'
+
+`REMOTE_VERSION'
+
+`REMOTE_MACHINE'
+ These are all like their "LOCAL_" counterparts only for the
+ netserver rather than netperf.
+
+`LOCAL_INTERFACE_NAME'
+ The name of the probable egress interface through which the data
+ connection went on the system running netperf. Example: eth0.
+ Units: ASCII Text.
+
+`LOCAL_INTERFACE_VENDOR'
+ The vendor ID of the probable egress interface through which
+ traffic on the data connection went on the system running netperf.
+ Units: Hexadecimal IDs as might be found in a `pci.ids' file or at
+     the PCI ID Repository (http://pciids.sourceforge.net/).
+
+`LOCAL_INTERFACE_DEVICE'
+ The device ID of the probable egress interface through which
+ traffic on the data connection went on the system running netperf.
+ Units: Hexadecimal IDs as might be found in a `pci.ids' file or at
+     the PCI ID Repository (http://pciids.sourceforge.net/).
+
+`LOCAL_INTERFACE_SUBVENDOR'
+ The sub-vendor ID of the probable egress interface through which
+     traffic on the data connection went on the system running
+ netperf. Units: Hexadecimal IDs as might be found in a `pci.ids'
+     file or at the PCI ID Repository (http://pciids.sourceforge.net/).
+
+`LOCAL_INTERFACE_SUBDEVICE'
+ The sub-device ID of the probable egress interface through which
+ traffic on the data connection went on the system running netperf.
+ Units: Hexadecimal IDs as might be found in a `pci.ids' file or at
+     the PCI ID Repository (http://pciids.sourceforge.net/).
+
+`LOCAL_DRIVER_NAME'
+ The name of the driver used for the probable egress interface
+ through which traffic on the data connection went on the system
+ running netperf. Units: ASCII Text.
+
+`LOCAL_DRIVER_VERSION'
+ The version string for the driver used for the probable egress
+ interface through which traffic on the data connection went on the
+ system running netperf. Units: ASCII Text.
+
+`LOCAL_DRIVER_FIRMWARE'
+ The firmware version for the driver used for the probable egress
+ interface through which traffic on the data connection went on the
+ system running netperf. Units: ASCII Text.
+
+`LOCAL_DRIVER_BUS'
+ The bus address of the probable egress interface through which
+ traffic on the data connection went on the system running netperf.
+ Units: ASCII Text.
+
+`LOCAL_INTERFACE_SLOT'
+ The slot ID of the probable egress interface through which traffic
+ on the data connection went on the system running netperf. Units:
+ ASCII Text.
+
+`REMOTE_INTERFACE_NAME'
+
+`REMOTE_INTERFACE_VENDOR'
+
+`REMOTE_INTERFACE_DEVICE'
+
+`REMOTE_INTERFACE_SUBVENDOR'
+
+`REMOTE_INTERFACE_SUBDEVICE'
+
+`REMOTE_DRIVER_NAME'
+
+`REMOTE_DRIVER_VERSION'
+
+`REMOTE_DRIVER_FIRMWARE'
+
+`REMOTE_DRIVER_BUS'
+
+`REMOTE_INTERFACE_SLOT'
+ These are all like their "LOCAL_" counterparts only for the
+ netserver rather than netperf.
+
+`LOCAL_INTERVAL_USECS'
+ The interval at which bursts of operations (sends, receives,
+ transactions) were attempted by netperf. Specified by the global
+     `-w' option which requires --enable-intervals to have been
+ specified with the configure command prior to building netperf.
+ Units: Microseconds (though specified by default in milliseconds
+     on the command line).
+
+`LOCAL_INTERVAL_BURST'
+ The number of operations (sends, receives, transactions depending
+ on the test) which were attempted by netperf each
+ LOCAL_INTERVAL_USECS units of time. Specified by the global `-b'
+     option which requires --enable-intervals to have been specified
+ with the configure command prior to building netperf. Units:
+ number of operations per burst.
+
+`REMOTE_INTERVAL_USECS'
+ The interval at which bursts of operations (sends, receives,
+ transactions) were attempted by netserver. Specified by the
+     global `-w' option which requires --enable-intervals to have been
+ specified with the configure command prior to building netperf.
+ Units: Microseconds (though specified by default in milliseconds
+     on the command line).
+
+`REMOTE_INTERVAL_BURST'
+ The number of operations (sends, receives, transactions depending
+     on the test) which were attempted by netserver each
+     REMOTE_INTERVAL_USECS units of time. Specified by the global `-b'
+     option which requires --enable-intervals to have been specified
+ with the configure command prior to building netperf. Units:
+ number of operations per burst.
+
+`LOCAL_SECURITY_TYPE_ID'
+
+`LOCAL_SECURITY_TYPE'
+
+`LOCAL_SECURITY_ENABLED_NUM'
+
+`LOCAL_SECURITY_ENABLED'
+
+`LOCAL_SECURITY_SPECIFIC'
+
+`REMOTE_SECURITY_TYPE_ID'
+
+`REMOTE_SECURITY_TYPE'
+
+`REMOTE_SECURITY_ENABLED_NUM'
+
+`REMOTE_SECURITY_ENABLED'
+
+`REMOTE_SECURITY_SPECIFIC'
+     Information describing the security mechanisms (eg SELinux), if
+     any, that were enabled on the systems during the test.
+
+`RESULT_BRAND'
+ The string specified by the user with the global `-B' option.
+ Units: ASCII Text.
+
+`UUID'
+ The universally unique identifier associated with this test, either
+ generated automagically by netperf, or passed to netperf via a
+ test-specific `-u' option. Note: Future versions may make this a
+ global command-line option. Units: ASCII Text.
+
+`MIN_LATENCY'
+ The minimum "latency" or operation time (send, receive or
+ request/response exchange depending on the test) as measured on the
+ netperf side when the global `-j' option was specified. Units:
+ Microseconds.
+
+`MAX_LATENCY'
+ The maximum "latency" or operation time (send, receive or
+ request/response exchange depending on the test) as measured on the
+ netperf side when the global `-j' option was specified. Units:
+ Microseconds.
+
+`P50_LATENCY'
+ The 50th percentile value of "latency" or operation time (send,
+ receive or request/response exchange depending on the test) as
+ measured on the netperf side when the global `-j' option was
+ specified. Units: Microseconds.
+
+`P90_LATENCY'
+ The 90th percentile value of "latency" or operation time (send,
+ receive or request/response exchange depending on the test) as
+ measured on the netperf side when the global `-j' option was
+ specified. Units: Microseconds.
+
+`P99_LATENCY'
+ The 99th percentile value of "latency" or operation time (send,
+ receive or request/response exchange depending on the test) as
+ measured on the netperf side when the global `-j' option was
+ specified. Units: Microseconds.
+
+`MEAN_LATENCY'
+ The average "latency" or operation time (send, receive or
+ request/response exchange depending on the test) as measured on the
+ netperf side when the global `-j' option was specified. Units:
+ Microseconds.
+
+`STDDEV_LATENCY'
+ The standard deviation of "latency" or operation time (send,
+ receive or request/response exchange depending on the test) as
+ measured on the netperf side when the global `-j' option was
+ specified. Units: Microseconds.
+
+`COMMAND_LINE'
+ The full command line used when invoking netperf. Units: ASCII
+ Text.
+
+`OUTPUT_END'
+ While emitted with the list of output selectors, it is ignored when
+ specified as an output selector.
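+
+     As a hypothetical example (remotehost is a placeholder), the
+     latency selectors above could be requested from an omni
+     request/response test with the global `-j' option enabled:
+
+          $ netperf -H remotehost -t omni -j -- -d rr \
+              -o MIN_LATENCY,MEAN_LATENCY,P90_LATENCY,P99_LATENCY
+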
+
+
File: netperf.info, Node: Other Netperf Tests, Next: Address Resolution, Prev: The Omni Tests, Up: Top
10 Other Netperf Tests
@@ -2938,7 +3678,7 @@
* Aggregate Performance: Using Netperf to Measure Aggregate Performance.
(line 6)
* Bandwidth Limitation: Installing Netperf Bits.
- (line 55)
+ (line 62)
* Connection Latency: TCP_CC. (line 6)
* CPU Utilization: CPU Utilization. (line 6)
* Design of Netperf: The Design of Netperf.
@@ -2960,7 +3700,7 @@
* Latency, Request-Response: TCP_RR. (line 6)
* Limiting Bandwidth <1>: UDP_STREAM. (line 9)
* Limiting Bandwidth: Installing Netperf Bits.
- (line 55)
+ (line 62)
* Measuring Latency: TCP_RR. (line 6)
* Packet Loss: UDP_RR. (line 6)
* Port Reuse: TCP_CC. (line 13)
@@ -2982,9 +3722,9 @@
* --enable-dlpi, Configure: Installing Netperf Bits.
(line 28)
* --enable-histogram, Configure: Installing Netperf Bits.
- (line 55)
+ (line 62)
* --enable-intervals, Configure: Installing Netperf Bits.
- (line 55)
+ (line 62)
* --enable-omni, Configure: Installing Netperf Bits.
(line 34)
* --enable-sctp, Configure: Installing Netperf Bits.
@@ -3075,57 +3815,58 @@
Node: Installing Netperf5699
Node: Getting Netperf Bits7253
Node: Installing Netperf Bits9071
-Node: Verifying Installation17265
-Node: The Design of Netperf17969
-Node: CPU Utilization19565
-Node: CPU Utilization in a Virtual Guest28280
-Node: Global Command-line Options29291
-Node: Command-line Options Syntax29830
-Node: Global Options31212
-Node: Using Netperf to Measure Bulk Data Transfer51916
-Node: Issues in Bulk Transfer52581
-Node: Options common to TCP UDP and SCTP tests56732
-Node: TCP_STREAM63034
-Node: TCP_MAERTS66802
-Node: TCP_SENDFILE68039
-Node: UDP_STREAM70355
-Node: XTI_TCP_STREAM73791
-Node: XTI_UDP_STREAM74436
-Node: SCTP_STREAM75081
-Node: DLCO_STREAM75781
-Node: DLCL_STREAM77754
-Node: STREAM_STREAM78628
-Node: DG_STREAM79486
-Node: Using Netperf to Measure Request/Response80167
-Node: Issues in Request/Response82088
-Node: Options Common to TCP UDP and SCTP _RR tests84094
-Node: TCP_RR89073
-Node: TCP_CC91417
-Node: TCP_CRR93614
-Node: UDP_RR94660
-Node: XTI_TCP_RR96681
-Node: XTI_TCP_CC97264
-Node: XTI_TCP_CRR97430
-Node: XTI_UDP_RR97598
-Node: DLCL_RR98175
-Node: DLCO_RR98328
-Node: SCTP_RR98480
-Node: Using Netperf to Measure Aggregate Performance98616
-Node: Running Concurrent Netperf Tests99451
-Node: Using --enable-burst103343
-Node: Using Netperf to Measure Bidirectional Transfer109529
-Node: Bidirectional Transfer with Concurrent Tests110597
-Node: Bidirectional Transfer with TCP_RR112463
-Node: The Omni Tests114997
-Node: Native Omni Tests116044
-Node: Migrated Tests118895
-Node: Omni Output Selection121024
-Node: Other Netperf Tests123287
-Node: CPU rate calibration123702
-Node: Address Resolution126045
-Node: Enhancing Netperf128021
-Node: Netperf4129450
-Node: Concept Index130360
-Node: Option Index132686
+Node: Verifying Installation17670
+Node: The Design of Netperf18374
+Node: CPU Utilization19970
+Node: CPU Utilization in a Virtual Guest28685
+Node: Global Command-line Options29696
+Node: Command-line Options Syntax30235
+Node: Global Options31617
+Node: Using Netperf to Measure Bulk Data Transfer52321
+Node: Issues in Bulk Transfer52986
+Node: Options common to TCP UDP and SCTP tests57137
+Node: TCP_STREAM63439
+Node: TCP_MAERTS67207
+Node: TCP_SENDFILE68444
+Node: UDP_STREAM70944
+Node: XTI_TCP_STREAM74380
+Node: XTI_UDP_STREAM75025
+Node: SCTP_STREAM75670
+Node: DLCO_STREAM76370
+Node: DLCL_STREAM78343
+Node: STREAM_STREAM79217
+Node: DG_STREAM80075
+Node: Using Netperf to Measure Request/Response80756
+Node: Issues in Request/Response82677
+Node: Options Common to TCP UDP and SCTP _RR tests84683
+Node: TCP_RR89662
+Node: TCP_CC92006
+Node: TCP_CRR94203
+Node: UDP_RR95249
+Node: XTI_TCP_RR97270
+Node: XTI_TCP_CC97853
+Node: XTI_TCP_CRR98019
+Node: XTI_UDP_RR98187
+Node: DLCL_RR98764
+Node: DLCO_RR98917
+Node: SCTP_RR99069
+Node: Using Netperf to Measure Aggregate Performance99205
+Node: Running Concurrent Netperf Tests100040
+Node: Using --enable-burst103932
+Node: Using Netperf to Measure Bidirectional Transfer110118
+Node: Bidirectional Transfer with Concurrent Tests111186
+Node: Bidirectional Transfer with TCP_RR113052
+Node: The Omni Tests115586
+Node: Native Omni Tests116633
+Node: Migrated Tests119484
+Node: Omni Output Selection121589
+Node: Omni Output Selectors123888
+Node: Other Netperf Tests151515
+Node: CPU rate calibration151930
+Node: Address Resolution154273
+Node: Enhancing Netperf156249
+Node: Netperf4157678
+Node: Concept Index158588
+Node: Option Index160914
End Tag Table
Modified: trunk/doc/netperf.pdf
===================================================================
(Binary files differ)
Modified: trunk/doc/netperf.texi
===================================================================
(Binary files differ)