[netperf-talk] Re: Difficultly in understanding the CPU utilization results for UDP stream test

Rick Jones rick.jones2 at hp.com
Tue Jul 3 11:03:10 PDT 2007


Brijraj Vaghani wrote:
> Hi Rick,
> 
> Thanks for your prompt response. My responses to your comments are inline...

I'm going to have this continue in netperf-talk since it may help folks 
in the future.

> 
> Regards,
> Brijraj
> 
> On 7/2/07, Rick Jones <rick.jones2 at hp.com> wrote:
> 
>> Brijraj Vaghani wrote:
>> > Hi,
>> >
>> > I am trying to measure the CPU consumption of the IP/UDP stack of my
>> > Linux machine. I read the Netperf manual and thought it could be
>> > useful to me. I installed netperf on my machine, ran netserver
>> > explicitly, and ran the basic sanity test. All seemed to be working
>> > fine up to this point. Then I tried measuring the CPU utilization of
>> > the IP/UDP stack using the following command line:
>> > netperf -t UDP_STREAM -c
>> > which generated the following result:
>> > Socket  Message  Elapsed      Messages                   CPU      Service
>> > Size    Size     Time         Okay Errors   Throughput   Util     Demand
>> > bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
>> >
>> > 110592   65507   10.00        235223      0    12325.4    99.90    inf
>> > 110592           10.00             0              0.0     -1.00    -1.000
>>
>> I'm guessing you trimmed a bit of the beginning of the output - not a
>> big deal, just checking.
>>
>> Anyway, the above suggests that the sending side completely overran the
>> receiver, or that the message size was too large - which is more likely
>> because normally at least _one_ message would make it.  It also suggests
>> the test consumed all the available CPU on the sending system.
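>>
>> (If it were a socket-buffer limit rather than message size, the
>> test-specific -s and -S options set the local and remote socket
>> buffer sizes in bytes - something like:
>>
>>    netperf -t UDP_STREAM -c -- -s 131072 -S 131072
>>
>> - but with a 110592-byte receive buffer already in place, message
>> size is the more likely culprit.)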
> 
> 
> Yes, you are right. The problem was fixed by controlling the flow of UDP data.
> 
>>
>> > After going through the mailing list archives, I figured that this
>> > might be a problem with flow control. So I reconfigured netperf and
>> > made a clean build.
>> > Then I carried out the same test using the following command line:
>> > netperf -t UDP_STREAM -c, which generated the following result:
>> > Socket  Message  Elapsed      Messages                   CPU      Service
>> > Size    Size     Time         Okay Errors   Throughput   Util     Demand
>> > bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
>> >
>> > 110592   65507   10.00             1      0        0.1    42.40    inf
>> > 110592           10.00             0              0.0     -1.00    -1.000
>>
>> Reconfigured netperf how, exactly - with --enable-intervals, or
>> something else?
> 
> Yes, I reconfigured netperf with --enable-intervals.

OK.
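
For anyone following along, that rebuild would be something like the
following (a rough sketch assuming the usual configure-based netperf
source tree - exact paths and install steps will vary):

   ./configure --enable-intervals
   make clean && make
   make install   # or run the freshly-built netperf/netserver in place

With interval support compiled in, the global -b (sends per burst) and
-w (milliseconds between bursts) options take effect.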

> 
>>
>> > and since I am more interested in finding the performance of the
>> > IP/UDP stack on the receiver side, I also ran the following command:
>> > netperf -b 1 -w 10000 -t UDP_STREAM -C, which generated the following
>> > result:
>> >
>> > Socket  Message  Elapsed      Messages                   CPU      Service
>> > Size    Size     Time         Okay Errors   Throughput   Util     Demand
>> > bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
>> >
>> > 110592   65507   10.01             1      0        0.1    -1.00    -1.000
>> > 110592           10.01             0              0.0     53.51    inf
>>
>> -b 1 means there should be one send call in each burst, and -w 10000
>> means bursts should happen once every 10000 milliseconds, so yes, that
>> first part of it looks correct.  (Roughly: one 65507-byte message
>> every 10 seconds is 65507 * 8 / 10, about 52 kbit/s, which shows up
>> as 0.1 10^6bits/sec after rounding.)
> 
> Yes it does, however my concern is the CPU numbers.
> 
>>
>> > Do these numbers look right? I looked at the example results in the
>> > manual and their format seems to be different from what I got.
>> > For instance, in the examples the throughput numbers show up for
>> > both the client and the server, whereas I get them only for the
>> > client.
>> > Also, in the last set of results, is the following interpretation
>> > correct: "The IP/UDP stack takes up 53.51% of the CPU to receive
>> > UDP data at 0.1 Mbps"?
>>
>> I'd double check that with top.  It does look rather off.  Which
>> rev/distro of Linux is this?
> 
> I checked with top while netperf and netserver were running in the
> background. For 0.1 Mbps of throughput, top was showing 0.3%
> utilization for both, whereas netperf was showing 15-20% utilization.
> The following is the result I got:
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % US     us/KB
>
> 110592    1024   121.18        1200      0        0.1    -1.00    -1.000
> 110592           121.18        1200               0.1    15.03    15182.091
> 
> The CPU numbers still don't make sense to me. I read something about
> CPU calibration in the manual, but I thought that it was not
> applicable to me as I was not giving the -n parameter as a command
> line input. I assumed that the UDP_STREAM test would figure out the
> details by itself. Let me know if I am doing something wrong or if I
> need to do something more.
> 
> I am using kernel revision 2.6.9-55.EL. Also, I would like to bring to
> your notice that I am using VMware to run Linux on a Windows machine.
> Does this change anything? Also, do I need to disable all the other
> network interfaces while carrying out this benchmarking?

IIRC netperf does not need/want a -n option for Linux; it figures out 
how many CPUs there are all on its own.  Calibration is independent of 
the need/want for the -n option, but indeed, netperf _should_ figure it 
all out on its own.
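
One way to get a third opinion, independent of both netperf's and top's
arithmetic, is to sample the aggregate "cpu" line of /proc/stat by hand
(if memory serves, that is also the raw source netperf's Linux CPU
method reads, so this checks the inputs rather than being a fully
independent measure).  A rough sketch:

   # two samples of the summary cpu line, ten seconds apart
   head -1 /proc/stat ; sleep 10 ; head -1 /proc/stat

   # the fields after "cpu" are jiffies of user, nice, system, idle,
   # iowait, irq, and softirq time; utilization over the interval is
   #   100 * (1 - delta(idle) / delta(sum of all fields))

Or simply run "vmstat 10 2" during the test - the second line of data
averages over the ten-second interval and its "id" column is the idle
percentage.  One caveat: CPU and time accounting inside a VMware guest
can itself be skewed, so I would treat all of these numbers with some
suspicion until they agree with one another.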

At the risk of showing I've already forgotten the start of this thread, 
does the bogus CPU util also appear with TCP_STREAM using the same 
"pacing" values?

rick jones
> 
> 
>>
>> As for why there are still no messages recorded as being received, I
>> would guess that 65507 was still just a little too large for the receive
>> side.  You could try a test-specific -m option to use a different size.
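>>
>> Something along these lines, say (1024 being just an arbitrary
>> smaller size):
>>
>>    netperf -t UDP_STREAM -c -C -- -m 1024
>>
>> (global options go before the "--" separator, test-specific options
>> such as -m after it).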
>>
>> happy benchmarking,
>>
>> rick jones
>>
>> > Thanks in advance for your help.
>> >
>> > Brijraj
>>
>>


