xen-devel

Re: [Xen-devel] [PATCH] Network Checksum Removal

To: <bin.ren@xxxxxxxxxxxx>, Nivedita Singhvi <niv@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Network Checksum Removal
From: Rolf Neugebauer <rolf.neugebauer@xxxxxxxxx>
Date: Tue, 24 May 2005 00:55:36 +0100
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, Andrew Theurer <habanero@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Jon Mason <jdmason@xxxxxxxxxx>
In-reply-to: <8ae7802505052314485e898622@xxxxxxxxxxxxxx>
These results are pretty bad.

What do you get for dom0->external? That definitely should be close to or
equal to native.

Have you tweaked /proc/sys/net/core/rmem_max?
Is the socket buffer set to some large value?
Are you transmitting/receiving enough data?

I don't know netperf, but for ttcp I would normally do:

echo 1048576 > /proc/sys/net/core/rmem_max
ttcp -b 65536 (or similar) ...
And then transmit a few gigabytes.
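
For netperf, something roughly equivalent might be the following (a sketch
only, untested; <server> is a placeholder for the target host, -l sets the
run length in seconds, and the test-specific -s/-S options set the local
and remote socket buffer sizes):

# sketch: 60-second TCP_STREAM run with 256KB socket buffers at both ends
netperf -H <server> -t TCP_STREAM -l 60 -- -s 262144 -S 262144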

What's the interrupt rate, etc.?
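
To check that, for example (assuming a Linux dom0):

vmstat 1                # the "in" column reports interrupts per second
cat /proc/interrupts    # per-IRQ counts; diff before and after a run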

Rolf


On 23/5/05 10:48 pm, "Bin Ren" <bin.ren@xxxxxxxxx> wrote:

> On 5/23/05, Nivedita Singhvi <niv@xxxxxxxxxx> wrote:
>> Bin Ren wrote:
>>> I've added support for ethtool. By turning netfront checksum
>>> offloading on and off, I'm getting the following throughput numbers
>>> using iperf. Each test was run three times. CPU usage is quite
>>> similar in the two cases ('top' output). It looks like checksum
>>> computation is not a major overhead in domU networking.
>>> 
>>> dom0/1/2 all have 128M memory. dom0 has e1000 tx checksum offloading turned
>>> on.
>> 
>> Yeah, if you want to do anything network-intensive, 128MB is just
>> not enough - you really need more memory in your system.
> 
> I've given all the domains 256M memory and switched to netperf
> TCP_STREAM (netperf -H server). Almost no change. Details:
> 
> dom1->external: 420Mbps
> dom1->dom0: 437Mbps
> dom0->dom1: 200Mbps (!!!)
> dom1->dom2: 327Mbps
> 
>>  
>>> With Tx checksum on:
>>> 
>>> dom1->dom2: 300Mb/s (dom0 cpu maxed out by software interrupts)
>>> dom1->dom0: 459Mb/s (dom0 cpu 80% in SI, dom1 cpu 20% in SI)
>>> dom1->external: 439Mb/s (over 1Gb/s ethernet) (dom0 cpu 50% in SI,
>>> dom1 60% in SI)
>>> 
>>> With Tx checksum off:
>>> 
>>> dom1->dom2: 301Mb/s
>>> dom1->dom0: 454Mb/s
>>> dom1->external: 437Mb/s (over 1Gb/s ethernet)
>> 
>> 
>> iperf is a directional send test, correct?
>> i.e. is dom1 -> dom0 perf the same as dom0 -> dom1 for you?
> 
> Please see above.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel