Re: [Xen-devel] [PATCH] Network Checksum Removal

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] Network Checksum Removal
From: Jon Mason <jdmason@xxxxxxxxxx>
Date: Tue, 24 May 2005 11:12:21 -0500
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, Andrew Theurer <habanero@xxxxxxxxxx>, bin.ren@xxxxxxxxxxxx
Delivery-date: Tue, 24 May 2005 16:11:48 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E41B5@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: IBM
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E41B5@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.7.2
On Monday 23 May 2005 06:59 pm, Ian Pratt wrote:
> > I get the following domU->dom0 throughput on my system (using
> > netperf3 TCP_STREAM testcase):
> > tx on       ~1580Mbps
> > tx off      ~1230Mbps
> >
> > with my previous patch (on Friday's build), I was seeing the
> > following:
> > with patch  ~1610Mbps
> > no patch    ~1100Mbps
> >
> > The slight difference between the two might be caused by the
> > changes that were incorporated in xen between those dates.
> > If you think it is worth the time, I can back port the latest
> > patch to Friday's build to see if that makes a difference.
>
> Are you sure these aren't within 'experimental error'? I can't think of
> anything that's changed since Friday that could be affecting this, but
> it would be good to dig a bit further as the difference in 'no patch'
> results is quite significant.

The "tx off" is probably higher because of the offloading for the rx (in both 
the netback not checksumming and the physical ethernet checksum verification 
being passed to domU).
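
For anyone following along, here is a rough sketch of that rx idea.  This is 
not the actual netback/netfront code (the helper name below is made up), but 
skb->ip_summed and the CHECKSUM_* values are the standard Linux fields: when 
the physical NIC has already verified the checksum, the backend can tag the 
buffer so the guest's stack skips its own software verification.

    #include <linux/skbuff.h>

    /* Hypothetical helper: mark a received buffer according to whether
     * the physical NIC (or the sending domain) already verified it. */
    static void netrx_mark_csum(struct sk_buff *skb, int hw_verified)
    {
            if (hw_verified)
                    /* Verified in hardware; the guest's IP stack will
                     * not recompute the checksum. */
                    skb->ip_summed = CHECKSUM_UNNECESSARY;
            else
                    /* No help from the NIC; fall back to software
                     * verification in the guest. */
                    skb->ip_summed = CHECKSUM_NONE;
    }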

I'm not sure why "tx on" is lower than my previous tests.  It could be 
something outside the patch that was incorporated since then, or it could be 
something in the patch as it was committed.  The changelog's patch diff was 
truncated, so I will have to generate a diff to apply to my Friday tree to 
see whether the problem lies in the latter.

> It might be revealing to try running some results on the unpatched
> Fri/Sat/Sun tree.
>
> BTW, dom0<->domU is not that interesting as I'd generally discourage
> people from running services in dom0. 

That is why I designed the checksum offload patch the way I did; there were 
other approaches that would give significantly better domU->dom0 
communication, but they would push significantly more calculation into dom0.
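
For a sense of the per-packet work being weighed here: the TCP/UDP checksum 
is the 16-bit one's-complement sum over the segment (RFC 1071).  A minimal 
user-space sketch of that loop (an assumed illustration, not code from the 
patch) shows why recomputing it in dom0 for every inter-domain packet would 
cost noticeable dom0 CPU:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* 16-bit one's-complement Internet checksum (RFC 1071). */
    static uint16_t inet_csum(const uint8_t *data, size_t len)
    {
            uint32_t sum = 0;

            while (len > 1) {                    /* sum 16-bit words    */
                    sum += (uint32_t)data[0] << 8 | data[1];
                    data += 2;
                    len  -= 2;
            }
            if (len)                             /* pad a trailing byte */
                    sum += (uint32_t)data[0] << 8;

            while (sum >> 16)                    /* fold the carries    */
                    sum = (sum & 0xffff) + (sum >> 16);

            return (uint16_t)~sum;
    }

    int main(void)
    {
            uint8_t pkt[1460];                   /* toy MSS-sized payload */
            size_t i;

            for (i = 0; i < sizeof(pkt); i++)
                    pkt[i] = (uint8_t)i;

            printf("checksum = 0x%04x\n", inet_csum(pkt, sizeof(pkt)));
            return 0;
    }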

> I'd be really interested to see 
> the following tests:
>
> domU <-> external [dom0 on cpu0; dom1 on cpu1]
> domU <-> external [dom0 on cpu0; dom1 on cpu0]
> domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** on a 4 way]
> domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu0 ]
> domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu1 ]
> domU <-> domU [dom0 on cpu0; dom1 on cpu0; dom2 on cpu1 ]
> domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu2 ** cpu2
> hyperthread w/ cpu 0]
> domU <-> domU [dom0 on cpu0; dom1 on cpu1; dom2 on cpu3 ** cpu3
> hyperthread w/ cpu 1]
>
> This might help us understand the performance of interdomain networking
> rather better than we do at present. If you could fill a few of these in
> that would be great.

I wish I had all the hardware you describe ;-)

My tests are running on a Pentium 4 with hyperthreading, which shows up as 2 
CPUs.  dom0 was on cpu0 and domU was on cpu1.  I'll be happy to run netperf 
on the hardware I have.

Thanks,
Jon

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel