WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

RE: [Xen-devel] n/w performance degradation

To: "Diwaker Gupta" <diwaker.lists@xxxxxxxxx>, "Nivedita Singhvi" <niv@xxxxxxxxxx>
Subject: RE: [Xen-devel] n/w performance degradation
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 6 Dec 2005 10:42:42 -0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 06 Dec 2005 10:43:05 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcX6O2RT18dOXq3KQlOjuOBfTIfYIwAFW5/Q
Thread-topic: [Xen-devel] n/w performance degradation
> > For low mem systems, the default size of the tcp read buffer
> > (tcp_rmem[1]) is 43689, and the max size (tcp_rmem[2]) is 2*43689,
> > which is really too low to do network heavy lifting.
> 
> Just as an aside, I wanted to point out that my dom0's were
> running the exact same configuration (memory, socket sizes)
> as the VM, and I can mostly saturate a gig link from dom0. So
> while socket sizes might certainly have an impact, there are
> still additional bottlenecks that need to be fine-tuned.
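For reference on the tcp_rmem numbers quoted above: an application can also
ask for a larger receive buffer on a per-socket basis (capped by
net.core.rmem_max), independent of the tcp_rmem default. A minimal sketch,
where the 256KB figure is purely illustrative rather than a tuned
recommendation:

    /* Sketch only: request a larger receive buffer for one socket.
     * The kernel clamps the request to net.core.rmem_max. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int rcvbuf = 256 * 1024;          /* illustrative value only */
        socklen_t len = sizeof(rcvbuf);

        setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
        getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
        printf("effective receive buffer: %d bytes\n", rcvbuf);
        return 0;
    }

Note that Linux doubles the value passed to SO_RCVBUF to allow for
bookkeeping overhead, which is why getsockopt() reports more than was
requested.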

Xen is certainly going to be more sensitive to small socket buffer sizes
when you're trying to run dom0 and the guest on the same CPU thread. If
you're running a single TCP connection, the socket buffer size basically
determines how frequently you're forced to switch between domains.
Switching every 43KB at 1Gb/s amounts to thousands of domain switches a
second, which burns CPU. Doubling the socket buffer size halves the rate
of domain switches, so a larger default would be more sensible under Xen.
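
A rough back-of-the-envelope sketch of that rate, assuming 1Gb/s line rate
and one domain switch per socket-buffer's worth of data (a simplification):

    /* Back-of-the-envelope only: one domain switch per socket-buffer's
     * worth of data moved at 1 Gb/s line rate. */
    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_sec = 1e9 / 8.0;             /* 1 Gb/s */
        const double bufsize[] = { 43689.0, 2 * 43689.0 };  /* default, doubled */

        for (int i = 0; i < 2; i++)
            printf("%6.0f-byte buffer -> ~%.0f switches/sec\n",
                   bufsize[i], bytes_per_sec / bufsize[i]);
        return 0;
    }

That works out to roughly 2860 switches a second with the 43689-byte
default, dropping to about 1430 when the buffer is doubled.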

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
