This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Very slow domU network performance

To: "Stephen C. Tweedie" <sct@xxxxxxxxxx>
Subject: Re: [Xen-users] Very slow domU network performance
From: Winston Chang <winston@xxxxxxxxxx>
Date: Wed, 5 Apr 2006 12:04:10 -0400
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 05 Apr 2006 09:04:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1144173588.3411.19.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <0EDDFD7D-2C5D-4D47-880D-E7DC268EA149@xxxxxxxxxx> <1144173588.3411.19.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Apr 4, 2006, at 1:59 PM, Stephen C. Tweedie wrote:

> > The packet loss is as follows:
> > domU --> domU  0% (using domU eth0 IP address)
> > dom0 --> domU  ~100% (only 7 of 38464 made it!)
>
> There have been a number of weird checksum problems identified in the
> past with Xen's networking; a checkin was just made a day or two ago
> which cleans up the checksum handling in a way which may well help here.
> We'll have to see whether an updated dom0/domU kernel improves things.

I ran the test with the latest xen-unstable build. The results are the same.

When I ran 'xm sched-sedf 0 0 0 0 1 1' to prevent domU CPU starvation, network performance was good. The numbers in this case are the same as in my other message where I detail the results using the week-old xen build -- it could handle 90Mb/s with no datagram loss. So it looks like the checksum patches had no effect on this phenomenon; the only thing that mattered was the scheduling.
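For reference, here is the invocation I used, annotated with the positional parameter order from the xm man page of this era (the meaning of each field is my reading of that page, not something guaranteed by this thread):

```shell
# xm sched-sedf <dom-id> <period> <slice> <latency> <extratime> <weight>
# Domain 0, period/slice/latency all 0, extratime=1, weight=1:
# with extratime enabled, the domain runs best-effort on otherwise
# idle CPU rather than being held to a fixed slice, so dom0's network
# backend no longer starves domU of CPU time.
xm sched-sedf 0 0 0 0 1 1
```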

I also did some lower data-rate UDP tests with iperf (without the scheduling change). At 500 Kb/s it loses about 48% of the datagrams, at 2 Mb/s it loses 81%, and at 4 Mb/s it loses 99%. Ouch. iperf also manages to chew up 100% of CPU time doing this, so that might explain why domU chokes even at low bandwidths. Perhaps its timing is implemented with a while loop.
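For anyone wanting to reproduce this, the tests were along these lines (a sketch: the server address is a placeholder, and the exact flags assume classic iperf's UDP mode):

```shell
# In the receiving domain, start an iperf UDP server:
iperf -s -u

# From the sending domain, step up the offered UDP rate
# (-u selects UDP, -b sets the target bandwidth):
iperf -c 192.168.1.2 -u -b 500k
iperf -c 192.168.1.2 -u -b 2m
iperf -c 192.168.1.2 -u -b 4m
```

iperf's server-side report at the end of each run shows the datagram loss percentage directly.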

There's still the odd thing with domU<->dom0 communication being about 1/10 the speed of dom0<->dom0 or domU<->domU. It's roughly 170 Mb/s versus 1.7 Gb/s.

