WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows

To: "jim burns" <jim_burn@xxxxxxxxxxxxx>
Subject: Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
From: "Emre ERENOGLU" <erenoglu@xxxxxxxxx>
Date: Sat, 1 Mar 2008 15:50:13 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
In-reply-to: <200803010921.24969.jim_burn@xxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D0131AEF2@trantor> <47C6608C.8070603@xxxxxxxxx> <20080228110700.GQ21162@xxxxxxxxxxxxxxx> <200803010921.24969.jim_burn@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hi Jim,

Thanks for the test; it's great to have this information. I'm really wondering about the performance of the "unmodified_drivers" in the Xen package, which can be compiled in an HVM Linux domU to get paravirtual drivers for the disks and the ethernet card.

When I tested these on Xen 3.1 with a Pardus Linux domU, I was getting very similar performance on -disks- with hdparm. No other "reliable" tests were performed, and I didn't test the network card.

Emre

On Sat, Mar 1, 2008 at 3:21 PM, jim burns <jim_burn@xxxxxxxxxxxxx> wrote:
On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> I can recommend iperf too.
>
> Make sure you use the same iperf version everywhere.

Ok, here are my results.

Equipment: Core Duo 2300, 1.66 GHz each, SATA drive configured for UDMA/100
System: fc8 32-bit PAE, Xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
Tested hvm: XP Pro SP2, 2002

Method:

The version tested was 1.7.0, to avoid having to apply the kernel patch that
comes with 2.0.2. The binaries were downloaded from the project homepage
http://dast.nlanr.net/Projects/Iperf/#download. For Linux, I chose the 'Linux
libc 2.3' binary, and (on fc8 at least) I still had to install the
compat-libstdc++-33 package to get it to run.

The server/listening side was always the dom0, invoked with 'iperf -s'. The
first machine is a Linux fc8 pv domU; the second is another machine on my
subnet with a 100 Mbps nic pipeline in between; and the rest are the various
drivers on a winxp hvm. The invoked command was 'iperf -c dom0-hostname -t
60', where '-t 60' sets the runtime to 60 secs. I used the default buffer size
(8k), mss/mtu, and window size (which actually varies between the client and
the server). I averaged 3 tcp runs.

For the udp tests, the default bandwidth is 1 Mbps (add the '-b 1000000' flag
to the command above). I added or subtracted a 0 till I got a packet loss
percentage of more than 0% and less than 5%, or an observed throughput
significantly less than the request (in other words, a stress test). In the
table below, 'udp Mbps' is the observed rate, and '-b Mbps' is the requested
rate. (The server has to be invoked with 'iperf -s -u'.)

machine   | tcp Mbps | udp Mbps | -b Mbps | udp packet loss
fc8 domu  |   1563   |   48.6   |   100   |   0.08%
on subnet |    79.8  |    5.4   |    10   |   3.5%
gplpv     |    19.8  |    2.0   |    10   |   0.0%
realtek   |     9.6  |    1.8   |    10   |   0.0%
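The "add or subtract a 0" stepping I used for the udp runs can be sketched roughly like this; measure_loss is a stand-in for an actual 'iperf -u' run that returns the observed loss fraction (my assumption, not anything iperf provides):

```python
def find_stress_bandwidth(measure_loss, start_bw=1_000_000):
    """Step the requested '-b' rate by factors of 10 until packet loss
    lands above 0% but below 5% (the 'stress test' point).

    Simplified sketch: assumes loss grows monotonically with the
    requested rate, as it did in the runs above.
    """
    bw = start_bw
    loss = measure_loss(bw)
    while loss == 0.0:        # no loss: add a 0 to the requested rate
        bw *= 10
        loss = measure_loss(bw)
    while loss >= 0.05:       # too much loss: subtract a 0
        bw //= 10
        loss = measure_loss(bw)
    return bw, loss

# Example with a fake loss curve shaped like the 'on subnet' row:
def fake_loss(bw):
    if bw < 10_000_000:
        return 0.0
    if bw == 10_000_000:
        return 0.035
    return 0.5

print(find_stress_bandwidth(fake_loss))  # settles at 10 Mbps, 3.5% loss
```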

Conclusions: The pv domU tcp rate is a blistering 1.5 Gbps, showing that a
software nic *can* be even faster than a 100 Mbps hardware nic, at least for
pv. The machine on the same subnet ('on subnet') achieved 80% of the max rate
supported by the hardware. Presumably, since the udp rates are consistently
less than the tcp ones, there were a lot of tcp retransmits. gplpv is twice as
fast as realtek for tcp, and about the same for udp. 19.8/8 = ~2.5 MBps, which
is about the rate I was getting with my domU to dom0 file copies. I don't
expect pv data rates from an hvm, but it should be interesting to see how much
faster James & Andy can get this to go. Btw, this was gplpv 0.8.4.
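The unit conversion behind that 19.8/8 figure, as a one-liner (1 byte = 8 bits):

```python
def mbps_to_MBps(mbps):
    # network rates are quoted in megabits/sec; file copies in megabytes/sec
    return mbps / 8.0

# gplpv tcp: 19.8 Mbps is roughly 2.5 MBps, matching the observed
# domU-to-dom0 file-copy rate
print(round(mbps_to_MBps(19.8), 2))
```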

Actually, pretty good work so far guys!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users



--
Emre Erenoglu
erenoglu@xxxxxxxxx