xen-users

Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
From: jim burns <jim_burn@xxxxxxxxxxxxx>
Date: Mon, 5 May 2008 20:32:46 -0400
Delivery-date: Mon, 05 May 2008 17:34:29 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080505090007.GB20425@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D013DC578@trantor> <200805050047.40387.jim_burn@xxxxxxxxxxxxx> <20080505090007.GB20425@xxxxxxxxxxxxxxx> (sfid-20080505_051918_400373_7267F0E3)
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9
On Monday May 05 2008 05:00:07 am Pasi Kärkkäinen wrote:
> Hmm.. have you tried LVM backed devices for HVM guest? Or raw devices..
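(For anyone following along: an LVM-backed disk for an HVM guest is just a raw
phy: device in the domu config. A rough sketch - the volume group and LV names
here are made up, and the exact disk= syntax varies a bit by Xen version:

  # carve a logical volume out of the volume group for the guest
  lvcreate -L 12G -n winxp_disk vg_xen

  # then in the domu config file, instead of a file: or tap:aio: path:
  disk = [ 'phy:/dev/vg_xen/winxp_disk,hda,w' ]
)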

From the previous post:
> And now for something totally different: I just upgraded my processor from
> an Intel Core Duo 2300, 1.66 GHz, to a Core 2 Duo 5600, 1.83 GHz. Here are
> some new iometer results:

pattern 4k, 50% read, 0% random

dynamo on?  |   io/s   |  MB/s  | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|   501.7  |   1.96 |        2.90       |       0          | 31.68
domu w/qemu |   187.5  |   0.73 |        5.87       |       0          | 29.89
dom0 w/4Gb  |  1102.3  |   4.31 |        0.91       |      445.5       |  0
dom0 w/4Gb  |  1125.8  |   4.40 |        0.89       |      332.1       |  0
(2nd dom0 numbers from when booted w/o /gplpv)
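(Sanity check on these tables: MB/s is just io/s times the block size, e.g.
501.7 io/s x 4 KB ~= 1.96 MB/s here, and 238.3 io/s x 32 KB ~= 7.45 MB/s below.)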

pattern 32k, 50% read, 0% random

domu w/gplpv|   238.3  |   7.45 |        4.09       |        0         | 22.48
domu w/qemu |   157.4  |   4.92 |        6.35       |        0         | 20.51
dom0 w/4Gb  |    52.5  |   1.64 |       19.05       |     1590.0       |  0
dom0 w/4Gb  |    87.8  |   2.74 |       11.39       |     1286.4       |  0

> So, between the two processors, the new one gives qemu and dom0 numbers that 
> are modestly faster, and gplpv numbers that are 50% greater.

I never claimed to have the slickest hardware setup on the block. When I do 
benchmarks, it's the relative differences I'm stressing, e.g.:

- qemu vs. gplpv: I obviously expect gplpv to be faster

- one version of gplpv vs. the next: the trend has been that each version of 
gplpv is faster than the previous, especially for iperf, where the emulated 
Realtek NIC gets 10 Mbit/s, 0.8.4 got 19.8 Mbit/s, and 0.8.9 is getting 32.1 
Mbit/s (see the iperf sketch after this list). That last number is with the new 
processor - it was 25 before, which is still better than 0.8.4.

- dom0 vs. domu: obviously, the standard to match is dom0 performance. (I 
suspect, tho', that non-xen kernel performance would be even better.) Looking 
at the 4k pattern numbers above, hvm severely lags dom0. Interestingly 
enough, for the 32k pattern, hvm is doing better than dom0.
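(The iperf numbers above come from a plain TCP throughput test with the Windows
domu on one end; a minimal sketch, assuming iperf 2.x and a made-up receiver
address:

  # on the receiving end
  iperf -s

  # on the sending end, a 30-second run against the receiver's IP
  iperf -c 192.168.1.1 -t 30
)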

That having been said, sure, my hardware setup could be better. My (Fedora) 
xen server's physical volume spans all of the physical disk, and I have no 
room left on that system for anything but my everyday domus, leaving just a few 
gig for kernel compiles. I currently store my backup & test domus on a SuSE 
system which does have lots of room. If I want to fire one of them up, I access 
it over samba (ouch!). I eventually plan to convert 
the SuSE box to an iscsi server, serving up lvm slices. As with the processor 
upgrade above, any change in my configuration will be benchmarked as well.
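
For what it's worth, the conversion should not be much work on the Xen side once
the SuSE box exports an LVM slice as an iSCSI LUN. A sketch, assuming open-iscsi
on the dom0 - the target name, IP, and device names are made up:

  # discover and log in to the target served by the SuSE box
  iscsiadm -m discovery -t sendtargets -p 192.168.1.20
  iscsiadm -m node -T iqn.2008-05.lan.suse:domu-store -p 192.168.1.20 --login

  # the LUN then shows up as a local block device (say /dev/sdb), which the
  # domu config can use directly instead of a file over samba:
  disk = [ 'phy:/dev/sdb,hda,w' ]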

The iscsi conversion is on the back burner for now, tho', in favor of flashing 
my Fedora box's BIOS to support 64 bits, and then loading a 64-bit dom0. I suspect 
that will get even better results than iscsi. As always, any significant 
changes will be posted.

> Could you try iometer on dom0 to see what kind of performance you get
> there.. or on linux pv domU?

As you can see above, I did do dom0. I could do a linux pv, but your next idea 
interests me more.

> And one more thing.. was your XP HVM single vcpu or more? Did you try
> binding both dom0 and hvm domU to their own dedicated cpu cores?

It was vcpus=2.

root@Insp6400 05/05/08  6:32PM:~
[977] > xm vcpu-pin Domain-0 0 0
root@Insp6400 05/05/08  6:33PM:~
[978] > xm vcpu-pin Domain-0 1 0
root@Insp6400 05/05/08  6:33PM:~
[979] > xm vcpu-pin fedora 0 0
root@Insp6400 05/05/08  6:33PM:~
[980] > xm vcpu-pin winxp 0 1
root@Insp6400 05/05/08  6:34PM:~
[981] > xm vcpu-pin winxp 1 1
root@Insp6400 05/05/08  6:34PM:~
[982] > xm vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     0   r--    5548.2 0
Domain-0                             0     1     0   ---    3392.1 0
fedora                               3     0     0   -b-    1444.0 0
winxp                                2     0     1   r--   14713.3 1
winxp                                2     1     1   ---   15013.8 1
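(For reference, the same pinning can be made persistent in the domu config files
instead of being redone with xm vcpu-pin after each boot; a sketch using the
xm-era cpus= option, with names matching my domains:

  # winxp config: two vcpus, both restricted to physical cpu 1
  vcpus = 2
  cpus  = "1"

  # fedora config: keep it on cpu 0 alongside dom0
  vcpus = 1
  cpus  = "0"
)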

The idle fedora domain shares the same pcpu as dom0. This unfortunately makes 
the fedora domu and its desktop very sluggish, to the point of being useless. 
It even took 10 times as long to reboot. Dom0 seems unaffected. Restoring both 
pcpus to the fedora domain eliminated the sluggishness. Next I tried booting 
the XP domu with vcpus=1, pinned to pcpu 1. Here are the new iometer results 
(qemu, i.e. booting w/o /gplpv, not tested):

pattern 4k, 50% read, 0% random

dynamo on?  |   io/s   |  MB/s  | Avg. i/o time(ms) | max i/o time(ms) | %CPU
domu w/gplpv|   115.6  |   0.45 |        8.65       |     1836.7       | 67.63
dom0 w/4Gb  |   501.2  |   1.96 |        1.99       |      739.2       |  0

pattern 32k, 50% read, 0% random

domu w/gplpv|   115.3  |   3.65 |        8.67       |     1735.2       | 54.41
dom0 w/4Gb  |    53.0  |   1.66 |       18.86       |     1751.3       |  0

Yeeaaahh - everything tanked! MB/s down, CPU% up, etc. The console was still a 
little sluggish. (I suppose pinning cpus might work better with more than one 
socket on the mobo.) I won't be trying that config again ;-)

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users