xen-users

Re: [Xen-users] redhat native vs. redhat on XCP

To: Grant McWilliams <grantmasterflash@xxxxxxxxx>
Subject: Re: [Xen-users] redhat native vs. redhat on XCP
From: Boris Quiroz <bquiroz.work@xxxxxxxxx>
Date: Mon, 17 Jan 2011 16:22:28 -0300
Cc: Henrik Andersson <henrik.j.andersson@xxxxxxxxx>, xenList <xen-users@xxxxxxxxxxxxxxxxxxx>, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
Delivery-date: Mon, 17 Jan 2011 11:23:56 -0800
In-reply-to: <AANLkTi=PE2a-Lp9H2xTCAaq49RPgRHU1O8M5_iQx51ru@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTinXdOFE+wtNTOyv9C1C2_x7qKkLj-sh+Kby_yZU@xxxxxxxxxxxxxx> <AANLkTikiX+FgS+RgYrd-OASUUoryDcexnu37UwGS1g+2@xxxxxxxxxxxxxx> <4D2E0D51.9070700@xxxxxxxxxxx> <AANLkTi=UcX-Y+-ymujgtnkF9BDF=1FMcLqa6PLTZtFbB@xxxxxxxxxxxxxx> <AANLkTikFdpna7=u2rpLP99msn4TMYEPjnovfXOZ3m0Y7@xxxxxxxxxxxxxx> <AANLkTi=GFWyWwrnqGtMHYfbCzH1mRHKwU34wQTpQDPq7@xxxxxxxxxxxxxx> <AANLkTik+=EtqStD1hcyB3py+0gPmrh5ev46C4RutHNO1@xxxxxxxxxxxxxx> <AANLkTikFuBq9rqtC-PO5JAFP7w_pR-iw6LCFWKYXz7Rh@xxxxxxxxxxxxxx> <AANLkTi=PE2a-Lp9H2xTCAaq49RPgRHU1O8M5_iQx51ru@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>
>
> On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
> wrote:
>>
>> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> <grantmasterflash@xxxxxxxxx> wrote:
>> > As long as I use an LVM volume I get very nearly native performance, i.e.
>> > mysqlbench comes in at about 99% of native.
>>
>> Without any real load on other DomUs, I guess.
>>
>> In my setup the biggest 'con' of virtualizing some loads is the
>> sharing of resources, not the hypervisor overhead. Since it's easier
>> (and cheaper) to get hardware oversized on CPU and RAM than on IO
>> speed (especially on IOPS), that means I have some database
>> servers that I can't virtualize in the near term.
>>
> But that is the same as just putting more than one service on one box. I
> believe he was wondering what the overhead of virtualizing is as opposed to
> bare metal. Any time you have more than one process running on a box you have
> to think about the resources they use and how they'll interact with each
> other. This has nothing to do with virtualization itself unless the hypervisor
> has a bad scheduler.
>
>> Of course, most of this would be solved by dedicating spindles instead
>> of LVs to VMs; maybe when (if?) I get most boxes with lots of 2.5"
>> bays, instead of the current 3.5" ones. Not using LVM is a real
>> drawback, but it still seems to be better than dedicating whole boxes.
>>
>> --
>> Javier
>
> I've moved all my VMs to running on LVs on SSDs for this purpose. The
> overhead of an LV over a bare drive is very small unless you're doing
> a lot of snapshots.
>
>
> Grant McWilliams
>
> Some people, when confronted with a problem, think "I know, I'll use
> Windows."
> Now they have two problems.
>
>

Hi list,

I did a preliminary test using [1], and the result was close to what I
expected. This was a very small test, because I have a lot of things
to do before I can set up a good, representative test, but I think
it is a good start.

Using the stress tool, I started with the default command: stress --cpu
8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s. Here's the output on
both the xen and non-xen servers:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [3682] successful run completed in 10s

[root@non-xen ~]#  stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [5284] successful run completed in 10s

As you can see, the results are the same, but what happens when I add
HDD I/O to the test? Here's the output:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10
--timeout 10s
stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [3700] successful run completed in 59s

[root@non-xen ~]#  stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd
10 --timeout 10s
stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [5332] successful run completed in 37s

With HDD stress included, the results are different. Both servers (xen
and non-xen) are using LVM, but to be honest, I was expecting this
kind of result because of the disk access.
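
For reference, the LV-vs-dedicated-spindle point from the quoted
discussion only changes which block device the domU's disk line points
at. A minimal sketch in classic xm-style config syntax (device names
are made up):

# LV-backed guest disk
disk = [ 'phy:/dev/vg0/guest1-disk,xvda,w' ]

# dedicated-spindle guest disk
disk = [ 'phy:/dev/sdb,xvda,w' ]

(XCP manages storage through SRs and VDIs instead, but the underlying
layout question is the same.)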

Later this week I'll continue with the tests (well-designed tests :P)
and share the results.
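
For those tests, a rough harness could be to repeat each stress run
under GNU time and add a plain direct-I/O write to isolate the disk
path; a sketch, with the log and file names just placeholders:

# repeat the same workload 5 times and log the wall-clock seconds
# (GNU time prints to stderr, hence the 2>&1 before tee)
for i in 1 2 3 4 5; do
    /usr/bin/time -f "run $i: %e s" \
        stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
done 2>&1 | tee stress-hdd.log

# 1 GB direct write, bypassing the page cache; dd reports the throughput
dd if=/dev/zero of=/root/ddtest.img bs=1M count=1024 oflag=direct
rm -f /root/ddtest.img

Running the same commands inside the domU (on the LV) and on the
bare-metal box should give directly comparable numbers.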

Cheers.

1. http://freshmeat.net/projects/stress/

-- 
@cereal_bars
