2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>
>
> On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
> wrote:
>>
>> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> <grantmasterflash@xxxxxxxxx> wrote:
>> > As long as I use an LVM volume I get very nearly native performance, i.e.
>> > mysqlbench comes in at about 99% of native.
>>
>> Without any real load on the other DomUs, I guess.
>>
>> In my setup the biggest 'con' of virtualizing some loads is the
>> sharing of resources, not the hypervisor overhead. Since it's easier
>> (and cheaper) to oversize hardware on CPU and RAM than on I/O
>> speed (especially on IOPS), there are some database
>> servers that I can't virtualize in the near term.
>>
> But that is the same as just putting more than one service on one box. I
> believe he was wondering what the overhead of virtualizing is as opposed to
> bare metal. Any time you have more than one process running on a box you have
> to think about the resources they use and how they'll interact with each
> other. This has nothing to do with virtualization itself unless the hypervisor
> has a bad scheduler.
>
>> Of course, most of this would be solved by dedicating spindles instead
>> of LVs to VMs; maybe when (if?) I get boxes with lots of 2.5"
>> bays instead of the current 3.5" ones. Not using LVM is a real
>> drawback, but it still seems better than dedicating whole boxes.
>>
>> --
>> Javier
>
> I've moved all my VMs to LVs on SSDs for this reason. The
> overhead of LVM over bare drives is very small unless you're doing
> a lot of snapshots.
>
>
> Grant McWilliams
>
> Some people, when confronted with a problem, think "I know, I'll use
> Windows."
> Now they have two problems.
>
>
Hi list,
I did a preliminary test using [1], and the result was close to what I
expected. This was a very small test, since I still have a lot of
things to do before I can set up a good, representative benchmark, but
I think it is a good start.
Using the stress tool I started with the default command:
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
Here's the output on both the Xen and non-Xen servers:
[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [3682] successful run completed in 10s
[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [5284] successful run completed in 10s
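A single 10s run is of course noisy, so next time I plan to repeat
each run a few times and compare the elapsed times. Just a sketch
(assuming GNU time is installed on both boxes; the iteration count is
arbitrary):

# repeat the same stress workload 5 times, printing elapsed seconds
for i in 1 2 3 4 5; do
    /usr/bin/time -f "run $i: %e s" \
        stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
done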
As you can see, the result of the default run is the same on both
servers. But what happens when I add HDD I/O to the test? Here's the
output:
[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [3700] successful run completed in 59s
[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [5332] successful run completed in 37s
With HDD stress included the results differ: 59s under Xen versus 37s
on the non-Xen box. Both servers are using LVM, and to be honest I was
expecting this kind of result, given the extra overhead on disk
access.
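To confirm that the difference really comes from the disk path, I
also want to run the hdd workers on their own and compare them with a
plain sequential write. A rough sketch (the file path and sizes are
just placeholders):

# disk-only run: 10 hdd workers, each writing 1GB by default
stress --hdd 10 --timeout 60s

# plain sequential write bypassing the page cache, for raw throughput
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm -f /tmp/ddtest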
Later this week I'll continue with the tests (well-designed tests this
time :P) and I'll share the results, roughly along the lines of the
sketch below.
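Just a sketch for now (the worker counts and the hostname-based log
file are arbitrary; the idea is to run it on both boxes and compare
the logs):

#!/bin/bash
# sweep the number of hdd workers and log how long each stress run takes
OUT=stress-results-$(hostname).log
for hdd in 1 2 4 8 10; do
    start=$(date +%s)
    stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd $hdd --timeout 10s
    end=$(date +%s)
    echo "hdd=$hdd elapsed=$((end - start))s" >> "$OUT"
done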
Cheers.
1. http://freshmeat.net/projects/stress/
--
@cereal_bars
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users