Re: [Xen-users] vcpu performance : 1 vcpu for all guets or 4 vpcu ?

To: Pascal <ml@xxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] vcpu performance : 1 vcpu for all guets or 4 vpcu ?
From: Tim Post <tim.post@xxxxxxxxxxx>
Date: Sun, 10 Jun 2007 16:03:23 +0800
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 10 Jun 2007 01:02:07 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <466A6E79.3020402@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: Gridnix
References: <466A6E79.3020402@xxxxxxxxxxxxxxxxx>
Reply-to: tim.post@xxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Sat, 2007-06-09 at 11:10 +0200, Pascal wrote:
> Hello all ;)
> 
> Say I have a Xeon server with 4 vcpus.
> 
> If on this box I have some guests, say 10, which is the best
> solution:
> - Set all guests to 1 vcpu?
> - Set all guests to 4 vcpus?
> 
> I understand that if I set one guest with 4 vcpus and all the others
> with only 1 vcpu, then the one with 4 vcpus will have more "cpu
> time" available than the other guests.

It can be hard to contain and deal with sudden onslaughts of I/O
requests from guests. 

Our recipe has been to reserve at least half a core for dom-0, then give
each guest 2 vcpus, balancing them over the remaining 3 vcpus.
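
For what it's worth, here is roughly what that looks like in a classic
xm-style guest config (which is just Python syntax). This is only a
minimal sketch, assuming a 4-core box with dom-0 kept on core 0; the
guest name, memory, disk path and bridge are placeholders, not a
recommendation:

  # /etc/xen/guest01.cfg  (illustrative xm config, Python syntax)
  name   = "guest01"                         # placeholder guest name
  memory = 512
  vcpus  = 2                                 # two vcpus per guest, as above
  cpus   = "1-3"                             # pin this guest's vcpus to
                                             # cores 1-3, leaving core 0
                                             # for dom-0
  disk   = ['phy:/dev/vg0/guest01,xvda,w']   # placeholder disk line
  vif    = ['bridge=xenbr0']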

I have a whole CD full of little 'nasties' written in perl that show up
in /tmp on shared servers, usually because of weak php scripts. While
you won't 'see' from dom-0 the strain a few unruly guests can put on
I/O, you'll surely see it in the other guests, especially if you're
using cheap off-the-shelf drives.

> But in case I'd like to have an equal cpu "share" between all guests,
> which is the best solution:
> - set 1 vcpu for all
> - set 4 vcpus for all

How much RAM are you giving your guests on average? There's only so
much someone can do with 256 MB regardless of how this value is set.
There *are* exceptions to this, especially if someone is determined to
cause a disruption.

Common disruptions tend not to bother any guest but the one running
them; the isolation is very good, but things do happen.

If your guests have substantial amounts of memory (512 MB or more),
there really isn't one "good" way to do it.

Going by the domain name of your e-mail address, I *strongly* suggest
not giving every domain an equal credit weight, or else reserving at
least one core that the guests can't touch. You just never know what
someone is bound to upload.
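
If you go the credit-weight route, here is roughly what I mean. Again
only a sketch, assuming the Xen 3.x xm toolstack with the credit
scheduler; the domain names and weights are placeholders:

  #!/usr/bin/env python
  # Give dom-0 a higher credit weight than the guests so a runaway domU
  # can't starve it; 256 is the credit scheduler's default weight.
  import subprocess

  weights = {
      "Domain-0": 512,   # dom-0 gets double the default weight
      "guest01":  256,   # placeholder guest names
      "guest02":  256,
  }

  for domain, weight in weights.items():
      # 'xm sched-credit -d <domain> -w <weight>' adjusts the weight of
      # a running domain under the credit scheduler.
      subprocess.call(["xm", "sched-credit", "-d", domain,
                       "-w", str(weight)])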

It may work perfectly for you. It all depends on who you're hosting.
What sucks is having to reboot dom-0 uncleanly and watch 30+ guests fsck
themselves, just because of a bot infestation on a few servers.

I don't see this nearly as much with Xen 3.1 as I did with 3.0 -> 3.0.4,
but it still does happen from time to time. I also stopped using ext3
and switched to jfs for all of my guests; this noticeably reduced
sluggishness and breakage due to many guests demanding I/O at once.
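
The switch itself is nothing exotic. A minimal sketch, assuming
LVM-backed guest disks and jfsutils installed on dom-0 (the volume name
is a placeholder):

  #!/usr/bin/env python
  # Put a JFS filesystem on a guest's root volume before its first boot.
  import subprocess

  guest_lv = "/dev/vg0/guest01-root"   # placeholder logical volume

  # mkfs.jfs ships with jfsutils; -q skips the interactive confirmation.
  subprocess.check_call(["mkfs.jfs", "-q", guest_lv])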

I wish I could give you a magic formula :) 

> Thanks a ton 
> Pascal   

Hope this is of some help.

--Tim


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
