This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: CPU scheduling of domains and vcpus

To: Samuel Thibault <samuel.thibault@xxxxxxxxxxxxx>, Nauman Rafique <naumanr@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: CPU scheduling of domains and vcpus
From: "Mike D. Day" <ncmike@xxxxxxxxxx>
Date: Mon, 21 Apr 2008 16:32:18 -0400
Delivery-date: Mon, 21 Apr 2008 13:34:10 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080421175630.GA6127@implementation>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: IBM Linux Technology Center
References: <1301abeb0804211052i2c498568ue1a761ae8a618029@xxxxxxxxxxxxxx> <20080421175630.GA6127@implementation>
Reply-to: ncmike@xxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.15+20070412 (2007-04-11)
On 21/04/08 18:56 +0100, Samuel Thibault wrote:
> Hello,
> Nauman Rafique, Mon 21 Apr 2008 13:52:21 -0400, wrote:
> > In fact, wasted cycles can probably be avoided by doing opportunistic
> > gang scheduling (i.e. gang schedule, unless there would be wasted
> > cycles).
> How do you detect that there would be wasted cycles?

The only way is for a very self-aware guest to use a paravirtual
feature to give a hint to the scheduler. 

Which is also the way to solve the original problem: the
paravirtualized guest can provide a hint to the scheduler that it is
holding a contended lock.

Alternatively, the scheduler can notify the guest that it is about to
be preempted by the hypervisor and now would be a good time to sleep
before gaining a contended spinlock.

In either case, lock contention within a multi-vcpu guest is probably
a smaller problem than the overhead the various solutions would
introduce.

If a workload is suffering that badly, give it multiple vcpus pinned
1:1 to physical cpus, so a vcpu holding a lock is never preempted.


Mike D. Day
Cell: 919 412-3900
Sametime: ncmike@xxxxxxxxxx AIM: ncmikeday  Yahoo: ultra.runner
PGP key: http://www.ncultra.org/ncmike/pubkey.asc

Xen-devel mailing list
