This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Linux spin lock enhancement on xen

To: George Dunlap <dunlapg@xxxxxxxxx>
Subject: Re: [Xen-devel] Linux spin lock enhancement on xen
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 24 Aug 2010 09:20:30 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 24 Aug 2010 01:21:19 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTin_HTtxL9wB9JcxDWFeGGYHKHfBxGW4dPrYKDGb@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActDY4MHhQ9YZ38NQ1alzyY+l63xswAAbPDf
Thread-topic: [Xen-devel] Linux spin lock enhancement on xen
User-agent: Microsoft-Entourage/
On 24/08/2010 09:08, "George Dunlap" <dunlapg@xxxxxxxxx> wrote:

> Jeremy, do you think that changes to the HV are necessary, or do you
> think that the existing solution is sufficient?  It seems to me like
> hinting to the HV to do a directed yield makes more sense than making
> the same thing happen via blocking and event channels.  OTOH, that
> gives the guest a lot more control over when and how things happen.
> Mukesh, did you see the patch by Xiantao Zhang a few days ago,
> regarding what to do on an HVM pause instruction?

I think there's a difference between providing some kind of yield_to as a
private interface within the hypervisor, as a heuristic for emulating
something like PAUSE, versus providing such an operation as a public guest
interface.

It seems to me that Jeremy's spinlock implementation provides all the info a
scheduler would require: vcpus trying to acquire a lock are blocked, and the
lock holder wakes just the next vcpu in turn when it releases the lock. The
scheduler at that point may have a decision to make as to whether to run the
lock releaser, or the new lock holder, or both, but how can the guest help
with that when it's a system-wide scheduling decision? Obviously the guest
would like all its runnable vcpus to run all of the time!

 - Keir

>  I thought the
> solution he had was interesting: when yielding due to a spinlock,
> rather than going to the back of the queue, just go behind one person.
>  I think an implementation of "yield_to" that might make sense in the
> credit scheduler is:
> * Put the yielding vcpu behind one cpu
> * If the yield-to vcpu is not running, pull it to the front within its
> priority.  (I.e., if it's UNDER, put it at the front so it runs next;
> if it's OVER, make it the first OVER cpu.)
