This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Linux spin lock enhancement on xen

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Linux spin lock enhancement on xen
From: George Dunlap <dunlapg@xxxxxxxxx>
Date: Tue, 24 Aug 2010 10:09:52 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Tue, 24 Aug 2010 02:10:48 -0700
In-reply-to: <4C73A37B0200007800011D97@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTin_HTtxL9wB9JcxDWFeGGYHKHfBxGW4dPrYKDGb@xxxxxxxxxxxxxx> <C8993F5E.1EEDE%keir.fraser@xxxxxxxxxxxxx> <4C73A37B0200007800011D97@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, Aug 24, 2010 at 9:48 AM, Jan Beulich <JBeulich@xxxxxxxxxx> wrote:
>>>  I thought the
>>> solution he had was interesting: when yielding due to a spinlock,
>>> rather than going to the back of the queue, just go behind one person.
>>>  I think an implementation of "yield_to" that might make sense in the
>>> credit scheduler is:
>>> * Put the yielding vcpu behind one vcpu
> Which clearly has the potential of burning more cycles without
> allowing the vCPU to actually make progress.

I think you may misunderstand; the yielding vcpu goes behind at least
one vcpu on the runqueue, even if the next vcpu is lower priority.  If
there's another vcpu on the runqueue, the other vcpu always runs.

I posted some scheduler patches implementing this yield a week or two
ago, and included some numbers.  The numbers were with Windows Server
2008, which has queued spinlocks (the equivalent of ticket spinlocks).
The throughput remained high even when highly over-committed.  So a
simple yield does have a significant effect.  In the unlikely event
that it is scheduled again, it will simply yield again when it sees
that it's still waiting for the spinlock.

In fact, undirected-yield is one of yield-to's competitors: I don't
think we should accept a "yield-to" patch unless it has significant
performance gains over undirected-yield.

> At the risk of fairness wrt other domains, or even within the
> domain. As said above, I think it would be better to temporarily
> merge the priorities and location in the run queue of the yielding
> and yielded-to vCPU-s, to have the yielded-to one get the
> better of both (with a way to revert to the original settings
> under the control of the guest, or enforced when the borrowed
> time quantum expires).

I think doing tricks with priorities is too complicated.  Complicated
mechanisms are very difficult to predict and prone to nasty,
hard-to-debug corner cases.  I don't think it's worth exploring this
kind of solution until it's clear that a simple solution cannot get
reasonable performance.  And I would oppose accepting any
priority-inheritance solution into the tree unless there were
repeatable measurements that showed that it had significant
performance gain over a simpler solution.


Xen-devel mailing list