This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Xen spinlock questions

To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Xen spinlock questions
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Mon, 11 Aug 2008 13:22:06 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 11 Aug 2008 05:21:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C4BC9763.24F8E%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4896F39A.76E4.0078.0@xxxxxxxxxx> <C4BC9763.24F8E%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 04.08.08 12:24 >>>
>On 4/8/08 11:18, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>> 1) While the goal of the per-CPU kicker irq appears to be to avoid all CPUs
>> waiting for a particular lock to get kicked simultaneously, I think this
>> doesn't have the desired effect. This is because Xen doesn't track what
>> event channel you poll for (through SCHEDOP_poll), and rather kicks all CPUs
>> polling for any event channel.
>Yes, this is true. We could easily do something better for VCPUs polling a
>single event channel though, but there hasn't been a need up to now. I
>suppose it depends how often we have multiple VCPUs stuck waiting for
>spinlocks. I can sort out a Xen-side patch if someone wanted to measure the
>benefits from more selective wakeup from poll.

Running kernel builds on 8 vCPUs competing for 4 pCPUs shows a 10%
improvement in performance with the individual wakeup (patch attached;
probably sub-optimal, but I could not come up with a lock-less
mechanism to achieve the desired behavior), using ticket locks in the


Attachment: poll-single-port.patch
Description: Text document
