This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Xen spinlock questions

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Xen spinlock questions
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Mon, 18 Aug 2008 11:01:52 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 18 Aug 2008 03:02:20 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <48A5A98F.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckBGW/nrrTgBG0MEd2TNQAX8io7RQ==
Thread-topic: [Xen-devel] Xen spinlock questions
User-agent: Microsoft-Entourage/
On 15/8/08 15:06, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>>> I can't really explain the results of testing with this version of the
>>> patch:
>>> While the number of false wakeups got further reduced by somewhat
>>> less than 20%, both time spent in the kernel and total execution time
>>> went up (8% and 4% respectively) compared to my original (and from
>>> all I can tell worse) version of the patch. Nothing else changed as far as
>>> I'm aware.
>> That is certainly odd. Presumably consistent across a few runs? I can't
>> imagine where extra time would be being spent though...
> Yes, I did at least five runs in each environment.

It might be worth retrying with the vcpu_unblock() changes removed. It'll
still work, but poll_mask may have bits spuriously left set for arbitrary
time periods. However, vcpu_unblock() is the only thing I obviously make
more expensive than in your patch.

We could also possibly make the vcpu_unblock() check cheaper by testing
v->poll_evtchn for non-zero and, only if it is set, zeroing it and clearing
the vcpu's bit from poll_mask. Reading a vcpu-local field may be cheaper than
getting access to a domain-struct cache line.

 -- Keir

Xen-devel mailing list