WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 20 Apr 2010 15:21:44 +0100
Cc:
Delivery-date: Tue, 20 Apr 2010 07:22:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <F26D193E20BBDC42A43B611D1BDEDE710270AE3EBC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrgS919AC3RHXCET4295IJeXmmgRwAO/3swAAIUDKAAASh2Wg==
Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
User-agent: Microsoft-Entourage/12.24.0.100205
On 20/04/2010 15:04, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

> On Tuesday, 2010-4-20 8:49 PM, Keir Fraser wrote:
>> Is this a measurable win? The newer locking looks like it could be
>> dodgy on 32-bit Xen: the 64-bit reads of timer_deadline_{start,end}
>> will be non-atomic and unsynchronised so you can read garbage. Even
>> on 64-bit Xen you can read stale values. I'll be surprised if you got
>> a performance win from chopping up critical regions in individual
>> functions like that anyway.
> 
> First of all, this is a measurable power win for the CPU-overcommitment idle
> case (vcpu:pcpu > 2:1, pcpu >= 32, guests running non-tickless kernels).

So lots of short sleep periods, and possibly only a very few HPET channels
to share? How prevalent is the always-running APIC timer now, and is it going
to be supported in future processors?

> As to the non-atomic access to timer_deadline_{start,end}, it was already
> there before this patch: the fields are not protected by the hpet lock. Shall
> we add an rw_lock for each timer_deadline_{start,end}? This can be done
> separately.

The bug wasn't there previously, since the fields are not read unless the
cpu is in ch->cpumask, which was protected by ch->lock. That was
sufficient because a CPU does not modify timer_deadline_{start,end} between
hpet_broadcast_enter and hpet_broadcast_exit. After your patch,
handle_hpet_broadcast is no longer fully synchronised against those
functions.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel