WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] RE: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: "Wei, Gang" <gang.wei@xxxxxxxxx>
Date: Tue, 20 Apr 2010 23:20:18 +0800
Accept-language: zh-CN, en-US
Delivery-date: Tue, 20 Apr 2010 08:21:25 -0700
In-reply-to: <C7F37708.11E5E%keir.fraser@xxxxxxxxxxxxx>
References: <F26D193E20BBDC42A43B611D1BDEDE710270AE3EBC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C7F37708.11E5E%keir.fraser@xxxxxxxxxxxxx>
Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
On Tuesday, 2010-4-20 10:22 PM, Keir Fraser wrote:
> On 20/04/2010 15:04, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:
> 
>> On Tuesday, 2010-4-20 8:49 PM, Keir Fraser wrote:
>>> Is this a measurable win? The newer locking looks like it could be
>>> dodgy on 32-bit Xen: the 64-bit reads of timer_deadline_{start,end}
>>> will be non-atomic and unsynchronised so you can read garbage. Even
>>> on 64-bit Xen you can read stale values. I'd be surprised if you
>>> got a performance win from chopping up critical regions in
>>> individual functions like that anyway.
>> 
>> First of all, this is a measurable power win for the cpu
>> overcommitment idle case (vcpu:pcpu > 2:1, pcpu >= 32, guests
>> running non-tickless kernels).
> 
> So lots of short sleep periods, and possibly only a very few HPET
> channels to share? 

Yes.

> How prevalent is always-running APIC timer now,
> and is that going to be supported in future processors?

The always-running APIC timer only arrived with Westmere, and I would guess it 
will be supported in future processors. But there are still a lot of existing 
systems that need hpet broadcast wakeup.

>> As to the non-atomic access to timer_deadline_{start,end}, it should
>> already be there before this patch. It is not protected by the hpet
>> lock. Shall we add rw_lock for each timer_deadline_{start,end}? This
>> can be done separately. 
> 
> The bug isn't previously there, since the fields will not be read
> unless the cpu is in ch->cpumask, which (was) protected by ch->lock.
> That was sufficient since a CPU would not modify
> timer_deadline_{start,end} between hpet_broadcast_enter and
> hpet_broadcast_exit. After your patch, handle_hpet_broadcast is no
> longer fully synchronised against those functions.

Ok, you are right. So we may just need to make sure the cpu is in ch->cpumask 
while reading timer_deadline_{start,end}. I can make some changes to my 
patch.

Jimmy
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel