[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 21 Apr 2010 09:09:59 +0100
Cc:
Delivery-date: Wed, 21 Apr 2010 01:10:47 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <F26D193E20BBDC42A43B611D1BDEDE710270AE3EE1@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrgS919AC3RHXCET4295IJeXmmgRwAO/3swAAIUDKAAASh2WgADOhngACIUzXU=
Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
User-agent: Microsoft-Entourage/12.24.0.100205
On 20/04/2010 17:05, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

> Resend.
> 
> CPUIDLE: shorten hpet spin_lock holding time
> 
> Try to reduce spin_lock overhead for deep C state entry/exit. This will
> benefit systems with many CPUs that need the HPET broadcast to wake up
> from deep C states.
> 
> Signed-off-by: Wei Gang <gang.wei@xxxxxxxxx>

It fixes the unsafe accesses to timer_deadline_{start,end}, but I still think
this optimisation is misguided and also unsafe. There is nothing to stop new
CPUs being added to ch->cpumask after you start scanning it. For example, a
new CPU whose timer_deadline_end is greater than ch->next_event will not
reprogram the HPET itself; but handle_hpet_broadcast is already mid-scan and
misses this new CPU, so it does not reprogram the HPET either. Hence no timer
fires for the new CPU and it misses its deadline.
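
To make the window concrete, here is a rough sketch of the problematic
interleaving. It is illustrative only, not the actual Xen code: ch->cpumask,
ch->next_event, timer_deadline_{start,end} and handle_hpet_broadcast are the
names discussed above, while reprogram_hpet and the exact C-state entry
sequence are placeholders.

  /* CPU A: entering a deep C state (helper names are illustrative) */
  cpu_set(cpu, ch->cpumask);                      /* join the broadcast channel */
  if ( per_cpu(timer_deadline_end, cpu) < ch->next_event )
      reprogram_hpet(ch, per_cpu(timer_deadline_end, cpu));
  /* else: deadline is no earlier than ch->next_event, so CPU A does not
   * touch the HPET and relies on a later scan to notice its deadline.  */

  /* CPU B: handle_hpet_broadcast(), already part-way through its scan */
  for_each_cpu_mask ( i, ch->cpumask )            /* A's bit was set after the
                                                   * scan passed its position */
  {
      if ( per_cpu(timer_deadline_start, i) <= now )
          cpu_set(i, mask);                       /* CPUs to wake right now */
      else if ( per_cpu(timer_deadline_end, i) < next_event )
          next_event = per_cpu(timer_deadline_end, i);
  }
  /* CPU A is never examined: next_event is not pulled forward for it, the
   * HPET is not reprogrammed, and A sleeps straight past its deadline.   */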

Really, I think a better approach than a patch like this would be to better
advertise the timer_slop=xxx Xen boot parameter for power-saving scenarios. I
wonder what your numbers would look like if you re-ran your benchmark with
(say) timer_slop=10000000 (i.e., 10ms slop) on the Xen command line?
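
For reference, that is just a matter of appending the option to the hypervisor
line in the boot loader, e.g. something like the following GRUB (legacy)
entry; the paths and the other options are placeholders for whatever the
system already uses:

  title Xen (10ms timer slop)
      kernel /boot/xen.gz timer_slop=10000000 console=vga
      module /boot/vmlinuz-xen root=/dev/sda1 ro console=tty0
      module /boot/initrd-xen.img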

 -- Keir




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel