
[Xen-devel] RE: [PATCH] CPUIDLE: shorten hpet spin_lock holding time


  • To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Wei, Gang" <gang.wei@xxxxxxxxx>
  • Date: Wed, 21 Apr 2010 17:06:43 +0800
  • Accept-language: zh-CN, en-US
  • Acceptlanguage: zh-CN, en-US
  • Cc:
  • Delivery-date: Wed, 21 Apr 2010 02:07:59 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcrgS919AC3RHXCET4295IJeXmmgRwAO/3swAAIUDKAAASh2WgADOhngACIUzXUAAMoQoA==
  • Thread-topic: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

On Wednesday, 2010-4-21 4:10 PM, Keir Fraser wrote:
> It fixes the unsafe accesses to timer_deadline_{start,end} but I
> still think this optimisation is misguided and also unsafe. There is
> nothing to stop new CPUs being added to ch->cpumask after you start
> scanning ch->cpumask. For example, a new CPU may have a
> timer_deadline_end greater than ch->next_event, so it does not
> reprogram the HPET. But handle_hpet_broadcast is already mid-scan and
> misses this new CPU, so it does not reprogram the HPET either. Hence
> no timer fires for the new CPU and it misses its deadline.

This will not happen. ch->next_event has already been set to STIME_MAX before 
we start scanning ch->cpumask, so a new CPU with the smallest timer_deadline_end 
will see that its own deadline is below ch->next_event and will reprogram the 
HPET itself.
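
To make the ordering concrete, here is a minimal, self-contained sketch of the
logic I mean. The types and helper names (hpet_channel, reprogram_hpet,
hpet_broadcast_enter) are simplified stand-ins I made up for illustration, not
the real Xen HPET code: next_event is parked at STIME_MAX before the scan, so
a CPU joining the channel while the scan runs always sees a larger next_event
and reprograms the HPET for its own deadline.

/*
 * Minimal model of the ordering described above -- invented names, not the
 * actual Xen implementation.
 */
#include <stdint.h>
#include <stdio.h>

#define STIME_MAX  INT64_MAX
#define NR_CPUS    8

typedef int64_t s_time_t;

struct hpet_channel {
    s_time_t next_event;                  /* deadline currently programmed   */
    int      cpumask[NR_CPUS];            /* CPUs waiting on this channel    */
    s_time_t timer_deadline_end[NR_CPUS]; /* per-CPU wakeup deadlines        */
};

/* Stand-in for programming the HPET comparator. */
static void reprogram_hpet(struct hpet_channel *ch, s_time_t deadline)
{
    ch->next_event = deadline;
    printf("HPET reprogrammed to %lld\n", (long long)deadline);
}

/*
 * Broadcast handler: next_event is parked at STIME_MAX *before* the scan,
 * so any CPU added while the scan is running compares its own deadline
 * against STIME_MAX, finds it smaller, and reprograms the HPET itself.
 */
static void handle_hpet_broadcast(struct hpet_channel *ch, s_time_t now)
{
    s_time_t next = STIME_MAX;
    int cpu;

    ch->next_event = STIME_MAX;          /* published before scanning */

    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        if (!ch->cpumask[cpu])
            continue;
        if (ch->timer_deadline_end[cpu] <= now)
            ch->cpumask[cpu] = 0;        /* expired: wake this CPU (IPI elided) */
        else if (ch->timer_deadline_end[cpu] < next)
            next = ch->timer_deadline_end[cpu];
    }

    if (next != STIME_MAX)
        reprogram_hpet(ch, next);
}

/* Idle-entry path: skip reprogramming only if an earlier event is pending. */
static void hpet_broadcast_enter(struct hpet_channel *ch, int cpu, s_time_t deadline)
{
    ch->timer_deadline_end[cpu] = deadline;
    ch->cpumask[cpu] = 1;
    if (deadline < ch->next_event)
        reprogram_hpet(ch, deadline);
}

int main(void)
{
    struct hpet_channel ch = { .next_event = STIME_MAX };

    hpet_broadcast_enter(&ch, 0, 1000);  /* CPU0 sleeps until t=1000          */
    handle_hpet_broadcast(&ch, 1000);    /* broadcast fires, next_event=MAX   */
    hpet_broadcast_enter(&ch, 1, 2000);  /* late joiner still reprograms HPET */
    return 0;
}

The last call shows the point: even after the handler has reset next_event,
a CPU arriving with the smallest deadline reprograms the HPET on its own.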

> Really I think a better approach than something like this patch would
> be to better advertise the timer_slop=xxx Xen boot parameter for
> power-saving scenarios. I wonder what your numbers look like if you
> re-run your benchmark with (say) timer_slop=10000000 (i.e., 10ms
> slop) on the Xen command line? 

I think that is a separate issue. Enlarging timer_slop is one way to align and 
reduce break events; it does help save power, but it may also bring larger 
latency. What I am trying to address here is how to reduce the spin_lock 
overhead in the idle entry/exit path. That spin_lock overhead, along with other 
overheads, caused >25% CPU utilization on a system with 32 pCPUs/64 vCPUs while 
all the guests were idle.
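
For context, here is a rough, self-contained sketch of the shape of the
idle-entry path I have in mind (names such as hpet_channel and
per_cpu_deadline are invented for illustration; this is not the actual patch):
the per-CPU bookkeeping is done before taking ch->lock, so the lock covers
only the shared cpumask/next_event update and the decision to reprogram.

/*
 * Sketch of a shortened critical section on the idle-entry path --
 * assumed, simplified names, not the real code.
 */
#include <pthread.h>
#include <stdint.h>

typedef int64_t s_time_t;
#define STIME_MAX  INT64_MAX
#define NR_CPUS    64

struct hpet_channel {
    pthread_spinlock_t lock;
    s_time_t next_event;
    int cpumask[NR_CPUS];
};

static s_time_t per_cpu_deadline[NR_CPUS];   /* stands in for timer_deadline_end */

static void reprogram_hpet(struct hpet_channel *ch, s_time_t deadline)
{
    ch->next_event = deadline;               /* hardware write elided */
}

static void hpet_broadcast_enter(struct hpet_channel *ch, int cpu)
{
    /* Read our own deadline outside the lock -- it is per-CPU state. */
    s_time_t deadline = per_cpu_deadline[cpu];

    pthread_spin_lock(&ch->lock);
    ch->cpumask[cpu] = 1;
    /* Reprogram only if we are now the earliest waiter on the channel. */
    if (deadline < ch->next_event)
        reprogram_hpet(ch, deadline);
    pthread_spin_unlock(&ch->lock);
}

int main(void)
{
    struct hpet_channel ch = { .next_event = STIME_MAX };

    pthread_spin_init(&ch.lock, PTHREAD_PROCESS_PRIVATE);
    per_cpu_deadline[0] = 1000;
    hpet_broadcast_enter(&ch, 0);
    pthread_spin_destroy(&ch.lock);
    return 0;
}

The idea is simply that the less work sits between lock and unlock here, the
less the 64 vCPUs' idle transitions serialize on the channel lock.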

Jimmy
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

