[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time

To: "Wei, Gang" <gang.wei@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 22 Apr 2010 08:22:25 +0100
In-reply-to: <F26D193E20BBDC42A43B611D1BDEDE710270B11199@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
On 22/04/2010 04:59, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>>> Okay, one concern I still have is over possible races around
>>> cpuidle_wakeup_mwait(). It makes use of a cpumask
>>> cpuidle_mwait_flags, avoiding an IPI to cpus in the mask. However,
>>> there is nothing to stop the CPU having cleared itself from that
>>> cpumask before cpuidle does the write to softirq_pending. In that
>>> case, even assuming the CPU is now non-idle and so wakeup is
>>> spurious, a subsequent attempt to raise_softirq(TIMER_SOFTIRQ) will
>>> incorrectly not IPI because the flag is already set in
>>> softirq_pending?
> 
> If a CPU has cleared itself from cpuidle_mwait_flags, then it didn't need an
> IPI to be woken. And one spurious write to softirq_pending doesn't have any
> side effect. So this case should be acceptable.

That's not totally convincing. The write to softirq_pending has one extra
side effect: it is possible that the next time TIMER_SOFTIRQ really needs to
be raised on that CPU, it will not receive notification via IPI, because the
flag is already set in its softirq_pending mask.
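
Concretely, the two paths look something like this -- a minimal C sketch,
with names and signatures assumed for illustration rather than taken from
the actual Xen source, and a hypothetical send_ipi() helper:

#include <stdatomic.h>

#define NR_CPUS       8
#define TIMER_SOFTIRQ 0

extern void send_ipi(unsigned int cpu);  /* stand-in for the real IPI call */

static atomic_ulong softirq_pending[NR_CPUS]; /* per-CPU pending bitmasks */
static atomic_ulong cpuidle_mwait_flags;      /* CPUs idle in MWAIT, each
                                                 monitoring its pending word */

/* Wakeup path: poke the monitored word instead of sending an IPI. */
void cpuidle_wakeup_mwait(unsigned int cpu)
{
    /* Window: the target CPU can clear itself from cpuidle_mwait_flags
     * between this test and the store below. */
    if (atomic_load(&cpuidle_mwait_flags) & (1UL << cpu))
        atomic_fetch_or(&softirq_pending[cpu], 1UL << TIMER_SOFTIRQ);
}

/* Notification path: send an IPI only if the bit was not already set. */
void raise_softirq_on_cpu(unsigned int cpu, unsigned int nr)
{
    if (!(atomic_fetch_or(&softirq_pending[cpu], 1UL << nr) & (1UL << nr)))
        send_ipi(cpu);
    /* If the spurious store above already set the bit, no IPI goes out
     * here, and the target may not notice TIMER_SOFTIRQ promptly. */
}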

Hm, let me see if I can come up with a patch for this and post it for you.
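
For illustration only -- not necessarily what the patch will do -- one way
to close the window, reusing the declarations from the sketch above, would
be to set the pending bit first and then re-check the mask, falling back to
a real IPI. This assumes the idle loop clears its bit in cpuidle_mwait_flags
before its final check of softirq_pending:

void cpuidle_wakeup_mwait_checked(unsigned int cpu)
{
    if (!(atomic_load(&cpuidle_mwait_flags) & (1UL << cpu)))
        return;
    atomic_fetch_or(&softirq_pending[cpu], 1UL << TIMER_SOFTIRQ);
    /* If the CPU left MWAIT before our store, it may never look at
     * softirq_pending on its own; catch that case with an explicit IPI. */
    if (!(atomic_load(&cpuidle_mwait_flags) & (1UL << cpu)))
        send_ipi(cpu);
}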

 -- Keir


