WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-changelog] [xen-unstable] cpuidle: add comments for hpet cpumask_lock usage

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] cpuidle: add comments for hpet cpumask_lock usage
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 21 Jun 2010 10:45:27 -0700
Delivery-date: Mon, 21 Jun 2010 10:49:07 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1276866584 -3600
# Node ID 9f257ab92ae4c07dc39b63aabd9867f179d6da25
# Parent  96c2178bd4488bc456b04193d8a7c1a62553343e
cpuidle: add comments for hpet cpumask_lock usage

Signed-off-by: Wei Gang <gang.wei@xxxxxxxxx>
---
 xen/arch/x86/hpet.c |   11 +++++++++++
 1 files changed, 11 insertions(+)

diff -r 96c2178bd448 -r 9f257ab92ae4 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c       Fri Jun 18 14:09:29 2010 +0100
+++ b/xen/arch/x86/hpet.c       Fri Jun 18 14:09:44 2010 +0100
@@ -34,6 +34,17 @@ struct hpet_event_channel
     int           shift;
     s_time_t      next_event;
     cpumask_t     cpumask;
+    /*
+     * cpumask_lock prevents the hpet interrupt handler from accessing
+     * another cpu's timer_deadline_start/end after that cpu's mask bit
+     * has been cleared -- a cleared bit means the cpu has woken up, so
+     * accessing its timer_deadline_xxx is no longer safe.
+     * The lock does not protect cpumask itself, so set operations need
+     * not take it. Concurrent clearing is fine because cpu_clear() is
+     * atomic, so hpet_broadcast_exit() can take the read lock to clear
+     * its bit, while handle_hpet_broadcast() must take the write lock
+     * to read cpumask and access timer_deadline_xxx.
+     */
     rwlock_t      cpumask_lock;
     spinlock_t    lock;
     void          (*event_handler)(struct hpet_event_channel *);

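For readers following the locking protocol the new comment describes, the sketch below is a minimal stand-alone illustration -- it is not the actual xen/arch/x86/hpet.c code. It models the cpumask as a plain bitmask and uses a pthreads rwlock plus C11 atomics in place of Xen's rwlock_t, cpumask_t and cpu_clear(); NR_CPUS, timer_deadline[] and the wakeup printf are assumed placeholder names. What it demonstrates is the asymmetric use of the lock: each waking cpu clears its own bit atomically under the read lock, while the interrupt handler takes the write lock so that any bit it still sees set guarantees the owning cpu has not yet woken and its deadline data is safe to read.

/*
 * Minimal stand-alone sketch (not the Xen code) of the cpumask_lock
 * protocol described in the comment above.  pthreads + C11 atomics
 * stand in for Xen's rwlock_t, cpumask_t and cpu_clear().
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8                                 /* illustrative only */

static pthread_rwlock_t cpumask_lock = PTHREAD_RWLOCK_INITIALIZER;
static _Atomic uint32_t cpumask;                  /* one bit per sleeping cpu */
static uint64_t timer_deadline[NR_CPUS];          /* per-cpu wakeup deadline  */

/* Broadcast exit path: the cpu is waking up on its own. */
static void hpet_broadcast_exit(unsigned int cpu)
{
    /*
     * Read lock only: clearing the bit is atomic, so many cpus may run
     * this concurrently.  The lock merely orders the clear against the
     * handler's access to timer_deadline[].
     */
    pthread_rwlock_rdlock(&cpumask_lock);
    atomic_fetch_and(&cpumask, ~(1u << cpu));
    pthread_rwlock_unlock(&cpumask_lock);
}

/* HPET interrupt handler: wake every cpu whose deadline has passed. */
static void handle_hpet_broadcast(uint64_t now)
{
    /*
     * Write lock: while it is held no cpu can clear its bit, so any bit
     * still seen as set guarantees that cpu is asleep and its
     * timer_deadline[] entry is safe to read.
     */
    pthread_rwlock_wrlock(&cpumask_lock);
    uint32_t mask = atomic_load(&cpumask);
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
        if ((mask & (1u << cpu)) && timer_deadline[cpu] <= now)
            printf("would send wakeup IPI to cpu %u\n", cpu);
    }
    pthread_rwlock_unlock(&cpumask_lock);
}

int main(void)
{
    timer_deadline[3] = 100;
    atomic_fetch_or(&cpumask, 1u << 3);   /* cpu 3 enters broadcast sleep */
    handle_hpet_broadcast(150);           /* deadline passed: cpu 3 woken */
    hpet_broadcast_exit(3);               /* cpu 3 clears its own bit     */
    return 0;
}

The design choice mirrors the comment: the rwlock is not protecting the mask itself (the atomic clear does that); it only orders bit-clears against the handler's reads of the per-cpu deadlines, which is why the per-cpu exit path can scale with a shared read lock.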
_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
