[Xen-changelog] [xen-unstable] x86/hpet: eliminate cpumask_lock

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] x86/hpet: eliminate cpumask_lock
From: Xen patchbot-unstable <patchbot@xxxxxxx>
Date: Sat, 26 Mar 2011 07:30:11 +0000
Delivery-date: Sat, 26 Mar 2011 00:32:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Jan Beulich <jbeulich@xxxxxxxxxx>
# Date 1301043797 0
# Node ID a65612bcbb921e98a8843157bf365e4ab16e8144
# Parent  941119d58655f2b2df86d9ecc4cb502bbc5e783c
x86/hpet: eliminate cpumask_lock

According to the (now being removed) comment in struct
hpet_event_channel, the lock was there to prevent accessing a CPU's
timer_deadline after that CPU got cleared from cpumask. The same
protection can be achieved without a lock altogether:
hpet_broadcast_exit() can simply clear the bit, and
handle_hpet_broadcast() can read timer_deadline before looking at
the mask a second time (the surrounding loop has already found the
cpumask bit set).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
Acked-by: Gang Wei <gang.wei@xxxxxxxxx>
---
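
A minimal, self-contained model of the lockless handshake described
above, using C11 atomics and hypothetical names (waiting,
cpu_enters_broadcast(), broadcast_handler()) in place of Xen's
cpumask, per_cpu() and rmb() primitives. The authoritative change is
the diff below; this sketch only illustrates the ordering argument,
and the entry path (not part of this diff) is an assumption.

#include <stdatomic.h>
#include <stdint.h>

/* Models one CPU's slot in the broadcast channel: 'waiting' stands in
 * for the CPU's bit in ch->cpumask, 'deadline' for that CPU's
 * per_cpu(timer_deadline). Sequentially consistent atomics stand in
 * for the barriers used by the patch. */
static _Atomic int     waiting;
static _Atomic int64_t deadline;

/* Entry path (assumed, not shown in this diff): the deadline must be
 * published before the mask bit is set. */
static void cpu_enters_broadcast(int64_t d)
{
    atomic_store(&deadline, d);   /* publish the deadline first */
    atomic_store(&waiting, 1);    /* then set the mask bit */
}

/* Exit path, as in the new hpet_broadcast_exit(): just clear the bit. */
static void cpu_exits_broadcast(void)
{
    atomic_store(&waiting, 0);
}

/* Handler, as in the new handle_hpet_broadcast(): sample the deadline,
 * then re-check the bit. A still-set bit proves the CPU had not yet
 * left broadcast mode when the deadline was read, so the sampled value
 * is valid; a clear bit means the CPU woke up on its own and is
 * skipped. Returns nonzero if this CPU's event has expired. */
static int broadcast_handler(int64_t now, int64_t *next_event)
{
    int64_t d = atomic_load(&deadline);  /* read the deadline first... */

    if ( !atomic_load(&waiting) )        /* ...then look at the mask again */
        return 0;                        /* CPU already woke up: skip it */

    if ( d <= now )
        return 1;                        /* expired: wake this CPU */
    if ( d < *next_event )
        *next_event = d;                 /* track earliest pending deadline */
    return 0;
}

Note the mirror-image ordering: the writer publishes the deadline
before setting the bit, while the reader samples the deadline before
re-checking the bit; the second rmb() in the new handler enforces
exactly this reader-side ordering.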


diff -r 941119d58655 -r a65612bcbb92 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c       Fri Mar 25 09:01:37 2011 +0000
+++ b/xen/arch/x86/hpet.c       Fri Mar 25 09:03:17 2011 +0000
@@ -34,18 +34,6 @@
     int           shift;
     s_time_t      next_event;
     cpumask_t     cpumask;
-    /*
-     * cpumask_lock is used to prevent hpet intr handler from accessing other
-     * cpu's timer_deadline after the other cpu's mask was cleared --
-     * mask cleared means cpu waken up, then accessing timer_deadline from
-     * other cpu is not safe.
-     * It is not used for protecting cpumask, so set ops needn't take it.
-     * Multiple cpus clear cpumask simultaneously is ok due to the atomic
-     * feature of cpu_clear, so hpet_broadcast_exit() can take read lock for 
-     * clearing cpumask, and handle_hpet_broadcast() have to take write lock 
-     * for read cpumask & access timer_deadline.
-     */
-    rwlock_t      cpumask_lock;
     spinlock_t    lock;
     void          (*event_handler)(struct hpet_event_channel *);
 
@@ -199,17 +187,18 @@
     /* find all expired events */
     for_each_cpu_mask(cpu, ch->cpumask)
     {
-        write_lock_irq(&ch->cpumask_lock);
+        s_time_t deadline;
 
-        if ( cpu_isset(cpu, ch->cpumask) )
-        {
-            if ( per_cpu(timer_deadline, cpu) <= now )
-                cpu_set(cpu, mask);
-            else if ( per_cpu(timer_deadline, cpu) < next_event )
-                next_event = per_cpu(timer_deadline, cpu);
-        }
+        rmb();
+        deadline = per_cpu(timer_deadline, cpu);
+        rmb();
+        if ( !cpu_isset(cpu, ch->cpumask) )
+            continue;
 
-        write_unlock_irq(&ch->cpumask_lock);
+        if ( deadline <= now )
+            cpu_set(cpu, mask);
+        else if ( deadline < next_event )
+            next_event = deadline;
     }
 
     /* wakeup the cpus which have an expired event. */
@@ -602,7 +591,6 @@
         hpet_events[i].shift = 32;
         hpet_events[i].next_event = STIME_MAX;
         spin_lock_init(&hpet_events[i].lock);
-        rwlock_init(&hpet_events[i].cpumask_lock);
         wmb();
         hpet_events[i].event_handler = handle_hpet_broadcast;
     }
@@ -729,9 +717,7 @@
     if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
         raise_softirq(TIMER_SOFTIRQ);
 
-    read_lock_irq(&ch->cpumask_lock);
     cpu_clear(cpu, ch->cpumask);
-    read_unlock_irq(&ch->cpumask_lock);
 
     if ( !(ch->flags & HPET_EVT_LEGACY) )
     {
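
A throwaway harness for the sketch above (again hypothetical, meant
to be compiled together with it): one thread plays the sleeping CPU,
the main loop plays the HPET interrupt handler. Unlike the real
handler, the model does not remove a woken CPU from the mask, so a
deadline may be reported as expired more than once before the CPU
thread clears its bit.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *sleeping_cpu(void *arg)
{
    (void)arg;
    cpu_enters_broadcast(100);  /* next timer event at "time" 100 */
    usleep(10000);              /* stay in broadcast mode for a while */
    cpu_exits_broadcast();      /* wake up on our own, clearing the bit */
    return NULL;
}

int main(void)
{
    pthread_t t;
    int64_t now;

    pthread_create(&t, NULL, sleeping_cpu, NULL);
    for ( now = 0; now < 200; now++ )
    {
        int64_t next = INT64_MAX;

        if ( broadcast_handler(now, &next) )
            printf("broadcast wakeup at t=%lld\n", (long long)now);
        usleep(100);
    }
    pthread_join(t, NULL);
    return 0;
}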

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
