I know I had asked about this before, prompting you to add an
extensive comment. Nevertheless, looking over the (few) rwlock
users, I'm now puzzled by the need for a lock here.
According to the comment in struct hpet_event_channel, the lock is there to
prevent accessing a CPU's timer_deadline after that CPU has been
cleared from the channel's cpumask. I wonder, however, whether the whole
thing couldn't be done without a lock altogether - hpet_broadcast_exit()
could just clear the bit, and handle_hpet_broadcast() could read
timer_deadline before looking at the mask a second time (the
cpumask bit was already found set by the surrounding loop, and
by the time "mask" and "next_event" actually get used, they may
have become stale even with the current implementation).
Draft patch below.
@@ -34,18 +34,6 @@ struct hpet_event_channel
- * cpumask_lock is used to prevent hpet intr handler from accessing other
- * cpu's timer_deadline after the other cpu's mask was cleared --
- * mask cleared means cpu waken up, then accessing timer_deadline from
- * other cpu is not safe.
- * It is not used for protecting cpumask, so set ops needn't take it.
- * Multiple cpus clear cpumask simultaneously is ok due to the atomic
- * feature of cpu_clear, so hpet_broadcast_exit() can take read lock for
- * clearing cpumask, and handle_hpet_broadcast() have to take write lock
- * for read cpumask & access timer_deadline.
- rwlock_t cpumask_lock;
void (*event_handler)(struct hpet_event_channel *);
@@ -199,17 +187,18 @@ again:
/* find all expired events */
+ s_time_t deadline;
- if ( cpu_isset(cpu, ch->cpumask) )
- if ( per_cpu(timer_deadline, cpu) <= now )
- cpu_set(cpu, mask);
- else if ( per_cpu(timer_deadline, cpu) < next_event )
- next_event = per_cpu(timer_deadline, cpu);
+ deadline = per_cpu(timer_deadline, cpu);
+ if ( !cpu_isset(cpu, ch->cpumask) )
+     continue;
+ if ( deadline <= now )
+     cpu_set(cpu, mask);
+ else if ( deadline < next_event )
+     next_event = deadline;
/* wakeup the cpus which have an expired event. */
@@ -602,7 +591,6 @@ void __init hpet_broadcast_init(void)
hpet_events[i].shift = 32;
hpet_events[i].next_event = STIME_MAX;
hpet_events[i].event_handler = handle_hpet_broadcast;
@@ -729,9 +717,7 @@ void hpet_broadcast_exit(void)
if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
if ( !(ch->flags & HPET_EVT_LEGACY) )
Xen-devel mailing list