[Xen-devel] RE: Is hpet's cpumask_lock really needed?

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: [Xen-devel] RE: Is hpet's cpumask_lock really needed?
From: "Wei, Gang" <gang.wei@xxxxxxxxx>
Date: Thu, 24 Mar 2011 23:14:57 +0800
Accept-language: zh-CN, en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Wei, Gang" <gang.wei@xxxxxxxxx>
Delivery-date: Thu, 24 Mar 2011 08:15:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D8B5B5202000078000380FF@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4D8B5B5202000078000380FF@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcvqKv7oXIQk09aQRkqa9Zp+T5/sZAACcFqQ
Thread-topic: Is hpet's cpumask_lock really needed?
Jan Beulich wrote on 2011-03-24:
> Jimmy,
> 
> I know I had asked about this before, prompting you to add an
> extensive comment. Nevertheless, looking over the (few) rwlock users,
> I'm now puzzled by the need for a lock here.
> 
> According to the comment in struct hpet_event_channel, this is to
> prevent accessing a CPU's timer_deadline after it got cleared from
> cpumask. I wonder, however, whether the whole thing couldn't be done
> without a lock altogether -
> hpet_broadcast_exit() could just clear the bit, and
> handle_hpet_broadcast() could read timer_deadline before looking at
> the mask a second time (the cpumask bit was already found set by the

You are right. Doing the second check on the mask after reading
timer_deadline guarantees that only a safely fetched timer_deadline
will be used. I didn't realize this point before. Great simplification.
It should bring a significant performance improvement on many-core
systems without ARAT.
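
To spell out the ordering this relies on, here is a minimal sketch
using C11 atomics (illustrative only - NCPUS, the names, and the
atomics calls are mine, not the Xen primitives the patch uses):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define NCPUS 4  /* hypothetical CPU count for the sketch */

static _Atomic int64_t timer_deadline[NCPUS];  /* per-CPU deadlines */
static atomic_ulong cpumask;                   /* broadcast mask */

/* CPU enters broadcast sleep: publish its deadline, then set its bit. */
static void broadcast_enter(unsigned cpu, int64_t deadline)
{
    atomic_store_explicit(&timer_deadline[cpu], deadline,
                          memory_order_relaxed);
    /* Release: the deadline store is visible before the bit is seen. */
    atomic_fetch_or_explicit(&cpumask, 1UL << cpu, memory_order_release);
}

/* CPU wakes up: just clear its bit - no lock needed. */
static void broadcast_exit(unsigned cpu)
{
    atomic_fetch_and_explicit(&cpumask, ~(1UL << cpu),
                              memory_order_release);
}

/* Interrupt handler: fetch the deadline FIRST, then re-check the bit. */
static bool fetch_deadline(unsigned cpu, int64_t *out)
{
    /* Acquire keeps the mask re-read below from being hoisted above
     * this load (the rmb() in the patch plays the same role). */
    int64_t d = atomic_load_explicit(&timer_deadline[cpu],
                                     memory_order_acquire);

    /* If the bit is still set here, the CPU had not yet woken when we
     * sampled its deadline, so 'd' was fetched safely; if the bit is
     * clear, the CPU owns its deadline again and 'd' must be dropped. */
    if ( !(atomic_load_explicit(&cpumask, memory_order_acquire)
           & (1UL << cpu)) )
        return false;

    *out = d;
    return true;
}

The rwlock becomes unnecessary precisely because the handler never uses
a deadline unless it observes the bit still set after the fetch.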

Ack, and applause for this idea.

Jimmy

> surrounding loop, and by the time "mask" and "next_event" actually
> get used they may have become stale with the current implementation
> as well).
> 
> Draft patch below.
> 
> Jan
> 
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -34,18 +34,6 @@ struct hpet_event_channel
>      int           shift;
>      s_time_t      next_event;
>      cpumask_t     cpumask;
> -    /*
> -     * cpumask_lock is used to prevent hpet intr handler from accessing other
> -     * cpu's timer_deadline after the other cpu's mask was cleared --
> -     * mask cleared means cpu waken up, then accessing timer_deadline from
> -     * other cpu is not safe.
> -     * It is not used for protecting cpumask, so set ops needn't take it.
> -     * Multiple cpus clear cpumask simultaneously is ok due to the atomic
> -     * feature of cpu_clear, so hpet_broadcast_exit() can take read lock for
> -     * clearing cpumask, and handle_hpet_broadcast() have to take write lock
> -     * for read cpumask & access timer_deadline.
> -     */
> -    rwlock_t      cpumask_lock;
>      spinlock_t    lock;
>      void          (*event_handler)(struct hpet_event_channel *);
> @@ -199,17 +187,18 @@ again:
>      /* find all expired events */
>      for_each_cpu_mask(cpu, ch->cpumask)
>      {
> -        write_lock_irq(&ch->cpumask_lock);
> +        s_time_t deadline;
> 
> -        if ( cpu_isset(cpu, ch->cpumask) )
> -        {
> -            if ( per_cpu(timer_deadline, cpu) <= now )
> -                cpu_set(cpu, mask);
> -            else if ( per_cpu(timer_deadline, cpu) < next_event )
> -                next_event = per_cpu(timer_deadline, cpu);
> -        }
> +        rmb();
> +        deadline = per_cpu(timer_deadline, cpu);
> +        rmb();
> +        if ( !cpu_isset(cpu, ch->cpumask) )
> +            continue;
> 
> -        write_unlock_irq(&ch->cpumask_lock);
> +        if ( deadline <= now )
> +            cpu_set(cpu, mask);
> +        else if ( deadline < next_event )
> +            next_event = deadline;
>      }
>      
>      /* wakeup the cpus which have an expired event. */
> @@ -602,7 +591,6 @@ void __init hpet_broadcast_init(void)
>          hpet_events[i].shift = 32;
>          hpet_events[i].next_event = STIME_MAX;
>          spin_lock_init(&hpet_events[i].lock);
> -        rwlock_init(&hpet_events[i].cpumask_lock);
>          wmb();
>          hpet_events[i].event_handler = handle_hpet_broadcast;
>      }
> @@ -729,9 +717,7 @@ void hpet_broadcast_exit(void)
>      if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
>          raise_softirq(TIMER_SOFTIRQ);
> 
> -    read_lock_irq(&ch->cpumask_lock);
>      cpu_clear(cpu, ch->cpumask);
> -    read_unlock_irq(&ch->cpumask_lock);
> 
>      if ( !(ch->flags & HPET_EVT_LEGACY) )
>      {






