xen-devel

Re: [Xen-devel] [PATCH] Fix locking bug in vcpu_migrate

To: John Weekes <lists.xen@xxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Fix locking bug in vcpu_migrate
From: Keir Fraser <keir.xen@xxxxxxxxx>
Date: Fri, 22 Apr 2011 11:30:32 +0100
Cc: George Dunlap <george.dunlap@xxxxxxxxxx>
Delivery-date: Fri, 22 Apr 2011 03:31:41 -0700
In-reply-to: <4DB151BA.9010006@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcwA2E6w3NuoNYetPEOOCMzI2iNnUA==
Thread-topic: [Xen-devel] [PATCH] Fix locking bug in vcpu_migrate
User-agent: Microsoft-Entourage/12.29.0.110113
On 22/04/2011 11:00, "John Weekes" <lists.xen@xxxxxxxxxxxxxxxxxx> wrote:

> c/s 22957:c5c4688d5654 (unstable) changed the locking scheme for
> vcpu_migrate by adjusting the acquisition order so that the lock with the
> lowest address is obtained first. However, the code does not release the
> locks in the correct reverse order; it releases new_lock first if it
> differs from old_lock, but new_lock is not the last lock obtained when
> old_lock > new_lock.
> 
> As a result of this bug, under credit2, domUs would sometimes take a
> long time to start, and there was an occasional panic.
> 
> This fix should be applied to both xen-unstable and xen-4.1-testing. I
> have tested and seen the problem with both, and also tested to confirm
> an improvement for both.

Nice that empirical evidence supports the patch; however, I'm being dense
and don't understand why the order of lock release matters. The question
matters because this idiom of ordering lock acquisition by lock address,
while not caring about release order, is seen elsewhere (in
common/timer.c:migrate_timer(), for example). Perhaps the release order
matters there too and I never realised it. Or perhaps you've merely
perturbed fragile code that's broken in some other way.

So I would need an explanation of how this improves guest startup so
dramatically before I could apply it: something beyond "it is bad to
release locks in a different order from the one in which they were acquired".

Also the last hunk of your patch is broken -- in the final else clause you
call spin_unlock_irqrestore on the wrong lock. This is very definitely a
bug, as irqs should never be enabled while any schedule_lock is held.
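
Presumably that clause was meant to mirror the first two hunks, releasing the
second-acquired lock first and restoring irqs only on the lock that was taken
with irqsave, i.e. something along the lines of:

    else
    {
        /* old_lock was acquired second when old_lock > new_lock. */
        spin_unlock(old_lock);
        /* new_lock was acquired first, with irqs saved; restore last. */
        spin_unlock_irqrestore(new_lock, flags);
    }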

 Thanks,
 Keir

> Signed-off-by: John Weekes <lists.xen@xxxxxxxxxxxxxxxxxx>
> 
> diff -r eb4505f8dd97 xen/common/schedule.c
> --- a/xen/common/schedule.c     Wed Apr 20 12:02:51 2011 +0100
> +++ b/xen/common/schedule.c     Fri Apr 22 03:46:00 2011 -0500
> @@ -455,9 +455,20 @@
>               pick_called = 0;
>           }
> 
> -        if ( old_lock != new_lock )
> +        if ( old_lock == new_lock )
> +        {
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else if ( old_lock < new_lock )
> +        {
>               spin_unlock(new_lock);
> -        spin_unlock_irqrestore(old_lock, flags);
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else
> +        {
> +            spin_unlock(old_lock);
> +            spin_unlock_irqrestore(new_lock, flags);
> +        }
>       }
> 
>       /*
> @@ -468,9 +479,20 @@
>       if ( v->is_running ||
>            !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
>       {
> -        if ( old_lock != new_lock )
> +        if ( old_lock == new_lock )
> +        {
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else if ( old_lock < new_lock )
> +        {
>               spin_unlock(new_lock);
> -        spin_unlock_irqrestore(old_lock, flags);
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else
> +        {
> +            spin_unlock(old_lock);
> +            spin_unlock_irqrestore(new_lock, flags);
> +        }
>           return;
>       }
> 
> @@ -491,9 +513,20 @@
>        */
>       v->processor = new_cpu;
> 
> -    if ( old_lock != new_lock )
> +    if ( old_lock == new_lock )
> +    {
> +        spin_unlock_irqrestore(old_lock, flags);
> +    }
> +    else if ( old_lock < new_lock )
> +    {
>           spin_unlock(new_lock);
> -    spin_unlock_irqrestore(old_lock, flags);
> +        spin_unlock_irqrestore(old_lock, flags);
> +    }
> +    else
> +    {
> +        spin_unlock_irqrestore(old_lock, flags);
> +        spin_unlock(new_lock);
> +    }
> 
>       if ( old_cpu != new_cpu )
>           evtchn_move_pirqs(v);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel