On 22/04/2011 11:00, "John Weekes" <lists.xen@xxxxxxxxxxxxxxxxxx> wrote:
> c/s 22957:c5c4688d5654 (unstable) changed the locking scheme for
> vcpu_migrate by adjusting the order so that the lock with the lowest
> lock address is obtained first. However, the unlock path does not
> release the locks in the reverse order: it always releases new_lock
> first whenever it differs from old_lock, even though new_lock is not
> the last lock acquired when old_lock > new_lock.
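>
> For reference, the acquisition side of vcpu_migrate() after that
> changeset looks roughly like this (paraphrased, not a verbatim quote
> of schedule.c):
>
>     if ( old_lock == new_lock )
>     {
>         spin_lock_irqsave(old_lock, flags);
>     }
>     else if ( old_lock < new_lock )
>     {
>         spin_lock_irqsave(old_lock, flags);
>         spin_lock(new_lock);
>     }
>     else
>     {
>         spin_lock_irqsave(new_lock, flags);
>         spin_lock(old_lock);
>     }
>
> while the unlock path unconditionally did spin_unlock(new_lock)
> followed by spin_unlock_irqrestore(old_lock, flags). In the
> old_lock > new_lock case, flags was saved when taking new_lock but
> restored when dropping old_lock.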
>
> As a result of this bug, under credit2, domUs would sometimes take a
> long time to start, and there were occasional panics.
>
> This fix should be applied to both xen-unstable and xen-4.1-testing. I
> have reproduced the problem on both, and have also tested to confirm
> that the patch improves both.
It's nice that empirical evidence supports the patch; however, I'm being
dense and don't understand why the order of lock release matters. This
is important because the idiom of ordering lock acquisition by lock
address, while not caring about release order, is seen elsewhere (in
common/timer.c:migrate_timer(), for example). Perhaps release order
matters there too, and I never realised. Or perhaps you've merely
perturbed fragile code that's broken in some other way.
So I would need an explanation of how this patch improves guest startup
so dramatically before I could apply it: something beyond "it is bad to
release locks in a different order to that in which they were acquired".
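
To illustrate my confusion, the usual reasoning is that only acquisition
order can deadlock, because unlocking never blocks. A minimal sketch of
the idiom (hypothetical locks a and b, not the schedule.c code):

    /* Acquire in a global order -- here, by ascending lock address. */
    if ( a < b ) { spin_lock(a); spin_lock(b); }
    else         { spin_lock(b); spin_lock(a); }

    /* ... critical section ... */

    /* Release in either order: spin_unlock() cannot block, so no
     * lock-ordering cycle can form here. */
    spin_unlock(a);
    spin_unlock(b);

If that reasoning is wrong for schedule locks, I'd like to know why.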
Also, the last hunk of your patch is broken -- in the final else clause
you call spin_unlock_irqrestore() on the wrong lock. This is very
definitely a bug, as irqs should never be enabled while any
schedule_lock is held.
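
Concretely, the final else clause of your third hunk does

    spin_unlock_irqrestore(old_lock, flags); /* irqs may come back on here */
    spin_unlock(new_lock);                   /* ...while new_lock is still held */

whereas the corresponding clause in your first two hunks correctly drops
both locks before restoring flags.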
Thanks,
Keir
> Signed-off-by: John Weekes <lists.xen@xxxxxxxxxxxxxxxxxx>
>
> diff -r eb4505f8dd97 xen/common/schedule.c
> --- a/xen/common/schedule.c Wed Apr 20 12:02:51 2011 +0100
> +++ b/xen/common/schedule.c Fri Apr 22 03:46:00 2011 -0500
> @@ -455,9 +455,20 @@
>              pick_called = 0;
>          }
>  
> -        if ( old_lock != new_lock )
> +        if ( old_lock == new_lock )
> +        {
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else if ( old_lock < new_lock )
> +        {
>              spin_unlock(new_lock);
> -        spin_unlock_irqrestore(old_lock, flags);
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else
> +        {
> +            spin_unlock(old_lock);
> +            spin_unlock_irqrestore(new_lock, flags);
> +        }
>      }
>  
>      /*
> @@ -468,9 +479,20 @@
>      if ( v->is_running ||
>           !test_and_clear_bit(_VPF_migrating, &v->pause_flags) )
>      {
> -        if ( old_lock != new_lock )
> +        if ( old_lock == new_lock )
> +        {
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else if ( old_lock < new_lock )
> +        {
>              spin_unlock(new_lock);
> -        spin_unlock_irqrestore(old_lock, flags);
> +            spin_unlock_irqrestore(old_lock, flags);
> +        }
> +        else
> +        {
> +            spin_unlock(old_lock);
> +            spin_unlock_irqrestore(new_lock, flags);
> +        }
>          return;
>      }
>  
> @@ -491,9 +513,20 @@
>       */
>      v->processor = new_cpu;
>  
> -    if ( old_lock != new_lock )
> +    if ( old_lock == new_lock )
> +    {
> +        spin_unlock_irqrestore(old_lock, flags);
> +    }
> +    else if ( old_lock < new_lock )
> +    {
>          spin_unlock(new_lock);
> -    spin_unlock_irqrestore(old_lock, flags);
> +        spin_unlock_irqrestore(old_lock, flags);
> +    }
> +    else
> +    {
> +        spin_unlock_irqrestore(old_lock, flags);
> +        spin_unlock(new_lock);
> +    }
>  
>      if ( old_cpu != new_cpu )
>          evtchn_move_pirqs(v);
>