This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: Xen spinlock questions

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: [Xen-devel] Re: Xen spinlock questions
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 05 Aug 2008 11:09:46 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Tue, 05 Aug 2008 11:10:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <48981005.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4896F39A.76E4.0078.0@xxxxxxxxxx> <48975A30.2080408@xxxxxxxx> <48981005.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20080501)
Jan Beulich wrote:
> 2) While on native not re-enabling interrupts in __raw_spin_lock_flags()
> may be tolerable (but perhaps questionable), not doing so at least on
> the slow path here seems suspicious.
I wasn't sure about that. Is it OK to enable interrupts in the middle of a spinlock? Can it be done unconditionally?

> That used to be done in the pre-ticket lock implementation, but of course
> conditional upon the original interrupt flag.

Right, I see: the spin_lock_flags path, which the current lock implementation just ignores. I'll add a new lock op for it.
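A hedged sketch of what such a lock op might look like. The shape mirrors the paravirt-ops style, but all names here (struct raw_spinlock, IRQS_ENABLED, xen_spin_lock_flags, the pv_lock_ops table) are illustrative stand-ins, not the actual patch:

```c
#include <assert.h>

/* Illustrative model, not the real kernel interface: the pv lock-ops
 * table gains a spin_lock_flags entry so the slow path can decide,
 * from the caller's saved flags, whether to re-enable interrupts
 * while spinning.  All names are stand-ins. */

struct raw_spinlock { int locked; };

#define IRQS_ENABLED 0x200UL   /* x86 EFLAGS.IF bit, for illustration */

static int irqs_reenabled;     /* records what the slow path decided */

static void xen_spin_lock(struct raw_spinlock *l)
{
    l->locked = 1;             /* fast path; no flags to consult */
}

static void xen_spin_lock_flags(struct raw_spinlock *l, unsigned long flags)
{
    if (flags & IRQS_ENABLED)
        irqs_reenabled = 1;    /* would local_irq_enable() while spinning */
    l->locked = 1;
}

struct pv_lock_ops {
    void (*spin_lock)(struct raw_spinlock *lock);
    void (*spin_lock_flags)(struct raw_spinlock *lock, unsigned long flags);
};

static struct pv_lock_ops pv_lock_ops = {
    .spin_lock       = xen_spin_lock,
    .spin_lock_flags = xen_spin_lock_flags,
};
```

The key point is that interrupts are only re-enabled conditionally, based on the caller's saved flags, which is what the pre-ticket implementation did.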

> Later yesterday I noticed another issue: The code setting lock_spinners
> isn't interruption safe - you'll need to return the old value from
> spinning_lock() and restore it in unspinning_lock().

Good catch.
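A minimal standalone model of the fix Jan describes: spinning_lock() returns the previous per-CPU lock_spinners value so an interrupt handler that contends on a second lock can restore it on the way out, instead of blindly clearing it. The names mirror the pv-spinlock patch under discussion, but this is a sketch with the per-CPU variable modeled as a plain global:

```c
#include <assert.h>
#include <stddef.h>

struct spinlock { int dummy; };        /* stand-in for the real lock */

static struct spinlock *lock_spinners; /* per-CPU in the real code */

/* Record which lock this CPU is spinning on; return the previous
 * value so a nested (interrupt-time) spin can be unwound correctly. */
static struct spinlock *spinning_lock(struct spinlock *xl)
{
    struct spinlock *prev = lock_spinners;
    lock_spinners = xl;
    return prev;        /* caller must hand this to unspinning_lock() */
}

/* Restore the outer lock's record rather than clearing to NULL,
 * which is what makes the sequence interruption-safe. */
static void unspinning_lock(struct spinlock *prev)
{
    lock_spinners = prev;
}
```

With this shape, an interrupt that takes lock B while the interrupted context was spinning on lock A leaves lock_spinners pointing back at A after the handler returns.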

> Also I'm considering doing it ticket-based nevertheless, as "mix(ing) up
> next cpu selection" won't really help fairness in xen_spin_unlock_slow().

Why's that? An alternative might be to just wake up all the CPUs waiting for the lock and let them fight it out. There should rarely be a significant number of waiters anyway, since contention is (or should be) rare.
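A sketch of that "wake them all and let them fight" alternative on the unlock slow path: kick every CPU recorded as spinning on this lock, rather than trying to pick the next one fairly. NR_CPUS, the lock_spinners array, and kick_cpu() are stand-ins for illustration (kick_cpu() would be a VCPU kick via an event channel in the real code):

```c
#include <stddef.h>

#define NR_CPUS 4

struct spinlock { int dummy; };

/* Which lock each CPU is currently spinning on (per-CPU in reality). */
static struct spinlock *lock_spinners[NR_CPUS];
static int kicked[NR_CPUS];            /* records who got woken */

static void kick_cpu(int cpu)          /* would poke the VCPU via Xen */
{
    kicked[cpu] = 1;
}

/* Unlock slow path: wake every waiter on this lock; the winners and
 * losers sort themselves out when they re-contend for the lock. */
static void xen_spin_unlock_slow(struct spinlock *xl)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (lock_spinners[cpu] == xl)
            kick_cpu(cpu);
}
```

The cost is a thundering herd when many CPUs wait, but per the argument above, deep waiter queues should be rare.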

The main reason for ticket locks is to break the egregious unfairness that (some) bus protocols implement. That level of fairness shouldn't be necessary here because once the cpus fall to blocking in the hypervisor, it's up to Xen to tie-break.
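For reference, the mechanism being weighed is roughly this (a minimal C11 ticket lock, not the kernel's implementation): FIFO ordering comes from the take-a-number counter, which is what defeats bus-level unfairness on native hardware, and is also exactly what makes wrong-order VCPU kicking expensive under Xen, since only the holder of the next ticket can make progress:

```c
#include <stdatomic.h>

typedef struct {
    atomic_uint next;    /* next ticket to hand out */
    atomic_uint owner;   /* ticket currently allowed to hold the lock */
} ticketlock_t;

static void ticket_lock(ticketlock_t *l)
{
    unsigned ticket = atomic_fetch_add(&l->next, 1);
    while (atomic_load(&l->owner) != ticket)
        ;                /* spin (a pv version would block in Xen here) */
}

static void ticket_unlock(ticketlock_t *l)
{
    atomic_fetch_add(&l->owner, 1);  /* hand off to the next ticket */
}
```

Waking the wrong VCPU leaves it spinning on a ticket that is not yet its turn, so nothing progresses until the right one runs, which is why Jan asks for a targeted kick below.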

> Apart from definitely needing the wakeup to happen for just the target
> CPU (Keir, I'd want the necessary support in Xen done for that to work
> regardless of performance measurements with the traditional locking,
> as it's known that with ticket locks performance suffers from
> wrong-order CPU kicking), one thing we'd need here even more than
> old-style spin locks did is a directed yield (sub-)hypercall.
> Has that ever been considered as a schedop?

Considered. But the point of this exercise was to come up with something that would work with an unmodified hypervisor.


Xen-devel mailing list