WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] Re: [PATCH RFC 4/4] xen: implement Xen-specific spinlocks

To: Johannes Weiner <hannes@xxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH RFC 4/4] xen: implement Xen-specific spinlocks
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 08 Jul 2008 00:15:21 -0700
Cc: Jens Axboe <axboe@xxxxxxxxx>, Nick Piggin <nickpiggin@xxxxxxxxxxxx>, Xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>, Christoph Lameter <clameter@xxxxxxxxxxxxxxxxxxxx>, Petr Tesarik <ptesarik@xxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, Virtualization <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>, Thomas Friebel <thomas.friebel@xxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>
Delivery-date: Tue, 08 Jul 2008 00:15:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <87tzf0q3te.fsf@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080707190749.299430659@xxxxxxxx> <20080707190838.710151521@xxxxxxxx> <87tzf0q3te.fsf@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080501)
Johannes Weiner wrote:
>> +static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
>> +static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
>
> The plural is a bit misleading, as this is a single pointer per CPU.

Yeah. And it's wrong because it's specifically *not* spinning, but blocking.

>> +static noinline void xen_spin_unlock_slow(struct xen_spinlock *xl)
>> +{
>> +       int cpu;
>> +
>> +       for_each_online_cpu(cpu) {
>
> Would it be feasible to have a bitmap for the spinning CPUs in order to
> do a for_each_spinning_cpu() here instead?  Or is setting a bit in
> spinning_lock() and unsetting it in unspinning_lock() more overhead than
> going over all CPUs here?

Not worthwhile, I think. This is a very rare path: it will only happen if 1) there's lock contention that 2) wasn't resolved within the timeout. In practice, this gets called a few thousand times per CPU over a kernbench run, which is nothing.

My very original version of this code kept a bitmask of interested CPUs within the lock itself, but there's only space for 24 CPUs if we still use a byte for the lock. It all turned out fairly awkward, and this version is a marked improvement.

   J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
