[Xen-devel] Re: Xen spinlock questions

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: [Xen-devel] Re: Xen spinlock questions
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Mon, 04 Aug 2008 12:36:16 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 04 Aug 2008 12:36:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4896F39A.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4896F39A.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080501)
Jan Beulich wrote:
> Jeremy,
>
> Since we're considering utilizing your pv-ops spinlock implementation for
> our kernels, I'd appreciate your opinion on the following thoughts:
>
> 1) While the goal of the per-CPU kicker IRQ appears to be to avoid having
> all CPUs waiting for a particular lock get kicked simultaneously, I don't
> think this has the desired effect: Xen doesn't track which event channel
> you poll for (through SCHEDOP_poll), but rather kicks all CPUs polling for
> any event channel.

There's no problem with kicking all cpus waiting for a given lock, but it was intended to avoid kicking cpus waiting for some other lock. I hadn't looked at the poll implementation that closely. I guess using the per-cpu interrupt gives Xen some room to live up to the expectations we have for it ;)
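
For concreteness, the pattern being discussed looks roughly like this (a sketch only: lock_kicker_irq, lock_spinning, xen_spin_trylock and poll_kicker_irq are illustrative names, not necessarily what the patch uses):

/* Per-CPU kicker IRQ (bound to a per-CPU event channel) and a record
 * of which lock, if any, this CPU is currently waiting on. */
static DEFINE_PER_CPU(int, lock_kicker_irq);
static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinning);

/* Block in SCHEDOP_poll on this CPU's kicker event channel.  Xen may
 * wake us for activity on any polled port, not just the one listed,
 * so the caller has to recheck the lock after returning. */
static void poll_kicker_irq(int irq)
{
	evtchn_port_t port = evtchn_from_irq(irq);
	struct sched_poll poll = {
		.nr_ports = 1,
		.timeout  = 0,		/* wait until an event arrives */
	};

	set_xen_guest_handle(poll.ports, &port);
	HYPERVISOR_sched_op(SCHEDOP_poll, &poll);
}

static void spin_lock_slow(struct xen_spinlock *xl)
{
	int irq = __get_cpu_var(lock_kicker_irq);

	/* Advertise which lock we want, so the unlocker only sends an
	 * event to CPUs that are actually waiting for this lock. */
	__get_cpu_var(lock_spinning) = xl;
	while (!xen_spin_trylock(xl))
		poll_kicker_irq(irq);	/* wakeups may be spurious */
	__get_cpu_var(lock_spinning) = NULL;
}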

> 2) While not re-enabling interrupts in __raw_spin_lock_flags() may be
> tolerable (if perhaps questionable) on native, not doing so at least on
> the slow path here seems suspicious.

I wasn't sure about that. Is it OK to enable interrupts in the middle of a spinlock? Can it be done unconditionally?
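
Assuming it is safe, I'd picture the slow path only enabling interrupts while it is actually blocked in the poll, and turning them back off before retrying, along these lines (again just a sketch, reusing the placeholder names from above):

static void spin_lock_slow_flags(struct xen_spinlock *xl, unsigned long flags)
{
	int irq = __get_cpu_var(lock_kicker_irq);

	__get_cpu_var(lock_spinning) = xl;
	while (!xen_spin_trylock(xl)) {
		if (raw_irqs_disabled_flags(flags)) {
			/* Caller had interrupts off: just wait. */
			poll_kicker_irq(irq);
		} else {
			/* Caller had interrupts on: let them in while we
			 * block, but disable again before retrying so the
			 * fast-path assumptions still hold. */
			local_irq_enable();
			poll_kicker_irq(irq);
			local_irq_disable();
		}
	}
	__get_cpu_var(lock_spinning) = NULL;
}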

> 3) Introducing yet another per-CPU IRQ for this purpose further constrains
> scalability. Using a single IRQF_PER_CPU IRQ should be sufficient here, as
> long as it gets properly multiplexed onto individual event channels (of
> which we have far more than IRQs). I have a patch queued for the
> traditional tree that does just that conversion for the reschedule and
> call-function IPIs; I had long planned to submit it (but so far wasn't
> able to, due to the lack of testing done on its migration aspects), and
> once successful I was planning to try something similar for the timer IRQ.

There are two lines of work I'm hoping to push to mitigate this:

One is the unification of 32- and 64-bit interrupt handling, so that both have an underlying notion of a vector, which is what we map event channels to. Since vectors can be mapped to an (irq, cpu) tuple, this would allow multiple per-cpu vectors/event channels to be mapped to a single irq, and would do so generically for all event channel types. That would mean we'd end up allocating one interrupt each for time, function calls, spinlocks, etc., rather than one per CPU.

The other is eliminating NR_IRQ, and making irq allocation completely dynamic.
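
As a strawman, the multiplexed setup might end up looking something like this (everything here is hypothetical: bind_percpu_evtchn and SPINLOCK_KICK_IRQ are made up, the wiring of the per-CPU port to the shared IRQ is elided, and the kernel spells the flag IRQF_PERCPU):

static DEFINE_PER_CPU(evtchn_port_t, kick_evtchn);

static irqreturn_t kick_interrupt(int irq, void *dev_id)
{
	/* Nothing to do: being woken out of SCHEDOP_poll (or out of a
	 * blocked state) is the whole point of the event. */
	return IRQ_HANDLED;
}

static int setup_kick_channel(unsigned int cpu)
{
	evtchn_port_t port;
	int rc;

	/* Hypothetical helper: allocate and bind a per-CPU event channel,
	 * then route it to the single shared IRQ below. */
	rc = bind_percpu_evtchn(cpu, &port);
	if (rc < 0)
		return rc;
	per_cpu(kick_evtchn, cpu) = port;

	/* One IRQ number shared by every CPU; the per-CPU event channels
	 * behind it keep delivery local, so the IRQ count no longer
	 * scales with the number of CPUs. */
	if (cpu == 0)
		rc = request_irq(SPINLOCK_KICK_IRQ, kick_interrupt,
				 IRQF_PERCPU, "spinlock-kick", NULL);
	return rc;
}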

> I am attaching that (2.6.26 based) patch just for reference.

From a quick look, you're thinking along similar lines.

   J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel