WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

Re: [Xen-devel] [GIT PULL] Fix lost interrupt race in Xen event channels

To: "Jeremy Fitzhardinge" <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] [GIT PULL] Fix lost interrupt race in Xen event channels
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Thu, 26 Aug 2010 07:46:09 +0100
Cc: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, Tom Kopec <tek@xxxxxxx>, Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Delivery-date: Wed, 25 Aug 2010 23:47:07 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C7558E0.1060806@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4C743B2C.8070208@xxxxxxxx> <4C74E7C802000078000120C0@xxxxxxxxxxxxxxxxxx> <4C7558E0.1060806@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
 >>> On 25.08.10 at 19:54, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
> Note that this patch is specifically for upstream Xen, which doesn't
> have any pirq support in it at present.

I understand that, but I saw that you had made parallel changes to
the pirq handling in your Dom0 tree.

> However, I did consider using fasteoi, but I couldn't see how to make
> it work.  The problem is that it only does a single call into the
> irq_chip for EOI after calling the interrupt handler, but there is no
> call beforehand to ack the interrupt (which means clear the event flag
> in our case).  This leads to a race where an event can be lost after the
> interrupt handler has returned, but before the event flag has been
> cleared (because Xen won't set pending or call the upcall function if
> the event is already set).  I guess I could pre-clear the event in the
> upcall function, but I'm not sure that's any better.

That's precisely what we're doing.

> In the dom0 kernels, I followed the example of the IOAPIC irq_chip
> implementation, which does the hardware EOI in the ack call at the start
> of handle_edge_irq, and did the EOI hypercall (when necessary) there.  I
> welcome a reviewer's eye on this though.

I hadn't actually noticed this so far.

That doesn't look right, at least in combination with ->mask() being
wired to disable_pirq(), which is empty (and btw., if the latter were
right, you should also wire ->mask_ack() to disable_pirq() to avoid
a pointless indirect call).

But even with ->mask() actually masking the IRQ, I'm not certain
this is right. At the very least it will make Xen see a potential
second instance of the same IRQ much earlier than you will
really be able to handle it. This is particularly bad for level-
triggered interrupts, as Xen will see them raised again right after
you pass it the EOI notification - as a result there will be twice as
many interrupts seen by Xen on the respective lines.

The native I/O APIC can validly do this, as ->ack() only gets
called for edge-triggered interrupts (which is why ->eoi() is
wired to ack_apic_level()).

> I was thinking specifically of the timer, debug and console virqs.  The
> last is the only one which could conceivably be non-percpu, but in
> practice I think it would be a bad idea to put it on anything other than
> cpu0.  What other global virqs are there?  Nothing that's high-frequency
> enough to be worth migrating?

Once supported in your tree, oprofile could have high interrupt
rates, and the trace buffer ones might too. Admittedly both are
debugging aids, but I don't think it'd be nice for them to influence
performance more than necessary.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel