Hi, Steven,
This does solve the problem; however, it adds unnecessary overhead
(one more trap into Xen at the end of each event handler). IMO, the real
cause is in pic_intack, where a pending irr bit is converted into isr. For
normal interrupts issued from qemu, only an edge-triggered interrupt gets its
irr bit cleared here; a level-triggered interrupt has to wait until a
de-assert occurs on the virtual interrupt line. However, for the callback irq
triggered by Xen itself, there's no need to emulate such a waveform, because
we inject into the vpic/vioapic directly instead of going through a virtual
irq line. It's really just an event, or a message, even though the virtual
platform describes it as level-triggered. So how about the following
alternative:
diff -r 874cc0ff214d xen/arch/x86/hvm/i8259.c
--- a/xen/arch/x86/hvm/i8259.c Wed Nov 01 09:55:43 2006 +0000
+++ b/xen/arch/x86/hvm/i8259.c Wed Nov 01 23:16:24 2006 +0800
@@ -208,6 +208,9 @@ static inline void pic_intack(PicState *
     /* We don't clear a level sensitive interrupt here */
     if (!(s->elcr & (1 << irq)))
         s->irr &= ~(1 << irq);
+
+    /* Clear xen issued interrupt */
+    s->irr_xen &= ~(1 << irq);
 }
 
 static int pic_read_irq(struct hvm_virpic *s)
diff -r 874cc0ff214d xen/arch/x86/hvm/vmx/io.c
--- a/xen/arch/x86/hvm/vmx/io.c Wed Nov 01 09:55:43 2006 +0000
+++ b/xen/arch/x86/hvm/vmx/io.c Wed Nov 01 23:16:24 2006 +0800
@@ -116,8 +116,8 @@ asmlinkage void vmx_intr_assist(void)
         int callback_irq;
         callback_irq =
             v->domain->arch.hvm_domain.params[HVM_PARAM_CALLBACK_IRQ];
-        if ( callback_irq != 0 )
-            pic_set_xen_irq(pic, callback_irq, local_events_need_delivery());
+        if ( callback_irq != 0 && local_events_need_delivery() )
+            pic_set_xen_irq(pic, callback_irq, 1);
     }
 
     if ( vlapic && vlapic_enabled(vlapic) && vlapic->flush_tpr_threshold )
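To make the intended behavior concrete, below is roughly what pic_intack
looks like with that hunk applied. It is only a sketch: the cut-down
PicState here stands in for the real one in xen/arch/x86/hvm/i8259.c, and
the auto-EOI handling is reproduced from memory of the QEMU-derived code,
so details may differ from the tree.

#include <stdint.h>

/* Cut-down stand-in for the real PicState (illustrative only). */
typedef struct PicState {
    uint8_t irr;          /* interrupt request register              */
    uint8_t irr_xen;      /* events injected directly by xen         */
    uint8_t isr;          /* in-service register                     */
    uint8_t elcr;         /* 1 = level triggered, 0 = edge triggered */
    uint8_t auto_eoi, rotate_on_auto_eoi, priority_add;
} PicState;

static inline void pic_intack(PicState *s, int irq)
{
    if (s->auto_eoi) {
        if (s->rotate_on_auto_eoi)
            s->priority_add = (irq + 1) & 7;
    } else {
        s->isr |= (1 << irq);      /* pending request becomes in-service */
    }

    /* We don't clear a level sensitive interrupt here */
    if (!(s->elcr & (1 << irq)))
        s->irr &= ~(1 << irq);     /* edge triggered: consumed on the ack */

    /* Clear xen issued interrupt: it was injected directly rather than
     * driven through a virtual irq line, so no de-assert will follow. */
    s->irr_xen &= ~(1 << irq);
}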
This way, we also avoid a bunch of unnecessary pic operations when there
is no event pending at the point we resume to the guest.

Moreover, irr_xen can be extended to support a future HVM driver domain,
which is the same situation: Xen injects into the vpic/vioapic directly
when it receives the hardware interrupt, so there is no chance at all to
emulate a de-assert in that case either.
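To illustrate that last point, here is a self-contained toy model (the
struct and function names are made up for the example, not the real
hvm_virpic code) showing the full lifecycle of a directly injected
interrupt: it is ORed into the pending calculation like a pending edge and
simply dropped at intack time, with no de-assert ever required from the
source. The same lifecycle would apply to a hardware interrupt that Xen
injects on behalf of an HVM driver domain.

#include <assert.h>
#include <stdint.h>

/* Toy PIC state, just enough to show the xen-injected path. */
struct toy_pic {
    uint8_t irr;      /* requests asserted via the virtual irq lines */
    uint8_t irr_xen;  /* requests injected directly by xen           */
    uint8_t imr;      /* interrupt mask register                     */
    uint8_t elcr;     /* 1 = level triggered, 0 = edge triggered     */
};

/* Pending calculation: xen-injected bits count like pending edges. */
static int toy_pending(const struct toy_pic *s)
{
    return (s->irr | s->irr_xen) & ~s->imr;
}

/* Acknowledge: an edge bit is consumed, a level bit stays until the
 * line is de-asserted, and a xen-injected bit is dropped right away. */
static void toy_intack(struct toy_pic *s, int irq)
{
    if (!(s->elcr & (1 << irq)))
        s->irr &= ~(1 << irq);
    s->irr_xen &= ~(1 << irq);
}

int main(void)
{
    struct toy_pic pic = { .elcr = 1 << 5 };  /* irq 5 is "level" on the platform */

    pic.irr_xen |= 1 << 5;      /* xen injects the callback irq directly */
    assert(toy_pending(&pic));  /* it shows up as pending...             */

    toy_intack(&pic, 5);        /* ...the guest takes it...              */
    assert(!toy_pending(&pic)); /* ...and nothing stays pending, with no
                                   de-assert needed from anywhere.       */
    return 0;
}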
Thanks,
Kevin
>From: Xen patchbot-unstable
>Sent: November 1, 2006 1:40
>
># HG changeset patch
># User Steven Smith <ssmith@xxxxxxxxxxxxx>
># Node ID 79a40acadb41fbe5e5b88b20de5fe53f4dd6b413
># Parent b2371c9e05f5146767464db8504214ae2b77c25c
>[PV-ON-HVM] Don't generate lots of spurious interrupts when using event
>channel upcalls.
>
>The issue here was that the Xen platform PCI interrupt is only updated
>when you return from the hypervisor into guest context, and so remained
>asserted for a short interval after the interrupt handler ran. If
>it happened that the first subsequent trap to the hypervisor was
>for unmasking the 8259 interrupt again, the unmasking caused the interrupt
>to be reinjected. This caused an edge on the chaining interrupt from
>the slave PIC to the master. The platform interrupt on the slave
>would then be cleared as we returned to the guest, and so you
>eventually end up injecting an interrupt on the master chained
>interrupt with nothing pending on the slave, which shows up as
>a spurious interrupt in the guest.
>
>Signed-off-by: Steven Smith <sos22@xxxxxxxxx>
>---
> unmodified_drivers/linux-2.6/platform-pci/evtchn.c | 8 +++++++-
> 1 files changed, 7 insertions(+), 1 deletion(-)
>
>diff -r b2371c9e05f5 -r 79a40acadb41 unmodified_drivers/linux-2.6/platform-pci/evtchn.c
>--- a/unmodified_drivers/linux-2.6/platform-pci/evtchn.c	Tue Oct 31 11:31:34 2006 +0000
>+++ b/unmodified_drivers/linux-2.6/platform-pci/evtchn.c	Tue Oct 31 11:38:55 2006 +0000
>@@ -167,11 +167,17 @@ irqreturn_t evtchn_interrupt(int irq, vo
> 			l2 = s->evtchn_pending[l1i] & ~s->evtchn_mask[l1i];
> 		}
> 	}
>+
>+	/* Make sure the hypervisor has a chance to notice that the
>+	   upcall_pending condition has been cleared, so that we don't
>+	   try and reinject the interrupt again. */
>+	(void)HYPERVISOR_xen_version(0, NULL);
>+
> 	return IRQ_HANDLED;
> }
> 
> void force_evtchn_callback(void)
> {
>-	evtchn_interrupt(0, NULL, NULL);
>+	(void)HYPERVISOR_xen_version(0, NULL);
> }
> EXPORT_SYMBOL(force_evtchn_callback);
>