This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again

To: Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again
From: John Levon <levon@xxxxxxxxxxxxxxxxx>
Date: Fri, 5 Sep 2008 16:25:27 +0100
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, bart brooks <bart_brooks@xxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Fri, 05 Sep 2008 08:26:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <48C14D1F.1060700@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01490563@trantor> <48C14D1F.1060700@xxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.9i
On Fri, Sep 05, 2008 at 11:15:43AM -0400, Steve Ofsthun wrote:

> While the event channel delivery code "binds" HVM event channel interrupts 
> to VCPU0, the interrupt is delivered via the emulated IOAPIC.  The guest OS 
> may program this "hardware" to deliver the interrupt to other VCPUs.  For 
> linux, this gets done by the irqbalance code among others.  Xen overrides 
> this routing for the timer 0 interrupt path in vioapic.c under the #define 
> IRQ0_SPECIAL_ROUTING.  We hacked our version of Xen to piggyback on this 
> code to force all event channel interrupts for HVM guests to also avoid any 
> guest rerouting:
>    #ifdef IRQ0_SPECIAL_ROUTING
>    /* Force round-robin to pick VCPU 0 */
>    if ( ((irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled()) ||
>         is_hvm_callback_irq(vioapic, irq) )
>        deliver_bitmask = (uint32_t)1;
>    #endif

Yes, please - the Solaris 10 PV drivers are buggy in that they use the
current VCPU's vcpu_info. I just found this bug, and it's getting fixed,
but if this change makes sense anyway, it'd be good.

> This routing override provides a significant performance boost [or rather 
> avoids the performance penalty] for SMP PV drivers up until the time that 
> VCPU0 is saturated with interrupts.  You can probably achieve the same 

Of course there's no requirement that the evtchn is actually dealt with
on the same CPU, just the callback IRQ and the evtchn "ack" (clearing
the pending bit) need to happen on the same VCPU.


Xen-devel mailing list