This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again

To: John Levon <levon@xxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again
From: Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Date: Fri, 05 Sep 2008 13:11:41 -0400
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, bart brooks <bart_brooks@xxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Fri, 05 Sep 2008 10:12:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20080905152527.GC22002@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01490563@trantor> <48C14D1F.1060700@xxxxxxxxxxxxxxx> <20080905152527.GC22002@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20080421)
John Levon wrote:
> On Fri, Sep 05, 2008 at 11:15:43AM -0400, Steve Ofsthun wrote:

>> While the event channel delivery code "binds" HVM event channel interrupts to VCPU0, the interrupt is delivered via the emulated IOAPIC. The guest OS may program this "hardware" to deliver the interrupt to other VCPUs. For Linux, this gets done by the irqbalance code among others. Xen overrides this routing for the timer 0 interrupt path in vioapic.c under the #define IRQ0_SPECIAL_ROUTING. We hacked our version of Xen to piggyback on this code to force all event channel interrupts for HVM guests to also avoid any guest rerouting:

>>    /* Force round-robin to pick VCPU 0 */
>>    if ( ((irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled()) ||
>>         is_hvm_callback_irq(vioapic, irq) )
>>        deliver_bitmask = (uint32_t)1;

> Yes, please - Solaris 10 PV drivers are buggy in that they use the
> current VCPU's vcpu_info. I just found this bug, and it's getting fixed,
> but if this makes sense anyway, it'd be good.

I can submit a patch for this, but we feel this is something of a hack.  We'd like to 
provide a more general mechanism for allowing event channel binding to "work" 
for HVM guests.  But to do this, we are trying to address conflicting goals.  Either we 
honor the event channel binding by circumventing the IOAPIC emulation, or we faithfully 
emulate the IOAPIC and circumvent the event channel binding.

Our driver writers would like to see support for multiple callback IRQs.  Then 
particular event channel interrupts could be bound to particular IRQs.  This 
would allow PV device interrupts to be distributed intelligently.  It would 
also allow net and block interrupts to be disentangled for Windows PV drivers.

We deal pretty much exclusively with HVM guests; do SMP PV environments
selectively bind device interrupts to different VCPUs?


This routing override provides a significant performance boost [or rather avoids the performance penalty] for SMP PV drivers up until the time that VCPU0 is saturated with interrupts. You can probably achieve the same

> Of course there's no requirement that the evtchn is actually dealt with
> on the same CPU, just the callback IRQ and the evtchn "ack" (clearing


Xen-devel mailing list