xen-devel

Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again

To: John Levon <levon@xxxxxxxxxxxxxxxxx>, Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Interrupt to CPU routing in HVM domains - again
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Sat, 06 Sep 2008 08:59:12 +0100
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, bart brooks <bart_brooks@xxxxxxxxxxx>
Delivery-date: Sat, 06 Sep 2008 00:59:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C4E7F460.1CD88%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckP9PVhNA/0tnvoEd29dwAWy6hiGQAAX15p
Thread-topic: [Xen-devel] Interrupt to CPU routing in HVM domains - again
User-agent: Microsoft-Entourage/11.4.0.080122
On 6/9/08 08:48, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

>> You could do a bunch of that just by distributing them from the single
>> callback IRQ. But I suppose it would be nice to move to a
>> one-IRQ-per-evtchn model. You'd need to keep the ABI of course, so you'd
>> need a feature flag or something.
> 
> Yes, it should work as follows: adopt the one-IRQ-per-evtchn model and
> turn off the usual PV per-VCPU selector word and evtchn_pending master
> flag (as we have already disabled the evtchn_mask master flag). Then all
> evtchn re-routing would be handled through the IO-APIC, like all other
> emulated IRQs.

Oh, another way I like (on the Xen side of the interface, at least) is to
keep the PV evtchn-to-VCPU binding mechanisms, the per-VCPU selector word,
and the evtchn_pending master flag, but then do 'direct FSB injection' to
specific VCPU LAPICs. That is, each VCPU would receive its event-channel
notifications on a pre-arranged vector via a simulated message to its LAPIC.

This is certainly easier to implement on the Xen side, and I would say
neater too. However, its significant drawback is that it is unlikely to fit
very well with existing OS IRQ subsystems:
 * The OS still needs to demux interrupts, and so effectively gains a
nested layer of interrupt delivery (if we want the specific event channels
to be visible as distinct interrupts to the OS); see the sketch after this
list.
 * There is a PV-specific way of reassigning 'IRQs' to VCPUs. The usual OS
methods of tickling the IO-APIC will not apply.
 * The OS may well not even have a way of allocating an interrupt vector,
or of registering interest in and receiving CPU-specific interrupts on a
specific vector.
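
To illustrate the first point: the handler the OS hangs off the pre-arranged
vector still has to walk the pending bitmaps and fan out to per-evtchn
handlers, and only that second-level dispatch makes individual event
channels look like distinct interrupts. A sketch, reusing the structures
above (the handler table and accessors are stand-ins, not real API):

extern struct shared_info *shinfo;                    /* assumed mapping */
extern struct vcpu_info *this_vcpu_info(void);        /* assumed accessor */
extern void (*evtchn_handlers[])(unsigned int port);  /* per-evtchn "IRQs" */

static void evtchn_vector_handler(void)          /* runs on the fixed vector */
{
    struct vcpu_info *vi = this_vcpu_info();
    struct shared_info *s = shinfo;
    unsigned long sel;

    vi->evtchn_upcall_pending = 0;
    sel = __atomic_exchange_n(&vi->evtchn_pending_sel, 0, __ATOMIC_ACQ_REL);

    while (sel) {
        unsigned int word = __builtin_ctzl(sel);
        unsigned long pend = s->evtchn_pending[word] & ~s->evtchn_mask[word];

        sel &= sel - 1;
        while (pend) {
            unsigned int bit  = __builtin_ctzl(pend);
            unsigned int port = word * sizeof(unsigned long) * 8 + bit;

            pend &= pend - 1;
            __atomic_fetch_and(&s->evtchn_pending[word], ~(1UL << bit),
                               __ATOMIC_ACQ_REL);
            if (evtchn_handlers[port])
                evtchn_handlers[port](port);     /* second-level dispatch */
        }
    }
}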

Obviously this could all be worked through for Linux guests by extending
our existing pv_ops implementation a little, and I think it would fit well
there. But I doubt it could work as well for other OSes where we can make
less far-reaching changes (Windows, for example).
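
For Linux the extension would amount to something like the following: pick
a vector, point an IDT entry at a thin stub that calls the demux above, and
tell Xen to inject there on a per-VCPU basis. None of the names below are
existing interfaces; this is only a sketch of the shape such a hook might
take:

#define EVTCHN_VECTOR 0xf3      /* assumed-free vector, for illustration */

/* Stub that saves registers and calls evtchn_vector_handler(). */
asmlinkage void evtchn_vector_interrupt(void);

static void xen_hvm_setup_evtchn_vector(void)
{
    /* The IDT is shared across CPUs, so the gate only needs setting up
     * once; the per-VCPU step is telling Xen where to inject. */
    alloc_intr_gate(EVTCHN_VECTOR, evtchn_vector_interrupt);

    /* Hypothetical operation (no such hypercall exists today): ask Xen to
     * deliver this VCPU's event-channel notifications on EVTCHN_VECTOR. */
    xen_hvm_set_evtchn_vector(smp_processor_id(), EVTCHN_VECTOR);
}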

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel