[Xen-devel] [PATCH] [RFC] MSI and interrupt mapping

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH] [RFC] MSI and interrupt mapping
From: Espen Skoglund <espen.skoglund@xxxxxxxxxxxxx>
Date: Wed, 4 Feb 2009 17:47:04 +0000

Consider the following scenario: A dom0 driver registers a number of
MSI vectors with Xen.  However, it does not bind to the pirqs.  dom0
then maps the interrupts to different domUs, which in turn bind to
them.

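As a concrete illustration, dom0 could drive this with two
PHYSDEVOP_map_pirq hypercalls along the lines sketched below.  This
is only a sketch under assumptions about the interface: the field
names follow struct physdev_map_pirq from the public headers, but the
header paths, the use of MAP_PIRQ_TYPE_GSI for the second call, and
the -1 "auto-allocate" conventions may differ in your tree.

    #include <xen/interface/physdev.h>   /* struct physdev_map_pirq */
    #include <asm/xen/hypercall.h>       /* HYPERVISOR_physdev_op() */

    /* Step 1: dom0 registers an MSI-X entry with Xen and gets a
     * pirq back, but never binds an event channel to it itself. */
    static int register_msix_entry(int bus, int devfn, int entry)
    {
        struct physdev_map_pirq map = {
            .domid    = DOMID_SELF,
            .type     = MAP_PIRQ_TYPE_MSI,
            .index    = -1,              /* let Xen pick the vector */
            .pirq     = -1,              /* let Xen pick the pirq   */
            .bus      = bus,
            .devfn    = devfn,
            .entry_nr = entry,
        };

        if ( HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map) )
            return -1;
        return map.pirq;
    }

    /* Step 2: dom0 maps that pirq into a domU's pirq space; the
     * domU then binds to the resulting pirq and has the MSI
     * delivered to it directly. */
    static int map_pirq_to_guest(domid_t domu, int dom0_pirq)
    {
        struct physdev_map_pirq map = {
            .domid = domu,
            .type  = MAP_PIRQ_TYPE_GSI,  /* index is an irq in the
                                          * caller's (dom0's) space */
            .index = dom0_pirq,
            .pirq  = -1,                 /* let Xen pick the guest pirq */
        };

        if ( HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map) )
            return -1;
        return map.pirq;
    }
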
Such a scenario is very useful when, for example, you have a
multi-queue NIC with MSI-X where each queue has its own MSI-X entry.
It enables frontend drivers (for different queues) in domUs to have
the MSI interrupts delivered to them directly.

However, mapping single MSI vectors to guests does not currently work
in Xen.  Mapping of single pirqs is only allowed for vectors that are
registered in the global irq_vector[] array.  MSI interrupts are not
registered here.  The patch below fixes the problem by getting the
vector from the per-domain pirq_to_vector array instead of the global
array.  This works perfectly fine.  An alternative solution would be
to register MSIs in the global irq_vector[] array (but only if the MSI
mapping is done in the dom0 space).

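To make the contrast concrete, the two lookups differ roughly as
follows (a sketch; the exact macro and helper definitions in the Xen
tree may differ):

    /* Current code: indexes the global irq_vector[] array, which is
     * only populated for IOAPIC-routed GSIs, so an MSI comes back
     * as vector 0 and the mapping fails. */
    vector = IO_APIC_VECTOR(map->index);

    /* Patched code: consults the calling domain's own pirq-to-vector
     * table, where previously registered MSIs do show up. */
    vector = domain_irq_to_vector(current->domain, map->index);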

More generally speaking, the current state of affairs with regard to
interrupt management in Xen can be a bit confusing from the source
code point of view.  Essentially, all irqs except some legacy
interrupts are assumed to be IOAPIC irqs.  This assumption breaks down
with the introduction of MSIs and per-domain pirq tables.  It might
therefore be worthwhile to rename a few things and slightly restructure
the Xen code to make the distinction between IOAPIC irqs and other
interrupts clearer.  I could volunteer to have a stab at such a
cleanup if this is something people want.

Comments?


        eSk



--
Use per-domain irq-to-vector array when mapping GSIs

Using the per-domain array enables single MSI vectors to be mapped.

Signed-off-by: Espen Skoglund <espen.skoglund@xxxxxxxxxxxxx>
--
diff -r 2269079b8d09 xen/arch/x86/physdev.c
--- a/xen/arch/x86/physdev.c    Tue Feb 03 16:02:59 2009 +0000
+++ b/xen/arch/x86/physdev.c    Wed Feb 04 15:39:53 2009 +0000
@@ -62,7 +62,7 @@ static int physdev_map_pirq(struct physd
                 ret = -EINVAL;
                 goto free_domain;
             }
-            vector = IO_APIC_VECTOR(map->index);
+            vector = domain_irq_to_vector(current->domain, map->index);
             if ( !vector )
             {
                 dprintk(XENLOG_G_ERR, "dom%d: map irq with no vector %d\n",
