WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] [PATCH] bind passthrough pci device interrupt pins to INTA

To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] bind passthrough pci device interrupt pins to INTA
From: "He, Qing" <qing.he@xxxxxxxxx>
Date: Tue, 20 May 2008 12:34:11 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "He, Qing" <qing.he@xxxxxxxxx>
Delivery-date: Mon, 19 May 2008 21:35:27 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aci6Mr/R/n520XTdTZqyrk6Z5fIzOw==
Thread-topic: [PATCH] bind passthrough pci device interrupt pins to INTA
This patch changes the virtual PCI configuration space of passthrough
PCI devices so that INTA is used unconditionally, minimizing the
chance of guest GSI sharing, which is not supported. It also adds a
warning when such sharing is detected.

Signed-off-by: Qing He <qing.he@xxxxxxxxx>
---

The original scheme was to use the interrupt pin from the physical PCI
configuration space. However, using interrupt pins other than INTA is
likely to cause problems when the number of assigned devices exceeds 8:
e.g. dev 3, INTB and dev 11, INTA share the same girq (guest irq). In
this case, one machine irq may be left untracked and masked, and any
devices using the same machine irq (including those owned by other
domains) are then blocked.

I just wonder whether there will be any need to expose multifunction
devices (which would have to use INTB, etc.) to the guest in the future.

All comments and suggestions are welcome.


 tools/ioemu/hw/pass-through.c |    6 ++++--
 xen/drivers/passthrough/io.c  |    9 +++++++++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff -r 86587698116d tools/ioemu/hw/pass-through.c
--- a/tools/ioemu/hw/pass-through.c     Wed May 14 14:12:53 2008 +0100
+++ b/tools/ioemu/hw/pass-through.c     Tue May 20 01:57:31 2008 +0800
@@ -563,9 +563,11 @@ struct pt_dev * register_real_device(PCI
     /* Handle real device's MMIO/PIO BARs */
     pt_register_regions(assigned_device);
 
-    /* Bind interrupt */
+    /* Bind interrupt to INTA to minimize guest irq sharing */
     e_device = (assigned_device->dev.devfn >> 3) & 0x1f;
-    e_intx = assigned_device->dev.config[0x3d]-1;
+    if (assigned_device->dev.config[0x3d] > 0)
+        assigned_device->dev.config[0x3d] = 1;
+    e_intx = 0;
 
     if ( PT_MACHINE_IRQ_AUTO == machine_irq )
     {
diff -r 86587698116d xen/drivers/passthrough/io.c
--- a/xen/drivers/passthrough/io.c      Wed May 14 14:12:53 2008 +0100
+++ b/xen/drivers/passthrough/io.c      Tue May 20 18:52:26 2008 +0800
@@ -91,6 +91,15 @@ int pt_irq_create_bind_vtd(
         guest_gsi = hvm_pci_intx_gsi(device, intx);
         link = hvm_pci_intx_link(device, intx);
         hvm_irq_dpci->link_cnt[link]++;
+
+        if (hvm_irq_dpci->girq[guest_gsi].valid) {
+            gdprintk(XENLOG_WARNING VTDPREFIX,
+                     "pt_irq_create_bind_vtd: guest_gsi %d already in use, "
+                     "device,intx = %d,%d\n",
+                     guest_gsi, hvm_irq_dpci->girq[guest_gsi].device,
+                     hvm_irq_dpci->girq[guest_gsi].intx);
+            return -EEXIST;
+        }
 
         digl = xmalloc(struct dev_intx_gsi_link);
         if ( !digl )

Attachment: pt-irq-pci-bind-inta.patch
Description: pt-irq-pci-bind-inta.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel