WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

RE: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)

To: "John Byrne" <john.l.byrne@xxxxxx>
Subject: RE: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)
From: "Guy Zana" <guy@xxxxxxxxxxxx>
Date: Fri, 8 Jun 2007 14:23:13 -0400
Cc: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "Kay, Allen M" <allen.m.kay@xxxxxxxxx>
Delivery-date: Fri, 08 Jun 2007 11:25:49 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4668C4A8.4000208@xxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcepeDhD6aTmNXOdS6iCPqdlULx54QAfAIEg
Thread-topic: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)
Hi John,

Thanks for testing out our patches!
My comments are inline below.

> -----Original Message-----
> From: John Byrne [mailto:john.l.byrne@xxxxxx] 
> Sent: Friday, June 08, 2007 5:53 AM
> To: Guy Zana
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough 
> (non-IOMMU)
> 
> 
> Guy,
> 
> I tried your patches with a bnx2 NIC on SLES10 and they didn't work.
> 
> The first reason was that you mask off the capabilities bit 
> in the PCI status. If I got rid of this, I could at least get 
> the NIC to configure, but it didn't work and the dropped 
> packets looked to be random garbage, so I don't think it was 
> talking to the device properly. (But I understand almost 
> nothing about PCI device configuration, so I don't know what 
> to look for.)
> 

The released patches are still considered "developmental"; some work remains (not too much, though :) ) before they are usable for everyone. Are you sure you mapped the right IRQ? Please post the qemu-dm log file and the output of xm dmesg. The capabilities bits are masked off so that we don't yet have to handle MSIs and power-management (ACPI) related stuff, which can be quite a pain when doing pass-through for integrated devices.

One more thing: does this NIC have an expansion ROM?

> I haven't noticed the merge tree springing into existence 
> on xenbits, so is there any progress on making this into a 
> real feature? It sounds like most of the work needs to be 
> done between you and Intel, but I could certainly help with testing.
> 

That would be great!

I think both patch sets (ours and Intel's) need some more work before we can 
start merging. Neocleus has already merged some parts of the Intel patches 
(MMIO & PIO handling). We are also aiming for 64-bit (x86-64) support in the 
next release.

> One thing I am interested in is, with the 1:1 mapping, could 
> we disable the VT page-fault handling? I've found that the 
> page-fault overhead for VT is horrible and would probably 
> affect fork-exec benchmarks significantly.

Cool idea! Our CTO thought of it as well :)
It's hard to avoid the VT page-fault handler entirely: there are issues with 
memory protection (security), and with memory remapping that we will want to 
do in the future (in order to support BIOS & expansion ROM duplication). I 
agree it could be made faster, though it may require some drastic changes in 
the hypervisor.

Thanks,
Guy.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel