Guy,
Things are at least partly working now. Answers/comments below.
Guy Zana wrote:
> Hi John,
>
> Thanks for testing out our patches!
> My comments below.
>
>> -----Original Message-----
>> From: John Byrne [mailto:john.l.byrne@xxxxxx]
>> Sent: Friday, June 08, 2007 5:53 AM
>> To: Guy Zana
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough
>> (non-IOMMU)
>>
>>
>> Guy,
>>
>> I tried your patches with a bnx2 NIC on SLES10 and they didn't work.
>>
>> The first reason was that you mask off the capabilities bit
>> in the PCI status register. If I got rid of this, I could at
>> least get the NIC to configure, but it didn't work, and the
>> dropped packets looked to be random garbage, so I don't think
>> it was talking to the device properly. (But I understand almost
>> nothing about PCI device configuration, so I don't know what
>> to look for.)
>>
>
> The released patches are considered "developmental"; there is still
> some work to be done (not too much, though :) ) to make them usable
> for everyone. Are you sure you mapped the right IRQ? Please post the
> qemu-dm log file / xm dmesg output. The capabilities bits are masked
> off so we don't yet need to handle MSIs and power-management (ACPI)
> related stuff, which can be quite a pain when doing pass-through for
> integrated devices.
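So that others can sanity-check my reading of it: I take the masking to
amount to something like the sketch below in qemu-dm's config-space read
path. All of the names here are my invention for illustration, not what
your patch actually calls things.

    /* Hypothetical sketch only: hide the Capabilities List bit (bit 4 of
     * the PCI status register at config offset 0x06) from the guest, so
     * it never walks the capability list (MSI, power management, ...).
     * The function and parameter names are made up; the real patch will
     * differ. */
    #include <stdint.h>
    #include <string.h>

    #define PCI_STATUS          0x06    /* status register config offset */
    #define PCI_STATUS_CAP_LIST 0x10    /* bit 4: capability list present */

    static uint32_t pt_config_read(const uint8_t *cfg, uint32_t addr, int len)
    {
        uint32_t val = 0;

        memcpy(&val, cfg + addr, len);  /* emulated config-space read */

        /* If the read overlaps the status register, clear the cap-list
         * bit, shifted to wherever the status byte falls in this read. */
        if (addr <= PCI_STATUS && addr + (uint32_t)len > PCI_STATUS)
            val &= ~((uint32_t)PCI_STATUS_CAP_LIST << ((PCI_STATUS - addr) * 8));

        return val;
    }
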
I'd missed the line in your patch zero e-mail about pass-through.c. Once
I'd fixed that, and with your hint about MSI interrupts, I passed the
disable_msi option to the bnx2 driver and things worked, at least for a
while. I could get an ssh connection going through the interface, but
then the machine locked up. My 32-bit machine doesn't have a lot of
memory, so things are sluggish and it is hard to tell lock-ups from
thrashing. I will reinstall one of my 64-bit machines, which has more
memory, as 32-bit and try it there.
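(For anyone else hitting this: the workaround is just loading the driver
with MSI turned off, e.g.

    modprobe bnx2 disable_msi=1

or the equivalent option line in /etc/modprobe.conf.)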
> Another thing,
> Does this NIC have an expansion ROM?
Not according to lspci.
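(I checked with lspci -v; a device with an expansion ROM shows an
"Expansion ROM at ..." line in that output, and this one doesn't.)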
>
>> I haven't noticed the merge tree springing into existence
>> on xenbits, so is there any progress on making this into a
>> real feature? It sounds like most of the work needs to be
>> done between you and Intel, but I could certainly help with testing.
>>
>
> That would be great!
Just let me know what you need tested and I'll see what I can do.
>
> I think that both patches (ours and Intel's) need some more work before we
> can start merging.
> Neocleus has already merged some parts of the Intel patches (mmio & pio
> handling). We are also aiming for 64-bit (x86) support in the next release.
64-bit would be nice, as that is what I usually run.
>> One thing I am interested in is, with the 1:1 mapping, could
>> we disable the VT page-fault handling? I've found that the
>> page-fault overhead for VT is horrible and would probably
>> affect fork-exec benchmarks significantly.
>
> Cool idea! Our CTO thought about it as well :)
> It's kind of hard not to use the VT page-fault handler at all; there are
> some issues with memory protection (security), and with memory remapping
> that we would want to do in the future (in order to support BIOS &
> expansion ROM duplication). I agree that you can make it faster, though!
> It may require some drastic changes in the hypervisor.
Without an IOMMU you forfeit memory protection anyway, so I am willing
to handwave security for the moment. For VT, it looks like I might be
able to hack something that sets the VMCS to stop intercepting page
faults once the domain is running. Writing CR3 will still cause a VM
exit, but as far as I can tell all the handler needs to do is load the
real CR3. It may not really work out, but I'm going to try.
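To make that concrete, what I have in mind is roughly the following; the
field and constant names are from the Xen VMX headers as I remember
them, so treat this strictly as a sketch of the idea, not a working
patch:

    /* Sketch: once the 1:1 domain is running, stop intercepting page
     * faults by clearing #PF (vector 14) from the VMCS exception bitmap.
     * MOV to CR3 still causes a VM exit, but with guest-physical ==
     * machine under the 1:1 mapping the handler can just load the
     * guest's value directly.  Names are from memory of
     * xen/include/asm-x86/hvm/vmx/; details will differ in the tree. */

    static void stop_pf_intercept(struct vcpu *v)
    {
        unsigned long bitmap;

        vmx_vmcs_enter(v);
        bitmap = __vmread(EXCEPTION_BITMAP);
        bitmap &= ~(1UL << TRAP_page_fault);   /* TRAP_page_fault == 14 */
        __vmwrite(EXCEPTION_BITMAP, bitmap);
        vmx_vmcs_exit(v);
    }

    /* In the CR-access exit handler, for MOV to CR3: */
    static void mov_to_cr3_identity(unsigned long value)
    {
        /* No shadow translation needed under the 1:1 mapping. */
        __vmwrite(GUEST_CR3, value);
    }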
Thanks,
John
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel