RE: [Xen-users] PCI Passthrough to VMX Guest
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> M.A. Williamson
> Sent: 21 March 2006 17:35
> To: Petersson, Mats
> Cc: mark.williamson@xxxxxxxxxxxx;
> xen-users@xxxxxxxxxxxxxxxxxxx; David Goodlad
> Subject: RE: [Xen-users] PCI Passthrough to VMX Guest
>
>
> >One of the problems with this is that the OS/driver that supports the
> >nVidia (or other graphics adapter) will need to actually know its
> >physical addresses in memory - something it doesn't, because the HVM
> >solution may well tell the OS that it's got 512MB of memory from 0 to
> >512MB, but it's ACTUALLY living at 512MB to 1GB. So when the graphics
> >driver says "You have a bitmap at 128MB", it should actually say "You
> >have a bitmap at 640MB". Until there's an IOMMU implementation,
> >there's nothing we can do about this.
>
> OK, good point! Although IIRC, you guys have a solution for this on
> the way ;-)
Yes, the GART IOMMU work is being done by Mark Langsdorf, and we'll have
a "proper" IOMMU for next generation processors.
>
> >So even if you COULD assign your PCI device to the DomU, it still
> >wouldn't do the right thing... :-(
> >
> >So until then, there's a bit of a problem implementing any complex
> >hardware support in a virtual machine. There may be ways to solve
> >this, but they are non-trivial (and most likely specific to the
> >particular hardware...).
>
> I guess in principle we could port the PCI frontend to run in
> an unmodified guest... (?) It could then perform IOMMU
> functionality in software by hooking the right places in the
> DMA API (and arranging bounce buffering if necessary).
>
> It's not entirely clear to me that this would be worth it, though.
Not for Windows, that's for sure. I'm not that familiar with how the
drivers for Linux work, but essentially, in Windows you can get a call
to BLT a bitmap from system memory into graphics memory. The way that
works is that the driver asks the OS for the physical address (which
the OS thinks is somewhere between 0 and 512MB, say) and, with suitable
math, puts that into the command stream of the graphics processor,
together with the "MOVE these pixels from here to there with the
following BLTmode" command.
The graphics processor then performs DMA operations to read that
physical memory. But I don't think there's any way for the hypervisor
to understand that the OS is asking for the physical address of the
bitmap, so the DMA operation would happen from the wrong address
[unless the Device Exclusion Vector is set up to prevent that from
happening].
--
Mats
>
> Cheers,
> Mark
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users