
RE: [Xen-devel] Nvidia, Xen, and Vt-d



Hi Michael,

I have found that the lspci output changes once you have successfully loaded a 
driver against the card.

In both cases - binding it to pci-stub/pciback for a DomU, or starting X in 
Dom0 - the NV card appears differently in the lspci output.
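If it helps to compare states, the bound driver is visible in "lspci -k" (or via 
sysfs). A minimal sketch of pulling it out - the sample text below is 
illustrative, not a capture from my machine:

```shell
# Sketch: extract the "Kernel driver in use" field from "lspci -k" output.
# The sample is a stand-in for one device's stanza.
sample='01:00.0 VGA compatible controller: nVidia Corporation GT200 [GeForce GTX 260]
	Kernel driver in use: pciback'
driver=$(printf '%s\n' "$sample" | sed -n 's/^[[:space:]]*Kernel driver in use: //p')
echo "bound driver: $driver"
```

On a live system the equivalent is "lspci -k -s <BDF>" (or reading 
/sys/bus/pci/devices/0000:<BDF>/driver); seeing pciback or pci-stub there means 
the card is being held for passthrough rather than driven by nv/nvidia.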

I would try the 2.6.18 kernel with the shipped "nv" X driver - I have found 
this combination to be the most compatible. In the xen-unstable tree, run 
"make linux-2.6-xen0-build" (from memory).

So far I have only had success with passing through the primary display 
adapter; I can't get my secondary to work. (I don't think it's anything to do 
with the models of card, more that secondary passthrough doesn't work.)

Primary: GTX 260 (512 MB + shared mem = 864 MB)
Secondary: 9500 GT (512 MB)

The main issue to overcome is support for FLR (Function Level Reset). Without 
this PCIe capability, the GPU cannot be reset after the DomU has initialised 
it, so I have to hard-reset Dom0 before GPU passthrough will work a second 
time.
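Whether a device advertises FLR can be read from the DevCap line of 
"lspci -vv" (FLReset+ vs FLReset-). A minimal sketch, run here against an 
illustrative sample line rather than real hardware:

```shell
# Sketch: test the DevCap line for the FLReset capability bit.
# The sample is illustrative; GPUs of this generation typically report FLReset-.
devcap='DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <4us
        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-'
if printf '%s\n' "$devcap" | grep -q 'FLReset+'; then flr=yes; else flr=no; fi
echo "FLR supported: $flr"
```

On a live system: "lspci -vv -s <BDF> | grep FLReset". Without FLR, the only 
reliable way to return the function to a clean state is a bus or host reset, 
which matches the hard-reset behaviour above.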

Tim

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Michael J Coss
Sent: 10 September 2009 10:46
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Nvidia, Xen, and Vt-d

Jeremy Fitzhardinge wrote:
> On 09/09/09 01:56, Keir Fraser wrote:
>   
>> On 09/09/2009 09:47, "Michael J Coss" <mjcoss@xxxxxxxxxxxxxxxxxx> wrote:
>>
>>   
>>     
>>> I've tried 3.4.1, and the lastest xen-unstable.  I've tried the pv-ops
>>> git tree as I really need a 2.6.31 dom0 kernel for other reasons, but
>>> for the moment I'd just like to get to the point where I have Xen, and
>>> dom0 and Nvidia playing nicely with one another, so I can move on to
>>> working on Vt-d and graphic pass through.
>>>
>>> Any suggestions?
>>>     
>>>       
>> Unless you really must have 2.6.30+, I'd recommend the 2.6.27 tree and
>> patchqueue from http://xenbits.xensource.com/XCI. Otherwise you are likely
>> to have to get your hands fairly dirty with pv_ops. For example, afaik
>> starting an X server on pv_ops is still pretty ambitious on some systems.
>>   
>>     
>
> Starting X in dom0 seems to work OK for Intel and ATI systems, at least;
> I expect most DRM drivers would work OK if they're well-behaved because
> we're hooking AGP memory accesses, etc.  However, the proprietary Nvidia
> drivers are problematic, though I gather there are some patches floating
> around for them.
>
> Unfortunately the AGP hooks are being removed (some years after Keir
> first added them, and just as they have a user according to their
> original intent) in favour of making each driver use the DMA API to do
> the appropriate phys<->bus conversions.  So far, only the Intel driver
> has been converted, and only when Intel IOMMU is enabled.  However, I
> didn't get any objection from the DRM folks about making it
> unconditional or adding it to new drivers as needed.
>
>     J
>   
I suspected as much, although I don't understand the origin of the lspci 
discrepancies between booting with and without the hypervisor.  It seems to 
me that there is some problem with Xen's view of the PCI bus, and also that 
the Nvidia driver is trying to access something outside of the hooked APIs.  
The graphics cards in this system are dual-GPU, dual-slot cards, and maybe 
that is contributing to the problem.  I'm going to try some other single-slot 
Nvidia card and see whether the same issue occurs.  I may pick up some ATI 
cards as well.
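One way to pin the discrepancy down might be to dump the card's config space 
on each boot and diff the captures. A sketch of the comparison - on a real 
system each file would come from "lspci -s <BDF> -xxx"; the two stand-in dumps 
below, and the byte that differs between them, are made up purely to show the 
shape of the check:

```shell
# Sketch: diff config-space dumps captured on bare metal vs under Xen.
# Both "captures" are illustrative stand-ins, not real data.
cat > /tmp/cfg-baremetal.txt <<'EOF'
00: de 10 60 05 07 04 10 00 a1 00 00 03 10 00 80 00
EOF
cat > /tmp/cfg-xen.txt <<'EOF'
00: de 10 60 05 00 04 10 00 a1 00 00 03 10 00 80 00
EOF
delta=$(diff -u /tmp/cfg-baremetal.txt /tmp/cfg-xen.txt || true)
[ -n "$delta" ] && echo "config space differs between boots"
```

Anything that shows up in the diff (command register bits, BAR values, and so 
on) would at least localise which part of the device state Xen or the driver 
is changing.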

---Michael J Coss

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



 

