On Sun, Mar 07, 2010 at 05:26:12AM +0530, Arvind R wrote:
> On Sun, Mar 7, 2010 at 2:29 AM, Arvind R <arvino55@xxxxxxxxx> wrote:
> > On Sat, Mar 6, 2010 at 1:46 PM, Arvind R <arvino55@xxxxxxxxx> wrote:
> >> On Sat, Mar 6, 2010 at 1:53 AM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@xxxxxxxxxx> wrote:
> >>> On Fri, Mar 05, 2010 at 01:16:13PM +0530, Arvind R wrote:
> >>>> On Thu, Mar 4, 2010 at 11:55 PM, Konrad Rzeszutek Wilk
> >>>> <konrad.wilk@xxxxxxxxxx> wrote:
> >>>> > On Thu, Mar 04, 2010 at 02:47:58PM +0530, Arvind R wrote:
> >>>> >> On Wed, Mar 3, 2010 at 11:43 PM, Konrad Rzeszutek Wilk
> >>>> >> <konrad.wilk@xxxxxxxxxx> wrote:
>
> >>> (FYI, look at
> >>> http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=e84db8b7136d1b4a393dbd982201d0c5a3794333)
>
> THAT SOLVED THE FAULTING; OUT_RING now completes under Xen.
That is great! Thanks for doing all the hard-work in digging through the
code.
>
> My typo and testing mistakes.
> Patched ttm_bo_mmap:
>     vma->vm_flags |= VM_RESERVED | VM_MIXEDMAP | VM_DONTEXPAND;
>     if (bo->type != ttm_bo_type_device)
>         vma->vm_flags |= VM_IO;
>
> Then, put sleep and exit in libdrm OUT_RING.
> The fault-handler worked fine!
So this means you got graphics on the screen? Or at least that Kernel
Mode Setting and the DRM parts show fancy graphics during boot?
>
> One question - How to get DMA addresses for user-buffers under Xen.
This is the X part, right? Where the X driver takes control of the GPU
and starts having fun? I am not that familiar with how the nouveau DRM
module hands over the pointers and such to the X driver. Does it reset
the GPU and start from scratch (as if you had no KMS enabled)? Or does
it reuse the allocated buffers and then ask for more using an ioctl
such as DRM_ALLOCATE_SCATTER_GATHER (I don't remember if that was the
right name).
But to answer your question: the DMA address is actually the MFN
(machine frame number) shifted left by twelve (PAGE_SHIFT) with an
offset added. The debug patch I provided gets that from the
PTE value:
    if (xen_domain()) {
+       phys = (pte_mfn(*pte) << PAGE_SHIFT) + offset;
The 'phys' now holds the physical address that the PCI bus (and the
video card) would use to access the data. Please keep in mind that
'pte_mfn' is a special Xen function; normally one would use the plain
PTE helper ('pte_pfn').
There is a layer of indirection in the Linux pvops kernel that makes
this a bit funny. Most of the time you get something called a GPFN,
which is a pseudo-physical frame number. There is then a translation
from PFN to MFN (or vice versa). For pages that are used by PCI devices
(and that have the _PAGE_IOMAP PTE flag set), the GPFN is actually the
MFN, while for the rest (like the pages allocated by mmap and then
stitched up in the ttm_bo_fault handler) it is the PFN.
.. back to the DMA part. When kernel subsystems do DMA they go through
the PCI DMA API. This API has functions such as 'dma_map_page', which
through layers of indirection call into the Xen SWIOTLB layer. The Xen
SWIOTLB (actually, enlighten.c) is smart enough to distinguish whether
the page has _PAGE_IOMAP set and hence whether the PTE holds an MFN or
a PFN. Either way, the PCI DMA API _always_ returns the DMA address for
the pages. So as long as a user-buffer has 'struct page' backing it, it
should be possible to get the DMA address.
Hopefully I've not confused this matter :-(
> Will work on that.
>
> HUGE THANKS!
Oh, thank you!
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel