On Wed, Mar 3, 2010 at 3:04 AM, Arvind R <arvino55@xxxxxxxxx> wrote:
> On Mon, Mar 1, 2010 at 9:31 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx> wrote:
>> On Fri, Feb 26, 2010 at 09:04:33PM +0530, Arvind R wrote:
>>> On Thu, Feb 25, 2010 at 11:14 PM, Konrad Rzeszutek Wilk
>>> <konrad.wilk@xxxxxxxxxx> wrote:
>>> > On Thu, Feb 25, 2010 at 09:01:48AM -0800, Arvind R wrote:
>>> >> On Thu, Feb 25, 2010 at 6:25 PM, Konrad Rzeszutek Wilk
>>> >> <konrad.wilk@xxxxxxxxxx> wrote:
>>> >> > On Thu, Feb 25, 2010 at 02:16:07PM +0530, Arvind R wrote:
>>> >> >> I merged the drm-tree from 2.6.33-rc8 into jeremy's 2.6.31.6 master
>>> >> >> and
>>> >> ======= snip =======
>>> >> > is not. Would it be possible to trace down who allocates that *chan?
>>> >> > You
>>> >> > say it is 'PRAMIN' - is that allocated via pci_alloc_* call?
>>> ======= snip =======
>>> >> So, there must be a mmap call somewhere to map the area to user-space
>>> >> for that problem write to work on non-Xen boots. Will try track down
>>> >> some more
>>> >> and post. With mmaps and PCIGARTs - it will be some hunt!
>>> ======= snip =======
>>> > to the drm_radeon driver which used it as a ring buffer. Took a bit of
>>> > hopping around to find who allocated it in the first place.
>>> >
>>> The pushbuf (FIFO/RING) is the only means of programming the card DMA
>> the 'ttm_bo_init'. I remember Pasi having an issue with this on Radeon
>> and I provided a hack to see if it would work. Take a look at this
>> e-mail:
>>
>> http://lists.xensource.com/archives/cgi-bin/extract-mesg.cgi?a=xen-devel&m=2010-01&i=20100115071856.GD17978%40reaktio.net
>>
>>>
>> It looks to be using 'ioremap' which is Xen safe. Unless your card has
>> an AGP bridge on it, at which point it would end up using
>> dma_alloc_coherent in all likelihood.
Can't do that - some later allocations are huge.
>>>
>>> As of now, accelerator on Xen stops right at the initialisation stage - when
>> I think that the ttm_bo calls set up pages in the 4KB size, but the
>> initial channel requests a 64KB one. I think it also sets up
> Your ttm patch using dma_alloc_coherent instead of alloc_page resulted in
> the same problem as with the Radeon report - leaking pages, erroneous page
> count
>> page-table directory so that when the GPU accesses the addresses, it
>> gets the real bus address. I wonder if it fails at that thought -
>> meaning that the addresses that are written to the page table are
>> actually the guest page numbers (gpfn) instead of the machine page numbers
>> (mfn).
>
> No, I don't think that's how it works. The user-space write triggers an
> aio-write -
which triggers do_page_fault, handle_mm_fault, do_linear_fault, __do_fault,
and finally ttm_bo_vm_fault.
ttm_bo_vm_fault returns VM_FAULT_NOPAGE
- but the Xen boot keeps re-triggering the same fault.
When the fault handler calls ttm_tt_get_page, the page is already there, and
the handler does another vm_insert_page (I changed vm_insert_mixed to
vm_insert_page/vm_insert_pfn depending on io_mem; that is now the only patch,
and it works on the bare machine) on and on and on.
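In case it helps, this is roughly what that vm_insert_mixed ->
vm_insert_page/pfn change boils down to - a simplified sketch of the idea,
not the literal ttm_bo_vm.c hunk, and the helper name is made up:

#include <linux/mm.h>

/* Simplified sketch of the insertion choice described above - not the
 * actual ttm_bo_vm.c code.  'page' is assumed to come from
 * ttm_tt_get_page() for system memory, 'pfn' from the io_mem aperture
 * offset for VRAM/IO memory. */
static int ttm_fault_insert_one(struct vm_area_struct *vma,
				unsigned long address, bool is_iomem,
				unsigned long pfn, struct page *page)
{
	int ret;

	if (is_iomem)
		ret = vm_insert_pfn(vma, address, pfn);	/* no struct page */
	else
		ret = vm_insert_page(vma, address, page);

	if (ret == -ENOMEM)
		return VM_FAULT_OOM;

	/* -EBUSY means the pte is already present; either way the core MM
	 * is told the mapping exists and should not keep faulting here. */
	return VM_FAULT_NOPAGE;
}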
What can possibly cause the fault handler to repeat endlessly?
If the wrong page were backing the user address, it should trigger a bad
access or some other subsequent failure - but the system keeps running fine,
minus all the local consoles! If the page were inserted at the wrong address,
this could happen; but the top-level trap is the only provider of the
address, the fault address and the vma address match, and the same code works
fine on a bare-metal boot.
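If I wanted to double-check that suspicion, something along these lines could
dump what the MM actually ended up mapping - a diagnostic sketch only;
follow_pfn() only works on VM_IO/VM_PFNMAP vmas, and the pfn_to_mfn() part is
my assumption about what matters on a pv guest:

#include <linux/kernel.h>
#include <linux/mm.h>
#ifdef CONFIG_XEN
#include <asm/xen/page.h>
#endif

/* Diagnostic sketch: after the insert, read back the pfn the core MM has
 * for 'address' and compare it with the one we meant to insert. */
static void ttm_fault_dump_mapping(struct vm_area_struct *vma,
				   unsigned long address,
				   unsigned long wanted_pfn)
{
	unsigned long mapped_pfn;

	if (follow_pfn(vma, address, &mapped_pfn))
		return;	/* not a VM_IO/VM_PFNMAP vma, or nothing mapped */

	printk(KERN_DEBUG "ttm fault: addr %lx wanted pfn %lx, pte pfn %lx\n",
	       address, wanted_pfn, mapped_pfn);

#ifdef CONFIG_XEN
	/* Assumption: on a pv guest the GPU ultimately needs the machine
	 * frame, so print the translation as well for comparison. */
	printk(KERN_DEBUG "ttm fault: pfn %lx -> mfn %lx\n",
	       wanted_pfn, pfn_to_mfn(wanted_pfn));
#endif
}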
ttm_tt_get_page calls the page allocator in a loop, so it may allocate
several pages, filling the array from the start or the end depending on
whether each page is highmem or not - implying that allocation and mapping
happen asynchronously.
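My reading of that loop, paraphrased - simplified from the 2.6.33-era ttm_tt.c
with the accounting and error paths dropped, so don't take the details as
gospel:

#include <linux/mm.h>
#include <drm/ttm/ttm_bo_driver.h>

/* Paraphrase of the allocation loop: keep allocating pages, parking
 * highmem pages from the top of the array and lowmem pages from the
 * bottom, until the slot for 'index' happens to get filled. */
static struct page *ttm_get_page_sketch(struct ttm_tt *ttm, int index)
{
	struct page *p;

	while ((p = ttm->pages[index]) == NULL) {
		p = alloc_page(GFP_HIGHUSER);
		if (!p)
			return NULL;

		if (PageHighMem(p))
			ttm->pages[--ttm->first_himem_page] = p;
		else
			ttm->pages[++ttm->last_lomem_page] = p;
	}
	return p;
}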
All I want right now is for *ptr = (uint32_t)data to work!
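That write comes from nothing more exotic than user space mmap'ing the object
through the drm fd and storing to it, roughly like this - the offset and size
are just placeholders for whatever the driver's map ioctl returns:

/* User-space sketch: map a buffer object through the drm fd and poke it. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>

int poke_bo(int drm_fd, off_t map_offset, size_t map_size, uint32_t data)
{
	void *map = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, drm_fd, map_offset);
	if (map == MAP_FAILED)
		return -1;

	/* This store is the "*ptr = (uint32_t)data" above: the first touch
	 * faults into ttm_bo_vm_fault(), which is where the loop happens. */
	*(volatile uint32_t *)map = data;

	munmap(map, map_size);
	return 0;
}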
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel