Re: [Xen-devel] PCI DMA Limitations

>> Again, Xen doesn't limit the order significantly, and statically bumping
>> MAX_CONTIG_ORDER doesn't seem like too good an idea.
>> Xen can reserve memory for DMA purposes via the dma_emergency_pool
>> command line option.
>
>Would it be possible to implement something less expensive, but with larger
>than page granularity, perhaps similar to HugeTLB? This could keep the
>static state requirements down while allowing larger regions to be used.

Everything you really want is possible, but this one is certainly not
straightforward.
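
For illustration, the guest-side path under discussion is essentially the
standard Linux DMA API: the driver asks for a machine-contiguous buffer,
and under Xen the kernel has to exchange pages with the hypervisor to
satisfy it, failing once the request exceeds the hypervisor's
contiguous-order limit. A minimal sketch, assuming a Linux guest driver
(the function name and the caller-supplied size are made up for
illustration):

/* Sketch: request a machine-contiguous DMA buffer from a guest driver.
 * Under Xen this ends up exchanging pages with the hypervisor, and a
 * request larger than the contiguous-order limit (a few MB at most,
 * unless MAX_CONTIG_ORDER is bumped) will simply fail. */
#include <linux/pci.h>
#include <linux/dma-mapping.h>

static void *buf;
static dma_addr_t bus_addr;

static int alloc_dma_buffer(struct pci_dev *pdev, size_t size)
{
        /* A size in the hundreds of MB will not be satisfiable as one
         * contiguous region -- hence the s/g discussion below. */
        buf = dma_alloc_coherent(&pdev->dev, size, &bus_addr, GFP_KERNEL);
        return buf ? 0 : -ENOMEM;
}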

>> >Is there a simpler/more direct way to provide DMA access to large buffers in
>> >guest VMs? I was curious about how RDMA cards (e.g. Infiniband) are
>> >supported, are they required to use DAC and scatter-gather in some way?
>>
>> Yes, s/g is certainly very much preferable here, due to the possibly huge
>> amounts of data otherwise needing copying. Also, RDMA is (hopefully) not
>> restricted to 32-bit machine addresses, as that would be another reason
>> to force it though the swiotlb.
>
>My device really wants large (.5-2 GB) contiguous bus address ranges with a
>32-bit DMA mask limitation, and zero copy access from VM user spaces. This
>doesn't seem to be possible with the current architecture.

Up to 2GB? That is half of the theoretical 4GB maximum a 32-bit DMA mask can
address, and on some machines this may be *all* the memory below 4GB.
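
To illustrate the s/g alternative mentioned in the quoted reply above:
rather than one huge contiguous buffer, the driver maps a list of
discontiguous pages and hands the device one descriptor per entry. A rough
sketch with the generic Linux API, not Xen-specific; the scatterlist setup
is assumed to have been done elsewhere, and for_each_sg() assumes a kernel
that provides it:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map 'nents' discontiguous entries for DMA instead of requiring one
 * contiguous region; each mapped entry carries its own bus address and
 * length for the device's scatter-gather descriptor list. */
static int map_sg_buffer(struct device *dev, struct scatterlist *sgl,
                         int nents)
{
        struct scatterlist *sg;
        int i, mapped;

        mapped = dma_map_sg(dev, sgl, nents, DMA_BIDIRECTIONAL);
        if (!mapped)
                return -EIO;

        for_each_sg(sgl, sg, mapped, i) {
                /* program sg_dma_address(sg) / sg_dma_len(sg) into the
                 * device's descriptor ring here */
        }
        return mapped;
}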

>If the 32-bit DMA mask limitation was lifted (allowing full 64-bit
>addressing), does this help? The VMs would still need to be able to allocate
>and map large physically contiguous address regions somehow?

Lifting the limitation would be not only desirable but necessary, as per the
above comment. But it wouldn't get you much closer without s/g, I'm afraid.
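
For completeness, the 64-bit addressing question corresponds to the
driver's DMA mask negotiation. A minimal sketch of the usual pattern: try
64-bit first and only fall back to 32-bit (and hence, under Xen, bounce
buffering through the swiotlb for memory above 4GB) if the device or
platform cannot do better:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Prefer full 64-bit bus addressing; fall back to a 32-bit mask only
 * if that fails. pci_set_dma_mask() returns 0 on success. */
static int negotiate_dma_mask(struct pci_dev *pdev)
{
        if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)))
                return 0;               /* 64-bit DMA available */
        return pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
}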

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
