This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] PCI DMA Limitations

>>> "Stephen Donnelly" <sfdonnelly@xxxxxxxxx> 26.03.07 05:35 >>>
>I've been reading the XenLinux code from 3.0.4 and would appreciate
>clarification of the limitations on PCI DMA under Xen. I'm considering how
>to deal with a peripheral that requires large DMA buffers.
>All 'normal Linux' PCI DMA from Driver Domains (e.g. dom0) goes through the
>SWIOTLB code via a restricted window. e.g. when booting:
>Software IO TLB enabled:
> Aperture:     64 megabytes
> Kernel range: 0xffff880006ea2000 - 0xffff88000aea2000
> Address size: 30 bits
>PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
>The size of the aperture is configurable when the XenLinux kernel boots. The
>maximum streaming DMA allocation (via dma_map_single) is limited by
>IO_TLB_SIZE to 128 slabs * 4k = 512kB. Synchronisation is explicit via
>dma_sync_single and involves the CPU copying pages via these 'bounce
>buffers'. Is this correct?
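
A minimal sketch of the streaming path described above (hypothetical driver
code, not from this thread; the function names are the standard Linux DMA API
of that era, while the device, buffer, and function names in the fragment are
invented):

```c
#include <linux/dma-mapping.h>

/* Under Xen, dma_map_single() for such a buffer is serviced from the
 * swiotlb aperture, so 'bus' below points at a bounce buffer; the sync
 * call is what makes the CPU copy the bounce pages back into buf. */
static int example_rx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t bus;

	/* len must stay within the per-mapping swiotlb limit (512kB here) */
	bus = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(bus))
		return -ENOMEM;

	/* ... hand 'bus' to the device, wait for the DMA to complete ... */

	dma_sync_single_for_cpu(dev, bus, len, DMA_FROM_DEVICE);
	/* buf now holds the received data */

	dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
	return 0;
}
```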
>If the kernel is modified by increasing IO_TLB_SIZE, will this allow larger
>mappings, or is there a matching limitation in the hypervisor?

Not a significant one in the hypervisor: 4GB chunks (2^20 pages), and of
course you are bound to available memory in the requested range. But the
setup here also goes through xen_create_contiguous_region(), so the
2MB limitation there applies as well.

>Coherent mappings via dma_alloc_coherent exchange VM pages for contiguous
>low hypervisor pages. The allocation size is limited by MAX_CONTIG_ORDER = 9
>in xen_create_contiguous_region to 2^9 * 4k = 2MB?

Yes, though for very special cases (the AGP aperture) extending this limit
is being considered. That would likely not be done by just bumping
MAX_CONTIG_ORDER, due to the effect this would have on the sizes of static
variables.
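
For comparison, the coherent path as a hypothetical fragment (under Xen,
dma_alloc_coherent() exchanges the guest's pages for machine-contiguous low
pages via xen_create_contiguous_region(), hence the 2MB cap; the names and
size here are illustrative):

```c
#include <linux/dma-mapping.h>

#define BUF_SIZE (2 * 1024 * 1024)	/* MAX_CONTIG_ORDER = 9 -> 2MB cap */

void *cpu_addr;
dma_addr_t bus_addr;

/* Requests above the cap fail outright rather than bouncing */
cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &bus_addr, GFP_KERNEL);
if (!cpu_addr)
	return -ENOMEM;

/* ... device and CPU share the buffer with no explicit sync calls ... */

dma_free_coherent(dev, BUF_SIZE, cpu_addr, bus_addr);
```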

>Is it possible to increase MAX_CONTIG_ORDER in a guest OS unilaterally, or
>is there a matching limitation in the hypervisor? I didn't see any options
>to Xen to configure the amount of memory reserved for coherent DMA mappings.

Again, Xen doesn't significantly limit the order, and statically bumping
MAX_CONTIG_ORDER doesn't seem like a good idea.
Xen can reserve memory for DMA purposes via the dma_emergency_pool
command line option.
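
For example, a boot entry passing that option to the hypervisor might look
like this (the pool size, kernel file name, and root device are illustrative
only; check the Xen documentation for your version's exact syntax):

```
kernel /boot/xen.gz dma_emergency_pool=16M
module /boot/vmlinuz-2.6.16-xen root=/dev/sda1
```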

>Is there a simpler/more direct way to provide DMA access to large buffers in
>guest VMs? I was curious about how RDMA cards (e.g. Infiniband) are
>supported, are they required to use DAC and scatter-gather in some way?

Yes, s/g is certainly much preferable here, due to the possibly huge
amounts of data that would otherwise need copying. Also, RDMA is (hopefully)
not restricted to 32-bit machine addresses, as that would be another reason
to force it through the swiotlb.

