[Xen-devel] PCI DMA Limitations
I've been reading the XenLinux code from 3.0.4 and would appreciate clarification of the limitations on PCI DMA under Xen. I'm considering how to deal with a peripheral that requires large DMA buffers.
All 'normal Linux' PCI DMA from Driver Domains (e.g. dom0) occurs through the SWIOTLB code via a restricted window, e.g. when booting:
    Software IO TLB enabled:
     Aperture:     64 megabytes
     Kernel range: 0xffff880006ea2000 - 0xffff88000aea2000
     Address size: 30 bits
    PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
The size of the aperture is configurable when the XenLinux kernel boots. The maximum streaming DMA mapping (via dma_map_single) is limited by IO_TLB_SEGSIZE to 128 slabs * 4k = 512kB. Synchronisation is explicit via dma_sync_single and involves the CPU copying pages through these 'bounce buffers'. Is this correct?
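To make sure I'm reading the code correctly, the streaming pattern I have in mind is roughly the following ('dev', 'buf' and 'len' are placeholders, error handling trimmed):

    #include <linux/dma-mapping.h>

    /* Sketch of a receive path through the swiotlb bounce buffers.
     * 'len' must stay within the 512kB segment limit or the mapping
     * will fail. */
    static int receive_block(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t bus = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

            if (dma_mapping_error(bus))
                    return -ENOMEM;

            /* ... program the device and wait for the DMA to complete ... */

            /* CPU copies from the bounce buffer back into 'buf'. */
            dma_sync_single_for_cpu(dev, bus, len, DMA_FROM_DEVICE);

            dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
            return 0;
    }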
If the kernel is modified to increase IO_TLB_SEGSIZE, will this allow larger mappings, or is there a matching limitation in the hypervisor?
Coherent mappings via dma_alloc_coherent exchange VM pages for contiguous low machine pages. Is the allocation size limited by MAX_CONTIG_ORDER = 9 in xen_create_contiguous_region to 2^9 * 4k = 2MB?
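For comparison, the coherent case I'm describing is roughly this ('dev' is a placeholder):

    #include <linux/dma-mapping.h>

    /* Sketch: dma_alloc_coherent, backed by xen_create_contiguous_region.
     * If my reading is right, anything above 2MB (order 9) fails here. */
    static void *grab_dma_buffer(struct device *dev, dma_addr_t *bus)
    {
            return dma_alloc_coherent(dev, 2 * 1024 * 1024, bus, GFP_KERNEL);
    }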
Is it possible to increase MAX_CONTIG_ORDER in a guest OS unilaterally, or is there a matching limitation in the hypervisor? I didn't see any Xen options for configuring the amount of memory reserved for coherent DMA mappings.
Is there a simpler/more direct way to provide DMA access to large buffers in guest VMs? I was curious about how RDMA cards (e.g. InfiniBand) are supported; are they required to use DAC and scatter-gather in some way?
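By scatter-gather I mean roughly the following, where program_descriptor() is a stand-in for whatever the real driver does to feed descriptors to the card:

    #include <linux/scatterlist.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical device-specific helper. */
    extern void program_descriptor(dma_addr_t addr, unsigned int len);

    /* Sketch: map a large buffer as a scatterlist so that no single
     * mapping needs to exceed the swiotlb segment limit. */
    static int map_large_buffer(struct device *dev, struct scatterlist *sg,
                                int nents)
    {
            int i, count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);

            if (count == 0)
                    return -ENOMEM;

            for (i = 0; i < count; i++)
                    program_descriptor(sg_dma_address(&sg[i]),
                                       sg_dma_len(&sg[i]));

            /* ... after the transfer completes ... */
            dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);
            return 0;
    }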
Thanks, Stephen.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel