[Xen-devel] PCI DMA Limitations

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] PCI DMA Limitations
From: "Stephen Donnelly" <sfdonnelly@xxxxxxxxx>
Date: Mon, 26 Mar 2007 15:35:24 +1200
I've been reading the XenLinux code from 3.0.4 and would appreciate clarification of the limitations on PCI DMA under Xen. I'm considering how to deal with a peripheral that requires large DMA buffers.

All 'normal Linux' PCI DMA from driver domains (e.g. dom0) goes through the SWIOTLB code, via a restricted window. For example, when booting:

Software IO TLB enabled:
 Aperture:     64 megabytes
 Kernel range: 0xffff880006ea2000 - 0xffff88000aea2000
 Address size: 30 bits
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)

The size of the aperture is configurable when the XenLinux kernel boots. The maximum streaming DMA allocation (via dma_map_single) is limited by IO_TLB_SIZE to 128 slabs * 4k = 512kB. Synchronisation is explicit via dma_sync_single and involves the CPU copying pages through these 'bounce buffers'. Is this correct?
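To be concrete, this is roughly the streaming pattern I have in mind (just a sketch; the driver, 'pdev', 'buf' and 'len' are hypothetical placeholders and error handling is omitted):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static void example_streaming_dma(struct pci_dev *pdev, void *buf, size_t len)
{
        /* Mapping bounces through the SWIOTLB aperture when the buffer
         * is not directly DMA-reachable. */
        dma_addr_t bus = dma_map_single(&pdev->dev, buf, len, DMA_FROM_DEVICE);

        /* ... hand 'bus' to the device and let it DMA into the buffer ... */

        /* Copies the data out of the bounce buffer before the CPU reads it. */
        dma_sync_single_for_cpu(&pdev->dev, bus, len, DMA_FROM_DEVICE);

        dma_unmap_single(&pdev->dev, bus, len, DMA_FROM_DEVICE);
}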

If the kernel is modified to increase IO_TLB_SIZE, will that allow larger mappings, or is there a matching limitation in the hypervisor?

Coherent mappings via dma_alloc_coherent exchange VM pages for contiguous low machine pages. Is the allocation size limited by MAX_CONTIG_ORDER = 9 in xen_create_contiguous_region, i.e. to 2^9 * 4k = 2MB?
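For reference, the coherent usage I mean is simply the following (again a hypothetical sketch, 'pdev' assumed; my reading of the code is that sizes much above 2MB would fail here):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

static void *example_coherent_dma(struct pci_dev *pdev, dma_addr_t *bus)
{
        /* 2MB = 2^9 pages of 4k, i.e. the MAX_CONTIG_ORDER limit above. */
        return dma_alloc_coherent(&pdev->dev, 2 * 1024 * 1024, bus, GFP_KERNEL);
}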

Is it possible to increase MAX_CONTIG_ORDER in a guest OS unilaterally, or is there a matching limitation in the hypervisor? I didn't see any Xen option for configuring the amount of memory reserved for coherent DMA mappings.

Is there a simpler or more direct way to provide DMA access to large buffers in guest VMs? I am also curious about how RDMA cards (e.g. InfiniBand) are supported; are they required to use DAC and scatter-gather in some way?
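In case it clarifies the question, the scatter-gather alternative I am imagining looks roughly like this (hypothetical sketch; 'pdev', 'sg' and 'nents' are placeholders):

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int example_sg_dma(struct pci_dev *pdev, struct scatterlist *sg, int nents)
{
        /* Map a large buffer as many entries rather than one
         * machine-contiguous region. */
        int mapped = dma_map_sg(&pdev->dev, sg, nents, DMA_FROM_DEVICE);

        if (mapped == 0)
                return -EIO;

        /* ... program the device's descriptor ring from sg_dma_address()
         * and sg_dma_len() for each of the 'mapped' entries ... */

        /* Per DMA-API.txt, unmap with the original 'nents', not 'mapped'. */
        dma_unmap_sg(&pdev->dev, sg, nents, DMA_FROM_DEVICE);
        return 0;
}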

Thanks,
Stephen.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel