This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] PCI DMA Limitations

To: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] PCI DMA Limitations
From: "Stephen Donnelly" <sfdonnelly@xxxxxxxxx>
Date: Tue, 27 Mar 2007 16:01:39 +1200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4607958A.76E4.0078.0@xxxxxxxxxx>
References: <5f370d430703252035i62091c0drebc7e375703c5ca7@xxxxxxxxxxxxxx> <4607958A.76E4.0078.0@xxxxxxxxxx>
On 3/26/07, Jan Beulich <jbeulich@xxxxxxxxxx> wrote:
>>> "Stephen Donnelly" <sfdonnelly@xxxxxxxxx> 26.03.07 05:35 >>>

> >Coherent mappings via dma_alloc_coherent exchange VM pages for contiguous
> >low hypervisor pages. The allocation size is limited by MAX_CONTIG_ORDER = 9
> >in xen_create_contiguous_region to 2^9 * 4k = 2MB?

> Yes, though for very special cases (AGP aperture) extending this limit is
> being considered, though not likely by just bumping MAX_CONTIG_ORDER
> (due to the effect this would have on static variables' sizes).

> >Is it possible to increase MAX_CONTIG_ORDER in a guest OS unilaterally, or
> >is there a matching limitation in the hypervisor? I didn't see any options
> >to Xen to configure the amount of memory reserved for coherent DMA mappings.

> Again, Xen doesn't limit the order significantly, and statically bumping
> MAX_CONTIG_ORDER doesn't seem like too good an idea.
> Xen can reserve memory for DMA purposes via the dma_emergency_pool
> command line option.
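For anyone finding this in the archives: the option takes a size on the Xen command line, so a grub stanza might look roughly like the line below. The 16M figure is purely an example value, not a recommendation.

```
# Illustrative grub entry; the dma_emergency_pool size shown is an example
kernel /boot/xen.gz dma_emergency_pool=16M
```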

Would it be possible to implement something less expensive, but with larger-than-page granularity, perhaps similar to HugeTLB? This could keep the static state requirements down while allowing larger regions to be used.

> >Is there a simpler/more direct way to provide DMA access to large buffers in
> >guest VMs? I was curious about how RDMA cards (e.g. Infiniband) are
> >supported, are they required to use DAC and scatter-gather in some way?

> Yes, s/g is certainly very much preferable here, due to the possibly huge
> amounts of data otherwise needing copying. Also, RDMA is (hopefully) not
> restricted to 32-bit machine addresses, as that would be another reason
> to force it through the swiotlb.

My device really wants large (0.5 to 2 GB) contiguous bus-address ranges under a 32-bit DMA mask limitation, plus zero-copy access from VM user space. This doesn't seem to be possible with the current architecture.

If the 32-bit DMA mask limitation were lifted (allowing full 64-bit addressing), would that help? The VMs would still need some way to allocate and map large physically contiguous address regions.


Xen-devel mailing list