This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] high memory dma update: up against a wall

To: "Scott Parish" <srparish@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] high memory dma update: up against a wall
From: "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>
Date: Tue, 12 Jul 2005 21:36:10 -0700
Delivery-date: Wed, 13 Jul 2005 04:34:55 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcWHDld6PqTVWNiTS9OLf+9RCCTRfwAUcmqA
Thread-topic: [Xen-devel] high memory dma update: up against a wall
Scott Parish wrote:
> I've been slowly working on the dma problem i ran into; thought i was
> making progress, but i think i'm up against a wall, so more discussion
> and ideas might be helpful.

I think porting swiotlb (arch/ia64/lib/swiotlb.c) is another approach
for EM64T, as it is already used in the native x86_64 Linux kernel. We
need at least 64MB of physically contiguous memory below 4GB for that.
For dom0, I think we can find such an area at boot time.

We have a plan to work on that, but it will be after OLS...

Basically, io_tlb_start is the starting address of the bounce buffer.
You need to ensure that the memory is physically contiguous in the
machine physical address space. I think it's easy to find such an area
in dom0. alloc_bootmem_low_pages() may not work, so you may need to
write a new (simple) function.

void
swiotlb_init_with_default_size (size_t default_size)
{
        unsigned long i;

        if (!io_tlb_nslabs) {
                io_tlb_nslabs = (default_size >> PAGE_SHIFT);
                io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
        }

        /*
         * Get IO TLB memory from the low pages
         */
        io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
                                               (1 << IO_TLB_SHIFT));

Another thing is to use virt_to_bus(), not virt_to_phys(). See below.

void *
swiotlb_alloc_coherent(struct device *hwdev, size_t size,
                       dma_addr_t *dma_handle, int flags)
{
        unsigned long dev_addr;
        void *ret;
        int order = get_order(size);

        /*
         * XXX fix me: the DMA API should pass us an explicit DMA mask
         * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a
         * bit range instead of a 16MB one).
         */
        flags |= GFP_DMA;

        ret = (void *)__get_free_pages(flags, order);
        if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
                /*
                 * The allocated memory isn't reachable by the device.
                 * Fall back on swiotlb_map_single().
                 */
                free_pages((unsigned long) ret, order);
                ret = NULL;
        }

The basic idea of swiotlb is: if the allocated memory is below 4GB,
just use it. If not, allocate a chunk from the bounce buffer:

        if (!ret) {
                /*
                 * We are either out of memory or the device can't DMA
                 * to GFP_DMA memory; fall back on
                 * swiotlb_map_single(), which will grab memory from
                 * the lowest available address range.
                 */
                dma_addr_t handle;
                handle = swiotlb_map_single(NULL, NULL, size,
                                            DMA_FROM_DEVICE);
                if (dma_mapping_error(handle))
                        return NULL;

                ret = phys_to_virt(handle);
        }

Intel Open Source Technology Center

Xen-devel mailing list