WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] upstream-qemu: cpu_physical_memory_map() behavior difference

To: Anthony PERARD <anthony.perard@xxxxxxxxxx>
Subject: [Xen-devel] upstream-qemu: cpu_physical_memory_map() behavior difference
From: Takeshi HASEGAWA <hasegaw@xxxxxxxxx>
Date: Tue, 10 May 2011 03:59:21 +0900
Cc: Xen Devel <Xen-devel@xxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>, Wei Liu <liuw@xxxxxxxxx>
Delivery-date: Mon, 09 May 2011 12:01:18 -0700
FYI

While tracing virtio-blk crash issues, I have found that cpu_physical_memory_map()
in upstream-qemu+xen behaves differently from other configurations. This prevents
the virtio-ring code bundled with qemu from working properly.

When a caller requests more than two guest-physical pages, the function
maps as much of the range into host-virtual address space as it can.
In kvm+qemu the mapped region is always contiguous in host-virtual address
space, so everything works.
In xen+qemu, however, the mapping is sometimes fragmented and only partial.

According to the comment on cpu_physical_memory_map(), it does not
guarantee that the whole range the caller requested gets mapped. However,
the virtio backend drivers in qemu expect all requested guest-physical
pages to be mapped contiguously in host-virtual address space.

# Sorry for the lack of a patch; I have no good idea how to fix this right now.


Thanks,
Takeshi


qemu-dm-v14/hw/virtio.c:

void virtqueue_map_sg(struct iovec *sg, target_phys_addr_t *addr,
    size_t num_sg, int is_write)
{
    unsigned int i;
    target_phys_addr_t len;

    for (i = 0; i < num_sg; i++) {
        len = sg[i].iov_len;
        sg[i].iov_base = cpu_physical_memory_map(addr[i], &len, is_write);
        if (sg[i].iov_base == NULL || len != sg[i].iov_len) {
            error_report("virtio: trying to map MMIO memory");
            exit(1); // BOMB!!
        }
    }
}

qemu-dm-v14/exec.c:
   3978     while (len > 0) {
                (snip)
   3990         if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
   3991             if (done || bounce.buffer) {
   3992                 break;
   3993             }
   3994             bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, TARGET_PAGE_SIZE);
   3995             bounce.addr = addr;
   3996             bounce.len = l;
   3997             if (!is_write) {
   3998                 cpu_physical_memory_read(addr, bounce.buffer, l);
   3999             }
   4000             ptr = bounce.buffer;
   4001         } else {
   4002             addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
   4003             ptr = qemu_get_ram_ptr(addr1);
                    // KVM returns virtual addresses sequentially, but Xen does not.
   4004         }
   4005         if (!done) {
   4006             ret = ptr;
   4007         } else if (ret + done != ptr) {
                    // This break triggers especially in xen+upstream-qemu.
   4008             break;
   4009         }
   4010
   4011         len -= l;
   4012         addr += l;
   4013         done += l;
   4014     }
   4015     *plen = done;
   4016     return ret;
   4017 }

-- 
Takeshi HASEGAWA <hasegaw@xxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
