
To: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] [IOEMU]: fix the crash of HVM live migration with intensive disk access
From: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Date: Tue, 11 Aug 2009 20:12:51 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, edwin.zhai@xxxxxxxxx
Delivery-date: Tue, 11 Aug 2009 05:16:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.16 (2007-06-09)
 [IOEMU]: fix the crash of HVM live migration with intensive disk access

Intensive disk access (e.g. checksumming a big file) during HVM live migration
can cause guest errors or even file system corruption. The guest dmesg reports:
"attempt to access beyond end of device
hda1: rw=0, want=10232032112, limit=10474317"

The map cache used by qemu's DMA path does not currently mark mapped pages
dirty, so these pages (which probably hold DMA data structures) are not
transferred in the last iteration of live migration.

This patch fixes that, and also folds qemu's original dirty-bitmap updates
(used by other devices such as VGA) into the logdirty bitmap.

Signed-Off-By: Zhai Edwin <edwin.zhai@xxxxxxxxx>
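
For readers unfamiliar with the logdirty machinery, the following is a minimal,
standalone sketch of the bitmap arithmetic the patch performs when a DMA
mapping is created; it is not part of the patch. The mark_range_dirty()
helper, the fixed-size bitmap, and the TARGET_PAGE_BITS value of 12 are
illustrative assumptions made here; in qemu the bitmap is logdirty_bitmap,
sized by logdirty_bitmap_size, and the constants come from qemu's headers.

/*
 * Standalone sketch (NOT part of the patch) of the log-dirty bitmap
 * arithmetic: given a guest-physical address range, compute each page
 * frame number and set the corresponding bit in a bitmap of
 * unsigned longs.  Values below are assumptions for a 64-bit host.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TARGET_PAGE_BITS 12                     /* 4 KiB guest pages (assumed) */
#define HOST_LONG_BITS   (8 * sizeof(unsigned long))

static unsigned long logdirty_bitmap[64];       /* covers 64 * HOST_LONG_BITS pages */
static unsigned long logdirty_bitmap_size = sizeof(logdirty_bitmap); /* bytes */

/* Mark every page touched by [addr, addr+len) dirty, mirroring what the
 * patch does for mappings returned by cpu_physical_memory_map(). */
static void mark_range_dirty(uint64_t addr, uint64_t len)
{
    unsigned long pfn = addr >> TARGET_PAGE_BITS;

    do {
        if (pfn / 8 >= logdirty_bitmap_size)
            fprintf(stderr, "pfn %lx beyond bitmap\n", pfn);
        else
            logdirty_bitmap[pfn / HOST_LONG_BITS] |= 1UL << (pfn % HOST_LONG_BITS);
        pfn++;
    } while (((uint64_t)pfn << TARGET_PAGE_BITS) < addr + len);
}

int main(void)
{
    memset(logdirty_bitmap, 0, sizeof(logdirty_bitmap));
    mark_range_dirty(0x3000, 0x2500);           /* spans frames 3, 4 and 5 */
    printf("bitmap[0] = %#lx\n", logdirty_bitmap[0]);  /* expect 0x38 */
    return 0;
}

With 4 KiB pages, a 0x2500-byte mapping starting at 0x3000 ends at 0x5500 and
so touches frames 3, 4 and 5; the sketch therefore prints bitmap[0] = 0x38.
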


Index: hv/tools/ioemu-remote/cpu-all.h
===================================================================
--- hv.orig/tools/ioemu-remote/cpu-all.h
+++ hv/tools/ioemu-remote/cpu-all.h
@@ -975,6 +975,16 @@ static inline int cpu_physical_memory_ge
 static inline void cpu_physical_memory_set_dirty(ram_addr_t addr)
 {
     phys_ram_dirty[addr >> TARGET_PAGE_BITS] = 0xff;
+
+#ifndef CONFIG_STUBDOM
+    if (logdirty_bitmap != NULL) {
+        addr >>= TARGET_PAGE_BITS;
+        if (addr / 8 < logdirty_bitmap_size) {
+            logdirty_bitmap[addr / HOST_LONG_BITS]
+                |= 1UL << addr % HOST_LONG_BITS;
+        }
+    }
+#endif
 }
 
 void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
Index: hv/tools/ioemu-remote/i386-dm/exec-dm.c
===================================================================
--- hv.orig/tools/ioemu-remote/i386-dm/exec-dm.c
+++ hv/tools/ioemu-remote/i386-dm/exec-dm.c
@@ -806,6 +806,24 @@ void *cpu_physical_memory_map(target_phy
     if ((*plen) > l)
         *plen = l;
 #endif
+#ifndef CONFIG_STUBDOM
+    if (logdirty_bitmap != NULL) {
+        /* Record that we have dirtied this frame */
+        unsigned long pfn = addr >> TARGET_PAGE_BITS;
+        do {
+            if (pfn / 8 >= logdirty_bitmap_size) {
+                fprintf(logfile, "dirtying pfn %lx >= bitmap "
+                        "size %lx\n", pfn, logdirty_bitmap_size * 8);
+            } else {
+                logdirty_bitmap[pfn / HOST_LONG_BITS]
+                    |= 1UL << pfn % HOST_LONG_BITS;
+            }
+
+            pfn++;
+        } while ( (pfn << TARGET_PAGE_BITS) < addr + *plen );
+
+    }
+#endif
     return qemu_map_cache(addr, 1);
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel