xen-users

[Xen-users] Best way to migrate Xen Disk IMG w/LVM's to a block-device?

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Best way to migrate Xen Disk IMG w/LVM's to a block-device? (e.g. DRBD)
From: Daniel Kao <dkao@xxxxxxxxxxxx>
Date: Tue, 03 Feb 2009 17:47:36 -0800
Delivery-date: Tue, 03 Feb 2009 17:48:52 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (Windows/20081209)

Hi All,

I was hoping to pick someone's brain about the best way to move a Xen disk image (which has LVM volumes inside it) to a new block device. I have the target DRBD device ready, but I'm unsure of the best and safest way to do it.
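
For reference, the most direct route I can picture (just a sketch; the image path and /dev/drbd0 are placeholder names, and it assumes a raw image and a shut-down domU, with the target device at least as large as the image) would be a straight copy onto the DRBD device while this node is primary, so the writes replicate to the peer as they land:

    # Placeholder paths; the domU must be shut down first.
    dd if=/var/lib/xen/images/domu.img of=/dev/drbd0 bs=4M conv=fsync
    # Then point the domU config at the DRBD device instead of the file:
    #   disk = [ 'phy:/dev/drbd0,xvda,w' ]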

I was actually tempted to expose the Xen disk image in dom0 via a loop device, promote that node's DRBD resource to primary with overwrite-data-of-peer set, and let DRBD sync it over, but there must be an easier, faster, safer way.
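
To spell out that idea (again only a sketch; r0, /dev/loop0, and the image path are placeholders, and the DRBD resource would need the loop device configured as its backing disk):

    # Expose the raw image as a block device:
    losetup /dev/loop0 /var/lib/xen/images/domu.img
    # Bring up the resource (its config must name /dev/loop0 as the disk):
    drbdadm up r0
    # Promote this node and overwrite the peer, forcing a full sync:
    drbdadm -- --overwrite-data-of-peer primary r0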

Thanks in advance!
-- 
Daniel Kao
Übermind, Inc.
dkao@xxxxxxxxxxxx
Seattle, WA, U.S.A.
+1.206.412.5765 
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users