xen-users

RE: [Xen-users] Recursive LVM

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Recursive LVM
From: "Roger Lucas" <roger@xxxxxxxxxxxxx>
Date: Tue, 24 Oct 2006 16:35:54 +0100
Delivery-date: Tue, 24 Oct 2006 08:36:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <453E4B7C.13168.723E6B0@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acb3gDtVUpJvdR+VTmm0S5p9SZ+fNQAAFSCg
Hi Ulrich,

We use "recursive LVM" as you describe it here with 3.0.2 and it works fine.
You need to be careful with your Dom0 LVM configuration, however, and this
may be related to your problem (but I am not sure).

If you are using a typical LVM config, then the Dom0 LVM will scan all the
block devices in Dom0 looking for LVM data at startup, as part of the
"vgchange -ay" command which is typically in a startup script somewhere.
If you have an LVM inside a DomU on top of an LVM block device exported
from Dom0, you may get the situation where Dom0 finds the LVM belonging to
the DomU during the Dom0 LVM scan.  This is _bad_.
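For illustration, the activation step at boot usually amounts to something
like this (the exact script and options vary by distro; this is only a
sketch of the standard LVM2 commands involved):

<snip>
  # Scan all visible block devices for LVM PV labels
  vgscan
  # Activate every VG that was found - potentially including the DomU's
  vgchange -ay
<snip>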

The simple solution is to use the "filter" option in /etc/lvm/lvm.conf to 
explicitly include only the drives on which you know you
have the Dom0 LVMs present.  We use:

/etc/lvm/lvm.conf:
<snip>
  filter = [ "a|/dev/hd*|", "a|/dev/sd*|", "a|/dev/md0|", "r|.*|" ]
<snip>

This allows /dev/hda, /dev/hdb, ..., /dev/sda, /dev/sdb, ... and /dev/md0
but rejects everything else.  Dom0 and the DomU will then no longer both
try to activate the same LVM volumes, and all should run smoothly.
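If you want to check that the filter does what you expect, something like
this should work (both are standard LVM2 commands; the output obviously
depends on your setup):

<snip>
  # Lists every block device LVM is still willing to look at after filtering
  lvmdiskscan
  # Should now report only Dom0's own PVs, not the DomU ones
  pvscan
<snip>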

We can then map the disk partitions in a DomU as:

/etc/xen/harpseal:
<snip>
        disk = ['phy:/dev/bigraid/harpseal,hda1,w',
                'phy:/dev/bigraid/harpseal-lvm,hda2,w']
<snip>

"/dev/bigraid" is a large LVM VG on Dom0.  "/dev/bigraid/harpseal" is a small 
LVM volume holding the DomU boot disk image.
"/dev/bigraid/harpseal-lvm" is a larger LVM volume that the DomU considers as 
an LVM PV.
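For completeness, a layout like ours can be built along these lines (the
sizes and the DomU VG name here are made up for the example):

<snip>
  # In Dom0: one small LV for the DomU boot disk, one large LV for its data
  lvcreate -L 4G   -n harpseal     bigraid
  lvcreate -L 100G -n harpseal-lvm bigraid

  # Inside the DomU, where the second LV appears as hda2:
  pvcreate /dev/hda2
  vgcreate harpseal_vg /dev/hda2   # pick a VG name unique across Dom0/DomUs
<snip>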

Like I said, I don't know if this is the root cause of your problem, but
you definitely need to get this config right anyhow, or you are storing up
a whole world of pain for a later date...

Best regards,

Roger

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On
> Behalf Of Ulrich Windl
> Sent: 24 October 2006 16:21
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Recursive LVM
> 
> Hi!
> 
> People are saying you shouldn't do it, but as it seems nice in concept, I
> did it:
> 
> Dom0 uses LVM to manage disk space, creating a large logical volume (LV)
> for each VM. I then use the LV as a block device in the DomU, and the DomU
> itself manages its disk space using LVM, too.
> 
> People warned me that the VG (volume group) names must be unique, so I made
> sure they were. I installed the system just fine, but when rebooting the
> DomU, no root filesystem can be found:
> 
> ...
> bootloader = '/usr/lib/xen/boot/domUloader.py'
> bootentry = 'hda1:/vmlinuz-xen,/boot/initrd-xen'
> ...
> 
> Traceback (most recent call last):
>   File "/usr/lib/xen/boot/domUloader.py", line 505, in ?
>     main(sys.argv)
>   File "/usr/lib/xen/boot/domUloader.py", line 499, in main
>     sxpr = copyKernelAndInitrd(fs, kernel, initrd)
>   File "/usr/lib/xen/boot/domUloader.py", line 404, in copyKernelAndInitrd
>     raise RuntimeError("domUloader: Filesystem %s not exported\n" % fs)
> RuntimeError: domUloader: Filesystem hda1 not exported
> 
> Error: Boot loader didn't return any data!
> 
> When attaching the LV to another bootable VM, "fdisk -l" reports three
> partitions:
> hda1 (/boot)
> hda2 (swap)
> hda3 (LVM, rest of the filesystems)
> 
> However, the partitions have names like hda1p1, hda1p2, and hda1p3.
> 
> When I boot using another kernel, I get:
> 
> kernel = '/boot/vmlinuz-2.6.16.21-0.25-xen'
> ramdisk = '/boot/initrd-2.6.16.21-0.25-xen'
> root = '/dev/as1/root'
> 
> ...
> 
> Loading xenblk
> Registering block device major 3
>  hda:Loading dm-mod
>  unknown partition table
> device-mapper: 4.5.0-ioctl (2005-10-04) initialised: dm-devel@xxxxxxxxxx
>  hdb:Loading dm-snapshot
> ata_id[473]: main: HDIO_GET_IDENTITY failed for '/dev/.tmp-3-0'
> Waiting for /dev/mapper/control to appear:  ok
> Loading reiserfs
>   Unable to find volume group "as1"
> Waiting for device /dev/as1/root to appear: .. unknown partition table
> ata_id[497]: main: HDIO_GET_IDENTITY failed for '/dev/.tmp-3-64'
> ............................not found -- exiting to /bin/sh
> [...]
> 
> Besides those problems, I wonder how I can loop-mount the LV the way Xen
> does it:
>   PID TTY      STAT   TIME COMMAND
> 20219 ?        S<     0:00 [xvd 14 07:00]
> 20220 ?        S<     0:00 [xvd 14 fd:00]
> 
> With 3.0.2 I had the bad experience that I could attach a block device to
> Dom0, but I could not detach it. Thus no DomU can use it until reboot. Kind
> of a nasty bug...
> 
> Any cool hints? System is SLES10 on x86_64...
> 
> Regards,
> Ulrich


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
