xen-users

Re: [Xen-users] guest boots successfully only one time then lvm complains no uuid found [SOLVED]

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] guest boots successfully only one time then lvm complains no uuid found [SOLVED]
From: Vu Pham <vu@xxxxxxxxxx>
Date: Sun, 18 Jan 2009 23:17:05 -0600
Delivery-date: Sun, 18 Jan 2009 21:18:04 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4973810E.7070901@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4972AC2B.8030306@xxxxxxxxxx> <4f89b72afddfbcb42c7254398b032047.squirrel@xxxxxxxxxxxxxxxxx> <4973810E.7070901@xxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (Windows/20081209)
I had two problems in my configuration:

1. When the guest was first installed, it was not set up with RAID (I added that later), so the initrd does not contain raid1.ko, and the initrd's init script neither loads raid1.ko nor runs mdautorun. So when the system boots, /dev/md0 is never detected and the VG cannot be activated. I am not sure whether this applies to other distros; in my case it is RHEL5. I fixed it by adding those pieces to initrd.img (rough initrd commands are sketched further below).

2. I used the whole disks /dev/xvdb and /dev/xvdc for the raid1. That makes /dev/md0 come up only *after* the real / partition is mounted and /etc/mdadm.conf is read. At boot time, before / is mounted, mdautorun does not recognize these two devices as raid devices. I fixed this by using partitions of type fd (Linux raid autodetect) instead, so /dev/md0 is detected at boot time.
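
For reference, the repartitioning went roughly like this inside the guest (reconstructed from memory rather than pasted from my session, so treat it as a sketch; partition names xvdb1/xvdc1 are simply what the kernel creates):

# Tear down the old whole-disk array (vgreduce the VG off /dev/md0 first
# if it was already extended onto it).
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/xvdb /dev/xvdc

# One partition spanning each disk, type fd (Linux raid autodetect),
# so the array can be auto-assembled at boot.
echo ',,fd' | sfdisk /dev/xvdb
echo ',,fd' | sfdisk /dev/xvdc

# Rebuild the mirror from the partitions instead of the whole disks,
# then record it for the running system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb1 /dev/xvdc1
mdadm --detail --scan > /etc/mdadm.conf

# Then pvcreate /dev/md0 and vgextend the VG onto it again.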

I guess I could still use the whole disks for the raid1 and have it detected at boot time if I changed the init script inside the initrd to assemble md0 itself.
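
For anyone wanting to go that route, the initrd side on RHEL5 is roughly the following (again a sketch from memory; the module path inside the image and the exact init lines can differ per kernel, and you should write to a new image name and point the guest's grub.conf at it):

# Easiest: rebuild the initrd and force the raid1 module in.
mkinitrd --with=raid1 /boot/initrd-$(uname -r).raid1.img $(uname -r)

# Or unpack the existing image and edit it by hand:
mkdir /tmp/initrd-work && cd /tmp/initrd-work
zcat /boot/initrd-$(uname -r).img | cpio -idmv
cp /lib/modules/$(uname -r)/kernel/drivers/md/raid1.ko lib/

# In ./init, next to the other insmod lines, add something like:
#   insmod /lib/raid1.ko
#   raidautorun /dev/md0
# (raidautorun is the nash built-in that scans fd-type partitions; with
# whole disks you would need mdadm inside the initrd to assemble md0.)

find . | cpio -o -H newc | gzip -9 > /boot/initrd-$(uname -r).raid1.img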

What I still do not understand is why my original guest image boots fine the first time, and only that one time. It should not work at all. I will check that later, though. :)

Anyway, it feels good now that it is fixed, and a bit silly looking back at it :)

Thanks,
Vu


Vu Pham wrote:
christoffer@xxxxxxxxx wrote:
It should be possible - I do precisely that: LVM on software raid1. I
think more detail is needed. What are your device names and what do you
export to your domU?

Cheers,
/Chris

I set up a guest (under a RHEL5 dom0) whose root volume group extends onto /dev/md0, which is a raid1.

The first time after setting it up, that guest boots just fine, but the next reboot panics because the root LVM cannot find the UUID of /dev/md0.

Because I have a backup of that image, I can restore and restart it, but then, even without changing anything, the next reboot panics again.


I do not have a problem when my main VG (the VG holding the LV that contains the / partition) is built directly on /dev/md0. I have another guest with that configuration and it runs just fine.

The problem guest has its main VG built on just one drive, and it boots normally. Then I built /dev/md0 on two other drives and *extended* the main VG onto /dev/md0's PV: just extending the VG, not yet resizing the LV inside it. I did run mdadm --detail --scan > /etc/mdadm.conf.
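
In commands, that was roughly the following (device and VG names as in the outputs below; the internal bitmap matches the mdadm --detail output):

# Build the mirror from the two extra disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/xvdb /dev/xvdc

# Turn it into a PV and extend the existing VG onto it.
pvcreate /dev/md0
vgextend g1volgroup00 /dev/md0

# Record the array for mdadm.
mdadm --detail --scan > /etc/mdadm.conf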

The first time I boot, LVM can see /dev/md0 and its PV, so the VG is found OK. The second time I boot, it complains that no UUID (for /dev/md0) is found. All of the configuration output below is from the first boot:

Below is my guest config:

[root@xen2 ~]# more /etc/xen/g1
name = "g1"
uuid = "2731f3ac-3bde-7105-46b9-794004cb3894"
maxmem = 1024
memory = 1024
vcpus = 1
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [  ]
disk = [ "tap:aio:/data/xen/images/g1.img,xvda,w", "tap:aio:/data/xen/images/g1_b.img,xvdb,w", "tap:aio:/data/xen/images/g1_c.img,xvdc,w" ]
vif = [ "mac=00:16:3e:6e:35:77,bridge=xenbr0" ]

[root@g1 ~]# vgdisplay -v
    Finding all volume groups
    Finding volume group "g1volgroup00"
  --- Volume group ---
  VG Name               g1volgroup00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  25
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               10.04 GB
  PE Size               8.00 MB
  Total PE              1285
  Alloc PE / Size       1266 / 9.89 GB
  Free  PE / Size       19 / 152.00 MB
  VG UUID               EFxLNW-31H9-lyLD-oprg-UvLB-lVNF-3Qv6zG

  --- Logical volume ---
  LV Name                /dev/g1volgroup00/lvroot
  VG Name                g1volgroup00
  LV UUID                y8C8j8-Ar8C-py5x-OL2E-K5zS-yqft-4vXDWU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                8.89 GB
  Current LE             1138
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/g1volgroup00/lvswap
  VG Name                g1volgroup00
  LV UUID                UTPDI8-LAGf-Tvp6-lLfD-vi6p-0RUv-S0rdKo
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             128
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Physical volumes ---
  PV Name               /dev/xvda2
  PV UUID               H0owWs-k8Li-w0Mb-3CgN-5gsM-haJy-BYYkrh
  PV Status             allocatable
  Total PE / Free PE    1266 / 0

  PV Name               /dev/md0
  PV UUID               4m2GFx-QDv0-Gb8q-qxAG-QMIc-If9i-b1ge12
  PV Status             allocatable
  Total PE / Free PE    19 / 19

[root@g1 ~]# pvscan
  PV /dev/xvda2   VG g1volgroup00   lvm2 [9.89 GB / 0    free]
  PV /dev/md0     VG g1volgroup00   lvm2 [152.00 MB / 152.00 MB free]
  Total: 2 [10.04 GB] / in use: 2 [10.04 GB] / in no VG: 0 [0   ]

[root@g1 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/xvda2
  VG Name               g1volgroup00
  PV Size               9.90 GB / not usable 6.76 MB
  Allocatable           yes (but full)
  PE Size (KByte)       8192
  Total PE              1266
  Free PE               0
  Allocated PE          1266
  PV UUID               H0owWs-k8Li-w0Mb-3CgN-5gsM-haJy-BYYkrh

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               g1volgroup00
  PV Size               156.19 MB / not usable 4.19 MB
  Allocatable           yes
  PE Size (KByte)       8192
  Total PE              19
  Free PE               19
  Allocated PE          0
  PV UUID               4m2GFx-QDv0-Gb8q-qxAG-QMIc-If9i-b1ge12

[root@g1 ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Jan 17 11:11:37 2009
     Raid Level : raid1
     Array Size : 159936 (156.21 MiB 163.77 MB)
  Used Dev Size : 159936 (156.21 MiB 163.77 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Jan 18 13:10:00 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f38cb367:f475b06c:d2f4e2c1:3c452b16
         Events : 0.6

    Number   Major   Minor   RaidDevice State
       0     202       16        0      active sync   /dev/xvdb
       1     202       32        1      active sync   /dev/xvdc
[root@g1 ~]#




Thanks,

Vu



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users