Re: [Xen-users] what's correct way of shrinking LVM based domU?

To: Rudi Ahlers <Rudi@xxxxxxxxxxx>
Subject: Re: [Xen-users] what's correct way of shrinking LVM based domU?
From: John Haxby <john.haxby@xxxxxxxxxx>
Date: Tue, 24 Jun 2008 12:32:42 +0100
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 24 Jun 2008 04:39:02 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4860CD7A.4000701@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4860BACE.8090205@xxxxxxxxxxx> <4860BD4C.6050305@xxxxxxxxxx> <4860BF2C.9060701@xxxxxxxxxxx> <4860C950.1090300@xxxxxxxxxx> <4860CD7A.4000701@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (X11/20080501)
Rudi Ahlers wrote:
Sorry John, but what you're saying is very very confusing. I don't know LVM at all, and what I've done is purely by chance, and using the defaults when installing CentOS.

Sorry, yes, I know it's confusing.

What makes it worse is that there are two systems interacting here: the guest OS (whatever it is) and the host OS (whatever it is).

LVM first. There are good discussions of this in the Red Hat documentation and you should probably read them. In a nutshell, though, the building blocks are "physical volumes": these are typically partitions on disks and you can see them with fdisk or similar. A "volume group" is built from one or more physical volumes. "Logical volumes" are allocated from the space in a volume group.
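If LVM is already set up on the machine, each of those three layers has a one-line summary command, which is a good way to get your bearings:

    pvs   # the physical volumes (partitions) that LVM is built on
    vgs   # the volume groups built from those physical volumes
    lvs   # the logical volumes allocated out of the volume groups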

If you're starting with a couple of fresh disks that you've just installed in a computer you need to run fdisk, pvcreate, vgcreate and lvcreate in that order. fdisk is used to create a partition and you usually create a single partition spanning the whole disk (there are loads of caveats here but I'm skipping them for simplicity). So if your two new disks appear as /dev/sdb and /dev/sdc then you'll create a single partition on each to get /dev/sdb1 and /dev/sdc1. Next run "pvcreate /dev/sdb1" and "pvcreate /dev/sdc1" to create some on-disk data structures. Now you can create a volume group to hold them: "vgcreate mygroup /dev/sdb1 /dev/sdc1" and finally you can create some logical volumes, for example, "lvcreate -L 20G -n mydisk /dev/mygroup". And finally finally you can, for example, create a file system in the newly created logical volume: "mke2fs -j /dev/mygroup/mydisk". The man pages for pvcreate, vgcreate and lvcreate will tell you quite a lot; you should read them.
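To put that whole sequence in one place (same example names as above; the 20G is arbitrary):

    fdisk /dev/sdb                         # create one partition spanning the disk, type 8e (Linux LVM)
    fdisk /dev/sdc                         # and the same on the second disk
    pvcreate /dev/sdb1 /dev/sdc1           # write the LVM on-disk data structures
    vgcreate mygroup /dev/sdb1 /dev/sdc1   # build a volume group from both physical volumes
    lvcreate -L 20G -n mydisk mygroup      # allocate a 20G logical volume from the group
    mke2fs -j /dev/mygroup/mydisk          # and create an ext3 file system in it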

Now, suppose we discover that we need a partition on /dev/sdb for something other than LVM. I don't know what and it doesn't matter. What you need to do is run fdisk and re-partition the disk so that part of it is for LVM and part of it is for the other partition. Of course, when you ran "pvcreate /dev/sdb1" part of what got written into the on-disk data structures is the size of /dev/sdb1, and if you make /dev/sdb1 smaller then the on-disk data structures will be corrupt. Even worse, any logical volume that happened to be using a part of /dev/sdb1 that is no longer there (because it's now part of /dev/sdb2) will be missing and any file system in that logical volume will be corrupt. You get the same sorts of problems if you put a file system in /dev/sdb1 and then make /dev/sdb1 smaller -- the end of the file system is missing and you're in trouble.

So with LVM if you're going to shrink /dev/sdb1 a bit you need to do some work to prepare for that: you need to shrink the physical volume. That's easy -- check the man page for pvresize. Now that the physical volume isn't using all of /dev/sdb1 you can use fdisk to make /dev/sdb1 a bit smaller.
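For example (the 18G target is just an illustration, and pvresize will refuse if the space at the end of the physical volume is still in use):

    pvresize --setphysicalvolumesize 18G /dev/sdb1   # shrink the PV's own idea of its size
    fdisk /dev/sdb                                   # now /dev/sdb1 itself can safely be made smaller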

pvresize will refuse to shrink the physical volume if there isn't enough free space. "vgdisplay" (check the man page again) will tell you how many free "physical extents" (PEs) there are in the volume group and "lvdisplay -m <logical volume>" will tell you exactly which physical extents any particular logical volume is using. This is confusing. Check the man pages, look at the output of the commands, read the documentation. It will make sense after a while.
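Concretely, using the names from the earlier example:

    vgdisplay mygroup                  # the "Free  PE / Size" line shows unallocated extents
    lvdisplay -m /dev/mygroup/mydisk   # the --maps output shows which extents this LV sits on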

Once it makes sense you can see that to shrink a physical volume you must free up physical extents at the end of the volume -- "pvdisplay -vm /dev/sdb1" (for example) will show you where physical extents are allocated. So if you want to shrink a physical volume you may have to remove or shrink logical volumes and, of course, before you shrink a logical volume you need to shrink the file system contained within it.
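In other words the order is strictly inside-out: file system first, then logical volume, then physical volume. A sketch with made-up sizes, assuming an ext3 file system in /dev/mygroup/mydisk:

    pvdisplay -vm /dev/sdb1                          # where are the extents allocated?
    umount /dev/mygroup/mydisk                       # the file system must be unmounted to shrink
    e2fsck -f /dev/mygroup/mydisk                    # resize2fs insists on a clean file system
    resize2fs /dev/mygroup/mydisk 15G                # 1. shrink the file system
    lvreduce -L 15G /dev/mygroup/mydisk              # 2. shrink the logical volume to match
    pvresize --setphysicalvolumesize 18G /dev/sdb1   # 3. finally shrink the physical volume

If the extents you freed aren't at the end of the physical volume, pvmove can shuffle them out of the way first -- see its man page.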


Now we can get on to this issue of a domU's "disk" being contained in a logical volume in dom0. In this case, the "disk" that domU sees, typically /dev/xvda, is a logical volume in dom0. In domU, fdisk /dev/xvda sees the partition table on the disk, pvcreate creates physical volumes within a partition on that disk and so on. What I've said above applies just as well to virtual disks called /dev/xvda and /dev/xvdb as it does to real disks called /dev/sdb and /dev/sdc.

The difference is that the disk as seen by domU is just a logical volume seen by dom0. So if your dom0 "exports" /dev/data/cpanel1 as xvda to the domU then, to all intents and purposes, you should be able to treat /dev/data/cpanel1 as though it were a disk. And you can. If you run "fdisk -l /dev/data/cpanel1" then you can see a partition table and you can edit that partition table just as you would if it were a real disk. And, of course, if you shrink a partition then you had better make sure that you have shrunk what goes inside it first.

This is where the magic "kpartx" comes in. This allows you to make the partitions within a logical volume visible as devices. Run "kpartx /dev/data/cpanel1". It tells you that there are two partitions. Now run "ls /dev/mapper" and take note of what's in there. And now run "kpartx -va /dev/data/cpanel1" and "ls /dev/mapper" again. You now have some new device files in /dev/mapper corresponding to the partitions inside that logical volume. These are exactly the partitions that the guest OS sees inside /dev/xvda. You need to try this and see what you've got. And check the man page for kpartx so you know what you're doing.
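The whole session looks something like this (the exact names in /dev/mapper vary a little between versions):

    kpartx -l /dev/data/cpanel1    # just list the partitions inside the logical volume
    ls /dev/mapper                 # note what's there before...
    kpartx -va /dev/data/cpanel1   # ...then add the partition mappings, verbosely
    ls /dev/mapper                 # new entries, something like data-cpanel1p1 and data-cpanel1p2
    kpartx -d /dev/data/cpanel1    # removes the mappings again when you've finished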

Now, for a standard CentOS installation in the guest, there'll be two partitions in /dev/xvda: a 100MB one for /boot and one for everything else in a single volume group. We can see the two partitions in /dev/mapper in dom0 now (after you've run kpartx appropriately) and you can mount the first one as a file system and see that it is indeed the CentOS /boot. The second partition, however, is a physical volume. You can run "pvscan" to find all the physical volumes on a system and it will, indeed, find this one. What's more, you can run "vgscan" to find all the defined volume groups (and "vgchange -ay" to activate them). If you do that then you'll find the /dev/data and /dev/system volume groups that you already have in dom0 and one new one: VolGroup00, the volume group that holds the logical volumes for the CentOS guest. There's a man page for vgscan that explains it. "vgdisplay /dev/VolGroup00" will now tell you things about the new volume group (man page again).
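In command form, assuming the kpartx mappings from above are still in place:

    pvscan                    # finds the guest's physical volume among the mapped partitions
    vgscan                    # finds VolGroup00 alongside the dom0 volume groups
    vgchange -ay VolGroup00   # activates it; its LVs appear under /dev/VolGroup00
    vgdisplay VolGroup00      # details of the guest's volume group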

Now you can go through all that resizing stuff that you would go through if you were using a physical disk.
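So, to pull it all together, here's a sketch of the whole dance for shrinking the guest's root file system from dom0. The guest must be shut down first; "cpanel1" as the domain name and LogVol00 as the guest's root logical volume are guesses based on the CentOS defaults (check yours with lvdisplay), and the 8G target is just an example:

    xm shutdown cpanel1                       # guessed domain name; never touch the disk of a running guest
    kpartx -va /dev/data/cpanel1              # expose the guest's partitions in /dev/mapper
    vgscan                                    # find the guest's VolGroup00
    vgchange -ay VolGroup00                   # and activate it
    e2fsck -f /dev/VolGroup00/LogVol00        # resize2fs insists on a clean file system
    resize2fs /dev/VolGroup00/LogVol00 8G     # 1. shrink the file system
    lvreduce -L 8G /dev/VolGroup00/LogVol00   # 2. shrink the logical volume to match
    vgchange -an VolGroup00                   # deactivate the guest's volume group again
    kpartx -d /dev/data/cpanel1               # remove the partition mappings
    xm create cpanel1                         # boot the guest and check everything

After that, if the goal is to shrink /dev/data/cpanel1 itself, the same inside-out order carries on: pvresize the guest's physical volume, shrink the second partition with fdisk, and only then lvreduce /dev/data/cpanel1 in dom0.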

I'm sorry, I know this is confusing, but I think you're just going to have to work your way through all the various commands I've mentioned until you know what they all do. Then you'll be able to just sit down and do the resizing stuff.

jch

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users