This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] lvm

On Jul 14, 2004, at 4:47 AM, Christian Limpach wrote:

I don't think you'd want to export an LVM logical volume as a
whole-disk to another domain since then you won't have the ability
to resize the partitions in that whole-disk.  I expect that would
have been the reason why one would use LVM in the first place.

It's exactly why I would like to use LVM, and XFS resizes very nicely on LVM. It also gives you the ability to create up to 256 LVs on a single volume group, as opposed to the (I think) 16-partition limit if you partition a physical disk DOS-style. And you get things like snapshotting and migration of LVs between volume groups/physical disks that are really nice. And it can ride on top of the software RAID layer for those of us out here with cheap IDE subsystems, so you make a single RAID1 or RAID5 and LVM-partition that into as many volumes as you need, instead of having to make a zillion RAID devices. So yeah, LVM is nice for tossing lots of VMs around on a single machine. :)
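For the record, the RAID-plus-LVM layout I'm describing looks roughly like this (a sketch only -- the md device name and LV sizes are assumptions, and everything has to run as root):

```shell
# Sketch: put LVM on top of an existing software RAID device (/dev/md0).
# Device names and sizes here are examples, not my exact setup.
pvcreate /dev/md0                          # mark the RAID array as an LVM physical volume
vgcreate vg_pool /dev/md0                  # one volume group spanning the whole array
lvcreate -L 256M -n lv_vm01_swap vg_pool   # per-VM swap volume
lvcreate -L 4G -n lv_vm01_root vg_pool     # per-VM root filesystem volume
lvs vg_pool                                # list the logical volumes in the group
```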

And I was going to post some more questions about it today because I'm not able to get the domain to recognize the exported LVs. (Is there a xen-users list or something so I don't have to bog up the -dev list with noob user questions? Or are those just as well posted here anyway?)

I've currently created LVs with LVM2 for swap and root for the new domain and changed the xmdefaults file to point to these partitions:

disk = [ 'phy:/dev/vg_pool/lv_vm0%d_swap,hda1,w' % (vmid),
         'phy:/dev/vg_pool/lv_vm0%d_root,hda2,w' % (vmid) ]

root = /dev/hda2
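(Since the config file is just Python, the `%` formatting above expands like this for vmid=1 -- which matches the device strings in the xm dump below:)

```python
# Rendering of the disk lines from xmdefaults for vmid=1.
vmid = 1
disk = ['phy:/dev/vg_pool/lv_vm0%d_swap,hda1,w' % vmid,
        'phy:/dev/vg_pool/lv_vm0%d_root,hda2,w' % vmid]
print(disk[0])  # phy:/dev/vg_pool/lv_vm01_swap,hda1,w
print(disk[1])  # phy:/dev/vg_pool/lv_vm01_root,hda2,w
```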

lv_vm01_root already has a filesystem created on it, with gentoo already installed and ready for the new domain to boot right up. But the new domain comes up and doesn't find the root volume. The status messages seem to indicate that the LVs are getting exported correctly, but either I've built my xenU kernel wrong or I just don't know what I'm doing, since the domain doesn't seem to see any block devices at all. (Probably I should be exporting them as something other than hda1/2?)

I haven't gone back and simply fdisk'ed the drive to try exporting "normal" partitions to the new domain; I've kind of picked the "hard" way to do it right off the bat, I guess. I'm hoping I just have the syntax wrong. (Is there another type besides "phy"?) I dug through the python for "xm" but didn't see anything that screamed out any "right" way to do it.
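One thing I did to convince myself the LVs themselves are fine is to check them from domain 0 before booting the guest (a sketch; the paths are from my setup above, and it all runs as root):

```shell
# Sanity-check the exported LVs from domain 0 before starting the domain.
lvdisplay /dev/vg_pool/lv_vm01_root    # confirm the LV exists and is active
mount /dev/vg_pool/lv_vm01_root /mnt   # mount the guest root filesystem
ls /mnt                                # should show the installed gentoo tree
umount /mnt                            # unmount before handing it to the guest
```

So the volumes and the filesystem on them look healthy from dom0; it's only inside the new domain that nothing shows up.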

(FYI and FWIW - this is a spare IDE dual-celeron machine I built from stuff scrounged out of old servers, not anything fancy with iSCSI or the like. I work for a hosting/consulting company that currently has somewhere on the order of 1000 UML-based virtual hosts deployed all over the place. I'm mostly learning Xen for myself: I need to deploy a couple of new servers of my own to replace my aging mail/web server, I want to deploy everything as VMs using UML or Xen, and I prefer the Xen approach. I'd also like to push at least a few Xen servers into work to compare against the UML machines. UML has a lot of mindshare there right now from the amount of work we've put in to support/maintain it, but I personally think the Xen approach is a lot cleaner/easier to maintain than UML, and live migration would be a huge bonus.)

# xm create -n -c vmid=1 dhcp=on
Using config file /etc/xen/xmdefaults

    (name 'This is VM 1')
    (memory '64')
    (cpu '1')
            (kernel /boot/bzImage-2.4.26-xenU)
            (ip :::::eth0:on)
            (root '/dev/hda2 ro')
    (device (vbd (uname phy:/dev/vg_pool/lv_vm01_swap) (dev hda1) (mode w)))
    (device (vbd (uname phy:/dev/vg_pool/lv_vm01_root) (dev hda2) (mode w)))
    (device (vif (mac aa:0:0:1c:c8:7)))

# xm create -c vmid=1 dhcp=on

Started domain 28, console on port 9628
************ REMOTE CONSOLE: CTRL-] TO QUIT ********
Linux version 2.4.26-xeno-xenU (root@vargas) (gcc version 3.3.3 20040412 (Gentoo Linux 3.3.3-r6, ssp-3.3.2-2, pie-8.7.6)) #9 Tue Jul 13 22:54:03 EDT 2004
On node 0 totalpages: 16384
zone(0): 4096 pages.
zone(1): 12288 pages.
zone(2): 0 pages.
Kernel command line:  ip=:::::eth0:on root=/dev/hda2 ro
Initializing CPU#0
Xen reported: 451.031 MHz processor.
Calibrating delay loop... 4508.87 BogoMIPS
Memory: 62828k/65536k available (1368k kernel code, 2708k reserved, 165k data, 44k init, 0k highmem)
Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
Inode cache hash table entries: 4096 (order: 3, 32768 bytes)
Mount cache hash table entries: 512 (order: 0, 4096 bytes)
Buffer cache hash table entries: 4096 (order: 2, 16384 bytes)
Page-cache hash table entries: 16384 (order: 4, 65536 bytes)
CPU: L1 I cache: 16K, L1 D cache: 16K
CPU: L2 cache: 128K
CPU: Intel Celeron (Mendocino) stepping 05
POSIX conformance testing by UNIFIX
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
Initializing RT netlink socket
Starting kswapd
Journalled Block Device driver loaded
devfs: v1.12c (20020818) Richard Gooch (rgooch@xxxxxxxxxxxxx)
devfs: boot_options: 0x1
SGI XFS with no debug enabled
Event-channel device installed.
Xen virtual console successfully installed as tty
Starting Xen Balloon driver
pty: 2048 Unix98 ptys configured
Initialising Xen virtual block device
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
Initializing Cryptographic API
Initialising Xen virtual ethernet frontend driver
NET4: Linux TCP/IP 1.0 for NET4.0
IP Protocols: ICMP, UDP, TCP, IGMP
IP: routing cache hash table of 512 buckets, 4Kbytes
TCP: Hash tables configured (established 4096 bind 8192)
Linux IP multicast router 0.06 plus PIM-SM
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
VFS: Cannot open root device "hda2" or 03:02
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on 03:02
Rebooting in 1 seconds..
************ REMOTE CONSOLE EXITED *****************

"We all enter this world in the    | Support Electronic Freedom
same way: naked; screaming; soaked |        http://www.eff.org/
in blood. But if you live your     |  http://www.anti-dmca.org/
life right, that kind of thing     |---------------------------
doesn't have to stop there." -- Dana Gould

Xen-devel mailing list
