> Yes, I agree with you... Any sharing should be made through some
> network filesystem or similar service only, and I would never share one
> LVM volume directly from multiple instances... But as far as I understood
> Markus correctly, he wanted to use LVM partitions in this way...
Well, to clarify:
The real problem with sharing is *writing* to a *filesystem* whilst other
domains are able to read or write *that filesystem*. Operating systems
mustn't be allowed to see a filesystem they have mounted change underneath
them.
Hence, for instance, sharing an LVM volume that's formatted with ext3 between
multiple domains whilst modifying the contents is a Bad
Thing. "Modification" in this case can basically mean "mounting writeably"
even if you don't explicitly write to files - the filesystem driver may well
update the disk silently even if you don't initiate IO.
However, it's worth noting that the real problem is *not* that the domains are
all accessing the LVM volume. It's specifically that they're all accessing
the same filesystem.
If you were to format an LVM volume with ext3, fill it with file-backed VBDs,
and then give a different file-backed VBD to each domain, that would be fine.
The VBDs are all stored within the same *host filesystem* but the guests are
storing their filesystems *within the VBDs*. As long as the *contents* of
*those VBDs* don't change then the guests will not perceive a change to their
filesystems, so it's all good. The guests are not aware of what's going on
outside their own VBDs, so it's safe.
So the real problem that I'm getting at is that you can't mount the same
filesystem in more than one place unless it's mounted readonly everywhere.
As long as guests are mounting distinct filesystems, even if they're on the
same device or the same *host* filesystem, then this is OK.
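To make the read-only case concrete, here's a minimal sketch (volume names invented for illustration) of exporting a shared volume with mode 'r' in a guest's disk list. Xen's xm config files are plain Python, so the fragment is just a list assignment; the 'r' mode should make the backend in dom0 refuse writes regardless of how the guest tries to mount it.

```python
# Hypothetical domU config fragment (volume names are illustrative).
# The third field of each disk entry is the access mode:
#   'w' = writeable, 'r' = read-only (enforced by the backend in dom0).
disk = [
    'phy:data/dom1-disk,xvda1,w',          # dom1's own private root fs
    'phy:/dev/mapper/data-share,xvda2,r',  # shared volume, read-only
]
```

Every domain that sees the shared volume would export it with 'r', and dom0 must also avoid mounting it writeably while any guest has it mounted.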
A specific case of multiple domains accessing different file-backed VBDs that
are all stored in the same host filesystem is if you're just storing your
VBDs under /, as many people do.
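As a hedged sketch of that layout (image paths invented for illustration): each guest's config names its own image file inside the host filesystem, so no two guests ever touch the same filesystem.

```python
# Hypothetical per-guest config fragment (paths are illustrative).
# Each guest gets a private file-backed VBD stored in the host filesystem;
# the guest formats and mounts a filesystem *inside* its own image.
disk = [ 'file:/var/xen/vbds/guest1-disk.img,xvda1,w' ]

# A second guest's config would name a different image, e.g.:
# disk = [ 'file:/var/xen/vbds/guest2-disk.img,xvda1,w' ]
```

Dom0 can write to the *host* filesystem holding the images freely; it only has to leave the contents of each guest's image alone while that guest is running.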
Hope that helps clarify the discussion for folks! I should really add this to
the wiki...
Cheers,
Mark
> Cheers, Archie
>
> -----Original Message-----
> From: M.A. Williamson [mailto:maw48@xxxxxxxxxxxxxxxx] On Behalf Of Mark
> Williamson
> Sent: Saturday, December 08, 2007 8:19 PM
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Cc: Artur Linhart - Linux communication; 'Markus Gerber'; 'Emre Erenoglu';
> Stephan Seitz
> Subject: Re: [Xen-users] dom0's LVM partition in domU
>
> Archie,
>
> I don't think the sharing issues you mentioned are the source of this
> particular problem, but you make an excellent point and it's worth
> emphasizing again:
> > I do not know if I understand the situation correctly, but I think
> > the problem is sharing the LVM partition. The partition should be
> > used exclusively from dom0 or domU; you cannot mount the LVM in dom0 and
> > then use it also in domU (or, you could use it in domU as read-only, but
> > I'm not sure if this is supposed to work).
>
> In fact, you should never mount a partition read-write *anywhere* if it's
> mounted readably *in any other domain*, ever. Doing so will cause the
> domain with the readonly mount to read corrupted data and may crash.
>
> You should definitely never, never, never mount something writeably in two
> domains at once because it is guaranteed to corrupt the on-disk filesystem.
>
> > From my "point of knowledge", LVM cannot handle the concurrent access
> > of such multiply mounted partitions.
>
> If you mount read-only in *every* domain that is accessing the device, then
> it should be safe. To play it extra safe you can export a virtual block
> device as read-only to the guests, but you still need to not modify it in
> dom0.
>
> > Also, if you use an HVM DomU, then you cannot even start the DomU if the
> > partition is mounted in dom0. Maybe the situation is a little bit
> > different in a PV domain,
>
> Situation is the same for both HVM and PV domU - sharing is dangerous
> unless *every* domain involved is accessing as read-only.
>
> > but I would never try to share one LVM block device across multiple
> > mounts unless there is only one write-enabled mount and all others
> > are read-only...
>
> As described above, that could also be dangerous... better to unmount the
> device in every domain (or shut the domains down) before you modify it,
> then unmount the writeable copy before you let the other domains read it
> again.
>
> A corollary of these restrictions is: never modify a mounted filesystem of
> a saved or paused guest. The effect is the same as modifying a mounted
> filesystem whilst the domain is running. Either unmount the filesystem
> properly, or shut down the guest for safety!
>
> Sorry to go on about this, folks, but it's really very important and it's
> something that most people don't encounter when administering physical
> machines.
>
> The exceptions to these rules are when network filesystems or cluster
> filesystems are involved, where it can be safe to have writeable sharing.
>
> Cheers,
> Mark
>
> > With regards, Archie
> >
> >
> >
> > _____
> >
> > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Markus Gerber
> > Sent: Thursday, December 06, 2007 2:50 PM
> > To: Emre Erenoglu
> > Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> > Subject: Re: [Xen-users] dom0's LVM partition in domU
> >
> >
> >
> > #
> > # Kernel + memory size
> > #
> > kernel = '/boot/vmlinuz-2.6.22-14-xen'
> > ramdisk = '/boot/initrd.img-2.6.22-14-xen'
> > memory = '512'
> >
> > #
> > # Disk device(s).
> > #
> > root = '/dev/sda1 ro'
> > disk = [ 'phy:data/dom1-disk,sda1,w', 'phy:data/dom1-swap,sda2,w',
> >          'phy:/dev/mapper/data-share,xvda1,w' ]
> >
> >
> >
> >
> >
> > The modules are all in /lib/modules. 'depmod -a' and a reboot didn't
> > change anything.
> >
> >
> >
> > Markus
> >
> >
> >
> >
> >
> >
> >
> > On 06.12.2007, at 14:40, Emre Erenoglu wrote:
> >
> >
> >
> >
> >
> > It's getting strange. Normally, the DomU kernel should detect the xvda
> > that we are exporting. Last point: can you paste the kernel and
> > initramfs lines in your DomU config file, as well as the disk= line?
> >
> > Another thing: do you have the modules of the 2.6.22-14-xen kernel in
> > /lib/modules of the DomU system? Have you issued a depmod -a, even
> > though these modules should be in the kernel or initramfs anyway (just
> > to make sure)?
> >
> > Emre
> >
> > On Dec 6, 2007 2:36 PM, Markus Gerber <markus@xxxxxxxxxxxxx> wrote:
> >
> > dom1:
> >
> > uname -a
> >
> > Linux dom1 2.6.22-14-xen #1 SMP Mon Oct 15 00:35:38 GMT 2007 i686 GNU/Linux
>
> > dom0:
> >
> > uname -a
> >
> > Linux dom0 2.6.22-14-xen #1 SMP Mon Oct 15 00:35:38 GMT 2007 i686 GNU/Linux
>
> > While installing Xen, I only used officially released Xen packages - no
> > backports etc. I installed dom1 using bootstrap.
> >
> >
> >
> > Markus
> >
> >
> >
> >
> >
> >
> >
> > On 06.12.2007, at 14:32, Emre Erenoglu wrote:
> >
> >
> >
> >
> >
> > Which kernel are you using? Can you tell us the output of uname -a?
> >
> > If /dev/xvd* does not exist even though you've put a disk= line in the
> > DomU config file, it may mean that your kernel is not a PV-enabled one.
> > Usually, what we do is use the DomU kernel of the specific distribution,
> > or worst case, use the dom0 kernel for domU also.
> >
> > Emre
> >
> >
> >
> > On Dec 6, 2007 2:29 PM, Markus Gerber <markus@xxxxxxxxxxxxx> wrote:
> >
> > ls -la /dev/xv*
> >
> > ls: /dev/xv*: No such file or directory
> >
> >
> >
> > (in dom1 and dom0)
> >
> >
> >
> >
> >
> > I'm using para virtualization.
> >
> >
> >
> > When I uncomment the line in dom1's fstab, I can boot without any error,
> > but I still get no output from ls -la /dev/xv*
> >
> >
> >
> > Markus
> >
> >
> >
> >
> >
> > On 06.12.2007, at 14:20, Emre Erenoglu wrote:
> >
> >
> >
> >
> >
> > Can you please give the output of the following command:
> >
> > ls -la /dev/xv*
> >
> > Are you using a PV domain or a HVM domain? (full or para virtualization?)
> >
> > Emre
> >
> > On Dec 6, 2007 2:17 PM, Markus Gerber <markus@xxxxxxxxxxxxx> wrote:
> >
> > What I did so far:
> >
> > - Added the LVM-partition to the dom1's config
> > ('phy:/dev/mapper/data-share,xvda1,w') in dom0
> >
> > - Added '/dev/xvda1 /mnt/share ext3 defaults 0 1' to /etc/fstab in dom1
> >
> >
> >
> > I still get the very same error while booting.
> >
> >
> >
> > Are there more steps to do? Mounting /dev/mapper/data-share in dom0 works
> > and I can copy files to it.
> >
> >
> >
> > Markus
> >
> >
> >
> >
> >
> > On 06.12.2007, at 13:57, Emre Erenoglu wrote:
> >
> >
> >
> > As Sadique indicated, try using xvda instead of sda or hda.
> >
> > Emre
> >
> > On Dec 6, 2007 1:49 PM, Markus Gerber wrote:
> >
> > Hello Emre,
> >
> >
> >
> > These devices do not exist in dom1.
> >
> >
> >
> > Can I create them manually? Or do I need to install additional packages?
> >
> >
> >
> > Oops: I've just seen that there is a typo in my first post - dom1 is
> > running Debian Etch and not Ubuntu. Sorry about that!
> >
> >
> >
> > Thanks and regards,
> >
> > Markus
> >
> >
> >
> >
> >
> >
> >
> > On 06.12.2007, at 11:08, Emre Erenoglu wrote:
> >
> >
> >
> >
> >
> > Markus,
> >
> > Does it really not exist, or is it just not formatted?
> >
> > mkfs.ext3 /dev/sda3
> >
> > and instead of sda, I think you should consider using xvda (if you're on
> > a PV domU or an HVM domU with PV drivers)
> >
> > Can you see these devices in /dev?
> >
> > Emre
> >
> > On Dec 6, 2007 10:51 AM, Markus Gerber <markus@xxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> >
> >
> > Thank you for your tip. Unfortunately, I get an error when booting dom1
> > saying:
> >
> >
> >
> > Loading device-mapper support.
> >
> > Checking file systems...fsck 1.40-WIP (14-Nov-2006)
> >
> > fsck.ext3: No such file or directory while trying to open /dev/sda3
> >
> > /dev/sda3:
> >
> > The superblock could not be read or does not describe a correct ext2
> >
> > filesystem. If the device is valid and it really contains an ext2
> >
> > filesystem (and not swap or ufs or something else), then the superblock
> >
> > is corrupt, and you might try running e2fsck with an alternate
> > superblock:
> >
> > e2fsck -b 8193 <device>
> >
> >
> >
> > fsck died with exit status 8
> >
> > failed (code 8).
> >
> > * File system check failed.
> >
> >
> >
> >
> >
> > In my dom1's config file I added the last 'phy' to the disk
> >
> > disk = [ 'phy:data/dom1-disk,sda1,w', 'phy:data/dom1-swap,sda2,w',
> > 'phy:/dev/mapper/data-share,sda3,w' ]
> >
> >
> >
> >
> >
> > What does the /etc/fstab in dom1 have to look like? I added:
> >
> > /dev/sda3 /mnt/share ext3 defaults 0 1
> >
> >
> >
> > With the line above it does not work. /dev/sda3 does not exist.
> >
> >
> >
> > Thank you for some more hints.
> >
> >
> >
> > Regards,
> >
> > Markus
> >
> >
> >
> >
> >
> > On 06.12.2007, at 10:13, Emre Erenoglu wrote:
> >
> >
> >
> >
> >
> > Just a disk= line would suffice. Example:
> >
> > disk = [ 'phy:/dev/volume-group/volume-name,hda1,w', 'phy:/dev/md1,hda2,w',
> >          'phy:/dev/volume-group/volume-2,hda3,w' ]
> >
> > /dev/volume-group/volume-name being one of your LVM volumes (or
> > /dev/mapper/volume_group-volume_name).
> >
> > Br,
> >
> > Emre
> >
> >
> >
> > On Dec 6, 2007 9:37 AM, Markus Gerber <markus@xxxxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > In my dom0 (Ubuntu 7.10) I have several LVM partitions (for mp3s,
> > photos, ...).
> > Since I do not want to have any services in dom0, I have a domU (also
> > Ubuntu 7.10) with Samba. So my users connect to that domU. How can I
> > export and import these LVM partitions from dom0 into domU?
> >
> > Using NFS could work, but I prefer a solution where my LVM partitions
> > are available 'natively' in domU.
> >
> > Thank you for some hints and tips.
> >
> > Regards,
> > Markus
--
Dave: Just a question. What use is a unicyle with no seat? And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users