On Mon, Apr 20, 2009 at 9:19 AM, Jan Kalcic <jandot@xxxxxxxxxxxxxx> wrote:
> cLVM does not provide a lock manager which manages *access* to an LV
> or VG from the nodes of a cluster (I know EVMS2 does, for instance).
> As you said, it just keeps LVM changes consistent. Let's say it
> provides a DLM at the volume level. To prevent split-brain I need a
> DLM at the file system level, of course, so a cluster-aware file
> system like OCFS2 is needed.
you're mixing levels a bit.
- dm: (kernelspace) the device mapper; it creates 'virtual block devices'.
any access to one of these is remapped to an access on a real device.
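to make the remapping idea concrete, here's a toy sketch (plain Python, all names invented for illustration; real dm works with tables of targets such as 'linear' loaded into the kernel):

```python
# toy sketch of what dm does: a virtual block device whose sectors are
# remapped onto regions of real devices, like dm's 'linear' target.
# class and variable names here are invented for illustration.

class LinearTarget:
    def __init__(self, start, length, backing_dev, backing_offset):
        self.start = start                  # first virtual sector covered
        self.length = length                # number of sectors covered
        self.backing_dev = backing_dev      # e.g. "sda"
        self.backing_offset = backing_offset

class VirtualDevice:
    def __init__(self, table):
        self.table = table                  # list of targets, sorted by start

    def map_sector(self, sector):
        """translate a virtual sector to (real device, real sector)."""
        for t in self.table:
            if t.start <= sector < t.start + t.length:
                return (t.backing_dev, t.backing_offset + sector - t.start)
        raise IOError("sector outside device")

# an LV spanning two PVs: virtual sectors 0-99 live on sda, 100-199 on sdb
lv = VirtualDevice([LinearTarget(0, 100, "sda", 2048),
                    LinearTarget(100, 100, "sdb", 4096)])
print(lv.map_sector(150))                   # -> ('sdb', 4146)
```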
- LVM: (mostly userspace) uses dm to create LVs. it manages some
metadata blocks on the block devices, and loads the mapping tables
into the kernel. when running on a single node it can do changes
while online because dm supports a 'suspend' operation; the sequence
is: change metadata blocks, suspend logical devices, change mapping
tables in the kernel, unsuspend.
in theory, you can use LVM as-is with shared block devices and
several nodes, with two precautions: never mount a single LV on more
than one node, and never make any change to the VG while other nodes
are running. for that, you have to disconnect the VG from all nodes,
do the change on the only node still connected, and reconnect all the
others afterwards.
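that procedure looks roughly like this toy sketch (invented Python names; with real LVM the disconnect/reconnect steps would be 'vgchange -an <vg>' and 'vgchange -ay <vg>' on the nodes):

```python
# toy sketch of changing a shared VG with plain (non-clustered) LVM:
# deactivate the VG everywhere except one node, change it there,
# then reactivate it everywhere. names invented for illustration.

class Node:
    def __init__(self, name):
        self.name = name
        self.vg_active = True

def change_shared_vg(nodes, admin, do_change):
    for n in nodes:
        if n is not admin:
            n.vg_active = False     # disconnect the VG on every other node
    result = do_change()            # e.g. an lvcreate/lvextend here
    for n in nodes:
        n.vg_active = True          # reconnect all the other nodes
    return result

nodes = [Node("a"), Node("b"), Node("c")]
change_shared_vg(nodes, nodes[0], lambda: "lvextend done")
```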
- lock manager: there are several; the one from the GFS stack is now
based on openAIS. locks are an abstract service, in the sense that they
can be used for several different things.
- cLVM: (userspace) a small extension to the LVM utilities, plus a
daemon (clvmd). any operation that would change a shared VG first
acquires a lock. for that lock operation to succeed, the clvmd on
every other node does a 'suspend'. once the lock is acquired, the
first node can be sure that no write operation will happen on the
device. it changes the metadata blocks and notifies all other nodes;
they reload the mapping tables from the changed metadata blocks, and
unsuspend the volume group to release the lock.
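the cluster-wide protocol can be sketched as (a toy Python model, all names invented; real clvmd coordinates through a DLM rather than direct calls like this):

```python
# toy model of the cLVM protocol: the lock request makes every other
# node suspend the VG; after the metadata change the others reload
# their mapping tables and resume, which releases the lock.

class ClusterNode:
    def __init__(self, name, metadata):
        self.name = name
        self.metadata = metadata                # shared on-disk metadata
        self.kernel_table = metadata["table"]   # cached in-kernel table
        self.suspended = False

    def suspend(self):
        self.suspended = True

    def reload_and_resume(self):
        self.kernel_table = self.metadata["table"]  # re-read changed metadata
        self.suspended = False

def clustered_change(nodes, initiator, new_table):
    others = [n for n in nodes if n is not initiator]
    for n in others:                # the lock grant implies everyone suspended
        n.suspend()
    # lock held: no other node will write through its stale tables now
    initiator.metadata["table"] = new_table     # change the metadata blocks
    initiator.kernel_table = new_table
    for n in others:                # notify: reload tables, unsuspend
        n.reload_and_resume()

md = {"table": "old"}
cluster = [ClusterNode(x, md) for x in "abc"]
clustered_change(cluster, cluster[0], "new")
```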
in short, the only thing cLVM adds to 'bare' LVM is to extend the
online management capabilities of LVM to the whole cluster.
on top of these, there can be several uses for the LVs. you can put
non-cluster filesystems on them, as long as you never do a double
mount; you can use cluster filesystems, which will use a lock manager
(maybe the same one cLVM uses, maybe another) to assure consistency at
that level; or you can use them for Xen, with similar limitations,
depending on the filesystems used by the DomUs.
Xen-users mailing list