Tijl,
Jo De Baer has been adding some information about shared
cluster containers for OCFS2 and Xen to this wiki:
http://wiki.novell.com/index.php/SUSE_Linux_Enterprise_Server#High_Availability_Storage_Foundation
==
First preview release - November 15, 2007: download here
This preview release shows how to integrate an EVMS2 shared cluster container
==
In response to your question about Xen using EVMS cluster containers...
EVMS itself doesn't do any locking that would protect a container on one node
from concurrent access by another node. Cluster containers work by
masking access - the EVMS devnodes should only be visible on one node at
a time - with the failover manager driving EVMS to mask / unmask the
devnodes across nodes (for private containers). Shared cluster
containers are visible on all nodes, with the expectation that higher-layer
software (e.g. a cluster file system) coordinates access to that shared
storage.
Ensuring the same Xen VM isn't active on more than one cluster node at
a time is likewise handled by Heartbeat's management of Xen VMs as
cluster resources - it's Heartbeat's job to ensure the same resource
(VM) isn't running in two places at the same time.
(Xen knows whether the same block device is being accessed by more
than one VM on the same server - and protects against that).
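For illustration, here's roughly what that looks like in the Heartbeat 2
CRM. This is only a sketch, not part of Tijl's setup: it assumes the stock
ocf:heartbeat:Xen resource agent and a guest config at /etc/xen/domu01
(both names are placeholders):
<primitive id="xen_domu01" class="ocf" provider="heartbeat" type="Xen">
  <instance_attributes id="xen_domu01_attrs">
    <attributes>
      <nvpair id="xen_domu01_xmfile" name="xmfile" value="/etc/xen/domu01"/>
    </attributes>
  </instance_attributes>
</primitive>
Loaded with something like "cibadmin -C -o resources -x xen_domu01.xml".
A primitive resource runs on at most one node at a time, which is what
gives you the single-instance guarantee described above.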
Hth,
Robert
>>> "Tijl Van den Broeck" <subspawn@xxxxxxxxx> 01/04/07 3:04 AM >>>
Vini,
You have a good point; this should be on a site somewhere. I took some
time to make it a bit more readable and added a page to the Xensource wiki:
Xensource wiki:
http://wiki.xensource.com/xenwiki/EVMS-HAwSAN-SLES10
Perhaps it can be linked from the Linux-HA site and from the EVMS
site, to keep the documentation centralised in one location.
greetings
Tijl Van den Broeck
On 1/3/07, vini.bill@xxxxxxxxx <vini.bill@xxxxxxxxx> wrote:
> Great! I was thinking about implementing something like that here at the
> company; now I've got some guidance.
>
> Thank you very much.
>
> p.s.: Isn't that worth being on a website?
>
> On 1/3/07, Tijl Van den Broeck <subspawn@xxxxxxxxx> wrote:
> >
> > The following is a mini-howto; skip to the end of this mail for some
> > EVMS-specific questions/problems.
> >
> > So far I haven't seen anybody on the Xen list describe a
> > successful setup of EVMS with Heartbeat 2. Setting up EVMS-HA is not
> > that terribly difficult.
> >
> > If you're not experienced with Heartbeat 2, perhaps read some basic
> > material on it at www.linux-ha.org, especially on the administration
> > tools. There are two main scenarios you'd want to use EVMS-HA for:
> > - 2-node, DRBD sync + EVMS-HA resource failover (possibly with the HA
> > Xen scripts): I haven't tested this yet, but it should be possible as
> > far as I've read.
> > - n-node, iSCSI/FC SAN + EVMS-HA: my current testing environment.
> >
> > Notice there's a big difference: afaik when using DRBD you must
> > actually fail over your resources to the other node. In a SAN-based
> > environment this is not the case, as all nodes constantly have full I/O
> > access, which is why EVMS-HA should be useful (at least I thought so;
> > read my remarks on that at the end of this mail).
> >
> > I installed plain SLES 10 copies (NOT using EVMS at installation
> > time) and booted into the Xen kernel, in which everything is configured.
> > My intention is/was to use EVMS only for Xen domU volume management, not
> > for local disk management - simply to keep those strictly separated for
> > ease of administration.
> >
> > So to begin, make sure all nodes have full I/O access to the same
> > resources (preferably under the same device name on all nodes for
> > ease of administration, but this isn't necessary as EVMS can fix
> > this).
> >
> > As the local disk configuration is not EVMS-aware, exclude the local
> > disks from EVMS management (in this case cciss mirrored disks, but they
> > could just as well be your hda/sda disks).
> > /etc/evms.conf:
> > sysfs_devices {
> >     include = [ * ]
> >     exclude = [ cciss* ]
> >     ...
> > }
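> > To check that the exclusion took effect, you can re-run EVMS discovery
> > and look at what gets activated (a quick sanity check; evms_activate is
> > part of the standard EVMS tools):
> > evms_activate
> > ls /dev/evms/
> > The local cciss disks should no longer show up as available objects in
> > evmsgui/evmsn.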
> > Make sure admin_mode is off on all nodes; admin_mode has little to do
> > with day-to-day administration and more with recovery/maintenance when
> > things have gone bad. More on this in the EVMS user guide:
> > http://evms.sourceforge.net/user_guide/
> >
> > Set up Heartbeat on both cluster nodes to enable EVMS cluster awareness.
> > Next to the usual IP and node configuration (which can be done using
> > YaST in SLES10), adding 2 lines to /etc/ha.d/ha.cf will do:
> > respawn root /sbin/evmsd
> > apiauth evms uid=hacluster,root
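> > For reference, a minimal complete /etc/ha.d/ha.cf along those lines
> > could look like this (a sketch only - node names and interface are
> > placeholders, the last two lines are the ones just mentioned, and you
> > also need the usual /etc/ha.d/authkeys):
> > use_logd yes
> > keepalive 1
> > deadtime 30
> > bcast eth0
> > node dom0_1 dom0_2
> > crm on
> > respawn root /sbin/evmsd
> > apiauth evms uid=hacluster,root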
> >
> > Start the cluster node by node:
> > /etc/init.d/heartbeat start
> >
> > Make sure both sync and come up (keep an eye on /var/log/messages).
> > You can use the crmadmin tool to query the state of the master (DC) and
> > the nodes. Also useful is cl_status for checking link & daemon status.
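> > A few invocations I find handy (just a sketch - check the man pages for
> > your Heartbeat version; node names are placeholders):
> > crmadmin -N            # list the nodes known to the CRM
> > crmadmin -D            # which node is the designated coordinator (DC)
> > crmadmin -S dom0_1     # status of a single node
> > cl_status hbstatus     # is heartbeat running on this node?
> > cl_status listnodes    # nodes in the cluster
> > cl_status nodestatus dom0_2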
> > Note: If you're using bonding, you can run into some trouble here. Use
> > unicast for node sync, not multicast, as somehow the Xen software
> > bridge doesn't fully cope with multicast yet (at least I didn't get it
> > to work - perhaps someone did?).
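> > In ha.cf that means dropping the mcast/bcast line and listing the peers
> > explicitly, e.g. (interface and IPs are placeholders; heartbeat should
> > ignore the ucast entry pointing at its own address, so the same file can
> > be used on both nodes):
> > ucast bond0 192.168.0.1
> > ucast bond0 192.168.0.2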
> >
> > When all nodes are up and running, start evmsgui (or evmsn, whichever
> > you prefer) on one of the nodes. If you click the settings menu and
> > find the option "Node administered" enabled, congratulations, you've
> > got a cluster-aware EVMS. Be sure to know some EVMS essentials
> > (it's a little different from plain LVM2).
> >
> > Create a container with the Cluster Segment Manager (CSM): select your
> > attached SAN storage objects (they could be named sdb, sdc, ...), choose
> > whichever node name, set the type to shared storage, and name the
> > container "c_sanstorage", for example.
> >
> > You could pass the SAN disks straight through as EVMS volumes (see the
> > disk list in available objects). Don't do that, as those volumes will be
> > fixed in size (as they were originally presented by the SAN); instead
> > use EVMS for storage management. For this, create another container,
> > this time with the LVM2 Region Manager, into which you put all the
> > objects from the cluster container c_sanstorage (the objects will have
> > names like c_sanstorage/sdb, c_sanstorage/sdc, ...). Choose the extent
> > size at will and name it (vgsan, for example).
> >
> > Go to the Regions tab and create regions from the LVM2 freespace,
> > named and sized as you like, for example domu01stor, domu02stor, ...
> >
> > Save the configuration; all changes will now be applied and you will
> > find your correctly sized and named volumes in /dev/evms/c_sanstorage/
> >
> > Now partition the EVMS volumes if wanted, as you're used to (fdisk),
> > format them, place your domUs on them and launch.
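> > For reference, the relevant fragment of a domU config file could look
> > like this (a sketch only - the file name /etc/xen/domu01, the guest
> > device name and the volume path just follow the examples above; the rest
> > of the config is the usual SLES10 Xen setup):
> > # /etc/xen/domu01 (fragment)
> > disk = [ 'phy:/dev/evms/c_sanstorage/domu01stor,xvda,w' ]
> > Then start it with "xm create domu01".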
> >
> > As for the problems & remarks I've seen with this setup:
> > for the EVMS configuration to be updated on all nodes, you have to
> > select each node in "node administered" and save for each node
> > (only then will the correct device nodes be created on that node).
> >
> > This could be a structural question, but... being cluster aware,
> > shouldn't the EVMS-HA combination (with the Cluster Segment Manager)
> > provide locking on volumes created beneath the cluster container? It is
> > perfectly possible for me to corrupt data on an EVMS volume on node 2
> > while that volume is also mounted on node 1. I expected some kind of
> > locking to step in:
> > dom0_2# mount /dev/evms/c_sanstorage/domu01stor /mnt
> > failure: volume domu01stor is already mounted on node dom0_1
> >
> > Or something along those lines. My initial thought was that it had to do
> > with my cluster container being "shared". But when creating the container
> > as "private", the same issues were possible! And even more remarkably, if
> > I create a private container on node dom0_1 and then launch evmsgui on
> > dom0_2, it recognizes the container as private and owned by dom0_2 ?!?
> > This strikes me as very odd.
> > Are these problems due to faults in my procedure (if so, please let me
> > know), or are they of a more structural nature (or perhaps SLES10
> > specific)?
> > They are kind of essential with Xen domains: you wouldn't want to boot
> > the same domain twice (one copy on dom0_1 and another running on
> > dom0_2), as data corruption is guaranteed.
> >
> > That is why this mail is cross-posted to all 3 lists:
> > information for xen-users,
> > technical questions for evms and linux-ha.
> >
> > greetings
> >
> > Tijl Van den Broeck
>
> --
> ... Vinicius Menezes ...
>
_______________________________________________
Linux-HA mailing list
Linux-HA@xxxxxxxxxxxxxxxxxx
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users