Hello,
 
I have a new 10-node Xen cluster I've built out on Xen 3.3,
with a dedicated iSCSI SAN.  The Xen configuration files are located on a volume
on the SAN, which is exported to all Dom0s via NFS.  The system works
great, but there is one major issue and three minor issues which I have to deal
with before I take this system into production.
 
The major issue:
I have not found a method that I like for preventing a DomU
from being started on two nodes at the same time and clobbering its data.
I've been considering writing a script to manage lock files placed
in the configuration store directory; each file could contain the hostname
of the Dom0 where the DomU is running, which would also solve my first minor issue.
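To make that concrete, here's a rough sketch of the lock manager I have in mind,
in Python.  The /etc/xen/locks path and function names are just placeholders,
and one caveat I'd want to verify: O_CREAT|O_EXCL is only reliably atomic over
NFSv3 and later, so on older NFS the create-then-link() trick would be safer.

```python
import errno
import os
import socket

LOCK_DIR = "/etc/xen/locks"  # placeholder: a directory on the NFS-shared config store


def acquire_lock(domu, lock_dir=LOCK_DIR):
    """Try to take the cluster-wide lock for a DomU before starting it.

    Returns True if we got the lock, False if another Dom0 holds it.
    O_CREAT|O_EXCL makes the create atomic (on NFSv3+); two Dom0s racing
    to start the same DomU means exactly one of them wins.
    """
    path = os.path.join(lock_dir, domu + ".lock")
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False  # some other Dom0 already started this DomU
        raise
    # Record which Dom0 holds the lock, so the lock files double as
    # a cluster-wide inventory of where each DomU is running.
    os.write(fd, (socket.gethostname() + "\n").encode())
    os.close(fd)
    return True


def release_lock(domu, lock_dir=LOCK_DIR):
    """Drop the lock after the DomU is shut down or migrated away."""
    os.remove(os.path.join(lock_dir, domu + ".lock"))


def lock_holder(domu, lock_dir=LOCK_DIR):
    """Hostname of the Dom0 running this DomU, or None if it isn't locked."""
    try:
        with open(os.path.join(lock_dir, domu + ".lock")) as f:
            return f.read().strip() or None
    except IOError:
        return None
```

A wrapper around "xm create" would call acquire_lock first and refuse to start
the DomU if it returns False.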
 
First minor issue:
I haven't yet found a method for figuring out where DomUs
are running, short of running lots of xm lists or scripting something silly
together.  This hasn't been a problem in my 2-4 node clusters, but it won't
work as this cluster scales out to its eventual size (20 nodes).  If I
implement the lock file idea, this problem is solvable.
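Absent the lock files, the "scripting something silly together" version would
look roughly like this.  The hostnames are placeholders, and it assumes
passwordless ssh from wherever it runs to each Dom0:

```python
import subprocess

HOSTS = ["dom0-01", "dom0-02"]  # placeholder Dom0 hostnames


def parse_xm_list(output):
    """Extract DomU names from `xm list` output, skipping the
    header line and Domain-0 itself."""
    names = []
    for line in output.splitlines()[1:]:  # first line is the column header
        fields = line.split()
        if fields and fields[0] != "Domain-0":
            names.append(fields[0])
    return names


def cluster_inventory(hosts=HOSTS):
    """Map each Dom0 hostname to the list of DomUs it is running,
    by running `xm list` on each node over ssh."""
    inventory = {}
    for host in hosts:
        out = subprocess.check_output(["ssh", host, "xm", "list"])
        inventory[host] = parse_xm_list(out.decode())
    return inventory


if __name__ == "__main__":
    for host, domus in sorted(cluster_inventory().items()):
        print("%s: %s" % (host, ", ".join(domus) or "(none)"))
```

It works, but it's O(cluster size) ssh round trips every time you ask, which
is exactly why it won't be pleasant at 20 nodes.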
 
Second minor issue:
I recall that there was a way to change the behavior of the Xen
daemon (xend) so that it would migrate DomUs off the host on shutdown, rather
than suspend them to disk.  However, I can't figure out what that was.
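My vague recollection is that it was a setting in the xendomains init script's
config file (/etc/sysconfig/xendomains, or /etc/default/xendomains on
Debian-style systems), something along these lines, though I haven't been able
to confirm the exact variable names:

```shell
# /etc/sysconfig/xendomains (fragment, names from memory -- verify before use)
# Migrate running DomUs to another host on shutdown instead of saving them.
# The value is the xm migrate target/options; empty disables migration.
XENDOMAINS_MIGRATE="--live other-dom0"

# Leave the save path empty so DomUs are not suspended to disk.
XENDOMAINS_SAVE=""
```

If anyone can confirm or correct this, I'd appreciate it.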
 
Third minor issue:
Has anyone developed a mechanism for ensuring that VMs are
distributed evenly throughout the cluster?  I.e., if I have 10 Dom0s, and
100 identical DomUs with the same memory size and the same load, the
mechanism should ensure that I have approximately 10 DomUs per Dom0.  If a
host dies, it'd be nice to have something that figures out how to redistribute
its 10 DomUs throughout the cluster evenly, so that each surviving Dom0 ends
up with 11-12 DomUs.
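For the identical-DomU case the balancing itself is just arithmetic; here's a
sketch of the planner I'm imagining.  It takes the inventory map (Dom0 ->
DomUs), ignores memory and load weighting (which a real version would need),
and emits a list of proposed migrations:

```python
def rebalance_plan(inventory):
    """Given {dom0: [domu, ...]}, propose migrations so every Dom0 ends
    up within one DomU of the cluster average.  Returns a list of
    (domu, src_host, dst_host) tuples.  Treats all DomUs as identical;
    a real planner would weight by memory size and load."""
    hosts = sorted(inventory)
    counts = {h: list(inventory[h]) for h in hosts}
    total = sum(len(v) for v in counts.values())
    base, extra = divmod(total, len(hosts))
    # The first `extra` hosts get one DomU more than the base share.
    target = {h: base + (1 if i < extra else 0) for i, h in enumerate(hosts)}
    moves = []
    donors = [h for h in hosts if len(counts[h]) > target[h]]
    takers = [h for h in hosts if len(counts[h]) < target[h]]
    for dst in takers:
        while len(counts[dst]) < target[dst]:
            # Pull from any host still above its target.
            src = next(h for h in donors if len(counts[h]) > target[h])
            domu = counts[src].pop()
            counts[dst].append(domu)
            moves.append((domu, src, dst))
    return moves
```

Each tuple in the result would then be fed to something like
"xm migrate --live <domu> <dst>".  After a host death, you'd run the same
planner over the surviving hosts with the dead host's DomUs marked as homeless.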
 
A lot of this is probably just scripting, but I suspect this
is a road others have had to walk, and I'd just as soon not reinvent a wheel
(especially knowing how bad my scripts usually are :-) )
 
I apologize for the length of this post!
 
Best Regards
Nathan Eisenberg
Sr. Systems Administrator
Atlas Networks, LLC
support@xxxxxxxxxxxxxxxx
http://support.atlasnetworks.us/portal