xen-api
Re: [Xen-API] Alternative to Vastsky?
On 20/04/11 01:51, George Shuklin wrote:
I see no problem with split brain in the case of DRBD between two XCP
hosts (with DRBD mirroring a local drive on the first XCP host to a
second drive over the network on the second XCP host). XCP ensures
there are never two copies of the same VM running in a pool (we are
talking about XCP here, not xend). If a host suddenly goes offline or
is disconnected (the same thing from the pool's point of view), you
must manually run vm-reset-powerstate. I think this kind of protection
is fairly normal, except that it delays automatic restart after an
unexpected host hang - but with XCP this problem exists for every
storage solution. The problem is not the storage but the way XCP
detects HOST_OFFLINE (only after a long delay will XCP assume the host
is down... or never? I have not tested this thoroughly yet).
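For reference, the manual recovery step George mentions looks roughly
like the following (the UUIDs are placeholders; check your own pool):

```shell
# List VMs that the pool still believes are running on the failed host.
xe vm-list resident-on=<failed-host-uuid> power-state=running

# Force the pool to forget the stale power state of one such VM so it
# can be started on another host. Use with care: if the "failed" host
# is actually still running the VM, this is exactly how you create a
# split brain.
xe vm-reset-powerstate uuid=<vm-uuid> --force
```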
The main sad thing about DRBD is the two-host limit, but it is still
better than plain /dev/sd[abcde] for a pack of 'mission-critical
applications with a new level of performance and effi-blah-blah'. And
(as far as I know XCP's internals) it has all the capabilities (maybe
with a little tweaking) to support DRBD at the logic level. We have a
shared SR with two PBDs on two hosts. We calculate the
vm-vbd-vdi-sr-pbd-host paths before sending a task to a slave
(start/migrate/evacuate), and we account for them before returning the
calculated ha-availability (I forget the exact names). To avoid a
'triple conflict' we allow only one DRBD peer per host: if A has two
different DRBD resources with B and C, B has the same with C and A,
and C with A and B, and we create a VM with two VDIs on the first and
second DRBD volumes, we lose any way to migrate it successfully (and,
in a certain sense, lose some redundancy).
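As a rough illustration of the paired-host layout George describes, a
single DRBD resource shared between two XCP hosts might be configured
like this (hostnames, backing devices, and addresses are all made up):

```
resource xcp-sr {
  protocol C;              # synchronous replication: both copies
                           # are written before the write is acked
  device    /dev/drbd0;
  disk      /dev/sdb1;     # local backing disk on each host (hypothetical)
  meta-disk internal;

  on xcp-host-a {
    address 192.168.0.1:7789;
  }
  on xcp-host-b {
    address 192.168.0.2:7789;
  }
}
```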
I have not considered using a separate DRBD resource for each VDI for
this very reason. However, if you are sticking to a simple paired-host
setup, there are advantages in putting an LVM storage repository on
top of one large DRBD disk mirrored between both hosts (assuming you
don't lose network connectivity).
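A sketch of that layout, assuming a DRBD resource named xcp-sr backing
/dev/drbd0 (the resource name, device path, and SR label are
hypothetical):

```shell
# Promote the DRBD device on this host (using it from both hosts at
# once requires allow-two-primaries in the resource's net section).
drbdadm primary xcp-sr

# Create an LVM-backed storage repository directly on the DRBD device.
# Every VDI then becomes a logical volume that DRBD mirrors underneath.
xe sr-create name-label="DRBD LVM SR" type=lvm shared=true \
    device-config:device=/dev/drbd0
```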
Thank you for the reply about two iSCSI targets for the same DRBD... I
have slight doubts about data consistency because of the iSCSI queue...
I never had a problem, probably because it was set up for fail-over
rather than performance. I would be inclined to agree with you.
One last thing: I DO really want to see 2.6.38+ in XCP. Red Hat added
support for blkio-throttle in this version - the most wanted feature
for dom0 - it allows shaping IOPS and bandwidth for every process
separately (which means 'for every VM'). We already have a (not very
good, but working) traffic shaper, so a disk shaper is very relevant
too...
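The blkio-throttle feature George refers to is a cgroup (v1)
controller; a minimal sketch of capping one VM's disk backend looks
like this (the cgroup name, device numbers, limits, and PID variable
are all hypothetical):

```shell
# Create a blkio cgroup for one VM (requires the kernel's block I/O
# throttling support and the cgroup v1 blkio controller mounted).
mkdir /sys/fs/cgroup/blkio/vm1

# Cap reads on device 8:0 (/dev/sda) to 10 MB/s and 100 IOPS.
echo "8:0 10485760" > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_bps_device
echo "8:0 100"      > /sys/fs/cgroup/blkio/vm1/blkio.throttle.read_iops_device

# Move the VM's I/O backend process (e.g. its tapdisk) into the cgroup.
echo "$BACKEND_PID" > /sys/fs/cgroup/blkio/vm1/tasks
```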
_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api