Hi Javier,
Thank you very much for your thoughts.
The SAN should provide HDD space for several hundred VMs on
several Xen boxes over iSCSI.
This should provide:
- easy offline migration of VMs between hosts (just stop the VM on xen01
and start it on xen04 - a matter of seconds)
- higher utilization of expensive 15k hard drives
- ? possibly live migration in the future
- higher speed than the current local RAID1 drives
- easily expandable
- ? VM HDD snapshots on the fly?
Technology summary:
2 SAN hosts
DRBD
cLVM
highly available iSCSI TARGET
HEARTBEAT
What exactly is the difference between LVM and cLVM?
Best regards,
Peter
2009/11/26 Javier Guerra <javier@xxxxxxxxxxx>:
> On Thu, Nov 26, 2009 at 4:26 AM, Peter Braun <xenware@xxxxxxxxx> wrote:
>> I'm looking for an open-source solution for an H/A SAN - and this is close to my goal.
>
> there are lots of options: some are simply bad; most are good for some
> things and bad for others
>
>>
>>
>> The basics:
>>
>> 1) 2 machines with some HDD space synchronized between them with DRBD.
>
> ok
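> 
> a minimal DRBD resource definition for such a pair could look like
> this (just a sketch - the hostnames, disks and IPs below are made-up
> examples):
> 
>   # /etc/drbd.conf (DRBD 8.x style)
>   resource r0 {
>     protocol C;                 # synchronous replication: both nodes ack each write
>     on san01 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;      # local backing partition
>       address   10.0.0.1:7788;
>       meta-disk internal;
>     }
>     on san02 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       address   10.0.0.2:7788;
>       meta-disk internal;
>     }
>   }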
>
>>
>> 2) Now I'm a little bit confused - what should I use on top of the DRBD
>> block device?
>> - some cluster FS like OCFS2 or GFS?
>> - LVM?
>
> first question: what would be the clients, and how many??
>
> if the client(s) would store files, you need a filesystem. if the
> client(s) are Xen boxes, the best would be block devices, shared via
> iSCSI (or AoE, or FC, or nbd...) and split with LVM. or split with
> LVM and shared.
>
> if you want a single client, any filesystem would do, and for block
> devices plain LVM would be enough. if you want several clients, you
> need a cluster filesystem or cLVM. or you could split with plain LVM
> and share each LV via iSCSI.
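> 
> splitting with LVM on top of the DRBD device is the usual pvcreate /
> vgcreate / lvcreate dance (sketch; volume group and VM names are
> examples):
> 
>   pvcreate /dev/drbd0                    # run on the current DRBD primary
>   vgcreate vg_san /dev/drbd0
>   lvcreate -L 20G -n vm01-disk vg_san    # one LV per VM disk
>   lvcreate -L 20G -n vm02-disk vg_san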
>
> pruning some branches off the decision tree, you get two main options:
>
> 1: two storage boxes, synced with DRBD, split with (plain) LVM, share
> each LV via iSCSI (see the sketch after the cons below).
> pros:
> - easy to administer
> - no 'clustering' software (apart from DRBD)
> - any number of clients
> cons:
> - you can grow by adding more storage pairs; but a single LV can't
> span two pairs
> - no easy way to move LVs between boxes
> - if you're not careful you can get 'hot spots' of unbalanced load
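> 
> exporting each LV could look like this with iSCSI Enterprise Target
> (sketch; the IQN and LV names are invented):
> 
>   # /etc/ietd.conf - one target per LV
>   Target iqn.2009-11.local.san:san01.vm01-disk
>     Lun 0 Path=/dev/vg_san/vm01-disk,Type=blockio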
>
> 2: any number of 'pairs', each synced with DRBD. no LVM on the storage
> boxes; share the full block device via iSCSI. set up cLVM on the
> clients, using each 'pair' as a PV (see the sketch after the cons
> below).
> pros:
> - very scalable
> - lots of flexibility, the clients see a single continuous expanse
> of storage split into LVs
> cons:
> - somewhat more complex to setup well
> - cLVM has some limitations: no pvmove, no snapshots (maybe that
> will be fixed soon?)
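> 
> on the client side that could be set up roughly like this (sketch;
> assumes clvmd is already running on all Xen nodes, and that the two
> iSCSI disks show up as /dev/sdc and /dev/sdd):
> 
>   pvcreate /dev/sdc /dev/sdd                   # each disk = one DRBD pair
>   vgcreate -c y vg_cluster /dev/sdc /dev/sdd   # -c y marks the VG as clustered
>   lvcreate -L 20G -n vm01-disk vg_cluster      # LV is then visible on all nodes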
>
>
>> 3) Should I create files with dd on a cluster FS and export them with
>> iSCSI, or create LVM volumes and export them with iSCSI?
>
> if you're exporting image files with iSCSI, then the only direct
> client of these files is iSCSI itself. no need for a cluster FS, any
> plain FS would do.
> ... and of course, LVM is more efficient than any FS
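> 
> for comparison, the two variants side by side (sketch; sizes and
> paths are examples):
> 
>   # image file on a plain FS - one extra indirection layer:
>   dd if=/dev/zero of=/srv/images/vm01.img bs=1M count=0 seek=20480  # 20G sparse file
>   # logical volume - no FS in the way:
>   lvcreate -L 20G -n vm01-disk vg_san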
>
>>
>> 4) how to make the iSCSI target highly available?
>> - configure iSCSI on a virtual IP/another IP and run it as an HA service
>> - configure separate iSCSI targets on both SAN hosts and
>> connect them to the Xen server via multipath?
>
> no experience here. i'd guess multipath is nicer; but any delay in
> DRBD replication would be visible as read inconsistencies. a
> migratable IP address might be safer.
>
>> 5) heartbeat configuration
>
> yes, this can be a chore.
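> 
> for the migratable-IP approach, a heartbeat v1 resource line can be
> as short as this (sketch; node name, DRBD resource and IP are made
> up, and 'iscsi-target' is the IET init script):
> 
>   # /etc/ha.d/haresources - san01 is the preferred node; on failover
>   # heartbeat moves the DRBD primary role, the virtual IP and the
>   # iSCSI target daemon together to the surviving box
>   san01 drbddisk::r0 IPaddr::10.0.0.10/24/eth0 iscsi-target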
>
>> VMs with iSCSI HDD space on the SAN should survive a reboot or
>> outage of one SAN host without interruption, and without noticing
>> that the SAN is degraded.
>>
>> Is that even possible?
>
> yes, but plan to spend lots of time to get it right.
>
>
> --
> Javier
>