> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Javier Guerra
> Sent: Friday, March 13, 2009 10:04 AM
> To: Jan Marquardt
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] Storage alternatives
>
> On Fri, Mar 13, 2009 at 8:22 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:
> >
> >> IOW: the iSCSI initiator and RAID (I guess it's RAID1) should be on
> >> Dom0, and the DomU configs should refer to the resulting block devices.
> >
> > This is one solution we are discussing at the moment, but I think it
> > would be a lot smarter to put the RAID functionality in a layer
> > between the hard disks and the iSCSI targets, as advised by Nathan.
>
> Yep, that further reduces duplicated traffic. I didn't mention it only
> because I'm not too familiar with DRBD, and because I thought it was too
> different from your current setup, so you might want to go in steps. If
> it's easier to redo it all again, this is the better idea.
>
> >> Agreed. You could even potentially move the mirroring down to the
> >> storage nodes (mirrored nbd/etc. devices) and HA the iSCSI target
> >> service itself to reduce dom0's work, although that would depend on
> >> you being comfortable with iSCSI moving around during a storage node
> >> failure, which may be a risk factor.
> >
> > I think we would have to reboot each domU after a failure in that
> > case, wouldn't we? The goal is to have domUs which are not affected
> > by the failure of one storage server.
>
> That's one reason to put all storage management as low in the stack as
> possible. In this case, Dom0 should be the only one noticing the
> movement, and any failover handling (RAID1, multipath, IP migration,
> etc.) should finish at Dom0. The DomUs won't feel a thing (unless it
> takes so long that you get timeouts).
>
> >> Abstract your disks and iSCSI exports, then use ZFS on two pools;
> >> this will minimize the administration.
> >
> > ZFS seems to be very nice, but sadly we are not using Solaris and
> > don't want to use it with FUSE under Linux. Nevertheless, is anyone
> > using ZFS under Linux who can share their experiences?
>
> ZFS gives you some nice ways to rethink storage, but in cases like
> these it's (mostly) the same as any other well-thought-out scheme.
>
>
> --
> Javier
>
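A minimal sketch of the dom0-side approach described above: assuming the two
iSCSI LUNs are already logged in and show up in dom0 as /dev/sdb and /dev/sdc
(hypothetical device names), you mirror them with mdadm and point the domU at
the resulting md device rather than at the LUNs themselves:

    # in dom0: mirror the two iSCSI-backed disks (device names are examples)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # in the domU config file: refer to the resulting block device
    disk = [ 'phy:/dev/md0,xvda,w' ]

If one storage node fails, md simply marks that leg as faulty and keeps
running on the surviving one; only dom0 notices, and the domU carries on
(barring timeouts).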
Just to add to this thread: we are using DRBD here, spread across local
storage on two machines, with plans for failover coming soon. Basically each
machine has two NICs pointing at each other for the replication, and I
haven't experienced any I/O-related issues. We keep everything in file-based
images so that in an extreme emergency we can export them to another pool of
servers that aren't set up this way. You can certainly use LVM on top of
DRBD (we are using XFS and it is fantastic).
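For anyone wanting to try the same layout, a minimal sketch of what the DRBD
resource definition can look like for such a two-node setup (hostnames,
backing disks, and the replication-link addresses below are made-up
examples):

    resource r0 {
      protocol C;                  # synchronous: writes hit both nodes before ack
      on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb1;       # local backing storage
        address   10.0.0.1:7788;   # on the dedicated replication NIC
        meta-disk internal;
      }
      on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

You then put your filesystem (or LVM) on /dev/drbd0 on whichever node is
currently primary.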
In a test bed I had Heartbeat and DRBD working perfectly to provide automatic
failover of the Xen resources. We decided on DRBD for two reasons: we can
easily change which server mounts which DRBD device (for maintenance one
server can have both mounted), and it helps with backups. I have written a
backup script that kills the connection between the DRBD peers, tells the
server where that device was secondary to go primary and mount it, then
copies the VMs to an iSCSI or NFS mount while the mirror is disconnected. The
script then reconnects DRBD and resyncs, never interrupting the online VMs (I
still have to play with the resync rate to find optimal settings for 10+ VMs).
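A rough sketch of that flow, run on the node that is normally secondary (this
is not the exact script; the resource name, mount point, and backup
destination are made-up, and real use would need error handling):

    #!/bin/sh
    # Split the mirror so the copy below reads a frozen replica.
    drbdadm disconnect r0
    # Promote this (normally secondary) side and mount it read-only.
    drbdadm primary r0
    mount -o ro /dev/drbd0 /mnt/backup-snap
    # Copy the VM images off to the iSCSI/NFS backup mount.
    rsync -a /mnt/backup-snap/ /mnt/nfs-backup/
    # Drop back to secondary and rejoin the mirror, discarding anything
    # that changed locally while split off so DRBD resyncs from the
    # production node.
    umount /mnt/backup-snap
    drbdadm secondary r0
    drbdadm -- --discard-my-data connect r0

The speed of the resync that the last step triggers is controlled by the
syncer rate setting in drbd.conf, which is the knob to play with for the
10+ VM case.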
So yeah, DRBD is pretty sweet; if you can connect multiple iSCSI targets in
dom0 using DRBD, you are golden.
Tait
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users