RE: [Xen-users] Storage alternatives

To: "Javier Guerra" <javier@xxxxxxxxxxx>, "Jan Marquardt" <jm@xxxxxxxxxxx>
Subject: RE: [Xen-users] Storage alternatives
From: "Tait Clarridge" <Tait.Clarridge@xxxxxxxxxxxx>
Date: Fri, 13 Mar 2009 10:32:08 -0400
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Javier Guerra
> Sent: Friday, March 13, 2009 10:04 AM
> To: Jan Marquardt
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] Storage alternatives
> 
> On Fri, Mar 13, 2009 at 8:22 AM, Jan Marquardt <jm@xxxxxxxxxxx> wrote:
> >
> >> IOW: the iSCSI initiator and RAID (I guess it's RAID1) should be
> >> on Dom0, and the DomU configs should refer to the resulting block
> >> devices.
> >
> > This is one solution we are discussing at the moment, but I think it
> > would be a lot smarter to get the RAID functionality on a layer
> > between the hard disks and the iSCSI targets, as advised by Nathan.
> 
> Yep, that further reduces duplicated traffic.  I didn't mention it
> only because I'm not too familiar with DRBD, and because I thought
> it's too different from your current setup, so you might want to go
> in steps.  If it's easier to redo it all from scratch, this is the
> better idea.
> 
> >> Agreed.  You could even potentially move the mirroring down to the
> >> storage nodes (mirrored nbd/etc. devices) and HA the iSCSI target
> >> service itself to reduce dom0's work, although that would depend
> >> on you being comfortable with iSCSI moving around during a storage
> >> node failure, which may be a risk factor.
> >
> > I think we would have to reboot each domU after such a failure,
> > wouldn't we? The goal is to have domUs that are not affected by the
> > failure of one of the storage servers.
> 
> That's one reason to put all storage management as low on the stack
> as possible.  In this case, Dom0 should be the only one noticing the
> movement, and any failover handling (RAID1, multipath, IP migration,
> etc.) should finish at Dom0.  DomUs won't feel a thing (unless it
> takes so long that you get timeouts).
> 
> >> Abstract your disks and iSCSI exports, then use ZFS on two pools;
> >> this will minimize the administration.
> >
> > ZFS seems to be very nice, but sadly we are not using Solaris and
> > don't want to use it via FUSE under Linux. Nevertheless, does
> > anyone here use ZFS under Linux and can share his/her experiences?
> 
> ZFS gives you some nice ways to rethink storage, but in these cases
> it's (mostly) the same as any other well-thought-out scheme.
> 
> 
> --
> Javier
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
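
To make the RAID1-in-dom0 suggestion above concrete, here is a minimal 
sketch (the portal IPs, device names and the domU disk line are made-up 
examples, not anyone's actual setup):

  # in dom0: log in to one iSCSI LUN on each storage node (open-iscsi)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m discovery -t sendtargets -p 192.168.1.11
  iscsiadm -m node --login

  # mirror the two LUNs with Linux software RAID
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

  # the domU config then refers only to the resulting md device, e.g.
  #   disk = [ 'phy:/dev/md0,xvda,w' ]
  # so a storage-node failure is handled entirely by md in dom0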


Just to add to this thread,


We are using DRBD here, spread across local storage on two machines, with 
failover planned for the near future. Each machine has two NICs pointing at 
each other for the replication, and I haven't experienced any I/O-related 
issues.
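
For reference, a minimal sketch of what such a resource can look like in 
drbd.conf (DRBD 8.x syntax; the hostnames, backing devices and 
replication-link IPs below are made up):

  resource r0 {
    protocol C;              # synchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;     # local backing storage on each node
    meta-disk internal;
    # the addresses are on the dedicated replication NICs,
    # not the public interfaces
    on nodeA {
      address 10.0.0.1:7788;
    }
    on nodeB {
      address 10.0.0.2:7788;
    }
  }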

Right now we have everything in file-based images so that, in an extreme 
emergency, we can export them to another pool of servers that aren't set up 
this way. You can certainly use LVM on top of DRBD (we are using XFS and it 
is fantastic).
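
If you do go the LVM-on-DRBD route, the layering is straightforward (a 
sketch, assuming a DRBD device like the /dev/drbd0 above):

  # on whichever node currently holds the resource as Primary
  pvcreate /dev/drbd0
  vgcreate vg_xen /dev/drbd0
  lvcreate -L 10G -n vm01-disk vg_xen
  # the domU disk line would then be 'phy:/dev/vg_xen/vm01-disk,xvda,w';
  # remember to filter the backing device (/dev/sdb1) out of lvm.conf
  # so LVM scans only the DRBD device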

In a test bed I had Heartbeat and DRBD working perfectly to provide automatic 
failover of the Xen resources. We decided on DRBD for two reasons: first, we 
can easily change which server mounts which DRBD device (for maintenance, one 
server can have both mounted), and second, it simplifies backups. I have 
written a backup script that cuts the connection between the DRBD peers, 
tells the server on which the DRBD device was secondary to go primary and 
mount it, and then copies the VMs to an iSCSI or NFS mount while the mirror 
is split. The script then reconnects the DRBD devices and resyncs, never 
taking the online VMs down (I still have to play with the resync rate to 
find the optimal settings for 10+ VMs).
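
Roughly, the flow of such a script looks like the following (a sketch of the 
general idea only, not the actual script; "r0" and the mount points are 
placeholders, and the commands run on the node that is normally Secondary):

  drbdadm disconnect r0        # split the mirror; the live node keeps running
  drbdadm primary r0           # promote the standalone local copy
  mount /dev/drbd0 /mnt/snap   # XFS replays its log: crash-consistent image
  rsync -a /mnt/snap/ /mnt/backup/   # copy VM images to the iSCSI/NFS mount
  umount /mnt/snap
  drbdadm secondary r0
  # reconnect and discard the local changes made during the backup, so
  # only the delta from the live node needs to be resynced:
  drbdadm -- --discard-my-data connect r0

(The resync rate is the "rate" setting in the syncer section of drbd.conf in 
DRBD 8.x.)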

So yeah, DRBD is pretty sweet; if you can connect multiple iSCSI targets in 
dom0 using DRBD, you are golden.


Tait

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users