WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-users

Re: [Xen-users] Storage alternatives

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Storage alternatives
From: Jan Marquardt <jm@xxxxxxxxxxx>
Date: Fri, 13 Mar 2009 14:22:19 +0100
Delivery-date: Fri, 13 Mar 2009 06:23:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49B7D607.8010106@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <49B7D607.8010106@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)

IOW: the iSCSI initiator and RAID (I guess it's RAID1) should be on
Dom0, and the DomU configs should refer to the resulting block devices.
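A minimal sketch of that layout, assuming Open-iSCSI and md RAID in Dom0 (the IQNs, portal addresses, and device names below are hypothetical examples, not anything from this thread):

```shell
# Dom0: log in to the two iSCSI targets (IQNs and portal IPs are examples)
iscsiadm -m node -T iqn.2009-03.com.example:storage1 -p 192.168.1.10 --login
iscsiadm -m node -T iqn.2009-03.com.example:storage2 -p 192.168.1.11 --login

# Dom0: assemble a RAID1 across the two imported LUNs
# (assuming they appear as /dev/sdb and /dev/sdc)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# DomU config: hand the guest the resulting md device, e.g.:
#   disk = [ 'phy:/dev/md0,xvda,w' ]
```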

This is one solution we are discussing at the moment, but I think it would be a lot smarter to put the RAID functionality on a layer between the hard disks and the iSCSI targets, as advised by Nathan.

Agreed.  You could even potentially move the mirroring down to the
storage nodes (mirrored nbd/etc. devices) and HA the iSCSI target
service itself to reduce dom0's work, although that would depend on you
being comfortable with iSCSI moving around during a storage node
failure, which may be a risk factor.

I think we would have to reboot each domU after a failure in this case, wouldn't we? The goal is to have domUs which are not affected by the failure of one storage server.

If you have a storage node go offline in your current configuration for any real length of time, when it becomes available again, all of the nodes will begin to resync the array simultaneously. With a single DomU, you'll just consume the vast majority of either your Disk IO or Network IO. However, if you had a dozen guests, and they all start to rebuild their RAID1s from the same source SAN to the same destination SAN, through the same network link (in and out), at the same time, things are probably going to grind to an absolute halt.
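One partial mitigation for that resync storm, assuming Linux md RAID in the DomUs or Dom0, is to cap the kernel's rebuild rate so simultaneous resyncs cannot saturate the link (the values below are arbitrary examples in KiB/s):

```shell
# Throttle md resync bandwidth system-wide; pick limits to suit the link
echo 5000  > /proc/sys/dev/raid/speed_limit_min
echo 20000 > /proc/sys/dev/raid/speed_limit_max
```

This only spreads the rebuild over a longer window; it does not remove the amplification of having every guest resync from the same source at once.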

This is of course also one reason why I want to change the current setup.

Abstract your disks and iSCSI exports; then use ZFS on two pools. This will
minimize the administration.
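For reference, that suggestion might look roughly like this on a Solaris/OpenSolaris storage node (disk and dataset names are hypothetical; `shareiscsi` was the OpenSolaris-era way to export a zvol as an iSCSI target):

```shell
# One pool built from mirrored pairs (disk names are examples)
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Carve out a 50 GB zvol per guest and export it over iSCSI
zfs create -V 50G tank/domu1
zfs set shareiscsi=on tank/domu1
```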

ZFS seems to be very nice, but sadly we are not using Solaris and don't want to use it with FUSE under Linux. Nevertheless, does anyone here use ZFS under Linux and can share their experiences?

Regards,

Jan


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
