xen-users

Re: [Xen-users] RAID10 Array

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] RAID10 Array
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Thu, 17 Jun 2010 13:03:05 +0200
Cc: Adi Kriegisch <kriegisch@xxxxxxxx>, Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
Delivery-date: Thu, 17 Jun 2010 04:05:54 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100617073236.GC30903@xxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4C195614.1030501@xxxxxxxxxxx> <20100617073236.GC30903@xxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/2.6.31.12-0.2-desktop; KDE/4.3.5; x86_64; ; )
On Thursday 17 June 2010 09:32:37 Adi Kriegisch wrote:
> Hi!
> 
> > I have 3 RAID ideas, and I'd appreciate some advice on which would be
> > better for lots of VMs for customers.
> >
> > My storage server will be able to hold 16 disks. I am going to export 1
> > iSCSI LUN to each Xen node. 6 nodes will connect to one storage server,
> > so that's 6 LUNs per server, all of equal size. The server will connect
> > to a switch using quad-port bonded NICs (802.3ad), and each Xen node will
> > connect to the switch using dual-port bonded NICs.
> 
> hmmm... with one LUN per server you will lose the ability to do live
> migration -- or am I missing something?
> Some people mention problems with bonding more than two NICs for iSCSI, as
> the reordering of commands/packets adds tremendously to latency and load.
> If you want high performance and want to avoid latency issues, you might
> want to choose ATA-over-Ethernet.
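
As an aside, a minimal sketch of the quad-port 802.3ad bond described above,
in Debian ifupdown syntax (the interface names and address are illustrative
assumptions, not taken from the original setup):

    auto bond0
    iface bond0 inet static
        address 192.168.0.10
        netmask 255.255.255.0
        # aggregate four ports into one 802.3ad (LACP) link
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100
        # hash on layer 3+4 so separate iSCSI connections can
        # spread across the physical links
        bond-xmit-hash-policy layer3+4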

If I understand correctly, you could do live migration, but you would have to 
migrate them all at once.
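
Something like this would do it (a rough sketch; "node2" and the loop are
illustrative, and it assumes xend relocation is enabled on both nodes and
that the target node can reach the same LUN):

    # evacuate all running guests; NR>2 skips the header line and
    # Domain-0, assuming Domain-0 is listed first as usual
    for dom in $(xm list | awk 'NR>2 {print $1}'); do
        xm migrate --live "$dom" node2
    done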

> > I'd appreciate any thoughts or ideas on which would be best for
> > throughput/IOPS.
> 
> Your server is a Linux box exporting the RAIDs to your Xen servers? Then
> just take fio and do some benchmarking. If you're using software RAID, then
> you might want to add RAID5 to the equation.
> I'd suggest measuring the performance of your RAID system with various
> configurations and then choosing whichever level of isolation gives the
> best performance.
> I don't think a setup with 6 hot spare disks is necessary -- at least not
> when they're connected to the same server. Depending on the quality of your
> disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus some
> cold spares in your office, you should be able to survive a broken hard disk.
> You should also "smartctl -t long" your disks frequently (i.e. once per week)
> and run a more or less permanent resync of your RAID to be able to detect
> disk errors early. (The worst-case scenario is never checking your disks --
> then a disk breaks and is replaced by a hot/cold spare -- and the RAID resync
> fails other disks in your array, just because the bad blocks are already
> there...)
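
A minimal sketch of the checks and spare handling Adi suggests; device names
are illustrative, and the fio parameters are just a starting point, not tuned
values:

    # rough random-I/O benchmark of the exported array (destructive
    # against live data -- point it at a scratch LUN)
    fio --name=randrw --filename=/dev/md0 --direct=1 --ioengine=libaio \
        --rw=randrw --bs=4k --iodepth=32 --runtime=60 --group_reporting

    # weekly SMART long self-test, e.g. from cron
    smartctl -t long /dev/sda

    # trigger an md consistency check ("resync") to surface bad
    # blocks early
    echo check > /sys/block/md0/md/sync_action

    # a device added to a healthy array becomes a hot spare that md
    # pulls in automatically when a member fails
    mdadm /dev/md0 --add /dev/sdp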

I've been following Jonathan's postings for a while, and my general feeling is
that there's quite a difference between what he aims for and what reality
offers as boundaries. I wish him luck anyway; it would be cool if he could get
things working. By the way, I will post my planned setup in response to one of
his other postings; it might be useful to compare.

> Hope this helps
> 
> -- Adi
> 


B.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
