xen-users
RE: [Xen-users] RAID10 Array
 
Hi Rob, 
  
Good tip. 
  
Can you suggest a way I could benchmark all these things? I've
never benchmarked hard drives before.
  
Thanks
  
  
From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Thu 17/06/2010 10:06
To: Jonathan Tripathy; Adi Kriegisch; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array
 
   
Hi,

I like the sound of idea 1 best. One big RAID 10 might sound nice, but are
you sure it is purely bandwidth you need? For small-file latency I think a
number of smaller arrays spread between the different VMs might be faster
(e.g. 4x RAID 10 or 4x RAID 5). Separate arrays also provide some degree of
performance isolation between the LUNs. The RAID 1 part of RAID 10 does
allow for read interleaving, but if you have random mixed reads and writes
occurring fairly evenly across the VMs then separate arrays should be more
responsive (even with read and write caching enabled on the RAID card).
  
The way to find out is to benchmark with multiple VMs simultaneously.
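For example, starting an identical fio job in several guests at once (a
sketch only -- fio must be installed in each guest, and /dev/xvdb below is
a placeholder for whatever device your VMs actually see):

    # Random 4k mixed read/write against the VM's iSCSI-backed disk.
    # Run simultaneously in several guests and compare latency/IOPS.
    fio --name=vmtest --filename=/dev/xvdb --ioengine=libaio \
        --direct=1 --rw=randrw --rwmixread=70 --bs=4k \
        --iodepth=16 --runtime=120 --time_based --group_reporting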
  
  
Rob 
  
  
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy
Sent: 17 June 2010 09:09
To: Adi Kriegisch; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array
  
  
  
From: Adi Kriegisch [mailto:kriegisch@xxxxxxxx]
Sent: Thu 17/06/2010 08:32
To: Jonathan Tripathy
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] RAID10 Array
Hi!

> I have 3 RAID ideas, and I'd appreciate some advice on which would be
> better for lots of VMs for customers.
>
> My storage server will be able to hold 16 disks. I am going to export 1
> iSCSI LUN to each Xen node. 6 nodes will connect to one storage server,
> so that's 6 LUNs per server of equal size. The server will connect to a
> switch using quad port bonded NICs (802.3ad), and each Xen node will
> connect to the switch using dual port bonded NICs.

Hmmm... with one LUN per server you will lose the ability to do live
migration -- or am I missing something? Some people mention problems with
bonding more than two NICs for iSCSI, as the reordering of the
commands/packets adds tremendously to latency and load. If you want high
performance and want to avoid latency issues, you might want to choose
ATA-over-Ethernet.
> I'd appreciate any thoughts or ideas on which would be best for
> throughput/IOPS

Your server is a Linux box exporting the RAIDs to your Xen servers? Then
just take fio and do some benchmarking. If you're using software RAID then
you might want to add RAID5 to the equation. I'd suggest measuring the
performance of your RAID system with various configurations and then
choosing which level of isolation gives the best performance.

I don't think a setup with 6 hot spare disks is necessary -- at least not
when they're connected to the same server. Depending on the quality of your
disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server plus some
cold spares in your office you should be able to survive a broken hard
disk. You should also "smartctl -t long" your disks frequently (i.e. once
per week) and do a more or less permanent resync of your RAID to be able to
detect disk errors early. (The worst case scenario is to never check your
disks -- then a disk breaks and is replaced by a hot/cold spare -- and the
RAID resync fails other disks in your array, just because the bad blocks
are already there...)
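As a sketch of what that routine checking could look like on a Linux box
with md software RAID (the device names are placeholders; a hardware RAID
card would use its own patrol-read/consistency-check feature instead):

    # Weekly long SMART self-test on every disk (e.g. from cron)
    for d in /dev/sd[a-p]; do smartctl -t long "$d"; done

    # Trigger an md scrub so bad sectors are found and rewritten early
    echo check > /sys/block/md0/md/sync_action

    # Watch scrub progress
    cat /proc/mdstat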
  Hope this helps
  -- 
Adi 
------------------------------------------------------------------------------------------------------------------- 
Hi Adi,

The RAID controller I'm planning to use is the MegaRAID SAS 9260-4i. The
storage server will be built by Broadberry, so it will be using Supermicro
kit.

As for the OS on the server, I was thinking of using Windows Storage Server
actually, though maybe this is a bad idea? You're correct about the live
migration; I may implement some sort of clustered iSCSI filesystem, but the
main issue at the minute is the RAID array.

I've heard the same things about bonding 2 vs 4 NICs as well.

Currently, I'm leaning towards the RAID10 array with 14 disks and 2 hot
spares.
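For reference, the equivalent layout expressed with Linux software RAID
would be something like the sketch below (the MegaRAID card would be
configured through its own BIOS or CLI instead, and the device names are
placeholders):

    # 14-disk RAID10 with 2 hot spares out of 16 drives
    mdadm --create /dev/md0 --level=10 --raid-devices=14 \
          --spare-devices=2 /dev/sd[a-p]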
   
 
 _______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users 