xen-users

[Xen-users] Xen SAN Questions

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Xen SAN Questions
From: "Tait Clarridge" <Tait.Clarridge@xxxxxxxxxxxx>
Date: Tue, 27 Jan 2009 12:03:02 -0500
Delivery-date: Tue, 27 Jan 2009 09:05:08 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmAoT7EfQcOMOyUEd3GhwAcfgK4rw==
Thread-topic: Xen SAN Questions

Hello Everyone,

I recently posted a question about GFS+DRBD clusters for Xen VM storage that got no
responses, but after some consideration (and a lot of Googling) I have a couple of
new questions.

Basically, what we have here are two servers that will each have a RAID-5 array
built from 5 x 320GB SATA drives. I want that storage to be usable as a filesystem
on both servers (since both will host Xen VMs), with the two arrays replicating in
the background over a GbE link for disaster recovery.
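
For reference, the DRBD resource I have in mind would look roughly like this
(hostnames, addresses and backing devices below are placeholders, not our real
ones):

resource r0 {
    protocol C;
    net {
        allow-two-primaries;        # needed for the Primary/Primary test below
    }
    syncer {
        rate 100M;                  # cap resync traffic on the GbE link
    }
    on xen1 {                       # placeholder hostname
        device    /dev/drbd0;
        disk      /dev/md0;         # the RAID-5 array (placeholder device)
        address   10.0.0.1:7788;    # placeholder replication address
        meta-disk internal;
    }
    on xen2 {
        device    /dev/drbd0;
        disk      /dev/md0;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}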

First of all, I need to know if this is good practice because I can see a 
looming clusterf**k if both machines are running VMs from the same shared 
storage location.

Second, I ran a test on two identical servers with DRBD and GFS in a
Primary/Primary cluster setup, and the performance numbers were appalling compared
to local ext3 storage. For example:
 
5 Concurrent Sessions in iozone gave me the following:

Average Throughput for Writers per process:
EXT3:               41395.96 KB/s
DRBD+GFS (2 nodes): 10884.23 KB/s

Average Throughput for Re-Writers per process:
EXT3:               91709.05 KB/s
DRBD+GFS (2 nodes): 15347.23 KB/s

Average Throughput for Readers per process:
EXT3:              210302.31 KB/s
DRBD+GFS (2 nodes):  5383.27 KB/s  <-- a bit ridiculous

And it was more of the same elsewhere: the DRBD+GFS numbers ranged from roughly 4x
slower to, in the case of reads, far worse than that. I can only assume this would
be a garbage setup for Xen VM storage, and I was wondering if anyone could point me
to a solution that may be more promising. We are currently running out of space for
VMs on our NetApp (which does snapshots for backups), not to mention that the I/O
headroom for multiple VMs on a single NetApp directory is already dangerously low.
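
For context, the numbers above came from iozone's throughput mode with five
processes; the invocation was along these lines (file and record sizes here are
from memory and only illustrative):

iozone -t 5 -i 0 -i 1 -s 512m -r 64k \
    -F /mnt/gfs/io1 /mnt/gfs/io2 /mnt/gfs/io3 /mnt/gfs/io4 /mnt/gfs/io5

where -t 5 runs five concurrent processes, -i 0 / -i 1 select the write/re-write
and read/re-read tests, and -F gives each process its own scratch file on the
filesystem under test.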

Anyone have thoughts as to what might solve my problems?

I am thinking of a few things:

- Experiment with DRBD again with another filesystem (XFS?) and re-export it as NFS
to both machines, so they can both bring up VMs from the "pool" (rough sketch below)
- Export one of the machines' storage as iSCSI and software-RAID it against the
primary's local array (not really what I want, but it might work; rough sketch below)
- Write a custom script that backs up the VM storage directories to a third server
(we don't really have the budget for a redundant backup server) using something
like rsync (rough sketch below)
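
For the NFS idea, I am picturing something as simple as this on whichever node is
currently DRBD Primary (paths and subnet below are made up):

# on the current DRBD Primary (placeholder paths and subnet)
mkfs.xfs /dev/drbd0
mount /dev/drbd0 /srv/xen-pool
echo '/srv/xen-pool 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# both dom0s then mount the export and keep their VM images on it
mount -t nfs 10.0.0.1:/srv/xen-pool /var/lib/xen/images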
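
For the iSCSI idea, the rough shape would be something like this (tgt and
open-iscsi are used purely as an example here, and the device names are made up):

# on the secondary box: export its array as an iSCSI LUN
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2009-01.local:xenstore
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/md0
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

# on the primary box: log in and mirror the imported LUN against the local array
iscsiadm --mode discovery --type sendtargets --portal 10.0.0.2
iscsiadm --mode node --login
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/md0 /dev/sdX  # sdX = iSCSI disk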
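
And for the rsync idea, something like the following run from cron, with the caveat
that images of running VMs would need to be snapshotted or paused first to get a
consistent copy (paths and hostname are placeholders):

#!/bin/sh
# nightly copy of the VM storage directory to a third box over ssh
SRC=/var/lib/xen/images/
DEST=backup1:/srv/xen-backups/
rsync -aH --sparse --delete --numeric-ids "$SRC" "$DEST"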

And finally, what kind of redundant server-to-server storage do most people here
use?


Thanks a lot for reading my novel of a question :)

Best,

Tait Clarridge





_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
