RE: [Xen-users] Xen SAN Questions

To: <lists@xxxxxxxxxxxx>, "xen-users" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Xen SAN Questions
From: "Tait Clarridge" <Tait.Clarridge@xxxxxxxxxxxx>
Date: Tue, 27 Jan 2009 13:32:37 -0500
Hi Mike,

I did see that, but I didn't want to hijack your thread by adding non-cluster 
questions to a discussion that I thought was predominantly about clustering.

Sorry about that. I can try to fold my questions into your line of 
questioning... but sometimes it helps when similar questions draw slightly 
different answers, like when trawling Google and a mix of responses from 
different pages is what solves the problem.

Cheers,
Tait

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of lists@xxxxxxxxxxxx
Sent: Tuesday, January 27, 2009 1:31 PM
To: xen-users
Subject: Re: [Xen-users] Xen SAN Questions

Guys, I did start a thread on this before this one. I've been asking about 
using NAS/SAN, locally attached storage vs. FC and Ethernet. I've also been 
asking about clustering and redundancy, and I've been given a lot of good 
information, especially from Fajar, who sounds like a guru.

See "Optimizing I/O" and "Distributed vs Cluster"; I can't recall the other 
thread now.

Mike


On Tue, 27 Jan 2009 15:58:37 -0200, Ricardo J. Barberis wrote:
> On Tuesday, 27 January 2009, Tait Clarridge wrote:
> 
>> Hello Everyone,
>
> Hi, I'm no expert but I'm on the same path as you, so let's try to help each
> other... and get help from others as we go :)
> 
>> I recently asked a question about GFS+DRBD clusters for Xen VM storage that
>> got no responses, but after some consideration (and a lot of Googling) I
>> have a couple of new questions.
>
>> Basically, what we have here are two servers that will each have a RAID-5
>> array of 5 x 320GB SATA drives. I want these to be usable file systems on
>> both servers (as both will be used for Xen VM storage), replicating in the
>> background over a GbE link for disaster recovery purposes.
>
> OK,
> 
>> First of all, I need to know if this is good practice because I can see a
>> looming clusterf**k if both machines are running VMs from the same shared
>> storage location.
>
> Well, it shouldn't happen if you're using GFS or another cluster-aware
> filesystem.
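> 
> The cluster-awareness comes from the lock manager you choose at mkfs time;
> for two nodes it's along these lines (cluster and filesystem names here are
> just placeholders):
> 
>     # GFS1: DLM locking, 2 journals (one per node)
>     gfs_mkfs -p lock_dlm -t mycluster:xenfs -j 2 /dev/drbd0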
> 
>> Second, I ran a test on two identical servers with DRBD and GFS in a
>> Primary/Primary cluster setup, and the performance numbers were appalling
>> compared to local ext3 storage. For example:
>
> Yes, cluster filesystems have lower performance than non-cluster filesystems,
> because the former have to take locks on files/directories.
> Add DRBD replication on top of that and performance drops further.
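> 
> For reference, the dual-primary part of drbd.conf looks roughly like this
> (only a sketch; hostnames, devices and addresses are made up, adjust them to
> your setup):
> 
>     resource r0 {
>         protocol C;              # synchronous replication; required for dual-primary
>         net {
>             allow-two-primaries; # both nodes active at once; needs a cluster FS on top
>         }
>         syncer { rate 33M; }     # cap resync traffic on the GbE link
>         on node1 {
>             device    /dev/drbd0;
>             disk      /dev/md0;  # your RAID-5 array
>             address   192.168.0.1:7788;
>             meta-disk internal;
>         }
>         on node2 {
>             device    /dev/drbd0;
>             disk      /dev/md0;
>             address   192.168.0.2:7788;
>             meta-disk internal;
>         }
>     }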
> 
>> 5 Concurrent Sessions in iozone gave me the following:
>
>> Average Throughput for Writers per process:
>> EXT3:               41395.96 KB/s
>> DRBD+GFS (2 nodes): 10884.23 KB/s
>
>> Average Throughput for Re-Writers per process:
>> EXT3:               91709.05 KB/s
>> DRBD+GFS (2 nodes): 15347.23 KB/s
>
>> Average Throughput for Readers per process:
>> EXT3:             210302.31 KB/s
>> DRBD+GFS (2 nodes): 5383.27 KB/s  <-------- a bit ridiculous
>
> Ridiculous indeed
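> 
> Out of curiosity, which iozone flags did you use? For a run like that I'd
> expect something along these lines (file and record sizes are guesses, not
> necessarily what you ran):
> 
>     cd /mnt/gfs                          # run on the filesystem under test
>     iozone -t 5 -i 0 -i 1 -r 64k -s 1g
>     # -t 5 : throughput mode, 5 concurrent processes
>     # -i 0 : write/re-write    -i 1 : read/re-read
>     # -r/-s: record size and per-process file size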
> 
>> And more of the same: everything ranged from 4x slower to however many
>> times slower the reads were. I can only assume this would be a garbage
>> setup for Xen VM storage, and was wondering if anyone could point me to a
>> more promising solution. We are currently running out of space on our
>> NetApp (which does snapshots for backups) for VMs, not to mention that the
>> I/O available for multiple VMs on a single NetApp directory is already
>> dangerously low.
>
>> Anyone have thoughts as to what might solve my problems?
>
> Have you tried any GFS optimizations? E.g. mount with noatime and
> nodiratime, disable GFS quotas, etc. The first two should improve read
> performance.
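> 
> A sketch of what I mean, assuming GFS1 on a RHEL/CentOS 5 era box (mount
> point and device are made up):
> 
>     # /etc/fstab: mount GFS with atime updates disabled
>     /dev/drbd0  /mnt/gfs  gfs  defaults,noatime,nodiratime  0 0
> 
>     # turn quota accounting and enforcement off at runtime
>     gfs_tool settune /mnt/gfs quota_enforce 0
>     gfs_tool settune /mnt/gfs quota_account 0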
> 
>> I am thinking a few things:
>
>> - Experiment with DRBD again with another filesystem (XFS?) and have it
>> re-exported as NFS to both machines (so they can both bring up VMs from
>> the "pool")
>
> I guess NFS could work, unless you have too many machines using it (Linux's
> NFS sucks)
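> 
> If you do try it, the export itself is simple; something like this (paths
> and hostnames invented):
> 
>     # /etc/exports on the storage node
>     /srv/xen  xenhost1(rw,sync,no_root_squash)  xenhost2(rw,sync,no_root_squash)
> 
>     # on each Xen host
>     mount -t nfs -o rw,hard,intr storage:/srv/xen /var/lib/xen/images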
> 
>> - Export one of the machines as iSCSI and software-RAID it on the primary
>> (not really what I want, but it might work)
>
> This one sounds interesting.
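> 
> Roughly, with iSCSI Enterprise Target on the exporting box and open-iscsi
> plus mdadm on the other (target name, portal IP and device names are just
> examples):
> 
>     # /etc/ietd.conf on the box exporting its array
>     Target iqn.2009-01.com.example:xenstore.lun0
>         Lun 0 Path=/dev/md0,Type=blockio
> 
>     # on the primary box: discover the target, log in, then mirror
>     iscsiadm -m discovery -t sendtargets -p 192.168.0.2
>     iscsiadm -m node -T iqn.2009-01.com.example:xenstore.lun0 -p 192.168.0.2 -l
>     # mark the iSCSI leg write-mostly so reads stay on the local array
>     mdadm --create /dev/md1 --level=1 --raid-devices=2 \
>           /dev/md0 --write-mostly /dev/sdc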
> 
>> - Write a custom script that will back up the VM storage directories to a
>> 3rd server (we don't really have the budget for a redundant backup server)
>> using something like rsync
>
>> And finally, what kind of redundant server-to-server storage do most
>> people use here?
>
> From what I've been reading on the list, most people use some form of DRBD +
> AoE or iSCSI.
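> 
> The AoE side is very little work; with vblade on the storage node it's
> basically this (interface and shelf/slot numbers are arbitrary):
> 
>     # export the replicated device as shelf 0, slot 1 via eth1
>     vbladed 0 1 eth1 /dev/drbd0
> 
>     # on the Xen hosts
>     modprobe aoe
>     aoe-discover        # the device then appears as /dev/etherd/e0.1
> 
> And if you fall back on your rsync idea in the meantime, a nightly cron
> entry would do (paths and hostname invented):
> 
>     # /etc/crontab: push VM images to the third server at 03:00
>     # (pause or snapshot the VMs first, or the copies may be inconsistent)
>     0 3 * * *  root  rsync -a --delete /var/lib/xen/images/ backup:/srv/xen-backup/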
> 
> Check the thread with subject "disk backend performance" from November 27,
> 2008. A very nice discussion about AoE vs. iSCSI started there, involving
> Thomas Halinka and Stefan de Konink (thank you both!).
> 
> Also, the thread with subject "lenny amd64 and xen" will interest you; on
> November 27 Thomas began describing his self-built SAN, which is very
> insightful.
> 
>> Thanks a lot for reading my novel of a question :)
>
>> Best,
>
>> Tait Clarridge
>
> Best regards,



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users