WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-users] Xen SAN Questions

To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Subject: Re: [Xen-users] Xen SAN Questions
From: Matthew Sacks <ntwrkd@xxxxxxxxx>
Date: Wed, 28 Jan 2009 20:34:06 -0800
Cc: Tait Clarridge <Tait.Clarridge@xxxxxxxxxxxx>, xen-users list <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 28 Jan 2009 20:35:01 -0800
In-reply-to: <7207d96f0901280701n210441ajb0522b57e3e178a4@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <EBEEB3C8FBD6B0448AEDF986B303BEB608B0CEA0@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090127130801.31661de2mepyw329@xxxxxxxxxxxxxxxxxxxxxx> <EBEEB3C8FBD6B0448AEDF986B303BEB60ED88282@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <alpine.LFD.2.00.0901271331050.5394@xxxxxxxxxxxxxxxxx> <EBEEB3C8FBD6B0448AEDF986B303BEB608B0CEA2@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <alpine.LFD.2.00.0901271401530.5992@xxxxxxxxxxxxxxxxx> <EBEEB3C8FBD6B0448AEDF986B303BEB60ED88291@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7207d96f0901280701n210441ajb0522b57e3e178a4@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Just to clarify, which GFS are you talking about?
Two exist.

On Wed, Jan 28, 2009 at 7:01 AM, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
> On Wed, Jan 28, 2009 at 9:09 PM, Tait Clarridge
> <Tait.Clarridge@xxxxxxxxxxxx> wrote:
>> Well I figured out one thing, that my numbers were totally off for EXT3 and 
>> XFS without DRBD haha.
>>
>> I will do some more testing, I can't believe I ruled out "dd" as a viable 
>> benchmark.
>>
>> *smacks forehead*
>>
>> Thanks for the help, I am testing with those DRBD config options now.
>
> I'd be interested to hear about your results.
>
> To tell the truth, I was tempted to try a similar setup. I decided
> against it, though, because:
> - using local disks provides higher I/O throughput, while
> network-attached disks are mostly limited by the speed of the network
> interconnect. For example, a 1 Gbps link has a theoretical maximum
> throughput of 125 MBps, while local disks can easily give 235 MBps
> (tested with dd)
> - an active-active DRBD setup can produce split-brain
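[The 125 MBps figure is just the link's line rate divided by eight; a quick sanity check, ignoring Ethernet/TCP/iSCSI framing overhead, which pushes real-world numbers lower still:]

```shell
# Theoretical ceiling of a 1 Gbps link in MBps (decimal megabytes):
# 1,000,000,000 bits/s divided by 8 bits/byte. Protocol overhead is
# ignored, so actual DRBD/iSCSI throughput will come in below this.
MBPS=$(( 1 * 1000 * 1000 * 1000 / 8 / 1000000 ))
echo "${MBPS} MBps"   # 125 MBps
```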
>
> So in the end I settled for scheduled ZFS-based backups. That is:
> - when using an OpenSolaris dom0, I can use zvol-backed storage and
> do the backups from the dom0
> - when using a Linux dom0, I use zfs-fuse in the domU and perform
> the backups there.
>
> Again, I'd be interested to hear about your results. If you can get
> something like 200 MBps then I'd probably try to implement a similar
> setup.
>
> Hint: you probably want to stay away from GFS as the domU's backend
> storage. Just use LVM-backed storage (with cLVM, of course) for MUCH
> faster performance. A simple way to measure that performance is to
> run dd against the block device.
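[A minimal dd write test along those lines might look like the sketch below. The scratch path is purely illustrative; against a SAN you would point `of=` at the actual block device, which destroys its contents. `conv=fdatasync` makes dd flush to disk before reporting, so the figure reflects the device rather than the page cache:]

```shell
# Sequential-write benchmark sketch: 64 MiB of zeros, synced to disk.
# /tmp/ddtest is a scratch file for illustration; substitute the real
# device (e.g. of=/dev/vg0/lv0, a hypothetical LV) to measure SAN/LVM
# throughput -- careful, writing to a device overwrites its data.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
stat -c %s /tmp/ddtest   # 67108864 bytes (64 MiB) were written
rm -f /tmp/ddtest
```

[dd's final status line reports bytes copied, elapsed time, and throughput; for a read test, read from the device instead (`if=...` with `of=/dev/null` and `iflag=direct` to bypass the cache).]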
>
> Regards,
>
> Fajar
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
>
