xen-users

RE: [Xen-users] iscsi vs nfs for xen VMs

To: "Javier Guerra Giraldez" <javier@xxxxxxxxxxx>, "Christian Zoffoli" <czoffoli@xxxxxxxxxxx>
Subject: RE: [Xen-users] iscsi vs nfs for xen VMs
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Sat, 29 Jan 2011 08:31:36 +1100
Cc: yue <ooolinux@xxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
> 
> 2011/1/28 Christian Zoffoli <czoffoli@xxxxxxxxxxx>:
> > On 28/01/2011 08:08, yue wrote:
> >> what is the performance of clvm+ocfs2?
> >> and how is its stability?
> >
> > it's very reliable but not as fast as using clvm directly.
> 
> to expand a little:
> 
> ocfs2:
> it's a cluster filesystem, so it has both the overhead of being a
> filesystem (as opposed to 'naked' block devices) and the overhead of
> the clustering requirements: in effect, having to check shared locks
> at critical instants.

Microsoft achieve high performance with their cluster filesystem. In fact, the 
docs clearly state that it's only reliable for Hyper-V virtual disks and that 
any other use could cause problems, so I assume they get around the metadata 
locking problem by isolating each disk file so that there are no (or minimal) 
shared resources.
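
For comparison, the ocfs2 route Javier describes looks roughly like the 
following from dom0. This is only a rough python sketch of the commands 
involved: the device path, label and mount point are made up, mkfs is run 
once from a single node only, and it assumes the o2cb cluster stack is 
already configured and running on every node.

import subprocess

def run(cmd):
    # Echo the command, run it, and raise if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Format the shared LUN as ocfs2 with slots for 4 nodes (done once, from
# one node only), then mount it locally on each node that will run guests.
run(["mkfs.ocfs2", "-N", "4", "-L", "xenimages", "/dev/sdb1"])
run(["mount", "-t", "ocfs2", "/dev/sdb1", "/var/lib/xen/images"])

# VM disks then live as ordinary files on the shared filesystem
# (e.g. file:/var/lib/xen/images/vm1.img in the domU config), and every
# node that mounts it pays the cluster-locking overhead described above.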

> 
> clvm:
> it's the clustering version of LVM.  since the whole LVM metadata is
> quite small, it's shared entirely, so all accesses are exactly the
> same on CLVM as on LVM.
> 
> the only impact is when modifying the LVM metadata
> (creating/modifying/deleting/migrating/etc volumes), since _all_
> access is suspended until every node has a local copy of the new
> LVM metadata.
> 
> Of course, a pause of a few tens or hundreds of milliseconds for an
> operation done less than once a day (less than once a month in many
> cases) is totally imperceptible.
> 
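
To put that in concrete terms, the only time clvm has to do cluster-wide 
work is for something like the following (a rough python sketch of the 
commands involved; the volume group name and sizes are invented, and it 
assumes clvmd is running on every node):

import subprocess

def run(cmd):
    # Echo the command, run it, and raise if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Mark the volume group as clustered (normally done once, when it is created).
run(["vgchange", "-c", "y", "vg_xen"])

# This is the rare, cluster-wide operation: carving out a new disk for a
# guest. While it runs, LVM metadata access is suspended until every node
# has the new metadata copy, which is the pause Javier mentions.
run(["lvcreate", "-L", "10G", "-n", "vm1-disk", "vg_xen"])

# After that the guest uses phy:/dev/vg_xen/vm1-disk directly, and its
# day-to-day reads and writes involve no cluster locking at all.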

The dealbreaker for me with clvm was that snapshots aren't supported. I assume 
this hasn't changed, and even if it has, every write to a snapshotted volume 
potentially involves a metadata lock, so the performance drops right down unless 
you can optimise for the 'original + snapshot only accessed on the same node' 
case, which may be a limitation I could tolerate.
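
For the record, the snapshot I mean is plain LVM syntax like the following 
(names invented; clvm refuses the -s step on a clustered volume group, at 
least last time I looked):

import subprocess

# Create a 2G copy-on-write snapshot of a guest's disk (plain LVM only).
subprocess.run(
    ["lvcreate", "-s", "-L", "2G", "-n", "vm1-snap", "/dev/vg_xen/vm1-disk"],
    check=True,
)

# From here on, a write to vm1-disk can trigger a copy-on-write allocation
# into vm1-snap, i.e. a metadata update on the write path, which is exactly
# the sort of operation that would need a cluster-wide lock under clvm.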

James