xen-devel

Re: [Xen-devel] experience with gnbd

To: Jacob Gorm Hansen <jacobg@xxxxxxx>
Subject: Re: [Xen-devel] experience with gnbd
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 19 Oct 2004 16:11:55 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
> What kind of performance improvement did you experience? This is not 
> just due to the NetApp filer being on a separate network with a router 
> or firewall in between (if I recall correctly)?

With gnbd we were getting sequential read performance equivalent
to native disk performance, around 40MB/s (though with more CPU
burn).

With Linux 2.4 and linux-iscsi-3.6.1 talking to our NetApp filer
we were seeing around 10MB/s, as I recall. It's not a fair
comparison, as we don't know what else was loading the filer at
the time. The NetApp probably isn't optimised for iSCSI anyhow
(it's a great NFS/CIFS server).
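
[The gnbd figure above is a raw sequential read; a crude way to
reproduce that sort of test is something along these lines -- the
device path is just a placeholder, and watch out for the page
cache skewing repeated runs:

  # crude sequential-read test against an imported gnbd device
  dd if=/dev/gnbd/myexport of=/dev/null bs=1M count=1024
]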

I haven't investigated the level of CRC and similar protection
offered by gnbd vs iSCSI, but I doubt gnbd is as sophisticated.
It seems to work pretty well, though, and is easy to set up.

[Just to follow up on my previous message: when building gnbd,
various binaries failed to build because the magma
headers/libraries weren't installed. I just did a 'make -i' to
ignore the errors and ended up with a working system, provided
you use the '-c' option to gnbd_export. The magma stuff is to do
with cluster monitoring.]
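
[For reference, the basic gnbd setup is only a handful of
commands -- roughly the following, going from memory, so treat
the exact flags, hostname and device names as placeholders:

  # on the machine exporting the disk
  gnbd_serv                              # start the gnbd server daemon
  gnbd_export -d /dev/sdb1 -e vmroot -c  # -c = cached; avoids the cluster manager

  # on the importing client
  modprobe gnbd
  gnbd_import -i gnbdserver              # devices appear under /dev/gnbd/
]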
 
> > I haven't tried it, but the csnap writeable snapshot driver looks
> > worth investigation too -- its design is rather more reassuring
> > than lvm2 snap.
> 
> Perhaps it is better to have the writable/client-specific parts of your 
> root filesystem (/tmp, /var/tmp, perhaps /etc) mounted via NFS (or 
> something else, or just as symlinks to a separate device) on top of a 
> read-only generalized rootfs (like the debian diskless packages used to 
> do), rather than trying to handle this at the block-level. It seems to 
> me all sorts of bad stuff can happen with a writable block-level 
> overlay, for instance if you try to upgrade the filesystem underneath.
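
(For concreteness, I read that as something like the following in
each guest's fstab -- server name, device and paths purely
illustrative:

  # shared, read-only root (e.g. the common image imported over gnbd)
  /dev/gnbd/sarge-root           /         ext3  ro         0 0
  # per-guest writable pieces layered on top via NFS
  nfsserver:/exports/vm01/tmp    /tmp      nfs   rw,nolock  0 0
  nfsserver:/exports/vm01/vartmp /var/tmp  nfs   rw,nolock  0 0
)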

If only there were a decent file-system-level
CoW/overlay/union/stackable file system for Linux...

There are a whole bunch of implementations, but none of them seem
particularly well supported. I don't know of any that exist for
2.6. Does anyone on the list?

We have one that works as a user-space NFS server, but "lightning
fast" is not how I'd describe it...

Ian

