[Xen-users] RE: Best way to store domU's. NFS? NBD?

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] RE: Best way to store domU's. NFS? NBD?
From: Wiebe Cazemier <halfgaar@xxxxxxx>
Date: Thu, 25 Mar 2010 10:30:46 +0100
On Wednesday 24 March 2010 19:34, Jeff Sturm wrote:
> Decide whether you want files or block devices as the backing store of
> your domU's.  There are pros and cons to each.  In my installations I
> work solely with block devices, so I won't discuss NFS further.

Is storing disk images on NFS even an option? I can imagine that this causes
problems with syncs not actually being committed when the driver reports that
they are, and similar issues. I have found that when you do use NFS, the
tap:aio: back-end should be used, as opposed to the file: back-end.
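
For the record, the difference in the domU config file looks like this (the
image path and device name are just examples):

    # file: goes through the dom0 loopback driver, which buffers writes
    # in the dom0 page cache -- an acknowledged write may not have
    # reached the NFS server yet
    #disk = [ 'file:/mnt/nfs/domains/domu1.img,xvda,w' ]

    # tap:aio: uses the blktap driver with O_DIRECT-style async I/O,
    # bypassing the dom0 page cache -- the safer choice on NFS
    disk = [ 'tap:aio:/mnt/nfs/domains/domu1.img,xvda,w' ]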

> 
> DRBD does a great job of providing shared, reliable block devices for
> two-node Linux clusters.  It requires a good network connection but no
> specialized hardware.
> 
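For reference, a minimal DRBD resource definition for such a two-node setup
looks roughly like this (hostnames, backing disks and addresses are just
examples):

    resource r0 {
        protocol C;                      # synchronous replication
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;         # local backing partition
            address   192.168.1.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.1.2:7788;
            meta-disk internal;
        }
    }

The resulting /dev/drbd0 can then be handed to a domU like any other block
device, e.g. with phy:.
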
> iSCSI is nearly ubiquitous among commercial SAN products these days.
> Its main advantage is interoperability--there are many SAN vendors, many
> client implementations (Linux and otherwise), and it works over any network
> that supports TCP/IP.  You can also use a Fibre Channel SAN or even
> FCoE, AoE, or any of the lesser-known protocols.  It may make more sense
> to choose a good storage vendor, using their recommended and supported
> protocol, than the other way around.
> 
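On the Linux side, attaching an iSCSI LUN with open-iscsi boils down to
something like this (portal address and IQN are examples):

    # discover the targets a portal offers
    iscsiadm -m discovery -t sendtargets -p 192.168.1.100

    # log in to one of the discovered targets
    iscsiadm -m node -T iqn.2010-03.com.example:storage.lun1 \
        -p 192.168.1.100 --login

    # the LUN now appears as a local block device (e.g. /dev/sdc, or
    # more stably under /dev/disk/by-path/), usable in a domU config:
    #   disk = [ 'phy:/dev/sdc,xvda,w' ]
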
> GNBD is available for those who don't wish to invest in a commercial SAN
> (as is DRBD), but I don't believe GNBD is receiving further development.
> 
> No matter what you choose for shared block storage, you need some kind
> of logical volume management so you can easily carve up your large
> physical RAID arrays into manageable pieces to store individual disk
> images or filesystems.  Many SAN products include management tools that
> make this easy.  Some offer handy features such as thin provisioning or
> volume snapshots.  For Linux installations, Red Hat's Clustered LVM
> (CLVM) can also provide volume management independent of whatever
> network storage you choose, and is simple to deploy on RHEL or CentOS
> clusters.
> 
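With CLVM the carving-up is done with the ordinary LVM commands, run against
a clustered volume group (names and sizes are examples, and clvmd must be
running on all nodes):

    # mark the volume group as clustered when creating it
    vgcreate -c y vg_san /dev/sdc

    # carve out a 10 GB volume for one guest
    lvcreate -L 10G -n domu1-disk vg_san

    # and reference it from the domU config:
    #   disk = [ 'phy:/dev/vg_san/domu1-disk,xvda,w' ]
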
> What is "best" may well depend on your exact requirements.  Do you need
> simple failover (2 machines) or might you need to grow to 3 or more
> hosts?  How much storage overall do you need today, and are you prepared
> to grow this on demand?  Are you running a homogeneous Linux environment
> or do you need to mix in Windows or other systems?  What will you use to
> back up data?  (And so forth.)

Let me put it this way: what are common and reliable storage solutions using
only Linux hosts? My problem is that there is documentation for a whole bunch
of methods, but I still don't really know which of those methods are reliable
and commonly used, and which are just legacies of the past, and so forth.

And if you say iSCSI is ubiquitous, why do I get zero hits when I search for
iscsi on wiki.xensource.org?

> 
> -Jeff

- Wiebe

