Re: [Xen-users] Network Storage

To: lists@xxxxxxxxxxxx
Subject: Re: [Xen-users] Network Storage
From: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Date: Wed, 21 Jan 2009 15:39:35 +0700
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 21 Jan 2009 00:40:24 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <2009120105319.225506@leena>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <2009120105319.225506@leena>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, Jan 20, 2009 at 11:53 PM, lists@xxxxxxxxxxxx <lists@xxxxxxxxxxxx> wrote:
> Another area of problems is that of using network storage.
>
> In my case, I'm using legacy 1Gb and 2Gb Fibre Channel storage units 
> connected to FC filers that hand off NFS/CIFS. It's a nice, simple setup, 
> but in trying various combinations, things are still very slow and sluggish.
>
> I've tried various things such as having an NFS or direct FC share onto each 
> VM server, then installing the guest onto the network storage. Works fine but 
> when you start adding servers, things get a bit difficult.

The way I see it, when it comes to storage allocation, it's probably
best (in terms of the balance between performance and manageability)
to treat Xen domUs like any other physical server: allocate a separate
LUN for each domU. Besides giving higher I/O performance, this method
has the added benefit of making it easy to convert between domUs and
real servers.
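
For example, a minimal sketch of a domU config backed by a dedicated
LUN (the name and device path here are made up; assume the LUN shows
up in dom0 under /dev/disk/by-id/):

    # /etc/xen/vm01.cfg -- hypothetical domU using one whole LUN
    name       = "vm01"
    memory     = 1024
    bootloader = "/usr/bin/pygrub"
    vif        = [ 'bridge=xenbr0' ]
    # hand the entire LUN to the guest as its root disk
    disk       = [ 'phy:/dev/disk/by-id/scsi-3600508b4000156d7,xvda,w' ]

Since the guest sees a whole disk rather than a file-backed image, the
same LUN can later be attached to physical hardware with no conversion
step.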

> The servers run just fine, nice and fast, but the gotcha so far seems 
> to be copying files across that storage for any of the servers. So, for 
> example, copying large, GB-sized backup files slows everything down 
> way too much.
>

Yeah, I know.
For a long time we centralized all storage on a 1 & 2 Gbps FC SAN.
This worked fine for the most part, until we put I/O-hungry
applications on it: an Oracle database was competing for I/O with web
servers, making the performance of both suffer greatly. In the end we
found that local storage actually provides MUCH higher I/O throughput,
since it has "dedicated" disks with plenty of I/O bandwidth :p

So, bottom line, my suggestions are:
- treat a domU's storage like a real server's storage
- the usual I/O optimizations apply: more disks for more throughput,
plenty of available bandwidth, dedicated storage where possible, etc.
(a quick dd comparison, sketched below, helps show where the
bottleneck is)
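
For instance, a rough way to compare raw sequential throughput of a
local disk vs. a SAN-backed device (the paths here are hypothetical;
oflag=direct/iflag=direct need a reasonably recent GNU dd, and bypass
the page cache so you measure the storage, not RAM):

    # write 1 GB with direct I/O to each candidate
    dd if=/dev/zero of=/mnt/local/test.img bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/mnt/san/test.img bs=1M count=1024 oflag=direct

    # read back, dropping dom0's caches first (Linux)
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/local/test.img of=/dev/null bs=1M iflag=direct
    dd if=/mnt/san/test.img of=/dev/null bs=1M iflag=direct

It's crude, but it's usually enough to show whether the SAN link or
the disks themselves are the limit.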

As a side note, if you're already familiar with FC filers, you might
want to try Sun's new Unified Storage, or even a simple
OpenSolaris-based NAS. You can then use iSCSI-exported ZFS volumes,
which give you features like the following (rough commands are
sketched after the list):
- snapshots and clones (can save significant space, and make things
like backups a lot easier)
- compression (also a space-saver, and can even increase I/O
throughput under certain conditions)
- checksums and raidz to ensure data integrity
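
Something like this on an OpenSolaris box of that era, as a sketch
only (the pool and volume names are made up, and shareiscsi was the
OpenSolaris-native way to export a zvol over iSCSI at the time):

    # single-parity raidz pool across three disks (hypothetical devices)
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

    # a 20 GB zvol with compression, exported as an iSCSI target
    zfs create -V 20g tank/vm01
    zfs set compression=on tank/vm01
    zfs set shareiscsi=on tank/vm01

    # snapshot, then clone for a cheap writable copy (backups, test domUs)
    zfs snapshot tank/vm01@before-upgrade
    zfs clone tank/vm01@before-upgrade tank/vm01-test

On the dom0 side you'd then log in to the target with the distro's
iSCSI initiator (e.g. open-iscsi) and hand the resulting block device
to the domU, as in the config sketch earlier.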

Regards,

Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
