xen-users

Re: [Xen-users] IO intensive guests - how to design for best performance

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] IO intensive guests - how to design for best performance
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Thu, 24 Jun 2010 14:24:41 +0200
Delivery-date: Thu, 24 Jun 2010 10:18:00 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTik-EU-uSxPLWIAXLsYuJs7v_A9En2xl4BGBWpb5@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTik-EU-uSxPLWIAXLsYuJs7v_A9En2xl4BGBWpb5@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/2.6.31.12-0.2-desktop; KDE/4.3.5; x86_64; ; )
On Thursday 24 June 2010 14:03:06 Kevin Maguire wrote:
> Hi
> 
> I am trying to engineer an HA Xen solution for a specific application
> workload.
> 
> I will use:
> 
> *) 2 multicore systems (maybe 32 or 48 cores) with lots of RAM (256 GB)
> *) dom0 OS will be RHEL 5.5
> *) I would prefer to use Xen as bundled by the distribution, but if
> required features are only found in later releases then those can be
> considered
> *) the servers are connected to the SAN
> *) I have about 10 TB of shared storage, and will run around 20-25
> RHEL paravirt guests
> *) The HA I will manage with heartbeat, probably using clvmd for the
> shared storage
> 
> 
> My concern is to get the most out of the system in terms of I/O.  The
> guests will have a range of vCPUs assigned, say from 1 to 8, and their
> workload varies over time. When they are doing some work it is both
> I/O and CPU intensive. It is only in unlikely use cases that all or
> most guests are very busy at the same time.
> 
> The current solution to this workload is a cluster of nodes with
> either GFS (using shared SAN storage) or local disks; both
> approaches have some merit. However, I am not tied to that
> architecture at all.
> 
> There seem to be a lot (too many!) of options here:
> 
> *) create a large LUN / LVM volume on my SAN, pass it to the
> guests, and use GFS/GFS2
> *) same thing, except use OCFS2
> *) split my SAN storage into many LUNs / LVM volumes, and export 1
> chunk per VM via phy: or tap:... interfaces
> *) more complex PCI-passthru configurations giving guests direct (?)
> access to storage
> *) create a big ext3/xfs/... filesystem on dom0 and export it using NFS
> to the guests (a kind of loopback?)
> *) others ...
> 
> I am really asking for the advice and experiences of list members faced
> with similar problems, and what they found best.
> 
> Thanks
> KM
> 

Hi Kevin,

I opted for iSCSI, mainly because I need support from my distro supplier 
(Novell) and because it is pretty mainstream. It is probably not the 
highest-performing option, though.
What happens on top of iSCSI is, to my understanding, a matter of whether 
you want to use image files (for live migration you would then need a 
cluster filesystem) or block devices. I create an LVM LV on one big DRBD 
partition, and each LV is a LUN on the target; each DomU gets a separate 
LUN. I log in to these LUNs from all the Dom0's and use them as plain 
block devices, roughly like the sketch below.
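Something like this, roughly -- all names are made up, and this assumes 
the iSCSI Enterprise Target (ietd) on the storage server and open-iscsi 
on the Dom0's, which is what SLES 10 ships:

  # On the storage server: one LV per DomU on the DRBD-backed VG,
  # each exported as its own target/LUN in /etc/ietd.conf
  lvcreate -L 50G -n domu1 vg_drbd

  # /etc/ietd.conf
  Target iqn.2010-06.local.san:domu1
          Lun 0 Path=/dev/vg_drbd/domu1,Type=blockio

  # On each Dom0: discover the target and log in with open-iscsi
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2010-06.local.san:domu1 -p 192.168.1.10 --login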
I don't put any LVM on top of them from the Dom0's perspective, since that 
would involve cLVM, which I don't have on SLES 10. And since there is no 
snapshotting with cLVM anyway, I don't see what LVM would add there.
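The DomU then just gets the whole LUN as a phy: device. Again only a 
sketch (guest and device names made up); the /dev/disk/by-path name is 
handy because it is the same on every Dom0:

  # /etc/xen/domu1 -- the iSCSI LUN passed through as the guest's disk
  disk = [ 'phy:/dev/disk/by-path/ip-192.168.1.10:3260-iscsi-iqn.2010-06.local.san:domu1-lun-0,xvda,w' ]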

Hope this helps somewhat.




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
