xen-users

Re: [Xen-users] Xen and I/O Intensive Loads

To: "John Madden" <jmadden@xxxxxxxxxxx>
Subject: Re: [Xen-users] Xen and I/O Intensive Loads
From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
Date: Wed, 26 Aug 2009 11:41:55 -0600
Cc: XEN Mailing List <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 26 Aug 2009 10:42:50 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1251307964.8897.1374.camel@quagmire>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4A9507E90200009900017BED@xxxxxxxxxxxxxxxxxxxxx> <1251307964.8897.1374.camel@quagmire>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

John,

What filesystem did you use for this test in the domU for the e-mail storage?  I'm currently running XFS on the volume where the GroupWise data sits, and I'm wondering whether the filesystem is tuned properly.  Could you give me a run-down of the filesystem you used and the parameters you used to create it (block size, inode size, etc.)?
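
For reference, this is the sort of creation/mount tuning I have in mind -- the values here are purely illustrative, not what I'm actually running:

    # hypothetical example: 4K blocks, 512-byte inodes, larger log
    mkfs.xfs -b size=4096 -i size=512 -l size=64m /dev/xvdb
    # hypothetical mount options for a mail spool
    mount -o noatime,logbufs=8 /dev/xvdb /mnt/groupwise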


Thanks!


-Nick

>>> On 2009/08/26 at 11:32, John Madden <jmadden@xxxxxxxxxxx> wrote:

> I'm attempting to run an e-mail server on Xen.  The e-mail system is
> Novell GroupWise, and it serves about 250 users.  The disk volume for
> the e-mail is on my SAN, and I've attached the FC LUN to my Xen host,
> then used the "phy:/dev..." method to forward the disk through to the
> domU.  I'm running into an issue with high I/O wait on the box (~250%)
> and large load averages (20-40 for the 1/5/15 minute average).  I was
> wondering if anyone has ideas on tuning the domU to handle this - is
> there a better way to forward the disk device through, should I try
> using an iSCSI software initiator in the domU, or is it just a bad
> idea to put an I/O load like this in a domU?  Unfortunately, mapping
> the entire FC card through to the domU isn't really an option - the
> FC card accesses other SAN volumes for the Xen host, so it needs to be
> present in dom0.
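
(For context, the "phy:" forwarding described above amounts to a domU disk line roughly like the following; the device path and target name are illustrative:

    disk = [ 'phy:/dev/mapper/groupwise-lun,xvdb,w' ]

That hands the raw block device to the domU through the paravirtual block drivers rather than an emulated controller.)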

If this turns out to be a global issue, I'd certainly like to hear about
it.  I recently load-tested a postfix+cyrus domU with 6 SATA-backed
spools and 6 FC-backed meta partitions for about 300,000 IMAP accounts
and consistently delivered around 100 messages/sec to them.  That load
was obviously all I/O-bound, but at what I'd consider an acceptable
delivery rate (delivery seems to be the most performance-challenging
operation, at least with Cyrus).  I did see similar load averages,
though.
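
If you want to compare numbers, the easiest apples-to-apples check is probably to watch the standard tools during peak delivery:

    iostat -x 5    # extended per-device stats (await, %util) every 5s
    vmstat 5       # the 'wa' column is the I/O wait percentage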

This was with a RHEL 5 domU, a CentOS 5 dom0, and phy: mappings.

John



--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@xxxxxxxxxxx


