WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

RE: [Xen-users] XEN - networking and performance

To: Simon Hobson <simon@xxxxxxxxxxxxxxxx>, "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] XEN - networking and performance
From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
Date: Fri, 7 Oct 2011 18:12:43 +0000
Accept-language: en-US
Cc:
Delivery-date: Fri, 07 Oct 2011 11:14:36 -0700
In-reply-to: <p0624081bcab3c4d141f1@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <CAHsh07tm73Mb5VA6ye3uiwOVnhswbm-YUda=Ac139iOueavQMg@xxxxxxxxxxxxxx> <6A7C506116BA4B5A99FA3766341F1ED1@maindesk> <B1B9801C5CBC954680D0374CC4EEABA50BE0C5A9@xxxxxxxxxxxxxxxxxxxxxx> <p0624081bcab3c4d141f1@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AQHMhFAN5rOzJb1ggECVvcySiYiOp5Vv8jcA///SucCAAEirAIABIJaQ
Thread-topic: [Xen-users] XEN - networking and performance
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Simon Hobson
> Sent: Thursday, October 06, 2011 4:51 PM
>
> Jeff Sturm wrote:
> 
> >One of the traps we've run into when virtualizing moderately I/O-heavy
> >hosts, is not sizing our disk arrays right.  Not in terms of capacity
> >(terabytes) but in spindles.  If each physical host normally has 4
> >dedicated disks, for example, virtualizing 8 of these onto a domU
> >attached to a disk array with 16 drives effectively cuts that ratio
> >from 4:1 down to 2:1.  Latency goes up, throughput goes down.
> 
> Not only that, but you also guarantee that the I/O is across different
> areas of the disk (different partitions/logical volumes) and so you also
> virtually guarantee a lot more seek activity.

Very true, yes.  In such an environment, sequential disk performance means very
little.  You need good random I/O throughput, and that's hard to get from
mechanical disks beyond a few thousand IOPS.  15k RPM disks help, and a larger
chassis with more disks helps, but that's just throwing $$$ at the problem and
doesn't really break through the IOPS barrier.
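For what it's worth, the spindle arithmetic from earlier in the thread can be
sketched as a quick back-of-envelope check.  The ~180 random IOPS per 15k RPM
disk figure below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope sketch: per-host random I/O capacity before and after
# consolidating hosts onto a shared disk array.  All figures are rough
# assumptions for illustration.

def aggregate_iops(spindles, iops_per_disk):
    """Rough upper bound on random IOPS for an array of identical disks."""
    return spindles * iops_per_disk

IOPS_15K = 180  # assumed random IOPS for one 15k RPM disk

# 8 physical hosts, each with 4 dedicated disks:
dedicated_per_host = aggregate_iops(4, IOPS_15K)

# The same 8 hosts consolidated onto one 16-drive array:
shared_per_host = aggregate_iops(16, IOPS_15K) // 8

print(dedicated_per_host)  # 720 IOPS per host with dedicated spindles
print(shared_per_host)     # 360 IOPS per host shared: the 4:1 -> 2:1 cut
```

Halving the spindle-to-host ratio halves the per-host random I/O budget, which
is exactly the latency/throughput hit described above.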

Has anyone tried SSDs with good results?  I'm sure capacity requirements can
make them cost-prohibitive for many.

Jeff



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users