WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-users

RE: [Xen-users] Best Practices for PV Disk IO?

To: "Christopher Chen" <muffaleta@xxxxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Best Practices for PV Disk IO?
From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
Date: Mon, 20 Jul 2009 22:25:18 -0400
Cc:
Delivery-date: Mon, 20 Jul 2009 19:27:46 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <7bc80d500907201726y53ded167sf565da72c36908b1@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <7bc80d500907201726y53ded167sf565da72c36908b1@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcoJmdQiwqX+ewocSFWl5qIewDfSGAAB5o7A
Thread-topic: [Xen-users] Best Practices for PV Disk IO?
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Christopher Chen
> Sent: Monday, July 20, 2009 8:26 PM
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Best Practices for PV Disk IO?
> 
> I was wondering if anyone's compiled a list of places to look to
> reduce Disk IO Latency for Xen PV DomUs. I've gotten reasonably
> acceptable performance from my setup (Dom0 as an iSCSI initiator,
> providing phy volumes to DomUs), at about 45MB/sec writes, and
> 80MB/sec reads (this is to an IET target running in blockio mode).

For domU hosts, xenblk over phy: is the best I've found.  I can get
166MB/s read performance from domU with O_DIRECT and 1024k blocks.

Smaller block sizes yield progressively lower throughput, presumably due
to read latency:

256k:  131MB/s
 64k:   71MB/s
 16k:   33MB/s
  4k:   10MB/s
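Figures like these can be collected with a simple dd sweep. A sketch (device name is illustrative; DEV defaults to /dev/zero so the loop runs anywhere, but on a real domU you would point it at the guest disk and add iflag=direct so the page cache is bypassed):

```shell
# Sequential-read throughput at several block sizes.
# In a real domU: DEV=/dev/xvda FLAGS=iflag=direct
DEV=${DEV:-/dev/zero}
FLAGS=${FLAGS:-}
for bs in 4k 16k 64k 256k 1M; do
  printf '%6s: ' "$bs"
  # dd prints bytes copied, elapsed time and throughput on its last line
  dd if="$DEV" of=/dev/null bs="$bs" count=256 $FLAGS 2>&1 | tail -n 1
done
```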

Running the same tests on dom0 against the same block device yields only
slightly faster throughput.
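For reference, the phy: backend mentioned above is selected in the domU config file. A sketch (the volume path and device name are illustrative):

```python
# One line from a domU config: export an LVM volume via the phy: backend
# as a writable xvda in the guest.
disk = [ 'phy:/dev/vg0/guest1,xvda,w' ]
```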

If there's any additional magic to boost disk I/O under Xen, I'd like to
hear it too.  I also pin my dom0 to an unused CPU so it is always
available.  My shared block storage runs the AoE protocol over a pair of
1GbE links.
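The pinning described above can be done with the xm toolstack. A sketch, assuming a 4-core box (CPU numbers are illustrative):

```
# Pin dom0's vcpu 0 to physical CPU 0:
xm vcpu-pin Domain-0 0 0

# And in each domU config, keep guests off CPU 0:
cpus = "1-3"
```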

The good news is that there doesn't seem to be much I/O penalty imposed
by the hypervisor, so the domU hosts typically enjoy better disk I/O
than an inexpensive server with a pair of SATA disks, at far less cost
than the interconnects needed to couple a high-performance SAN to many
individual hosts.  Overall, the performance seems like a win for Xen
virtualization.

Jeff



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
