This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-API] XCP - Performance Issues with Disk-I/O for small block-sizes as well as poor memory i/o

To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-API] XCP - Performance Issues with Disk-I/O for small block-sizes as well as poor memory i/o
From: "Balg, Andreas" <a.balg@xxxxxxxx>
Date: Mon, 9 Aug 2010 13:59:42 +0200
Delivery-date: Mon, 09 Aug 2010 05:00:34 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
Hello everybody,

During some extensive benchmarking for an evaluation of virtualization 
technologies, we found issues with I/O performance and also memory 
bandwidth on XCP 0.5 running
on a fairly up-to-date, capable Dell R610 server (2 x Xeon E5620, 16 GB RAM,
4 x 15K SATA HDDs in RAID 5):

We ran the tests on bare hardware, and on CentOS 5.5 with the Xen kernel in a 
single VM as well as in 7 VMs at the same time (guests restricted to 2 VCPUs, 2 GB RAM).

- Disk I/O is very poor, especially for small block sizes (below 32k). 

To give some figures: the same benchmark,
"time iozone -az -i0 -i1 -i2 -i8 -Rb results.xls"

runs in around 3 minutes on the bare hardware, around 30 minutes in a KVM VM, and 
more than 1 hour(!) in a Xen VM. See the attached graphs and focus on the front of 
the diagram (the red and blue "foot" of the Xen graph).
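The full iozone sweep takes a while to run; a much cruder way to see the small-block penalty is to write the same amount of data at a few block sizes and compare the throughput dd reports. This is only a sketch of that idea, not the original methodology: the scratch path is hypothetical, and dd stands in for iozone's -az sweep.

```shell
#!/bin/sh
# Crude small-vs-large block-size write comparison (hypothetical
# scratch path; dd used as a rough stand-in for iozone's -az sweep).
TESTFILE=${TESTFILE:-/tmp/blktest.dat}
TOTAL=$((64 * 1024 * 1024))          # write 64 MiB at each block size

for BS in 4096 32768 1048576; do
    COUNT=$(( TOTAL / BS ))
    # Smaller blocks mean more individual requests (and, in a Xen guest,
    # more blkfront/blkback ring round-trips) for the same amount of data.
    echo "block size ${BS}:"
    dd if=/dev/zero of="$TESTFILE" bs="$BS" count="$COUNT" conv=fsync 2>&1 | tail -n 1
done
rm -f "$TESTFILE"
```

If the reported throughput at bs=4096 is a small fraction of the bs=1048576 figure inside the guest but not on bare metal, the per-request overhead of the virtual disk path is the likely culprit.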

What I'd like to know is whether this is a glitch in a device driver, an 
error in our configuration, or something that could be eliminated in some other way.
Or is it an issue of hypervisor design, or has simply nobody noticed it so far, 
in which case it should be looked at by the developers?

Without these two significant problems, Xen 
would outperform KVM in almost every respect ....

Best regards 

XiNCS GmbH              VAT no.: 695 740 
Schmidshaus 118         Reg. no.: CH-300.4.015.621-9
CH-9064 Hundwil AR

Website:        http://www.xincs.eu
Terms (AGB):    http://www.xincs.eu/agb.html
Tel.            +41 (0)31 526 50 95

Attachment: Disk-IO.png
Description: PNG image

Attachment: RAM-Bandwidth.png
Description: PNG image

Attachment: Time-graph.png
Description: PNG image
