RE: [Xen-API] XCP - Performance Issues with Disk-I/O for small block-sizes as well as poor memory i/o
Hi Andreas,
What SR type are you using underneath the VM? If a sparse format such as VHD
is in use, or raw images on top of an EXT filesystem, you should pre-populate
the image first. If you run iozone with the -w flag, it will give you a better
estimate of the raw performance. In XCP you should also run the iozone
benchmark from within the guest against a standalone VBD (not the system
disk), and run the same benchmark from within Dom0 against the storage device
to measure the raw performance.
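As a rough sketch of the pre-population idea (the mount point, file size and
record size below are only examples, adjust them to your setup): run a write
pass first and keep the test file with -w, then run the read and random-I/O
phases against the already-allocated file:

    # write pass: creates and fills the test file; -w keeps it afterwards
    iozone -i0 -s 2g -r 64k -w -f /mnt/test/iozone.tmp

    # read and random-I/O passes against the pre-populated file
    iozone -i1 -i2 -s 2g -r 64k -w -f /mnt/test/iozone.tmp

Repeating the same pair of runs from within Dom0 against the underlying
storage gives you the baseline to compare the guest figures against.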
For best performance you should attach a direct LUN as a VBD to the guest, or
alternatively create a VDI of type=raw for LVM-based performance testing. With
this data to compare against, it should be fairly clear whether the
performance drop is caused in Xen's VBD I/O request path or in the raw device
driver in the kernel.
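If you want to try the raw-VDI route, a minimal sketch of the xe CLI steps
looks roughly like this (the UUIDs, size and device number are placeholders,
and sm-config:type=raw assumes an LVM-based SR):

    # create a raw (non-VHD) VDI on an LVM SR
    xe vdi-create sr-uuid=<lvm-sr-uuid> name-label=raw-test type=user \
        virtual-size=10GiB sm-config:type=raw

    # attach it to the guest as a secondary disk and plug it in
    xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=1 mode=RW type=Disk
    xe vbd-plug uuid=<vbd-uuid>

Running the same iozone commands against this raw VDI, and against a direct
LUN passed through as a VBD, should isolate the VHD layer from the rest of
the I/O path.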
Thanks,
Julian
> -----Original Message-----
> From: xen-api-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-api-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Balg, Andreas
> Sent: 09 August 2010 13:00
> To: xen-api@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-API] XCP - Performance Issues with Disk-I/O for small
> block-sizes as well as poor memory i/o
>
> Hello everybody,
>
> During some extensive benchmarking for an evaluation of virtualization
> technologies, we found issues with I/O performance and memory bandwidth
> of XCP 0.5 running on a fairly up-to-date, high-performance Dell R610
> server (2 x Xeon E5620, 16 GB RAM, 4 x 15K SATA HDDs in RAID 5):
>
> We ran the tests on bare hardware (CentOS 5.5 with a Xen kernel), in a
> single VM, and in 7 VMs at the same time (guests restricted to 2 VCPUs
> and 2 GB RAM):
>
> - Especially for small block sizes (below 32k), the disk I/O is very
> poor.
>
> To give some figures: the same benchmark
>
> "time iozone -az -i0 -i1 -i2 -i8 -Rb results.xls"
>
> runs around 3 minutes on bare hardware, around 30 minutes in a KVM VM,
> and more than 1 hour(!) in a Xen VM - see the attached graphs and focus
> on the front of the diagram (the red and blue "foot" of the Xen graph).
>
> What I'd like to know is whether this is a glitch in a device driver or
> an error in our configuration, and whether it can be eliminated in some
> other way. Or is it an issue of hypervisor design, or has it simply gone
> unnoticed so far and should be looked at by the developers?
>
> Without these two significant problems, Xen would outperform KVM in
> almost every respect ...
>
> Best regards
> Andreas
>
>
>
> --
> -----------------------------------------------------------------------
> ------
> XiNCS GmbH MwST-Nr: 695 740
> Schmidshaus 118 HR-Nr: CH-300.4.015.621-9
> CH-9064 Hundwil AR
>
> Webseite: http://www.xincs.eu
> AGB: http://www.xincs.eu/agb.html
> Tel. +41 (0)31 526 50 95
> -----------------------------------------------------------------------
> ------
_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api