

[Xen-bugs] [Bug 992] Disk I/O performance of IA32-pae HVM guest is very slow

To: xen-bugs@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-bugs] [Bug 992] Disk I/O performance of IA32-pae HVM guest is very slow
From: bugzilla-daemon@xxxxxxxxxxxxxxxxxxx
Date: Tue, 3 Nov 2009 12:56:15 -0800
Delivery-date: Tue, 03 Nov 2009 12:56:19 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <bug-992-3@xxxxxxxxxxxxxxxxxxxxxxxxxxx/bugzilla/>
List-help: <mailto:xen-bugs-request@lists.xensource.com?subject=help>
List-id: Xen Bugzilla <xen-bugs.lists.xensource.com>
List-post: <mailto:xen-bugs@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=unsubscribe>
Reply-to: bugs@xxxxxxxxxxxxxxxxxx
Sender: xen-bugs-bounces@xxxxxxxxxxxxxxxxxxx

MaxZinal@xxxxxxxxx changed:

           What    |Removed                     |Added
                 CC|                            |MaxZinal@xxxxxxxxx

------- Comment #2 from MaxZinal@xxxxxxxxx  2009-11-03 12:56 -------
I can confirm that HVM guest I/O performance is very low compared to PV guest
performance under exactly the same load. We had to migrate almost all our
guests to PV mode (Linux) or install PVGPL drivers (Windows).

Our hardware configuration:
  CPUs:         2 x Quad-core Intel Xeon X5460 @3.16 GHz
  Memory:       32 GBytes
  Int.storage:  low-end SAS RAID5 (6 disks, 148 GBytes each)
  Ext.storage:  low-end SATA RAID attached through Gigabit Ethernet iSCSI
                    (8 disks, 1 TByte each, RAID10)

  Dom0:     Debian 5.0.3 64-bit (Lenny)
  Xen:      Pre-packaged xen-hypervisor-3.2-1-amd64
  DomU #1:  SLES 9 (32-bit)
  DomU #2:  Windows 2003 Server Standard Edition (32-bit)
  ... some more DomU systems - unimportant here
  LVM as the disk partitioning tool.

SLES 9 is rather hard to configure for running in PV guest mode, and PVGPL
is a pure hack, so we tried to use HVM mode first.
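
For reference, here is roughly what the difference looks like in an xm
guest config on Xen 3.2 (xm config files are Python syntax; the device
paths and guest names below are made up for illustration, not copied from
our actual configs). In HVM mode the disk goes through qemu-dm's emulated
IDE controller, while in PV mode (or HVM with PV drivers) the guest talks
to the paravirtual block frontend directly:

    # HVM guest: every disk request is served by the qemu-dm process in Dom0
    builder = 'hvm'
    disk = ['phy:/dev/vg0/guest1,ioemu:hda,w']

    # PV guest: the paravirtual block frontend bypasses qemu-dm for disk I/O
    # disk = ['phy:/dev/vg0/guest1,xvda,w']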

Then we saw the following symptoms:

1. Extremely slow guest I/O (when compared to native speed), both
on internal and external storage and both on Linux and Windows.

2. Very high CPU load in Dom0, caused by qemu-dm processes.
Each of these processes effectively pins a single CPU core
and keeps it busy at close to 100%.

After some investigation we found:

1. Write is much worse than read because it causes lots of reads
(a minimal write-load sketch to reproduce this follows the list).
We ran `atop' both in Dom0 and DomU, and saw that even when the
DomU issued zero read operations (pure write), there were lots of
reads in Dom0 caused by the corresponding qemu-dm process
(identified with the `iotop' tool).

2. In DomU we saw a moderate number of I/O operations with a large
average I/O time. In Dom0 we saw a huge number of I/O operations
on the same (mapped) device, each with a small I/O time.
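
A minimal sketch of the kind of pure-write load that shows this (not
our exact test; the file name and sizes are arbitrary). Run it inside
the DomU while watching atop/iotop in Dom0:

    #!/usr/bin/env python
    # Pure-write load: the guest issues only writes, so any reads that
    # show up in Dom0 come from the qemu-dm side.
    import os, time

    PATH  = '/var/tmp/write-test.bin'   # hypothetical test file
    BLOCK = 64 * 1024                   # 64 KiB per write
    COUNT = 16384                       # ~1 GiB in total

    buf = b'\0' * BLOCK
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, buf)
    os.close(fd)
    elapsed = time.time() - start
    mib = BLOCK * COUNT / (1024.0 * 1024.0)
    print('wrote %.0f MiB in %.1f s (%.1f MiB/s)' % (mib, elapsed, mib / elapsed))
    os.remove(PATH)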


Our conclusions:

1. If you care about I/O performance, do not use HVM.

2. The main cause of the HVM I/O slowness is qemu-dm, which splits
large I/O operations into many small ones and issues reads in order
to perform the actual writes (illustrated below).
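
A back-of-the-envelope illustration of the amplification (the 64 KiB
per-request cap below is an assumption chosen for the example, not a
measured property of qemu-dm):

    GUEST_WRITE = 4 * 1024 * 1024        # one 4 MiB write issued by the guest
    MAX_REQUEST = 64 * 1024              # assumed size the request gets split into

    writes = GUEST_WRITE // MAX_REQUEST  # 64 small writes reaching Dom0
    reads  = writes                      # worst case: one read before each write
    print('Dom0 sees ~%d requests for a single guest write' % (writes + reads))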
