xen-users

Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O
From: jim burns <jim_burn@xxxxxxxxxxxxx>
Date: Sun, 27 Jan 2008 22:23:58 -0500
Delivery-date: Sun, 27 Jan 2008 19:24:56 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.6 (enterprise 20071221.751182)
Sami Dalouche writes:
> So, in conclusion, I am lost:
> On the one hand, it seems that Xen, when used on top of a RAID array, is
> way slower, but when used on top of a plain old disk, it gives pretty
> much native performance. Is there a potential link between Xen and RAID
> vs. non-RAID performance? Or maybe the problem is caused by Xen + RAID +
> LVM?

Hmm, there's an interesting couple of paragraphs on the
http://kernelnewbies.org/LinuxChanges page for the 2.6.24 kernel. Apparently,
LVM is prone to dirty-write page deadlocks. Maybe this is being aggravated by
RAID, at least in your case? I quote:

2.7. Per-device dirty memory thresholds

You can read this recommended article about the "per-device dirty thresholds" 
feature.

When a process writes data to the disk, the data is stored temporarily
in 'dirty' memory until the kernel decides to write the data to the disk
('cleaning' the memory used to store the data). A process can 'dirty' the
memory faster than the data is written to the disk, so the kernel throttles
processes when there's too much dirty memory around. The problem with this
mechanism is that the dirty memory thresholds are global: the mechanism
doesn't care if there are several storage devices in the system, much less if
some of them are faster than others. There are a lot of scenarios where this
design harms performance. For example, if there's a very slow storage device
in the system (e.g. a USB 1.0 disk, or an NFS mount over dialup), the
thresholds are hit very quickly - not allowing other processes that may be
working on a much faster local disk to progress. Stacked block devices (e.g.
LVM/DM) are much worse and even deadlock-prone (check the LWN article).

In 2.6.24, the dirty thresholds are per-device, not global. The limits are 
variable, depending on the writeout speed of each device. This improves the 
performance greatly in many situations.
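
If you want to see what your own kernel is doing here, the global knobs live
under /proc/sys/vm, and newer kernels also expose per-device (per-BDI) ratios
under /sys/class/bdi. Here's a rough Python sketch of mine to dump both - the
/sys/class/bdi part is an assumption that your kernel is new enough to have
that directory:

#!/usr/bin/env python
# Rough sketch: print the global dirty-memory thresholds and, where the
# kernel exposes /sys/class/bdi, the per-device min/max dirty ratios.

import os

def read_value(path):
    # Return the stripped contents of a /proc or /sys file, or None if absent.
    try:
        f = open(path)
        try:
            return f.read().strip()
        finally:
            f.close()
    except IOError:
        return None

# Global thresholds (percent of memory); these predate 2.6.24.
for name in ("dirty_background_ratio", "dirty_ratio"):
    print("/proc/sys/vm/%s = %s" % (name, read_value("/proc/sys/vm/" + name)))

# Per-device knobs used by the per-BDI writeback code (assumed present).
bdi_root = "/sys/class/bdi"
if os.path.isdir(bdi_root):
    for dev in sorted(os.listdir(bdi_root)):
        print("%s: min_ratio=%s max_ratio=%s" % (
            dev,
            read_value(os.path.join(bdi_root, dev, "min_ratio")),
            read_value(os.path.join(bdi_root, dev, "max_ratio"))))
else:
    print("%s not present on this kernel" % bdi_root)

On an older kernel you'll only see the global values, which is exactly the
situation the quoted text is complaining about.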
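
And to narrow down whether RAID+LVM vs. plain disk really is the variable, it
may be worth timing the same sequential write in dom0 and in the domU, once on
the plain disk and once on the RAID+LVM volume. A quick-and-dirty sketch - the
target path and sizes are just placeholders I made up, point it at a scratch
file on the storage you want to test:

#!/usr/bin/env python
# Rough sequential-write timing, for comparing dom0 vs domU and
# plain-disk vs RAID+LVM backends.

import os, sys, time

target = sys.argv[1] if len(sys.argv) > 1 else "/tmp/xen_io_test.bin"
block_size = 1024 * 1024       # 1 MiB writes
total_mb = 512                 # total data written

block = b"\0" * block_size
start = time.time()
f = open(target, "wb")
for _ in range(total_mb):
    f.write(block)
f.flush()
os.fsync(f.fileno())           # include the time to actually flush to disk
f.close()
elapsed = time.time() - start

print("%d MB in %.2f s -> %.1f MB/s" % (total_mb, elapsed, total_mb / elapsed))
os.remove(target)

If the RAID+LVM numbers only collapse inside the domU, that points more at the
Xen block path than at the dirty-page throttling described above.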

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
