Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O

To: jim burns <jim_burn@xxxxxxxxxxxxx>
Subject: Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O
From: Sami Dalouche <skoobi@xxxxxxx>
Date: Mon, 28 Jan 2008 20:31:32 +0100
Cc: Christophe Clapp <clc@xxxxxxxxxxxxx>, Christophe Clapp <christophe.clapp@xxxxxxxxx>, xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>, Florent Valdelièvre <Florent.Valdelievre@xxxxxxxxxx>
Delivery-date: Mon, 28 Jan 2008 11:32:05 -0800
In-reply-to: <200801272223.59034.jim_burn@xxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <200801272223.59034.jim_burn@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hmm...

Thanks a lot for the tip!
OK, so I guess it is theoretically possible that the RAID setup creates
problems with Xen in my case. To check that, I'll install more disks in
the RAID array, create a new device without LVM, and see whether the
performance is the same.

Since I have already tested Xen + no RAID + no LVM, Xen + no RAID +
LVM, and Xen + RAID + LVM, the only missing benchmark is Xen + RAID +
no LVM.

I'll post back when I have the results of this benchmark (I first need
to go to the data center, etc.).
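
For anyone who wants to run a comparable test, a minimal sequential-write
sketch in Python could look something like this (the target path is just an
example, not my actual test mount, and sequential writes are only one of the
workloads worth measuring):

    import os
    import time

    # Minimal sequential-write benchmark: write 1 GiB in 1 MiB chunks,
    # then fsync so the figure reflects the device, not the page cache.
    TARGET = "/mnt/bench/test.bin"  # example path, adjust per setup
    CHUNK = 1024 * 1024
    TOTAL = 1024 * CHUNK

    buf = b"\0" * CHUNK
    start = time.time()
    f = open(TARGET, "wb")
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # force dirty pages out to the device
    f.close()
    elapsed = time.time() - start
    print("%.1f MB/s" % (TOTAL / elapsed / 1e6))

Running the same script once per configuration - RAID or not, LVM or not,
dom0 and domU - keeps the comparison consistent.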

And I'll also check the performance improvements from upgrading the kernel
to 2.6.24. What do you think of the container support in 2.6.24? Isn't it
a better alternative to Xen for servers? (I believe most people use
virtualization on the server side just to isolate processes, so containers
seem like a better fit than Xen in that case, don't they?)

Regards,
Sami

On Sun, 2008-01-27 at 22:23 -0500, jim burns wrote:
> Sami Dalouche writes:
> > So, in conclusion, I am lost: on the one hand, it seems that Xen, when
> > used on top of a RAID array, is way slower, but when used on top of a
> > plain old disk, it delivers pretty much native performance. Is there a
> > potential link between Xen and RAID vs. non-RAID performance? Or maybe
> > the problem is caused by Xen + RAID + LVM?
> 
> Hmm, there's an interesting couple of paragraphs on the
> http://kernelnewbies.org/LinuxChanges page about the 2.6.24 kernel. Apparently,
> LVM is prone to dirty-write-page deadlocks. Maybe this is being aggravated by
> RAID, at least in your case? I quote:
> 
> 2.7. Per-device dirty memory thresholds
> 
> You can read this recommended article about the "per-device dirty thresholds" 
> feature.
> 
> When a process writes data to the disk, the data is stored temporarily 
> in 'dirty' memory until the kernel decides to write the data to the disk 
> ('cleaning' the memory used to store the data). A process can 'dirty' the 
> memory faster than the data is written to the disk, so the kernel throttles 
> processes when there's too much dirty memory around. The problem with this 
> mechanism is that the dirty memory thresholds are global; the mechanism 
> doesn't care if there are several storage devices in the system, much less if 
> some of them are faster than others. There are a lot of scenarios where this 
> design harms performance. For example, if there's a very slow storage device 
> in the system (ex: a USB 1.0 disk, or an NFS mount over dialup), the 
> thresholds are hit very quickly - not allowing other processes that may be 
> working on a much faster local disk to progress. Stacked block devices (ex: 
> LVM/DM) are much worse and even deadlock-prone (check the LWN article).
> 
> In 2.6.24, the dirty thresholds are per-device, not global. The limits are 
> variable, depending on the writeout speed of each device. This improves the 
> performance greatly in many situations.
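
Side note: on a pre-2.6.24 kernel you can at least inspect the global knobs
that this change replaces with per-device limits; they live under
/proc/sys/vm. A minimal Python sketch, assuming a 2.6-era Linux:

    # Print the global dirty-memory thresholds; before 2.6.24 these
    # throttle writers across *all* block devices at once.
    for knob in ("dirty_ratio", "dirty_background_ratio",
                 "dirty_expire_centisecs", "dirty_writeback_centisecs"):
        f = open("/proc/sys/vm/" + knob)
        print(knob + " = " + f.read().strip())
        f.close()

Lowering vm.dirty_ratio is sometimes suggested as a stopgap on older
kernels, but that is a global workaround rather than the per-device fix
described above.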


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
