Sami Dalouche writes:
> So, conclusion, I am lost :
> On the one side, it seems that Xen, when used on top of a raid array, is
> wayyy slower, but when used on top of a plain old disk, seems to be pretty
> much native performance. Is there a potential link between Xen and RAID
> vs non raid performance ? Or maybe the problem is caused by Xen + RAID +
> LVM ?
Hmm, there's an interesting couple of paragraphs on the
http://kernelnewbies.org/LinuxChanges page about the 2.6.24 kernel. Apparently,
LVM is prone to dirty-page writeback deadlocks. Maybe this is being aggravated
by RAID, at least in your case? I quote:
2.7. Per-device dirty memory thresholds
You can read the recommended LWN article about the "per-device dirty
thresholds" feature.
When a process writes data to the disk, the data is stored temporarily
in 'dirty' memory until the kernel decides to write the data to the disk
('cleaning' the memory used to store the data). A process can 'dirty' the
memory faster than the data is written to the disk, so the kernel throttles
processes when there's too much dirty memory around. The problem with this
mechanism is that the dirty memory thresholds are global; the mechanism
doesn't care if there are several storage devices in the system, much less if
some of them are faster than others. There are a lot of scenarios where this
design harms performance. For example, if there's a very slow storage device
in the system (ex: a USB 1.0 disk, or an NFS mount over dialup), the
thresholds are hit very quickly - not allowing other processes that may be
working on a much faster local disk to progress. Stacked block devices (ex:
LVM/DM) are much worse and even deadlock-prone (check the LWN article).
In 2.6.24, the dirty thresholds are per-device, not global. The limits are
variable, depending on the writeout speed of each device. This greatly
improves performance in many situations.
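For what it's worth, on a kernel with this feature the old global knobs still
exist, and each backing device additionally exposes its own per-BDI ratio
limits under /sys/class/bdi. A quick sketch for inspecting both (assumes a
2.6.24-or-later kernel with sysfs mounted; device names will vary per system):

```shell
# Global dirty thresholds (percent of memory), as in pre-2.6.24 kernels:
sysctl vm.dirty_ratio vm.dirty_background_ratio 2>/dev/null || true

# Per-device (per-BDI) limits: each backing device gets min_ratio/max_ratio
# entries bounding its share of the global dirty limit, so one slow device
# (USB disk, NFS mount, stacked LVM/DM) can't consume the whole threshold:
for bdi in /sys/class/bdi/*/; do
    [ -r "${bdi}max_ratio" ] || continue
    printf '%s min_ratio=%s max_ratio=%s\n' \
        "$bdi" "$(cat "${bdi}min_ratio")" "$(cat "${bdi}max_ratio")"
done
```

Comparing the max_ratio of the dom0's RAID/LVM device against a plain disk
might show whether the slow stacked device is what keeps hitting the limit.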
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users