WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] poor domU VBD performance.

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] poor domU VBD performance.
From: Peter Bier <peter_bier@xxxxxx>
Date: Sat, 26 Mar 2005 18:14:50 +0000 (UTC)
Delivery-date: Sat, 26 Mar 2005 18:24:11 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Loom/3.14 (http://gmane.org/)
I have installed Xen and Linux 2.6.10 on three different machines. The slowest
of them is my computer at home, running an Athlon XP 1600+ (1.4 GHz) with
256 MB RAM.

My problem is reduced file-system performance in domU guests. These guests run
faster when I use loopback files in dom0 than they do when I use real
partitions populated with a Linux system.
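To illustrate the two setups being compared, here are the disk lines from a
Xen 2.0-style Python domain config file (the image path and device names are
made up for the example; adjust them to your setup):

```python
# Loopback-file backend: a file in dom0 exported to the guest as hda1
disk = ['file:/path/to/domU.img,hda1,w']

# Physical-partition backend: a real dom0 partition exported directly
# disk = ['phy:hda3,hda1,w']
```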

I found that dom0 file-system IO and raw IO (using dd as a tool to test
throughput from the disk) are almost exactly the same as with a standard
Linux kernel without Xen. But raw IO from domU to an unused disk (a second
disk in the system) is limited to about forty percent of the speed I get
within dom0. Roughly the same ratio carries over to real file-system IO.
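A sketch of the kind of dd test I mean (the device name and mount point are
just examples from my setup; adjust them to yours):

```shell
# Raw read throughput from the unused second disk (example device name):
dd if=/dev/hdb of=/dev/null bs=1M count=512

# File-system throughput: write a large file on a mounted file system,
# then read it back (reduce count if space is tight):
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=256
dd if=/mnt/test/bigfile of=/dev/null bs=1M
```

Running the same commands in dom0 and in domU gives directly comparable
numbers.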

I found this symptom on all of the systems I installed. An early paper about
Xen states that the penalty of using VBDs is close to zero and negligible.
This conflicts with the results I got, so I believe something in my
configuration is wrong (at least I hope so).

I have the drivers for my chipset compiled into the kernel, and hdparm (run
under dom0) tells me that DMA is enabled for the disks in use.
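The check I mean is along these lines (run as root in dom0; /dev/hda is just
an example device name):

```shell
# Show whether DMA is on for the disk; expect "using_dma = 1 (on)":
hdparm -d /dev/hda | grep using_dma

# Buffered read timing, useful for comparing dom0 against domU:
hdparm -t /dev/hda
```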

What worries me is that the results within dom0 are completely satisfactory,
while those in domU are not. Do I have to change the kernel config for domU?
Or is there a special option I have to set in the kernel configuration for
dom0, or even for Xen itself?

I have compiled version 2.0.5, the newest available to my knowledge.

Any hints ??  



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel