Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen

On Thu, Feb 12, 2009 at 8:37 PM, DOGUET Emmanuel
<Emmanuel.DOGUET@xxxxxxxx> wrote:
>
>        Oops sorry!
>
> We use only phy: with LVM. PV only (Linux on domU, Linux for dom0).
> LVM is on hardware RAID.

That's better :) Now for more questions:
What kind of test did you run? How did you determine that "domU was 2x
slower than dom0"?
How much memory did you assign to domU and dom0? Were other programs
running? What were the results (how many seconds, how many MB/s, etc.)?
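
If you're not sure about the memory side, something like this on dom0
(assuming the standard xm toolstack) will show what is currently assigned:

# memory currently assigned to dom0 and each running domU
xm list
# total and free memory on the host
xm info | grep -i memory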

I've had good results so far, with domU's disk I/O performance
similar or equal to dom0's. A simple

time dd if=/dev/zero of=test1G bs=1M count=1024

took about 5 seconds and gave me about 200 MB/s on an idle dom0 and domU.
This is on an IBM server with hardware RAID: 7x 144 GB 2.5" SAS disks in
RAID5 plus 1 hot spare. Both dom0 and domU have 512 MB of memory.
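
One caveat: a plain dd write like that can partly measure the page cache
rather than the disk. If you want to rule that out when you rerun your
test (just a suggestion, adjust the path and size to taste), GNU dd can
force the data to disk before reporting the rate:

# flush the written data before dd prints its summary
time dd if=/dev/zero of=test1G bs=1M count=1024 conv=fdatasync

# or bypass the page cache entirely with O_DIRECT
time dd if=/dev/zero of=test1G bs=1M count=1024 oflag=direct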

>
> For the RAID, my question was (I'm bad in English):
>
> Is it better to have:
>
> *case 1*
> Dom0 and DomU   on         hard-drive 1 (with HP raid: c0d0)
>
> Or
>
> *case 2*
> Dom0            on      hard-drive 1    (if HP raid: c0d0)
> DomU            on      hard-drive 2    (if HP raid: c0d1)
>
>

Depending on how you use it, it might not matter :)
As a general rule of thumb, more disks should provide higher I/O
throughput when set up properly. In general (e.g. when all disks are the
same and the domUs are general-purpose) I'd simply put all available
disks in a RAID5 (or multiple RAID5s if there are lots of disks) and put
them all in a single VG, as in the sketch below.
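
To make that concrete with the phy: + LVM setup you already use, a rough
sketch (device and volume names here are made up; substitute your actual
HP cciss device and sizes):

# put the whole RAID5 array (or a partition of it) into one volume group
pvcreate /dev/cciss/c0d0p2
vgcreate vg_guests /dev/cciss/c0d0p2

# carve out one logical volume per domU disk
lvcreate -L 20G -n domu1-disk vg_guests

# then export it to the guest with phy:, as you're already doing,
# in the domU config file:
disk = [ 'phy:/dev/vg_guests/domu1-disk,xvda,w' ]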

Regards,

Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
