WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

Re: [Xen-users] Xen Disk I/O performance vs native performance

Ok, so I'm currently running bonnie++ benchmarks and will report the
results as soon as everything is finished.
But in any case, I'm not trying to create super-accurate benchmarks. I
am just trying to say that the VM's I/O is definitely slower than the
Dom0's, and I don't even need a benchmark to tell that everything is at
least twice as slow.

It really is very slow, so my original post was about knowing how much
slower than native performance is considered acceptable.

Concerning your question, I don't quite understand it...
What I did was:
1] Created an LV on the real disk
2] Exported this LV as a Xen disk using
disk = [ 'phy:/dev/mapper/mylv,hda1,w' ]
3] Mounted it on the DomU with mount /dev/mapper/mylv
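For reference, the three steps above might look like this as commands (the volume group name, LV size, config file path, and mount point are assumptions, not taken from the thread):

```shell
# dom0: create the logical volume (assumes an existing volume group "vg0")
lvcreate -L 20G -n mylv vg0

# dom0: export it to the guest in the domU config file, e.g. /etc/xen/domu.cfg:
#   disk = [ 'phy:/dev/mapper/vg0-mylv,hda1,w' ]

# domU: with this config the guest sees the exported device as /dev/hda1,
# so that is the device to mount inside the DomU
mount /dev/hda1 /mnt
```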

Isn't that what I'm supposed to do?
regards,
Sami Dalouche

On Fri, 2008-01-25 at 16:28 -0500, John Madden wrote:
> > obviously it's the same filesystem type, since it's the same
> > 'partition'.  of course, different mount flags could in theory affect
> > measurements.
> 
> Sorry, I must've missed something earlier.  I didn't realize you were
> mounting and writing to the same filesystem in both cases.  But this is
> interesting -- if you're mounting a filesystem on an LV in dom0 and then
> passing it as a physical device to domU, how does domU see it?  Does it
> then put an LV inside this partition?
> 
> > > Please use bonnie++ at a minimum for i/o benchmarking.  dd is not a
> > > benchmarking tool.
> > 
> > besides, no matter what tool you use to measure, use datasets at the
> > very least three or four times the largest memory size.
> 
> Exactly.  bonnie++ (for example) provides the -r argument: you tell it
> your RAM size, and it does i/o on files twice that size to avoid cache
> benefits.
> 
> John
> 
> 
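A minimal sketch of the bonnie++ run John describes (the target directory, user, and RAM size of 2048 MB are assumptions):

```shell
# -d: directory on the filesystem under test
# -r: machine RAM in MB; bonnie++ then uses file sizes of twice this
#     value to defeat the page cache
# -u: unprivileged user to run as (bonnie++ refuses to run as root)
bonnie++ -d /mnt/test -r 2048 -u nobody
```

Run the same command in dom0 and in the domU against the same LV to get comparable numbers.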


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
