
RE: [Xen-users] Performance issues


  • To: "Stephan Austermühle" <au@xxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Tue, 11 Apr 2006 17:24:58 +0200
  • Delivery-date: Tue, 11 Apr 2006 08:30:36 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcZdeIZ+mEAzuvKQSVCMY+ECNCympQAAkhkQ
  • Thread-topic: [Xen-users] Performance issues

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Stephan Austermühle
> Sent: 11 April 2006 15:59
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Performance issues
> 
> Hi everybody!
> 
> Now that Xen 3.0 unstable (downloaded on 2006-04-06) is up 
> and running for me, I did some performance tests. I chose a 
> Linux kernel compile as a benchmark to compare native versus 
> domU performance. The results are:
> 
>               native  domU    loss
> make -j4        553s    666s  -17%
> make -j2        565s    713s  -22%
> make          1,026s  1,199s  -14%
> 
> 
> System: Athlon64, dual-core, 2.0 GHz, 64-bit, glibc 2.3.6 (Debian Etch)
> Native settings: kernel booted with 'mem=512M', kernel 2.6.16.1
> Xen settings: dom0 128 MByte, domU 512 MByte, kernel 2.6.16.1-xen
> Test sequence:
> make -jN clean && make -jN && make -jN clean && time make -jN
> 
> Both test series ran on the same partition on the same disk. 
> In the Xen setup I exported the partition to the domU using
> 
>       disk = [ ...,
>                'phy:sda1,hda11,w' ]
> 
> in the config file.
> 
> The performance loss is greater than I expected. Can anybody 
> confirm the magnitude of the performance loss? Are these 
> values normal for a Xen setup?

I haven't got any benchmarks, but I don't think the results you're seeing are 
completely unreasonable. 

The benchmarks that you've chosen are VERY file-intensive, and any delay in 
delivering the file data to the compiler (etc.) will show up "at the bottom 
line". File reads, for example, have to pass from DomU to Dom0, where the 
actual read of the hard disk is performed, and then back to DomU. These extra 
steps, whilst individually not huge, add to the total time. 

I agree with Randy: to see how much overhead is Xen "just being there", and how 
much is the emulated hard-disk interface in DomU, you could run the compile in 
Dom0. 
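To make that concrete, here is a sketch of running the identical test sequence in Dom0 (the kernel tree path /usr/src/linux and -j4 are just example values, not from your setup):

```shell
# Sketch: run the same benchmark in Dom0 to separate "Xen being
# there" overhead from the virtual-disk overhead seen in DomU.
cd /usr/src/linux   # assumed location of the kernel source tree
make -j4 clean && make -j4 && make -j4 clean && time make -j4
```

Comparing that time against both the native and the DomU numbers should tell you where the loss comes from.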

Also, whilst it's great that you run 512MB for native Linux, I'd be 
surprised if the disk caching in Dom0 is quite as effective as it could be - 
maybe you'd get better results (for this particular type of benchmark) if you 
gave another lot of memory to Dom0 and took it away from DomU (even better, 
give some more to Dom0 without taking it away from DomU!). 
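As a sketch of how that rebalancing might look (the sizes and the domain name are just examples, not your configuration):

```shell
# At boot: give Dom0 more memory via the hypervisor line in GRUB, e.g.
#   kernel /boot/xen.gz dom0_mem=256M
#
# At run time: shift memory between running domains with the xm tool
# ("mydomu" is a hypothetical domain name):
xm mem-set Domain-0 256
xm mem-set mydomu 512
```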

Any free memory in Linux is used for disk caching, and most of the time the 
compiler will not use up 512MB (not even four compiles at the same time, unless 
you have HUGE C files with large functions). 
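You can see this on any Linux box by looking at /proc/meminfo - a quick sketch:

```shell
# Show how much memory the kernel is currently using for the page
# cache and buffers, versus memory that is truly free.
grep -E '^(MemFree|Buffers|Cached):' /proc/meminfo
```

On a machine that has been compiling for a while, Cached will typically dwarf MemFree - which is why shrinking Dom0's memory shrinks its effective disk cache.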

--
Mats
> 
> I'm also interested in whether there are already best 
> practices for performance tuning.
> 
> Thanks,
> 
> Stephan
> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

