This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-ia64-devel] HVM Multi-Processor Performance followup

To: "'alex.williamson@xxxxxx'" <alex.williamson@xxxxxx>, "'anthony.xu@xxxxxxxxx'" <anthony.xu@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] HVM Multi-Processor Performance followup
From: Kayvan Sylvan <kayvan@xxxxxxxxx>
Date: Thu, 31 Jan 2008 19:14:59 -0800
Accept-language: en-US
Cc: "'xen-ia64-devel@xxxxxxxxxxxxxxxxxxx'" <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 31 Jan 2008 19:15:17 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AchkfMKoHuQdGN2QSZ62SRkVrgPeXAAA92fo
Thread-topic: [Xen-ia64-devel] HVM Multi-Processor Performance followup
I used workfile.compute for those results.

I will redo the tests with some different parameters tomorrow.

Kayvan Sylvan, Platform Solutions Inc.

----- Original Message -----
From: Alex Williamson <alex.williamson@xxxxxx>
To: Xu, Anthony <anthony.xu@xxxxxxxxx>
Cc: Kayvan Sylvan; xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Sent: Thu Jan 31 18:47:14 2008
Subject: RE: [Xen-ia64-devel] HVM Multi-Processor Performance followup

On Fri, 2008-02-01 at 09:11 +0800, Xu, Anthony wrote:
> Thanks for your efforts.
> You can see the drop in performance starts to get really bad at about
> 9 CPUs and beyond.
> If you increase the guest vCPU count, the bottleneck may be the dom0
> vCPU count (only 1 vCPU for dom0).
> You could try configuring two or four vCPUs for dom0; the performance
> may come back.
> A curious question:
> Alex said there was ~70% degradation on RE-AIM7,
> but your test results seem much better than his.
> What is the difference in your test environments?
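[Editorial note: Anthony's suggestion above, giving dom0 more vCPUs, is typically done at boot time. A minimal sketch, assuming a GRUB menu.lst-style entry and the standard `dom0_max_vcpus` Xen hypervisor option; file paths and kernel image names are illustrative, not taken from this thread:]

```shell
# /boot/grub/menu.lst fragment (illustrative paths/versions)
# dom0_max_vcpus=4 asks the hypervisor to give dom0 four vCPUs
title Xen
    kernel /boot/xen.gz dom0_max_vcpus=4
    module /boot/vmlinuz-xen console=tty0
# At runtime, the vCPU count of a running domain can also be
# adjusted (up to its configured maximum) with the xm tool:
#   xm vcpu-set Domain-0 4
```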

   re-aim-7 provides a number of different workloads.  I was
specifically running the high_systime workload to try to get the worst
case performance out of an HVM domain.  Anything that involves more time
spent running code in user space will lean more towards the results I
showed for the kernel build test.  What workload was this?

   Kayvan, when you ran the native test, did you also limit the memory
using the mem= boot option?  I would expect that you need to increase
the guest memory as vCPUs are increased, or you may be getting into a
scenario where memory management in the guest or even swapping comes
into play.  Thanks,
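[Editorial note: the native comparison Alex describes, capping memory with the standard Linux `mem=` kernel parameter so native and HVM runs see the same amount of RAM, would look roughly like this in a GRUB entry; the device names and sizes are illustrative:]

```shell
# /boot/grub/menu.lst fragment (illustrative)
# mem=4G limits native Linux to 4 GB so it matches the guest's
# memory allocation; without this, extra native RAM can mask
# guest-side memory pressure or swapping.
title Linux (memory-limited for benchmarking)
    kernel /boot/vmlinuz root=/dev/sda2 mem=4G
```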


Alex Williamson                             HP Open Source & Linux Org.

Xen-ia64-devel mailing list