WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] Poor HVM performance with 8 vcpus

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Poor HVM performance with 8 vcpus
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 07 Oct 2009 08:26:57 +0100
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>
Delivery-date: Wed, 07 Oct 2009 00:27:25 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4ACC3B49.4060500@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcpHGzNKuqcMIyydQyam/MqwGdtXYwABFnLr
Thread-topic: [Xen-devel] Poor HVM performance with 8 vcpus
User-agent: Microsoft-Entourage/12.20.0.090605

Hi Juergen,

Tim Deegan is the man for this stuff (cc'ed) - you don't want to get too
involved in the shadow code without syncing with him first. My
understanding, however, is that shadow code is currently designed with
scalability up to only about 4 VCPUs in mind. The expectation is that, as
users want to scale wider than that, they will typically be upgrading to
modern many-core processors with hardware assistance (Intel EPT, AMD NPT).

If you don't fit into that scenario, perhaps we can find you some
lowish-hanging fruit to improve parallelism. Big changes in shadow code
could be scary for us due to the likely nasty bug tail!

 -- Keir

On 07/10/2009 07:55, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:

> Hi,
> 
> we've got massive performance problems running an 8 vcpu HVM guest (BS2000)
> under Xen (xen 3.3.1).
> 
> With a specific benchmark producing a rather high load on memory management
> operations (lots of process creation/deletion and memory allocation) the 8
> vcpu performance was worse than the 4 vcpu performance. On other platforms
> (/390, MIPS, SPARC) this benchmark scaled rather well with the number of cpus.
> 
> The Xen software performance counters seemed to point to the shadow lock
> as the cause. I modified the hypervisor to gather some lock statistics
> (patch will be sent soon) and confirmed that the shadow lock really is the
> bottleneck: on average, 4 vcpus are waiting to acquire it!
> 
> Is this a known issue?
> Is there a chance to split the shadow lock into sub-locks or to use a
> reader/writer lock instead?
> I just wanted to ask before trying to understand all of the shadow code :-)
> 
> 
> Juergen



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel