This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Too much VCPUS makes domU high CPU utiliazation

To: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>, xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Too much VCPUS makes domU high CPU utiliazation
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Sun, 22 May 2011 10:13:59 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: "george.dunlap@xxxxxxxxxxxxx" <george.dunlap@xxxxxxxxxxxxx>
Delivery-date: Sat, 21 May 2011 19:14:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <BLU157-w84615E6A1CEFF8C2960D6DA700@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <BAY0-MC1-F215criL7a002cc34e@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, <blu157-w3926296290D038E14CB39ADA8F0@xxxxxxx>, <625BA99ED14B2D499DC4E29D8138F1505C9BBF8F5E@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>, <BLU157-w4915B0F19BFB49A9482184DA8E0@xxxxxxx>, <625BA99ED14B2D499DC4E29D8138F1505C9BEEFA52@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <BLU157-w84615E6A1CEFF8C2960D6DA700@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcwXaVXK+tr+DK4hQ0W+jN8cW0mPdgAu44pw
Thread-topic: [Xen-devel] Too much VCPUS makes domU high CPU utiliazation
>From: MaoXiaoyun [mailto:tinnycloud@xxxxxxxxxxx] 
>Sent: Saturday, May 21, 2011 11:44 AM
>Although I still haven't figured out why the VCPUs fall only on even or only 
>on odd PCPUs, if I explicitly set "VCPU=[4~15]" in the HVM configuration, 
>the VM will use all PCPUs from 4 to 15.

This may indicate that NUMA is enabled on your M1: the Xen scheduler tries to 
use local memory to avoid remote-access latency, and that's why your domain A 
is affined to a fixed set of CPUs.
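For reference, the pinning described above is normally expressed with the `cpus` key in the guest config (xm/xl domain configs are Python syntax). A minimal sketch, with the vcpu count and pcpu range taken from the numbers in this thread:

```python
# Fragment of an xm/xl HVM guest config (these files are plain Python).
# Values mirror the setup discussed in this thread; adjust as needed.

vcpus = 16          # number of virtual CPUs given to the guest
cpus  = "4-15"      # pin those vcpus to physical CPUs 4..15
```

The same pinning can also be changed at runtime with `xm vcpu-pin` (or `xl vcpu-pin` on newer toolstacks), without rebooting the guest.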

>Also, I may have found the reason why the guest boots so slowly.
>I think the reason is that the number of guest VCPUs > the number of physical 
>CPUs the guest can run on.
>In my test, my physical machine has 16 PCPUs and dom0 takes 4, so for every 
>guest only 12 physical CPUs are available.

The scheduler in the hypervisor is designed to multiplex multiple vcpus on a 
single pcpu, so even when dom0 has 4 vcpus it doesn't mean that only the 
remaining 12 pcpus are available for guest use.
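The multiplexing described here can be pictured with a toy round-robin placement (a deliberate simplification: the real credit scheduler balances vcpus across pcpus dynamically, it does not use a fixed mapping):

```python
# Toy illustration of vcpu->pcpu multiplexing: 16 guest vcpus spread
# round-robin over the 12 pcpus not used by dom0.  Some pcpus simply
# end up time-sharing two vcpus -- the guest still gets all 16 vcpus.

def round_robin_placement(num_vcpus, num_pcpus):
    """Map each vcpu to a pcpu; returns {pcpu: [vcpus...]}."""
    placement = {}
    for vcpu in range(num_vcpus):
        pcpu = vcpu % num_pcpus
        placement.setdefault(pcpu, []).append(vcpu)
    return placement

placement = round_robin_placement(16, 12)
# pcpus 0-3 each time-share two vcpus; pcpus 4-11 run one vcpu each
shared = [p for p, vs in placement.items() if len(vs) > 1]
print(shared)   # -> [0, 1, 2, 3]
```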
>So, if the guest has 16 VCPUs and only 12 physical CPUs are available, then 
>under heavy load two or more VCPUs will be queued on one physical CPU, and 
>if a VCPU is waiting for a response from other VCPUs (such as an IPI 
>message), the waiting time will be much longer.
>In particular, if a process inside the guest runs 16 threads, it is possible 
>that each VCPU owns one thread; physically, those VCPUs still queue on the 
>PCPUs, and if there is busy-waiting code (such as a spinlock), the guest 
>will show high CPU utilization. If the busy-waiting code runs less 
>frequently, we might see CPU utilization jump very high and then drop low 
>now and then.
>Could this be possible?

It's possible. As I replied in an earlier thread, lock contention at boot time 
may slow down the process slightly or heavily. Remember that the purpose of 
virtualization is to consolidate multiple VMs on a single platform to maximize 
resource utilization. Some use cases can have an N:1 (where N can be 8) 
consolidation ratio, and others may have a smaller ratio. There are many 
reasons why a given environment may fail to scale up, and you need to capture 
enough trace information to find the bottleneck. Some bottlenecks may be hard 
to tackle and will finally form part of your business best practice, while 
some may be fixed simply by a proper configuration change. So it's really too 
early to say whether your setup is feasible or not. You need to dive into it 
with more details. :-)
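The lock-contention effect discussed in this thread can be put in rough numbers. A back-of-envelope sketch (all figures illustrative, taken from the 16-vcpu/12-pcpu setup above; the probability model is a simplification, not a description of the real credit scheduler):

```python
# Rough model of spinlock waste under vcpu overcommit: when more
# runnable vcpus than pcpus exist, the current lock holder may itself
# be descheduled, so a spinning waiter can burn a whole scheduler
# timeslice instead of a few cycles.

def expected_spin_slices(vcpus, pcpus, spin_events):
    """Expected number of lock acquisitions that degrade into a
    full-timeslice busy-wait, assuming any given vcpu (e.g. the lock
    holder) is off-CPU with probability (vcpus - pcpus) / vcpus."""
    if vcpus <= pcpus:
        return 0.0              # no overcommit: holder is always running
    p_holder_descheduled = (vcpus - pcpus) / vcpus
    return spin_events * p_holder_descheduled

# 16-vcpu guest on the 12 pcpus left after dom0, 1000 lock acquisitions:
print(expected_spin_slices(16, 12, 1000))   # -> 250.0
print(expected_spin_slices(12, 12, 1000))   # -> 0.0
```

This is consistent with the symptom described above: the waste only appears when the busy-waiting path is hot, so utilization can spike and fall as the guest workload changes.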

