This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Cpu pools discussion

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>
Date: Wed, 29 Jul 2009 08:29:43 +0800
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, George Dunlap <dunlapg@xxxxxxxxx>, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Delivery-date: Tue, 28 Jul 2009 17:30:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C694DC8E.10D23%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C694DC8E.10D23%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20090605)
Keir Fraser wrote:
> On 28/07/2009 16:29, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:
>> Currently this is done with pinning, but pinning
>> does restrict the flexibility of a multi-vcpu VM.
>> Affinity seems like it should help, but affinity
>> doesn't restrict the VM from running on a non-affinitive
>> pcpu (does it?)
> Yes it does. VCPUs only run on PCPUs in their affinity masks.
>  -- Keir
I'm wondering whether there is a performance difference between these
two scenarios:

1) vcpu0 pinned to pcpu0, vcpu1 pinned to pcpu1.
2) vcpu0 and vcpu1 affined to pcpu0 and pcpu1 but not pinned.
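For reference, both scenarios can be expressed with `xm vcpu-pin` (which sets the vcpu's affinity mask; the domain name "guest1" below is just a placeholder):

```shell
# Scenario 1: hard pinning -- each vcpu's mask contains exactly one pcpu
xm vcpu-pin guest1 0 0      # vcpu0 may only run on pcpu0
xm vcpu-pin guest1 1 1      # vcpu1 may only run on pcpu1

# Scenario 2: affinity to the pair -- each vcpu may run on either pcpu
xm vcpu-pin guest1 0 0-1    # vcpu0 may run on pcpu0 or pcpu1
xm vcpu-pin guest1 1 0-1    # vcpu1 may run on pcpu0 or pcpu1
```

In scenario 2 the scheduler is still free to migrate vcpus between pcpu0 and pcpu1, so the question above is essentially whether that freedom costs or gains anything versus a strict 1:1 mapping.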

Currently we have to explicitly pin *every* vcpu to get true hard partitioning.

We are seeking a better solution, whether in the hypervisor or in the
user-space tools. The cpu pool concept seems attractive.



Xen-devel mailing list