This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Cpu pools discussion

To: George Dunlap <dunlapg@xxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>
Date: Tue, 28 Jul 2009 08:41:17 +0800
Cc: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Mon, 27 Jul 2009 17:41:53 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <de76405a0907270820gd76458cs34354a61cc410acb@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <de76405a0907270820gd76458cs34354a61cc410acb@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20090605)
George Dunlap wrote:
> Keir (and community),
> Any thoughts on Juergen Gross's patch on cpu pools?
> As a reminder, the idea is to allow "pools" of cpus that would have
> separate schedulers.  Physical cpus and domains can be moved from one
> pool to another only by an explicit command.  The main purpose Fujitsu
> seems to have is to allow a simple machine "partitioning" that is more
> robust than using simple affinity masks.  Another potential advantage
> would be the ability to use different schedulers for different
> purposes.
> For my part, it seems like they should be OK.  The main thing I don't
> like is the ugliness related to continue_hypercall_on_cpu(), described
> below.
> Juergen, could you remind us of the advantages of having pools in the
> hypervisor, versus just having affinity masks (with maybe some sugar in
> the toolstack)?
> Re the ugly part of the patch, relating to continue_hypercall_on_cpu():
> Domains are assigned to a pool, so
> if continue_hypercall_on_cpu() is called for a cpu not in the domain's
> pool, you can't just run it normally.  Juergen's solution (IIRC) was to
> pause all domains in the other pool, temporarily move the cpu in
> question to the calling domain's pool, finish the hypercall, then move
> the cpu in question back to the other pool.
> Since there are a lot of antecedents in that, let's take an example:
> Two pools; Pool A has cpus 0 and 1, pool B has cpus 2 and 3.
> Domain 0 is running in pool A, domain 1 is running in pool B.
> Domain 0 calls "continue_hypercall_on_cpu()" for cpu 2.
> Cpu 2 is in pool B, so Juergen's patch:
>  * Pauses domain 1
>  * Moves cpu 2 to pool A
>  * Finishes the hypercall
>  * Moves cpu 2 back to pool B
>  * Unpauses domain 1
> That seemed a bit ugly to me, but I'm not familiar enough with the use
> cases or the code to know if there's a cleaner solution.
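The pause/borrow/restore sequence above can be sketched in C. This is a simplified, hypothetical model for illustration only, not the actual Xen code: `struct pool`, `continue_on_cpu()`, and the helper names are all invented, and real pausing/unpausing of domains is reduced to a flag.

```c
#include <assert.h>

#define MAX_CPUS 4

/* Hypothetical model of a cpu pool; not Xen's real data structure. */
struct pool {
    int cpus[MAX_CPUS];     /* physical cpus currently in this pool */
    int ncpus;
    int paused;             /* nonzero while this pool's domains are paused */
};

static void pool_remove_cpu(struct pool *p, int cpu)
{
    for (int i = 0; i < p->ncpus; i++)
        if (p->cpus[i] == cpu) {
            p->cpus[i] = p->cpus[--p->ncpus];   /* swap-remove */
            return;
        }
}

static void pool_add_cpu(struct pool *p, int cpu)
{
    p->cpus[p->ncpus++] = cpu;
}

static int pool_has_cpu(const struct pool *p, int cpu)
{
    for (int i = 0; i < p->ncpus; i++)
        if (p->cpus[i] == cpu)
            return 1;
    return 0;
}

/* Run fn(arg) "on" cpu.  If cpu lives in the other pool: pause that
 * pool's domains, temporarily move the cpu into the caller's pool,
 * finish the hypercall, move the cpu back, and unpause. */
static int continue_on_cpu(struct pool *caller, struct pool *other,
                           int cpu, int (*fn)(void *), void *arg)
{
    int ret;

    if (pool_has_cpu(caller, cpu))
        return fn(arg);             /* normal case: no borrowing needed */

    other->paused = 1;              /* pause all domains in the other pool */
    pool_remove_cpu(other, cpu);    /* move cpu to the caller's pool */
    pool_add_cpu(caller, cpu);

    ret = fn(arg);                  /* finish the hypercall on that cpu */

    pool_remove_cpu(caller, cpu);   /* move cpu back */
    pool_add_cpu(other, cpu);
    other->paused = 0;              /* unpause the other pool's domains */

    return ret;
}

/* Example continuation for the scenario in the mail (illustrative). */
static int example_fn(void *arg) { (void)arg; return 42; }
```

In the example from the mail (pool A = cpus 0,1; pool B = cpus 2,3), calling `continue_on_cpu(&a, &b, 2, ...)` briefly grows pool A to three cpus while pool B's domains sit paused, which is exactly the ugliness being discussed.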
A use case from me: I want a pool that passes pcpus through to mission-critical
domains. A scheduling algorithm in this pool will map vcpus to pcpus one to
one. That implements reliable hard partitioning, although it gives up some of
the benefits of virtualization.

And we still want a pool using the credit scheduler for common domains.
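The dedicated-pool scheduler described above would be almost trivial. A hypothetical sketch (not Xen's scheduler interface; `struct vcpu`, `pick_pcpu`, and `POOL_NCPUS` are invented names) of the 1:1 mapping:

```c
#include <assert.h>

#define POOL_NCPUS 4            /* pcpus assigned to the dedicated pool */

/* Minimal stand-in for a vcpu; illustrative only. */
struct vcpu { int vcpu_id; };

/* Pin vcpu i to the pool's i-th pcpu.  With at least as many pcpus as
 * vcpus, nothing is ever multiplexed: this gives hard partitioning at
 * the cost of consolidation (the lost virtualization benefit noted
 * above). */
static int pick_pcpu(const int pool_pcpus[], const struct vcpu *v)
{
    assert(v->vcpu_id < POOL_NCPUS);    /* 1:1 requires enough pcpus */
    return pool_pcpus[v->vcpu_id];
}
```

Since every vcpu has a dedicated pcpu, the "scheduler" never preempts, while the credit scheduler keeps time-sharing the common pool.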



>  -George
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
