Re: [Xen-devel] Cpu pools discussion
George Dunlap wrote:
> Keir (and community),
>
> Any thoughts on Juergen Gross' patch on cpu pools?
>
> As a reminder, the idea is to allow "pools" of cpus that would have
> separate schedulers. Physical cpus and domains can be moved from one
> pool to another only by an explicit command. The main purpose Fujitsu
> seems to have is to allow a simple machine "partitioning" that is more
> robust than using simple affinity masks. Another potential advantage
> would be the ability to use different schedulers for different
> purposes.
>
> For my part, it seems like they should be OK. The main thing I don't
> like is the ugliness related to continue_hypercall_on_cpu(), described
> below.
>
> Juergen, could you remind us what the advantages of pools in the
> hypervisor are, versus just having affinity masks (with maybe some
> sugar in the toolstack)?
>
> Re the ugly part of the patch, relating to continue_hypercall_on_cpu():
>
> Domains are assigned to a pool, so
> if continue_hypercall_on_cpu() is called for a cpu not in the domain's
> pool, you can't just run it normally. Juergen's solution (IIRC) was to
> pause all domains in the other pool, temporarily move the cpu in
> question to the calling domain's pool, finish the hypercall, then move
> the cpu in question back to the other pool.
>
> Since there are a lot of antecedents in that, let's take an example:
>
> Two pools; Pool A has cpus 0 and 1, pool B has cpus 2 and 3.
>
> Domain 0 is running in pool A, domain 1 is running in pool B.
>
> Domain 0 calls "continue_hypercall_on_cpu()" for cpu 2.
>
> Cpu 2 is in pool B, so Juergen's patch:
> * Pauses domain 1
> * Moves cpu 2 to pool A
> * Finishes the hypercall
> * Moves cpu 2 back to pool B
> * Unpauses domain 1
>
> That seemed a bit ugly to me, but I'm not familiar enough with the use
> cases or the code to know if there's a cleaner solution.
>
A use case from me: I want a pool that passes pcpus through to
mission-critical domains. A scheduling algorithm in this pool will map
vcpus to pcpus one-to-one, implementing reliable hard partitioning,
although it loses some of the benefits of virtualization. We would
still want a separate pool using the credit scheduler for common
domains.
thanks,
zhigang
> -George
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel