xen-devel
Re: [Xen-devel] Cpu pools discussion
At 13:50 +0100 on 28 Jul (1248789008), George Dunlap wrote:
> On Tue, Jul 28, 2009 at 11:15 AM, Juergen Gross
> <juergen.gross@xxxxxxxxxxxxxx> wrote:
> > Tim Deegan wrote:
> >> That's easily done by setting affinity masks in the tools, without
> >> needing any mechanism in Xen.
> >
> > More or less.
> > You have to set the affinity masks for ALL domains to avoid
> > scheduling on the "special" cpus.
Bah. You have to set the CPU pool of all domains to achieve the same
thing; in any case this kind of thing is what toolstacks are good at. :)
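The toolstack approach described above can be sketched as follows. This is an illustrative model only, not real xend/xm code; the function and the pool/domain names are hypothetical. The point it demonstrates is Juergen's: to emulate a pool with affinity masks, every domain on the host must be pinned, not just the domains in the "special" pool, or their vCPUs could still land on the reserved CPUs.

```python
# Hypothetical toolstack logic: emulate CPU pools purely with
# per-domain affinity masks. Note that EVERY domain gets a mask,
# including those in the default pool.

def pool_masks(pools, domains):
    """pools:   {pool_name: set of physical CPU ids}
    domains: {domain_name: pool_name}
    Returns the affinity mask (set of CPUs) to apply to each domain."""
    return {dom: pools[pool] for dom, pool in domains.items()}

pools = {"default": {0, 1, 2, 3}, "special": {4, 5}}
domains = {"dom0": "default", "guest-a": "default", "rt-guest": "special"}

for dom, mask in pool_masks(pools, domains).items():
    # A real toolstack would now pin each of the domain's vCPUs
    # (e.g. via the equivalent of `xm vcpu-pin`) to this mask.
    print(dom, sorted(mask))
```

Leaving even one domain unpinned breaks the partition, which is exactly the per-domain bookkeeping burden the pool abstraction would move into a single assignment.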
> > You won't have reliable scheduling weights any more.
That's a much more interesting argument. It seems to me that in this
simple case the scheduling weights will work out OK, but I can see that
in the general case it gets entertaining.
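One way to see why weights become unreliable under pinning is a toy proportional-share calculation (this is a deliberate simplification, not the credit scheduler; it ignores how unused capacity is redistributed). A domain's weight may entitle it to more CPU than its affinity mask can physically deliver, so the configured weight ratio is silently not honoured:

```python
# Toy model (an assumption for illustration, not Xen's algorithm):
# each domain's fair share is total_cpus * weight / sum(weights),
# but pinning caps it at the size of its affinity mask.

def effective_share(weights, masks, total_cpus):
    total_w = sum(weights.values())
    return {d: min(total_cpus * w / total_w, len(masks[d]))
            for d, w in weights.items()}

weights = {"A": 512, "B": 256}          # intended 2:1 ratio
masks = {"A": {0}, "B": {0, 1, 2, 3}}   # A pinned to a single CPU

shares = effective_share(weights, masks, total_cpus=4)
# A's weight entitles it to about 2.67 CPUs of a 4-CPU host, but the
# pin caps it at 1.0, so the 2:1 ratio with B cannot be delivered.
```

With strict pools the same cap exists, but weights at least remain meaningful *within* each pool, since every domain in a pool competes over the same set of CPUs.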
> Given that people want to partition a machine, I think cpu pools makes
> the most sense:
> * From a user perspective it's easier; there's no need to pin every
> VM, you simply assign the pool it starts in
I'll say it again because I think it's important: policy belongs in the
tools. User-friendly abstractions don't have to extend into the
hypervisor interfaces unless...
> * From a scheduler perspective, it makes thinking about the algorithms
> easier. It's OK to build in the assumption that each VM can run
> anywhere. Other than partitioning, there's no real need to adjust the
> scheduling algorithm to do it.
...unless there's a benefit to keeping the hypervisor simple. Which
this certainly looks like.
Does strict partitioning of CPUs like this satisfy everyone's
requirements? Bearing in mind that
- It's not work-conserving, i.e. it doesn't allow best-effort
scheduling of pool A's vCPUs on the idle CPUs of pool B.
- It restricts the maximum useful number of vCPUs per guest to the size
of a pool rather than the size of the machine.
- dom0 would be restricted to a subset of CPUs. That seems OK to me
but occasionally people talk about having dom0's vCPUs pinned 1-1 on
the physical CPUs.
Cheers,
Tim.
--
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Citrix Systems (R&D) Ltd.
[Company #02300071, SL9 0DZ, UK.]