Re: [Xen-devel] credit2's csched_init() registering of a CPU notifier
csched_alloc_pdata() only calls init_pcpu() for cpu 0.
The problem here is that we have a dependency loop. credit2 (at the
moment) wants to have one runqueue per socket. However, at the time the
first csched_alloc_pdata() is called, the cpu topology information is
not yet available. So we register a cpu notifier instead, and for every
cpu except cpu 0 we call init_pcpu() from that callback once the cpu is
up (at which point the topology information for that cpu is known).
Cpu 0 never gets a callback; so we special-case csched_alloc_pdata() to
call init_pcpu() for cpu 0, and to always assign cpu 0 to runqueue 0.
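Roughly, the shape of it is the following (a simplified sketch, not the
actual sched_credit2.c code; the helper names, exact signatures and the
per_cpu(scheduler, cpu) lookup should be treated as approximations):

/* cpu 0 is initialised directly from alloc_pdata (and always lands on
 * runqueue 0); every other cpu waits for its CPU_STARTING notification,
 * by which time its topology is known. */
static void *csched_alloc_pdata(const struct scheduler *ops, int cpu)
{
    if ( cpu == 0 )
        init_pcpu(ops, 0);   /* no callback will ever fire for cpu 0 */
    /* Other cpus are picked up by csched_cpu_callback() below. */
    return (void *)1;        /* dummy non-NULL; per-cpu state hangs off the runqueues */
}

static int csched_cpu_callback(struct notifier_block *nfb,
                               unsigned long action, void *hcpu)
{
    unsigned int cpu = (unsigned long)hcpu;

    if ( action == CPU_STARTING && cpu != 0 )
        init_pcpu(per_cpu(scheduler, cpu), cpu);  /* topology known by now */

    return NOTIFY_DONE;
}

static struct notifier_block cpu_nfb = {
    .notifier_call = csched_cpu_callback
};

static int csched_init(struct scheduler *ops)
{
    register_cpu_notifier(&cpu_nfb);   /* the registration in question */
    /* ... rest of the scheduler-instance init ... */
    return 0;
}
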
I know that this is almost certainly incredibly broken WRT cpu pools at
the moment. I will fix it, but ATM it's just not a priority.
-George
On Fri, 2011-03-18 at 12:25 +0000, Jan Beulich wrote:
> >>> On 18.03.11 at 10:24, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> > George,
> >
> > as ->init() can be called more than once (for CPU pools) it seems
> > wrong to do any global initialization in ->init(). The question is
> > whether it's worth adding a ->global_init(), or whether instead
> > a callout from the notifier that schedule.c sets up wouldn't be a
> > better mechanism (though that would require maintaining a list
> > of scheduler instances).
>
> Just moving this onto a global_init doesn't work (crashes), and
> looking at what the notifier handler does I wonder why it's
> needed at all - csched_alloc_pdata() also calls init_pcpu(), and
> that ought to be the canonical way. Plus there's also this
> somewhat frightening comment "Hope this is safe from cpupools
> switching things around. :-)" in csched_cpu_starting().
> Minimally I think there needs to be a check that *ops really is
> credit2's.
>
> Jan
>
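For illustration, the minimal check being asked for could look something
like the sketch below. It is only a sketch: comparing sched_id is one
possible test, and note that it only verifies the cpu is run by *a*
credit2 instance, not by this particular one, so it does not by itself
resolve the cpupools question.

static void csched_cpu_starting(int cpu)
{
    struct scheduler *ops = per_cpu(scheduler, cpu);

    /* Bail out unless this cpu is actually being scheduled by credit2,
     * e.g. if a cpupool has put it under a different scheduler. */
    if ( ops == NULL || ops->sched_id != XEN_SCHEDULER_CREDIT2 )
        return;

    init_pcpu(ops, cpu);
}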