

Re: [Xen-devel] Cpupools and pdata_alloc

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Cpupools and pdata_alloc
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Tue, 11 May 2010 12:25:21 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
In-reply-to: <4BE8E1B1.2030305@xxxxxxxxxxxxxx>
References: <AANLkTin3dC9sCDP1-zwKog6BZtwaHAmkt53NqwAZtiNw@xxxxxxxxxxxxxx> <4BE8E1B1.2030305@xxxxxxxxxxxxxx>
On Mon, May 10, 2010 at 11:48 PM, Juergen Gross
<juergen.gross@xxxxxxxxxxxxxx> wrote:
> No. It happens when idle vcpus are allocated. At this time there is no
> cpupool existing; all physical cpus are marked as "free", i.e. they are
> in no pool at all.
> Dom0 vcpus are allocated in Pool-0. This pool is created after
> allocation of the idle vcpus.

Yeah, I spent some time tracing through the init code yesterday and
figured that out.  So it appears that at init time, regarding cpupools
and schedulers:
* init_idle_domain() calls schedule_init(), which calls ops->init()
with a sort of "default" ops pointer.  It also calls sched_init_vcpu()
for the idle domain's vcpu 0, which will call ops->alloc_pdata for cpu 0.
* smp_prepare_cpus will eventually call do_boot_cpu for each online
cpu.  do_boot_cpu will initialize the idle domain vcpu for that cpu,
which will call ops->alloc_pdata for that cpu (again, with the
"default" ops structure, as for cpu 0).
 - At this point, all online cpus have had alloc_pdata called, albeit
with the "default" ops structure in the scheduler.
* cpupool_create will create cpupool 0, calling sched_init(), which
calls ops->init with the cpupool0 ops structure.
* cpupool0_cpu_assign will then un-assign all online cpus from the
"default" ops structure and re-assign them into cpupool 0.
Re-assigning looks like this:
 - First, alloc_pdata and then alloc_vdata are called on the new
cpupool ops structure, for the physical cpu and the idle vcpu
respectively.
 - Ticks are disabled on the old ops structure, then resumed on the
new ops structure.
 - The idle vcpu is added to the new pool.
 - Finally, free_vdata and free_pdata are called on the old cpupool
ops structure for the idle vcpu and physical cpu, respectively.

Now all online cpus have idle vcpus and pdatas initialized, and set up
for cpupool 0.

Is that a pretty accurate picture?

> BTW: Allocating the percpu data of the scheduler during the allocation
> of the first vcpu on this cpu was in sched_credit.c before cpupools
> were introduced.

Yes, I remember that.  IIRC it had something to do with the timer
infrastructure not being ready during sched_init, so init_cpu was used
as a hook to set up the per-cpu tick after the timer infrastructure
had been initialized.  Although, looking at the code again, it's more
likely that I thought that because when sched_init was called, only
one cpu was online.

Keir, out of curiosity, is there a reason init_idle_domain() (and thus
schedule_init()) is called so early, before all of the cpus are up?
Is it so that adding a cpu dynamically and at boot (both of which need
to init it, add an idle vcpu, &c) take the same codepath?

> Not yet.
> I'll write something up in the next days.

Cool, thanks.


Xen-devel mailing list