Re: [Xen-devel] Cpu pools discussion

To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: George Dunlap <dunlapg@xxxxxxxxx>
Date: Tue, 28 Jul 2009 14:41:26 +0100
Cc: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>, Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
On Tue, Jul 28, 2009 at 2:31 PM, Tim Deegan <Tim.Deegan@xxxxxxxxxx> wrote:
> At 14:24 +0100 on 28 Jul (1248791073), Juergen Gross wrote:
>> > Does strict partitioning of CPUs like this satisfy everyone's
>> > requirements?  Bearing in mind that
>> >
>> >  - It's not work-conserving, i.e. it doesn't allow best-effort
>> >    scheduling of pool A's vCPUs on the idle CPUs of pool B.
>> >
>> >  - It restricts the maximum useful number of vCPUs per guest to the size
>> >    of a pool rather than the size of the machine.
>> >
>> >  - dom0 would be restricted to a subset of CPUs.  That seems OK to me
>> >    but occasionally people talk about having dom0's vCPUs pinned 1-1 on
>> >    the physical CPUs.
>>
>> You don't have to define other pools. You can just live with the default pool
>> extended to all cpus, and everything is as it is today.
>
> Yep, all I'm saying is you can't do both.  If the people who want this
> feature (so far I count two of you) want to do both, then this
> solution's not good enough, and we should think about that before going
> ahead with it.

Yes, if you have more than one pool, then dom0 can't run on all cpus;
but its vcpus can still be pinned 1-1 to the physical cpus in its own
pool.
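
To make that concrete, here is a purely illustrative sketch (the struct
names and helper below are made up for the example, not taken from the
patch under discussion): a pool is modelled as a bitmask of physical
cpus, and dom0's vcpus are assigned 1-1 to the pcpus of that pool in
ascending order.

#include <stdio.h>
#include <stdint.h>

#define MAX_CPUS 64

/* Illustrative only: a pool is just a named bitmask of physical cpus. */
struct cpupool {
    const char *name;
    uint64_t cpu_mask;          /* bit i set => pcpu i belongs to this pool */
};

struct vcpu {
    int id;
    int pinned_pcpu;            /* -1 = not pinned yet */
};

/* Pin each vcpu to the next pcpu of the pool, in ascending order. */
static int pin_vcpus_1to1(struct vcpu *vcpus, int nr_vcpus,
                          const struct cpupool *pool)
{
    int v = 0;

    for (int cpu = 0; cpu < MAX_CPUS && v < nr_vcpus; cpu++) {
        if (pool->cpu_mask & (1ULL << cpu))
            vcpus[v++].pinned_pcpu = cpu;
    }
    return v;                   /* number of vcpus actually pinned */
}

int main(void)
{
    /* dom0's pool owns pcpus 0-3; the rest of the machine is in other pools. */
    struct cpupool pool0 = { "Pool-0", 0x0f };
    struct vcpu dom0_vcpus[] = { {0, -1}, {1, -1}, {2, -1}, {3, -1} };

    int pinned = pin_vcpus_1to1(dom0_vcpus, 4, &pool0);

    for (int i = 0; i < pinned; i++)
        printf("dom0 vcpu%d -> pcpu%d\n",
               dom0_vcpus[i].id, dom0_vcpus[i].pinned_pcpu);
    return 0;
}

With a 4-cpu Pool-0 this prints a 1-1 map (vcpu0->pcpu0, ..., vcpu3->pcpu3),
which is all the pinned-dom0 case needs once the pool's cpus are known.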

I'm not sure why someone who wants to partition a machine would
simultaneously want dom0 to run across all cpus...

As Juergen says, for people who don't use the feature, it shouldn't
have any real effect.  The patch is pretty straightforward, except for
the "continue_hypercall_on_cpu()" bit.

 -George
