To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support vcpus, add vcpu to cpu map
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Thu, 14 Apr 2005 12:41:59 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 14 Apr 2005 17:41:55 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3B81@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3B81@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-04-14 11:58]:
> > > "xm pincpu mydom 1 2,4-6" which would allow VCPU 1 of mydom 
> > to run on 
> > > CPUs 2,4 and 5 but no others. -1 would still mean "run anywhere". 
> > > Having this functionality is really important before we can 
> > implement 
> > > any kind of CPU load ballancer.
> > 
> > Interesting idea.  I don't see anything in the schedulers
> > that would take advantage of that sort of definition.  AFAIK,
> > exec_domains are never migrated unless told to do so via
> > pincpu.  Does the new scheduler do this?  Or is this more a
> > matter of setting up the rules that the load balancer would
> > query to find out where it can migrate vcpus?
> 
> I see having this as a prerequisite for any fancy new scheduler (or,
> as a first step, a CPU load balancer). Without it, I think it'll be
> scheduling anarchy :-)

OK.  Makes sense; that sounds like a separate patch.  I was thinking a
u32 bitmap, but that doesn't give us the -1, run-anywhere case.  Maybe
an EDF_USEPINMAP flag plus a u32 bitmap: if EDF_USEPINMAP is set, the
balancer/scheduler looks at the bitmap to see which cpus the vcpu can
run on; if it is not set, the vcpu can run anywhere.
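
Roughly what I have in mind, as an untested sketch (the flag bit,
struct, and function names below are placeholders, not what an actual
patch would use):

/* Sketch only, not the final patch: parse an "xm pincpu" cpu list
 * like "2,4-6" into a 32-bit map, with "-1" meaning run-anywhere.
 * EDF_USEPINMAP and the cpumap field are placeholder names. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define EDF_USEPINMAP (1UL << 5)      /* hypothetical flag bit */

struct vcpu_sketch {
    unsigned long flags;
    uint32_t      cpumap;             /* bit n set => may run on CPU n */
};

/* Returns 0 on success, -1 on a malformed list. */
static int set_pinmap(struct vcpu_sketch *v, const char *list)
{
    if ( strcmp(list, "-1") == 0 )    /* -1: run anywhere */
    {
        v->flags &= ~EDF_USEPINMAP;
        v->cpumap = ~0u;
        return 0;
    }

    v->cpumap = 0;
    while ( *list )
    {
        char *end;
        long lo = strtol(list, &end, 10), hi;
        if ( end == list || lo < 0 || lo > 31 )
            return -1;
        hi = lo;
        if ( *end == '-' )            /* range like "4-6", inclusive */
        {
            hi = strtol(end + 1, &end, 10);
            if ( hi < lo || hi > 31 )
                return -1;
        }
        for ( ; lo <= hi; lo++ )
            v->cpumap |= 1u << lo;
        if ( *end == ',' )
            end++;
        list = end;
    }
    v->flags |= EDF_USEPINMAP;
    return 0;
}

/* Scheduler/balancer side: may this vcpu run on this cpu? */
static int vcpu_may_run_on(const struct vcpu_sketch *v, int cpu)
{
    return !(v->flags & EDF_USEPINMAP) || ((v->cpumap >> cpu) & 1);
}

int main(void)
{
    struct vcpu_sketch v = { 0, 0 };
    if ( set_pinmap(&v, "2,4-6") == 0 )
        printf("cpumap = 0x%08x, cpu 3 ok: %d, cpu 5 ok: %d\n",
               (unsigned)v.cpumap,
               vcpu_may_run_on(&v, 3), vcpu_may_run_on(&v, 5));
    return 0;
}

The -1 case just clears the flag, so the scheduler never has to
special-case it.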

> > > Secondly, I think it would be really good if we could have some
> > > hierarchy in CPU names. Imagine a 4 socket system with dual core
> > > hyperthreaded CPUs. It would be nice to be able to specify the 3rd
> > > socket, 1st core, 2nd hyperthread as CPU "2.0.1".
> > >
> > > Where we're on a system without one of the levels of hierarchy, we
> > > just miss it off. E.g. a current SMP Xeon box would be "x.y". This
> > > would be much less confusing than the current scalar representation.
> > 
> > I like the idea of being able to specify "where" the vcpu runs more
> > explicitly than 'cpu 0', which does not give any indication of
> > physical cpu characteristics.  We would probably still need to
> > provide a simple mapping, but allow the pincpu interface to support
> > a more specific target as well as the more generic one.
> > 
> > 2-way hyperthreaded box:
> > CPU     SOCKET.CORE.THREAD
> > 0       0.0.0
> > 1       0.0.1
> > 2       1.0.0
> > 3       1.0.1
> > 
> > That look sane?
> 
> Yep, that's what I'm thinking. I think it's probably worth squeezing
> out unused levels of hierarchy, e.g. just having SOCKET.THREAD in the
> above example.

OK.  I'll see how the implementation looks when I'm done.  It sounds
nice though.
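
For the name <-> cpu mapping, a minimal sketch, assuming uniform
counts at each level of the hierarchy (the struct and field names
like cores_per_socket are illustrative only):

/* Sketch: map between a linear CPU number and a SOCKET.CORE.THREAD
 * name.  Assumes uniform counts at each level. */
#include <stdio.h>

struct topo {
    int cores_per_socket;
    int threads_per_core;
};

static int name_to_cpu(struct topo t, int socket, int core, int thread)
{
    return (socket * t.cores_per_socket + core) * t.threads_per_core
           + thread;
}

static void cpu_to_name(struct topo t, int cpu,
                        int *socket, int *core, int *thread)
{
    *thread = cpu % t.threads_per_core;
    cpu    /= t.threads_per_core;
    *core   = cpu % t.cores_per_socket;
    *socket = cpu / t.cores_per_socket;
}

int main(void)
{
    /* The 2-way hyperthreaded box from the table above:
     * 2 sockets x 1 core x 2 threads. */
    struct topo t = { 1, 2 };
    int cpu, s, c, th;

    for ( cpu = 0; cpu < 4; cpu++ )
    {
        cpu_to_name(t, cpu, &s, &c, &th);
        printf("CPU %d -> %d.%d.%d (back to %d)\n",
               cpu, s, c, th, name_to_cpu(t, s, c, th));
    }
    return 0;
}

That reproduces the table above; with cores_per_socket = 1 the middle
digit is always 0, which is where squeezing out unused levels
(SOCKET.THREAD) would come in.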

> Keeping it pretty generic makes sense too. E.g. imagine a big ccNUMA
> system with a 'node' level above that of the actual CPU socket.

Sure, I'll look at the Linux cpu groups stuff and the Linux topology
code to see if there is anything like this there.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel