WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: Sam Gill <samg@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xen, tools/python/xen: pincpu support, vcpus, add vcpu to cpu map
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Thu, 14 Apr 2005 12:51:41 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 14 Apr 2005 17:51:42 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <425EA997.8060409@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <E1DM7eW-0002zR-9S@host-192-168-0-1-bcn-london> <425EA997.8060409@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Sam Gill <samg@xxxxxxxxxxxxx> [2005-04-14 12:31]:
> tool that just shows
> you how many cpus you have to work with. (also a debugging tool, to see 

Yeah, I think we should add something that better shows the available
resources.  Currently, the total number of physical CPUs a system has
isn't exposed in any obvious location.

> such as "xm pincpu-show" and "xm pincpu-show-details" for a more verbose 
> listing

What would these look like?

> Then the next step would be creating some helper functions "xm 
> pincpu-add" so you could add a cpu to
> a domain, or "xm pincpu-move" to move a cpu from one domain to another. 
> In addition you could have
> "xm pincpu-lock"/"xm pincpu-unlock" which would only allow one single 
> domain to access that cpu.

I think the mapping Ian mentioned as needed for load-balancing would
achieve that, but we could certainly create an interface wrapper, like
lock/unlock, that translates into the correct mapping command.
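To make the lock/unlock idea concrete, here is a minimal sketch of how such
a wrapper could translate into plain pinning state.  All names here are
illustrative assumptions, not anything from the patch: I'm just modeling each
domain's allowed physical CPUs as a set and making "lock" mean removing that
CPU from every other domain.

```python
# Hypothetical sketch: how an "xm pincpu-lock"-style wrapper might reduce
# to ordinary pinning.  `assignments` maps domain name -> set of physical
# CPUs the domain's VCPUs may run on.  (Names and data model are assumed
# for illustration only.)

def pincpu_lock(assignments, domain, cpu):
    """Give `domain` exclusive use of physical CPU `cpu` by removing that
    CPU from every other domain's allowed set and adding it to `domain`'s."""
    for dom, cpus in assignments.items():
        if dom != domain:
            cpus.discard(cpu)
    assignments[domain].add(cpu)
    return assignments

def pincpu_unlock(assignments, cpu):
    """Release `cpu` back to all domains (the time-shared default)."""
    for cpus in assignments.values():
        cpus.add(cpu)
    return assignments
```

The point of the sketch is only that lock/unlock needs no new mechanism in
the hypervisor: it is expressible entirely as a sequence of per-domain
mapping updates.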

> I am just thinking that maybe if you detail (if you have already not 
> done so) what you want the end result to
> be, than it might be easier to figure out how to implement the lower 
> level functions more efficiently.

No, these are good things to be talking about.  The goal of this patch
was to allow us to pin VCPUs, mainly so we can test space-sharing versus
time-sharing of VCPUs.  That is, on a 4-way SMP box with two domUs, each
with four VCPUs, what is the performance difference between each domU
getting 2 dedicated physical CPUs to run its 4 VCPUs versus both domUs
having access to all 4 physical CPUs on which to run their 4 VCPUs?
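For concreteness, the two test configurations can be written out as CPU
affinity bitmasks.  This assumes the pinning interface takes a bitmask of
allowed physical CPUs per VCPU (as CPU-affinity interfaces commonly do);
the actual encoding in the patch may differ.

```python
# Sketch of the space-sharing vs. time-sharing test configurations on a
# 4-way box with two 4-VCPU domUs.  Bitmask encoding is an assumption.

def cpumask(cpus):
    """Build a physical-CPU affinity bitmask from a list of CPU numbers."""
    mask = 0
    for c in cpus:
        mask |= 1 << c
    return mask

# Space-sharing: dom1's 4 VCPUs confined to CPUs 0-1, dom2's to CPUs 2-3.
space_sharing = {
    "dom1": [cpumask([0, 1])] * 4,   # each VCPU may run on CPU 0 or 1
    "dom2": [cpumask([2, 3])] * 4,   # each VCPU may run on CPU 2 or 3
}

# Time-sharing: every VCPU of both domUs may run on any of the 4 CPUs.
time_sharing = {
    dom: [cpumask([0, 1, 2, 3])] * 4 for dom in ("dom1", "dom2")
}
```

In the space-sharing case eight VCPUs contend pairwise for two CPUs each;
in the time-sharing case all eight contend for all four, which is exactly
the scheduling difference the benchmark would measure.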

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel