xen-devel

Re: [Xen-devel] SMP guest support in unstable tree.

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] SMP guest support in unstable tree.
From: Andrew Theurer <habanero@xxxxxxxxxx>
Date: Wed, 05 Jan 2005 09:06:10 -0600
Delivery-date: Wed, 05 Jan 2005 15:13:34 +0000
Envelope-to: xen+James.Bulpin@xxxxxxxxxxxx
In-reply-to: <20050105142331.GO8251@xxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <20041215232547.GA16409@xxxxxxxxxxxx> <41DAC755.9050409@xxxxxxxxxx> <20050105142331.GO8251@xxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 0.8 (Windows/20040913)
Christian Limpach wrote:

> On Tue, Jan 04, 2005 at 10:41:57AM -0600, Andrew Theurer wrote:
>> Do you think there would be any room for "dedicated" cpus in a domain,
>> like a one-to-one mapping of physical cpu to domain cpu? I am asking
>> because I think there would be situations where (a) one would want to
>> discretely divide a large system, in particular one with numa
>> characteristics where one could dedicate cpus and memory close to each
>> other and
>
> We have a one-to-one mapping (pinning) of virtual cpus to physical cpus --
> if you don't allocate multiple virtual cpus to the same physical cpu, then
> the physical cpu becomes implicitly dedicated to that domain.

OK, great, this is essentially the option I wanted, thanks!
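
For concreteness, a minimal sketch of what such a dedicated layout could look like in a guest config file (xm config files are just Python). The option names below (vcpus, cpus) are assumptions on my part and may not match what the tools in the unstable tree actually accept:

    # hypothetical domain config sketch: a 2-vcpu guest whose virtual cpus
    # are confined to physical cpus 2 and 3; if no other domain is allowed
    # to run there, those cpus are effectively dedicated to this domain
    name   = "smp-guest"
    memory = 512
    vcpus  = 2        # number of virtual cpus in the guest
    cpus   = "2-3"    # physical cpus the vcpus may be scheduled on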

> This mapping can be changed dynamically, at least at the Xen level -- the
> tools don't have support for changing the mapping of SMP guests yet.  We
> also don't support enforcing allocation policies yet.
>
>> (b) perhaps in this one to one mapping, there might be less overhead of
>> managing cpus in a domain, vs (assuming) some sort of timesharing of a
>> physical cpu to many domains, and even more than one virtual cpu in just
>> one domain.
>
> I don't think there's significant overhead if there's only a single
> virtual cpu pinned to one physical cpu so I wouldn't expect a noticeable
> performance advantage if we handled this case differently.

Hopefully soon I can get some performance tests going and we can see if there are any issues here. My other concern is that on larger (multi numa-node) systems, even with one-to-one mapping, the hardware topology (numa) information does not make it to the SMP guest - it would be nice to take advantage of the numa work developed in the linux kernel over the last 2 years. I am not sure exactly what impact this could have.
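
As an illustration of the guest-side view (a sketch, not something I have run under Xen): a numa-aware linux kernel exposes its idea of the topology under /sys/devices/system/node/, so a quick check like the one below would presumably report a single node inside an SMP guest today, however the underlying cpus are spread across host nodes. The exact sysfs file names (cpulist vs cpumap) vary by kernel version.

    # sketch: print which cpus the kernel assigns to each numa node
    import glob, os

    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        with open(os.path.join(node, "cpulist")) as f:
            print(os.path.basename(node), "cpus:", f.read().strip())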

>> Anyway, I am mostly curious at this point. This is just what I have seen
>> in the ppc/power5 world, a choice of dedicated cpus (however, if they are
>> idle that cpu can be "shared" if desired) or virtual cpus (up to 64 I
>> think) backed by N physical cpus.
>
> I think we need load balancing software and we also need to get
> measurements to see what's the cost of moving virtual cpus between
> physical cpus (or hyperthreads) and what impact service domains have
> on the scheduling and load balancing decisions.

Agreed, thanks for the info.

-Andrew Theurer


