Re: [Xen-devel] [PATCH] numa: select nodes by cpu affinity

To: Andrew Jones <drjones@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] numa: select nodes by cpu affinity
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 4 Aug 2010 17:15:26 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "dulloor@xxxxxxxxx" <dulloor@xxxxxxxxx>
Delivery-date: Wed, 04 Aug 2010 09:16:21 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C598EF4.50001@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acsz7luuAAl40QZgSZKXmEbu1XxBwgAAePGF
Thread-topic: [Xen-devel] [PATCH] numa: select nodes by cpu affinity
User-agent: Microsoft-Entourage/12.24.0.100205
On 04/08/2010 17:01, "Andrew Jones" <drjones@xxxxxxxxxx> wrote:

> I also considered managing the nodemask as new domain state, as you do,
> as it may come in useful elsewhere, but my principle-of-least-patch
> instincts kept me from doing it...

Yeah, I don't fancy iterating over all vcpus for every little bitty
allocation. So for me it's mainly a perf thing.
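
For illustration, a minimal sketch of that cached-nodemask idea, assuming
Xen-style iterators (for_each_vcpu, for_each_online_node) and an
illustrative d->node_affinity field; the committed patch's exact fields
and signatures may differ:

    /*
     * Sketch only, not the applied patch: recompute a cached nodemask
     * whenever vcpu affinities change, so per-allocation code can read
     * d->node_affinity instead of walking the whole vcpu list each time.
     */
    void domain_update_node_affinity(struct domain *d)
    {
        nodemask_t nodes = NODE_MASK_NONE;   /* start with no nodes set */
        struct vcpu *v;
        unsigned int node;

        for_each_online_node ( node )
        {
            cpumask_t node_cpus = node_to_cpumask(node);

            /* A node matters if any vcpu may run on one of its cpus. */
            for_each_vcpu ( d, v )
                if ( cpumask_intersects(&node_cpus, &v->cpu_affinity) )
                {
                    node_set(node, nodes);
                    break;
                }
        }

        d->node_affinity = nodes;            /* illustrative cached field */
    }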

> I'm not sure about keeping track of the last_alloc_node and then always
> avoiding it (at least when there's more than one node) by checking it
> last. I liked the way it worked before, favoring the node of the
> currently running processor, but I don't have any perf numbers to know
> which would be better.

Well, you can expect vcpus to move around within their affinity masks over
moderate timescales (say, seconds or minutes). And in fact the original
credit scheduler *loves* to migrate vcpus around the place over much less
reasonable timescales than that (sub-second). It is nice to balance our
allocations rather than hitting one node 'unfairly' hard.
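
One hedged way to realise that balancing (next_alloc_node() and the
d->last_alloc_node / d->node_affinity fields are illustrative names, not
necessarily what the tree uses): start each search just past the
previously used node, so that node is effectively considered last.

    /*
     * Sketch only: round-robin through the domain's affine nodes,
     * beginning after last_alloc_node, so back-to-back allocations are
     * spread across nodes instead of hammering one node repeatedly.
     */
    static nodeid_t next_alloc_node(struct domain *d)
    {
        nodeid_t node = d->last_alloc_node;
        unsigned int i;

        for ( i = 0; i < MAX_NUMNODES; i++ )
        {
            node = (node + 1) % MAX_NUMNODES;
            if ( node_isset(node, d->node_affinity) )
                break;   /* first affine node after the previous pick */
        }

        d->last_alloc_node = node;   /* re-picks the old node only when it
                                        is the sole affine node */
        return node;
    }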

> I've attached a patch with a couple of minor tweaks. It removes the
> unnecessary clearing of nodes from an already-empty initialized
> nodemask, and also moves a couple of domain_update_node_affinity()
> calls outside for_each_vcpu loops.

Thanks, I tweaked your tweaks (just one tiny optimisation) and applied it,
so it should show up in the staging tree rsn.

 -- Keir

> Andrew



