
To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: [Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Fri, 17 Apr 2009 16:19:03 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 17 Apr 2009 08:19:50 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49E899140200007800046D88@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acm/ZJh2PKryLxJtQXK0keK455EOvQACz7Em
Thread-topic: next->vcpu_dirty_cpumask checking at the top of context_switch()
User-agent: Microsoft-Entourage/12.15.0.081119
On 17/04/2009 14:58, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 04/16/09 6:00 PM >>>
>> How big NR_CPUS are we talking about? Is the overhead measurable, or is this
>> a premature micro-optimisation?
> 
> We're shipping Xen in SLE11 with NR_CPUS=255, and I'm convinced we'll be asked
> to up this further in the service packs.
> 
> Also, "micro-optimization" reads to me as if I were aiming at a performance
> issue, but the goal is really just to get stack usage down (and in
> particular, to make it as independent of configuration settings as possible).
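
For scale, a cpumask_t is a bitmap of NR_CPUS bits, so every by-value
argument or on-stack copy of that type costs roughly NR_CPUS/8 bytes of
stack. A minimal sketch of the arithmetic, using simplified definitions
rather than Xen's actual cpumask headers:

    /* Simplified sketch only -- not Xen's real cpumask.h. */
    #include <stdio.h>

    #define NR_CPUS 255
    #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* A cpumask is a bitmap sized by NR_CPUS. */
    typedef struct cpumask {
        unsigned long bits[BITS_TO_LONGS(NR_CPUS)];
    } cpumask_t;

    int main(void)
    {
        /* With NR_CPUS=255 on a 64-bit build this is 4 longs = 32 bytes
         * per copy; at NR_CPUS=4096 each copy would be 512 bytes. */
        printf("sizeof(cpumask_t) = %zu bytes\n", sizeof(cpumask_t));
        return 0;
    }

Passing such masks by pointer keeps the per-call stack cost constant no
matter how high NR_CPUS is configured.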

Well, I'm not against it, at least if it's a reasonably straightforward
patch. Any untangling of cpus_empty() logic should be a separate patch
though. And that should mean that the patch to convert from pass-by-value to
pass-by-pointer is nice and trivial.
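
For illustration, a by-value-to-by-pointer conversion of the kind discussed
above might look like the sketch below; the helper names and the simplified
cpumask_t are hypothetical, not taken from an actual patch:

    /* Illustrative sketch of the conversion, not real Xen code. */
    #include <stdbool.h>
    #include <string.h>

    #define NR_CPUS 255
    #define BITS_PER_LONG (8 * (int)sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    typedef struct cpumask {
        unsigned long bits[BITS_TO_LONGS(NR_CPUS)];
    } cpumask_t;

    /* Before: the whole bitmap is copied onto the stack at every call. */
    static bool mask_is_empty_byval(cpumask_t mask)
    {
        for ( int i = 0; i < BITS_TO_LONGS(NR_CPUS); i++ )
            if ( mask.bits[i] )
                return false;
        return true;
    }

    /* After: only a pointer crosses the call; const documents that the
     * callee does not write through it. */
    static bool mask_is_empty_byref(const cpumask_t *mask)
    {
        for ( int i = 0; i < BITS_TO_LONGS(NR_CPUS); i++ )
            if ( mask->bits[i] )
                return false;
        return true;
    }

    int main(void)
    {
        cpumask_t m;
        memset(&m, 0, sizeof(m));
        /* Call sites only change by taking the address of the mask. */
        return mask_is_empty_byval(m) && mask_is_empty_byref(&m) ? 0 : 1;
    }

At the call sites the only change is taking the address (for example,
&next->vcpu_dirty_cpumask rather than next->vcpu_dirty_cpumask), which is
what keeps such a conversion mechanical.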

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
