This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()

To: <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Fri, 17 Apr 2009 14:58:28 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 17 Apr 2009 06:59:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 04/16/09 6:00 PM >>>
>How big NR_CPUS are we talking about? Is the overhead measurable, or is this
>a premature micro-optimisation?

We're shipping Xen in SLE11 with NR_CPUS=255, and I'm convinced we'll be asked to up this further in the service packs.

Also, "micro-optimisation" reads to me as though I was aiming at a performance issue, but the goal really is just to reduce stack usage (and in particular, to make it as independent of configuration settings as possible).
