This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/



To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH 1/5] xen: events: use irq_alloc_desc(_at) instead of open-coding an IRQ allocator.
From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Date: Tue, 26 Oct 2010 20:49:31 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, "mingo@xxxxxxx" <mingo@xxxxxxx>, "tglx@xxxxxxxxxxxxx" <tglx@xxxxxxxxxxxxx>
Delivery-date: Tue, 26 Oct 2010 12:51:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1288080948.10179.57.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1288023736.11153.40.camel@xxxxxxxxxxxxxxxxxxxxxx> <1288023813-31989-1-git-send-email-ian.campbell@xxxxxxxxxx> <20101025173522.GA5590@xxxxxxxxxxxx> <1288029736.10179.35.camel@xxxxxxxxxxxxxxxxxxxxx> <1288080948.10179.57.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)
On Tue, 26 Oct 2010, Ian Campbell wrote:
> On Mon, 2010-10-25 at 19:02 +0100, Ian Campbell wrote:
> > 
> > 
> > > What do you see when you pass in a PCI device and say give the guest
> > > 32 CPUs??
> > 
> > I can try tomorrow and see; based on what you say above, without
> > implementing what I described I suspect the answer will be "carnage".
> Actually, it looks like multi-vcpu is broken; I only see 1 regardless of
> how many I configured. It's not clear if this is breakage in Linus'
> tree, something I pulled in from one of Jeremy's, yours, or Stefano's
> trees, or some local PEBKAC. I'll investigate...
I found the bug; it was introduced by:

"xen: use vcpu_ops to setup cpu masks"

I have added the fix at the end of my branch and I am also appending the
fix here.


xen: initialize cpu masks for pv guests in xen_smp_init

Pv guests don't have ACPI and need the cpu masks to be set
correctly as early as possible, so we call xen_fill_possible_map from
xen_smp_init. On the other hand the initial domain supports ACPI, so in
this case we skip xen_fill_possible_map and rely on ACPI instead.
However Xen might limit the number of cpus usable by the domain, so we
filter those masks during smp initialization using the VCPUOP_is_up
hypercall.
It is important that the filtering is done before
xen_setup_vcpu_info_placement.
Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 1386767..834dfeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -28,6 +28,7 @@
 #include <asm/xen/interface.h>
 #include <asm/xen/hypercall.h>
 
+#include <xen/xen.h>
 #include <xen/page.h>
 #include <xen/events.h>
 
@@ -156,6 +157,25 @@ static void __init xen_fill_possible_map(void)
 {
        int i, rc;
 
+       if (xen_initial_domain())
+               return;
+
+       for (i = 0; i < nr_cpu_ids; i++) {
+               rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
+               if (rc >= 0) {
+                       num_processors++;
+                       set_cpu_possible(i, true);
+               }
+       }
+}
+
+static void __init xen_filter_cpu_maps(void)
+{
+       int i, rc;
+
+       if (!xen_initial_domain())
+               return;
+
        num_processors = 0;
        disabled_cpus = 0;
        for (i = 0; i < nr_cpu_ids; i++) {
@@ -179,6 +199,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
           old memory can be recycled */
        make_lowmem_page_readwrite(xen_initial_gdt);
 
+       xen_filter_cpu_maps();
        xen_setup_vcpu_info_placement();
 }
 
@@ -195,8 +216,6 @@ static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
        if (xen_smp_intr_init(0))
                BUG();
 
-       xen_fill_possible_map();
-
        if (!alloc_cpumask_var(&xen_cpu_initialized_map, GFP_KERNEL))
                panic("could not allocate xen_cpu_initialized_map\n");
 
@@ -487,5 +506,6 @@ static const struct smp_ops xen_smp_ops __initdata = {
 void __init xen_smp_init(void)
 {
        smp_ops = xen_smp_ops;
+       xen_fill_possible_map();
        xen_init_spinlocks();
 }
