This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [XenPPC] [PATCH/RFC] Schedule idle domain on secondary processors

To: Amos Waterland <apw@xxxxxxxxxx>
Subject: Re: [XenPPC] [PATCH/RFC] Schedule idle domain on secondary processors
From: Jimi Xenidis <jimix@xxxxxxxxxxxxxx>
Date: Tue, 29 Aug 2006 11:31:43 -0400
Cc: xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 29 Aug 2006 08:31:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060829042144.GA13088@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ppc-devel-request@lists.xensource.com?subject=help>
List-id: Xen PPC development <xen-ppc-devel.lists.xensource.com>
List-post: <mailto:xen-ppc-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20060829042144.GA13088@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ppc-devel-bounces@xxxxxxxxxxxxxxxxxxx
This patch check-stops my box.
For those of you with Maples, the 405 console spits out those nasty:
  Error: Magic number in message area NVRAM is not valid.

If I sync the console I get as far as:
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen).

  zImage starting: loaded at 0x00400000 (sp: 0x01ffff90)
  Allocating 0x71bb38 bytes for kernel ...
  OF version = 'IBM,XenOF0.1'
  gunzipping (0x1400000 <- 0x407000:0x680934)...

I serialized the CPUs and it stops earlier; I cannot debug it because I believe the issue is on CPU1.
Am investigating.

On Aug 29, 2006, at 12:21 AM, Amos Waterland wrote:

This patch fixes memory corruption caused by start_of_day, and makes the
secondary processors join the idle domain and become eligible for domU
scheduling.
It is quite stable in that the secondary processors reliably join the
idle domain and wait for free pages to scrub, handling 0x980 interrupts
with no problem.  I have been able to use `xm create' to launch Linux
domU's up to their bash prompt on the secondary processors.
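For anyone unfamiliar with what the idle vcpus are doing between interrupts: they pull dirty pages off the free list and zero them so allocations can hand out clean pages. A minimal stand-alone sketch of that loop (the pool and function names here are hypothetical stand-ins, not Xen's actual scrub API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    4

/* Hypothetical pool of free pages; Xen keeps a real list of
 * not-yet-scrubbed free pages. */
static unsigned char pages[NPAGES][PAGE_SIZE];
static int next_dirty;  /* index of the next page still needing a scrub */

/* Return a dirty free page to scrub, or NULL when none remain. */
static unsigned char *next_page_to_scrub(void)
{
    return next_dirty < NPAGES ? pages[next_dirty++] : NULL;
}

/* What an idle vcpu does when otherwise unoccupied: zero dirty free
 * pages until the backlog is empty.  Returns how many were scrubbed. */
static int idle_scrub_pass(void)
{
    int scrubbed = 0;
    unsigned char *p;

    while ((p = next_page_to_scrub()) != NULL) {
        memset(p, 0, PAGE_SIZE);  /* "scrubbing" = zeroing the page */
        scrubbed++;
    }
    return scrubbed;
}
```

The real loop is of course interruptible (hence the 0x980 decrementer traffic mentioned above); this sketch only shows the scrubbing work itself.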

However, the domU's sometimes hang during initialization. When a domU
hangs, the whole machine seems to freeze, including the serial console.
From the description of the network-backed filesystem race with network
interface bringup, I don't think the serial console was affected. I am
usually able to create one or two domU's before the hang happens, but
sometimes it happens on the first attempt.

I'd appreciate any testing of this patch on Maple or JS20 hardware
and/or comments on what might be causing the machine to hang.


 setup.c |   41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff -r f05a3e9d3e8f xen/arch/powerpc/setup.c
--- a/xen/arch/powerpc/setup.c  Mon Aug 28 18:35:29 2006 -0500
+++ b/xen/arch/powerpc/setup.c  Mon Aug 28 23:43:19 2006 -0400
@@ -88,6 +88,8 @@ extern void initialize_keytable(void);

 volatile struct processor_area * volatile global_cpu_table[NR_CPUS];

+static struct domain *idle_domain;
 int is_kernel_text(unsigned long addr)
     if (addr >= (unsigned long) &_start &&
@@ -164,8 +166,6 @@ static void percpu_free_unused_areas(voi

 static void __init start_of_day(void)
-    struct domain *idle_domain;

@@ -180,23 +180,6 @@ static void __init start_of_day(void)
     /* for some reason we need to set our own bit in the thread map */
     cpu_set(0, cpu_sibling_map[0]);

-    percpu_free_unused_areas();
-    {
-        /* FIXME: Xen assumes that an online CPU is a schedualable
-         * CPU, but we just are not there yet. Remove this fragment when
-         * scheduling processors actually works. */
-        int cpuid;
-        printk("WARNING!: Taking all secondary CPUs offline\n");
-        for_each_online_cpu(cpuid) {
-            if (cpuid == 0)
-                continue;
-            cpu_clear(cpuid, cpu_online_map);
-        }
-    }
     /* Register another key that will allow for the the Harware Probe
      * to be contacted, this works with RiscWatch probes and should
@@ -253,8 +236,9 @@ static int kick_secondary_cpus(int maxcp
         if (cpuid >= maxcpus)
-        cpu_set(cpuid, cpu_online_map);
+        cpu_set(cpuid, cpu_online_map);

     return 0;
@@ -264,7 +248,19 @@ int secondary_cpu_init(int cpuid, unsign
 int secondary_cpu_init(int cpuid, unsigned long r4);
 int secondary_cpu_init(int cpuid, unsigned long r4)
+    struct vcpu *vcpu;
+    vcpu = alloc_vcpu(idle_domain, cpuid, cpuid);
+    if (vcpu == NULL)
+        BUG();
+    set_current(idle_domain->vcpu[cpuid]);
+    idle_vcpu[cpuid] = current;
+    startup_cpu_idle_loop();

@@ -337,6 +333,8 @@ static void __init __start_xen(multiboot

+    start_of_day();
     /* Deal with secondary processors.  */
     if (opt_nosmp) {
        printk("nosmp: leaving secondary processors spinning forever\n");
@@ -345,7 +343,8 @@ static void __init __start_xen(multiboot

-    start_of_day();
+    /* This cannot be called before secondary cpus are marked online. */
+    percpu_free_unused_areas();

     /* Create initial domain 0. */
     dom0 = domain_create(0);

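Since the hunks above are scattered, the ordering constraints the patch establishes can be modelled in one place. A toy sketch with hypothetical stub functions (these are stand-ins for the real Xen routines, not Xen code) that asserts each ordering rule:

```c
#include <assert.h>
#include <stdbool.h>

/* Flags modelling the boot-time ordering (hypothetical stand-ins). */
static bool idle_domain_ready, cpus_online, percpu_freed;

/* start_of_day() now runs first and creates the idle domain. */
static void model_start_of_day(void)
{
    idle_domain_ready = true;
}

/* A secondary cpu may only join the idle domain once it exists. */
static void model_secondary_cpu_init(void)
{
    assert(idle_domain_ready);
    cpus_online = true;  /* cpu_set(cpuid, cpu_online_map) */
}

/* Per the patch's comment: must not run before secondaries are online. */
static void model_percpu_free_unused_areas(void)
{
    assert(cpus_online);
    percpu_freed = true;
}

/* The __start_xen() ordering after this patch. */
static void model_boot(void)
{
    model_start_of_day();               /* moved earlier */
    model_secondary_cpu_init();         /* secondaries enter idle loop */
    model_percpu_free_unused_areas();   /* moved after SMP bringup */
}
```

Running model_boot() in any other order trips an assert, which is the corruption the patch description refers to: before this change, percpu_free_unused_areas() could run while cpu_online_map was still incomplete.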
Xen-ppc-devel mailing list
