xen-ppc-devel

Re: [XenPPC] vcpu-list panic


On Sep 6, 2006, at 4:40 PM, Hollis Blanchard wrote:

On Wed, 2006-09-06 at 14:56 -0500, Hollis Blanchard wrote:
On Wed, 2006-09-06 at 14:44 -0400, Amos Waterland wrote:
Using current xen.hg and linux.hg, I get this when I try to
run `xm vcpu-list':

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) paddr_to_maddr: Dom:0 paddr: 0xf5d5ecc0 bad type:0x3
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

Note that `xm list' works fine. The 0xf5d5ecc0 address is consistent
across boots.

hmm, "xm vcpu-list" worked fine before the merge, I have not had a chance to look at the recent changes in that space. However, please note that the paddr is firmly in the I/O Hole of a 970 class machine (hence type 3) and you have no business xencomm'ing that address. I suspect that the paddr we are seeing is actually an from the stack of an app and the xencomm logic if faulty.

looks like Hollis has it figured out. :)

I see the same thing. The backtrace is:
#0  panic (fmt=0x456d10 "%s: Dom:%d paddr: 0x%lx bad type:0x%x\n")
    at console.c:610
#1  0x00000000004459bc in paddr_to_maddr (paddr=0xf6fb7278) at usercopy.c:65
#2  0x000000000044619c in xencomm_handle_is_null (ptr=0xf6fb7278)
    at usercopy.c:265
#3  0x0000000000401c58 in cpumask_to_xenctl_cpumap (xenctl_cpumap=0xfaa8,
    cpumask=0xb7e178) at domctl.c:36
#4  0x0000000000403348 in do_domctl (u_domctl={__pad = 0xfc20, p = 0x7d04000})
    at domctl.c:396
#5  0x00000000004373c4 in hcall_xen (num=0x24, regs=0xfd90) at hcalls.c:76
#6  0x00000000004374e0 in do_hcall (regs=0xfd90) at hcalls.c:104
#7  0x0000000000449d34 in ex_hcall_continued () at misc.h:31

OK, the problem was that the vcpuaffinity calls had a hidden handle that wasn't being mapped. I believe I've fixed this now with Linux changeset
efefb3db340a.
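
A guest-side sketch of what the fix amounts to: the cpumap bitmap pointer embedded in the vcpuaffinity domctl is a handle of its own, and it too must be wrapped for xencomm before the hypercall. All struct and helper names below are illustrative, not the exact 2006 interface:

#include <stdint.h>

struct xenctl_cpumap {
    uint64_t bitmap;      /* guest handle: must hold a xencomm paddr */
    uint32_t nr_cpus;
};

struct xen_domctl_vcpuaffinity {
    uint32_t vcpu;
    struct xenctl_cpumap cpumap;
};

/* Assumed helper: builds a xencomm descriptor covering [ptr, ptr+len)
 * and returns the physical address the hypervisor should dereference. */
extern uint64_t xencomm_map(void *ptr, unsigned long len);

static void prepare_vcpuaffinity(struct xen_domctl_vcpuaffinity *op,
                                 uint32_t vcpu, uint8_t *map,
                                 uint32_t nr_cpus)
{
    op->vcpu = vcpu;
    /* The "hidden handle": before the fix this stayed a raw virtual
     * address (e.g. 0xf5d5ecc0 off the app's stack), which the
     * hypervisor then rejected in paddr_to_maddr. */
    op->cpumap.bitmap = xencomm_map(map, (nr_cpus + 7) / 8);
    op->cpumap.nr_cpus = nr_cpus;
}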

BTW: this is a panic only because all of this is at an early stage; normally it should just fail. BTW2: that reminds me, we do zero bounds checking on the rest of memory. We should add that soon; I'll file a bug.
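
A minimal sketch of the kind of guard such bounds checking could add; the helper name and the max_paddr limit are assumptions, not the Xen/PPC code:

#include <stddef.h>

/* Hypothetical guard: reject a guest range that wraps around or
 * reaches past the domain's memory.  max_paddr stands in for whatever
 * per-domain limit Xen would track. */
static int guest_range_ok(unsigned long paddr, size_t len,
                          unsigned long max_paddr)
{
    return paddr + len >= paddr        /* no wraparound        */
        && paddr + len <= max_paddr;   /* within domain memory */
}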

-JX


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel
