[Xen-devel] Daily Xen-HVM Builds

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Daily Xen-HVM Builds
From: Rick Gonzalez <rcgneo@xxxxxxxxxx>
Date: Wed, 15 Feb 2006 15:51:10 -0600
Delivery-date: Wed, 15 Feb 2006 21:05:21 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1139855538.26169.14.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <43F0D0A2.8070009@xxxxxxxxxx> <1139855538.26169.14.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.7 (X11/20050923)
changeset:   8843:765b0657264d
tag:         tip
user:        cl349@xxxxxxxxxxxxxxxxxxxx
date:        Wed Feb 15 08:13:10 2006 +0000
summary:     Cleanup x86/x86_64 apic.c files.

SAME ISSUE AS BEFORE

x460:

x86_32:

Status:

- dom0 boots fine
- xend loads fine
- single HVM domain loads fine
- Multiple HVM domains load fine
- destruction of any HVM domain causes dom0 to reboot

Issues affecting HVM:

* During xm-test, dom0 reboots. This happens during the "11_create_concurrent_pos" test case; the destroy call causes the reboot. A minimal reproduction sketch follows.
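
For reference, the failing sequence boils down to the following (a minimal
sketch, not the xm-test harness itself; the config path and domain-name
pattern are taken from the xm-test log later in this report, and the loop
count is arbitrary):

#!/bin/sh
# Start several concurrent HVM guests from the same config file,
# then destroy one -- the destroy is what takes dom0 down.
CONF=/tmp/xm-test.conf

for i in 1 2 3 4 5; do
    # "name=" overrides the domain name set in the config file.
    /usr/sbin/xm create "$CONF" name="11_create_$i"
    sleep 20   # give each domU time to boot, as xm-test does
done

# Destroying any one of the running HVM domains triggers the dom0 reboot.
/usr/sbin/xm destroy 11_create_1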

Details:


dom0 will also reboot with the following console messages:

(XEN) HVM_PIT: guest freq in cycles=3002234
(XEN) CPU: -14688196
(XEN) EI…N…þÅÿì1ÿN(XEN) CPU: 12
(XEN) EIP: e008:[<ff117988>]CPU: 9
(XEN) EIP: e008:[<ff111584>]CPU: 4
(XEN) EIP: e008:[<ff111584>] timer_softirq_action+0x64/0x140
(XEN) EFLAGS: 00010006 CONTEXT: hypervisor
(XEN) eax: 0dee319b ebx: ffff4d85 ecx: 00000000 edx: 000000c0
(XEN) esi: ff1e9a00 edi: 00000480 ebp: ffbd2080 esp: ffbd1f68
(XEN) cr0: 8005003b cr3: 00178000
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010 cs: e008
(XEN) Xen stack trace from esp=ffbd1f68:
(XEN) idle_loop+0x38/0x80
(XEN) EFLAGS: 00010246
(XEN) CR3: 00000000
(XEN) eax: 00000600 ebx: ffbc5fb4 ecx: 00000000 edx: 00000600
(XEN) esi: 00000600 edi: 00000600 ebp: 00000000 esp: ffbc5fa8
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010
(XEN) ************************************
(XEN) CPU12 DOUBLE FAULT -- system shutdown
(XEN) 0ded646e System needs manual reset.
(XEN) ************************************
(XEN) timer_softirq_action+0x64/0x140
(XEN) EFLAGS: 00010006 CONTEXT: hypervisor
(XEN) eax: 0e162278 ebx: 00010293 ecx: 00000000 edx: 000000c0
(XEN) esi: ff1e7900 edi: 00000200 ebp: 00000000 esp: ffbe5f68
(XEN) cr0: 8005003b cr3: 00178000
(XEN) ds: e010 es: e010 fs: e010 gs: e010 ss: e010 cs: e008
(XEN) Xen stack trace from esp=ffbe5f68:
(XEN) 0e155462 000000c0 00000100 00000200 ffbe5f7c 00000200 00000004
00000200
(XEN) 00000200 00000200 00000000 ff110772 00ef0000 ff117988 ffbe5fb4
ff1179c6
(XEN) ffbe6080 ff19ffa0 ff1ef880 00000000 00000000 00000000 00000000
00000000
(XEN) 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000
(XEN) 00000000 00000000 00000000 00000000 00000004 ffbe6080
(XEN) Xen call trace:
(XEN) [<ff111584>] timer_softirq_action+0x64/0x140
(XEN) [<ff110772>]000000c0 00000000 00000480 do_softirq+0x32/0x50
(XEN) [<ff117988>]ffbd1f7c 00000480

changeset:   8824:4caca2046421
tag:         tip
user:        kaf24@xxxxxxxxxxxxxxxxxxxx
date:        Mon Feb 13 03:23:26 2006 +0100
summary:     Fix error exit path in __gnttab_map_grant_ref() to


x460:

x86_32:

Status:

- dom0 boots fine
- xend loads fine
- single HVM domain loads fine
- Multiple HVM domains load fine
- destruction of any HVM domain causes dom0 to reboot

Issues affecting HVM:

* During xm-test, dom0 reboots. This happens during the "11_create_concurrent_pos" test case; the destroy call causes the reboot.

Details:

== Last entries of the xm-test .output file: ==

Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_5']
[11_create_5] Sending `foo'
[11_create_5] Sending `ls'
[11_create_5] Sending `echo $?'
[5] Started 11_create_5
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 11_create_6
[dom0] Waiting 20 seconds for domU boot...
Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_6']
[11_create_6] Sending `foo'
[11_create_6] Sending `ls'
[11_create_6] Sending `echo $?'
[6] Started 11_create_6
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 11_create_7
[dom0] Waiting 20 seconds for domU boot...
Console executing: ['/usr/sbin/xm', 'xm', 'consolvmxdom2:/tmp/xm-test-results/021306-vmxdom2

== HVM domain output before the crash: ==

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel: Call Trace:

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c010810a>] show_stack_log_lvl+0xaa/0xe0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c01082f1>] show_registers+0x161/0x1e0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c01084e9>] die+0xd9/0x180

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c0108619>] do_trap+0x89/0xd0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c0108998>] do_invalid_op+0xb8/0xd0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c0107d67>] error_code+0x2b/0x30

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c0147390>] zap_pte_range+0x1b0/0x310

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c01475d9>] unmap_page_range+0xe9/0x1b0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:24 2006 ...
vmxdom2 kernel:  [<c014777a>] unmap_vmas+0xda/0x1a0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c014cbde>] exit_mmap+0x6e/0xf0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c0119707>] mmput+0x27/0x80

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c011d45b>] exit_mm+0x6b/0xe0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c011dc79>] do_exit+0xe9/0x380

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c011df86>] do_group_exit+0x36/0x90

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c01277a9>] get_signal_to_deliver+0x269/0x2f0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c01079ab>] do_signal+0x6b/0x170

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c0107aea>] do_notify_resume+0x3a/0x3c

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel:  [<c0107c8b>] work_notifysig+0x13/0x18

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel: Code: 53 0c 8b 42 04 c7 04 24 e2 f2 47 c0 40 89 44 24 04 e8 4e d1 fc ff 8b 43 10 c7 04 24 f9 f2 47 c0 89 44 24 04 e8 3b d1 fc ff eb 84 <0f> 0b 2b 02 b7 f2 47 c0 eb 80 eb 0d 90 90 90 90 90 90 90 90 90

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel: Bad page state in process 'qemu-dm'

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel: page:c131b340 flags:0x00000004 mapping:00000000 mapcount:-1 count:0

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel: Trying to fix it up, but a reboot is needed

Message from syslogd@vmxdom2 at Mon Feb 13 11:30:25 2006 ...
vmxdom2 kernel: Backtrace:


== dom0 Serial Console output: ==

Code: 08 2d f8 04 00 00 89 04 24 e8 1a 1f 00 00 c9 c3 90 8d b4 26 00 00 00 00 55 89 e5 83 ec 08 89 74 24 04 8b 75 0c 8b 55 08 89 1c 24 <ff> 0e 8d 5a 20 8b 42 20 8b 4b 04 89 01 89 48 04 c7 43 04 00 02
<1>Fixing recursive fault but reboot is needed!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
printing eip:
c01174e3
*pde = ma 00000000 pa 55555000
Oops: 0002 [#18]
Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
CPU:    0
EIP:    0061:[<c01174e3>]    Tainted: G    B VLI
EFLAGS: 00010096   (2.6.16-rc2-xen0)
EIP is at dequeue_task+0x13/0x50
eax: 00000000   ebx: dbc02530   ecx: dbc02530   edx: dbc02530
esi: 00000000   edi: 00000010   ebp: d6f184a4   esp: d6f1849c
ds: 007b   es: 007b   ss: 0069
Process qemu-dm (pid: 14455, threadinfo=d6f18000 task=dbc02530)
Stack: <0>dbc02530 dbc02530 d6f184b8 c011780e dbc02530 00000000 dbc02530 d6f1852c c045f9d8 dbc02530 c05d5ca0 00000030 00000001 d6f18550 c0107fa1 c04a3b9a c0107bf1 00000004 00000001 c0107ffa 069f6bc7 2a8e2801 00000156 dbc02530
Call Trace:
[<c010810a>] show_stack_log_lvl+0xaa/0xe0
[<c01082f1>] show_registers+0x161/0x1e0
[<c01084e9>] die+0xd9/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c011522c>] do_page_fault+0x3dc/0x651
[<c0107d67>] error_code+0x2b/0x30
[<c011780e>] deactivate_task+0x1e/0x30
[<c045f9d8>] schedule+0x468/0x6f0
[<c011de8c>] do_exit+0x2fc/0x380
[<c010858b>] die+0x17b/0x180
[<c0108619>] do_trap+0x89/0xd0
[<c0108998>] do_invalid_op+0xb8/0xd0
[<c0107d67>] error_code+0x2b/0x30
[<c01147e2>] __pgd_pin+0x32/0x50
[<c0114894>] mm_pin+0x14/0x20
[<c045fa08>] schedule+0x498/0x6f0
[<c011dd71>] do_exit+0x1e1/0x380
[<c010858b>] die+0x17b/0x180
[<c0108619>] do_trap+0x89/0xd0
[<c0108998>] do_invalid_op+0xb8/0xd0
[<c0107d67>] error_code+0x2b/0x30
[<c01147e2>] __pgd_pin+0x32/0x50
[<c0114894>] mm_pin+0x14/0x20
[<c045fa08>] schedule+0x498/0x6f0
[<c04604b0>] schedule_timeout+0x50/0xa0
[<c016e057>] do_select+0x277/0x2e0
[<c016e2d3>] core_sys_select+0x1c3/0x310
[<c016e4d1>] sys_select+0xb1/0x160
[<c0107bf1>] syscall_call+0x7/0xb
Code: 08 2d f8 04 00 00 89 04 24 e8 1a 1f 00 00 c9 c3 90 8d b4 26 00 00 00 00 55 89 e5 83 ec 08 89 74 24 04 8b 75 0c 8b 55 08 89 1c 24 <ff> 0e 8d 5a 20 8b 42 20 8b 4b 04 89 01 89 48 04 c7 43 04 00 02
<1>Fixing recursive fault but reboot is needed!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
printing eip:
c01174e3
*pde = ma 00000000 pa 55555000
Oops: 0002 [#19]
Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
CPU:    0
EIP:    0061:[<c01174e3>]    Tainted: G    B VLI
EFLAGS: 00010082   (2.6.16-rc2-xen0)
EIP is at dequeue_task+0x13/0x50
eax: 00000000   ebx: dbc02530   ecx: dbc02530   edx: dbc02530
esi: 00000000   edi: 00000010   ebp: d6f1830c   esp: d6f18304
ds: 007b   es: 007b   ss: 0069
Unable to handle kernel NULL pointer dereference at virtual address 00000078
printing eip:
c0114f08
*pde = ma 00000000 pa 55555000
Oops: 0000 [#20]
Modules linked in: thermal processor fan button battery ac sworks_agp agpgart
CPU:    0
EIP:    0061:[<c0114f08>]    Tainted: G    B VLI
EFLAGS: 00010046   (2.6.16-rc2-xen0)
EIP is at do_page_fault+0xb8/0x651
eax: d6efc000   ebx: 0f00fff0   ecx: 0000007b   edx: 00000000
esi: 0000000d   edi: c0114e50   ebp: d6efc0f8   esp: d6efc0a0
ds: 007b   es: 007b   ss: 0069
Unable to handle kernel paging request at virtual address 27bd808e
printing eip:
c0114f08
*pde = ma 00000000 pa 55555000
Recursive die() failure, output suppressed
<0>Kernel panic - not syncing: Fatal exception in interrupt
(XEN) Domain 0 shutdown: rebooting machine.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
