xen-devel
RE: [Xen-devel] Lot's of OOPS
I am seeing the same problem on an x86 machine, as soon as I create a
DomU (no loads running!). I first saw this last Friday, I believe.
Oops: 0002 [#1]
PREEMPT SMP
Modules linked in:
CPU: 0
EIP: 0061:[<00244ec0>] Not tainted VLI
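For a reproduction recipe, creating a domU is all it takes on my box; a minimal
sketch of the steps (the config name, paths and disk entries below are hypothetical
placeholders, not my actual ones):

    # /etc/xen/testvm -- minimal domU config (Python syntax, as xm expects)
    kernel = "/boot/vmlinuz-2.6.11.12-xenU"
    memory = 128
    name   = "testvm"
    disk   = ['phy:hda7,hda1,w']
    root   = "/dev/hda1 ro"

    # create the domU and attach to its console:
    #   xm create -c /etc/xen/testvm

The oops in dom0 shows up as soon as the domU is started.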
I submitted Defect #96 to the Xensource Bugzilla system to track this
bug.
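Ian's question below about CONFIG_SMP and CONFIG_HIGHMEM4G having been enabled by
default is easy to double-check against the config actually used for the
2.6.11.12-xen0 build (a sketch, assuming the build tree and its .config are at hand):

    grep -E '^CONFIG_(SMP|PREEMPT|HIGHMEM4G)=' .config

For what it is worth, the oops header in my trace above says PREEMPT SMP, so at least
CONFIG_PREEMPT and CONFIG_SMP are enabled here.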
On Sun, 2005-07-10 at 18:58 -0500, David_Wolinsky@xxxxxxxx wrote:
> Last one... It crashed again, but this time gave a better error
> message...
>
> kernel BUG at mm/page_alloc.c:649!
> invalid operand: 0000 [#1]
> PREEMPT
> Modules linked in:
> CPU: 0
> EIP: 0061:[<c013c5c2>] Not tainted VLI
> EFLAGS: 00010202 (2.6.11.12-xen0)
> EIP is at buffered_rmqueue+0x26c/0x2c4
> eax: 00000001 ebx: c17e2000 ecx: 00000000 edx: 000290eb
> esi: c1521d74 edi: 00000000 ebp: c031aa1c esp: c17e3ccc
> ds: 007b es: 007b ss: 0069
> Process kjournald (pid: 783, threadinfo=c17e2000 task=c16ff0a0)
> Stack: c031aa00 c1521d74 00000000 00000001 c1521d74 c031aa00 00000000 00000050
>        00000000 c013c838 c031aa00 00000000 00000050 00000000 00000000 00000000
>        00000000 00000000 c16ff0a0 00000010 c031ad6c 00001594 00000000 00000000
> Call Trace:
> [<c013c838>] __alloc_pages+0x176/0x3de
> [<c01379b7>] find_or_create_page+0x65/0xb0
> [<c015a1f7>] grow_dev_page+0x34/0x166
> [<c015a3ba>] __getblk_slow+0x91/0x135
> [<c015a79c>] __getblk+0x3d/0x3f
> [<c01a758c>] journal_get_descriptor_buffer+0x43/0x92
> [<c01a3fa0>] journal_commit_transaction+0xa69/0x1319
> [<c0130a84>] autoremove_wake_function+0x0/0x4b
> [<c0130a84>] autoremove_wake_function+0x0/0x4b
> [<c02d6736>] schedule+0x346/0x5f5
> [<c0117469>] __wake_up+0x4f/0xaa
> [<c01a6a63>] kjournald+0xd9/0x27d
> [<c0130a84>] autoremove_wake_function+0x0/0x4b
> [<c0130a84>] autoremove_wake_function+0x0/0x4b
> [<c01094e6>] ret_from_fork+0x6/0x14
> [<c01a6980>] commit_timeout+0x0/0x9
> [<c01a698a>] kjournald+0x0/0x27d
> [<c0107795>] kernel_thread_helper+0x5/0xb
> Code: ff ff 8d 46 2c 89 44 24 0c 8b 45 0c c7 44 24 04 00 00 00 00 89 14
> 24 89 44 24 08 e8 23 fb ff ff 03 46 1c 89 46 1c e9 f4 fd ff ff <0f> 0b
> 89 02 a9 7c 2e c0 e9 7c fe ff ff e8 11 a4 19 00 eb 8b e8
> <1>Unable to handle kernel paging request at virtual address 00200200
> printing eip:
> c0140bec
> *pde = ma 00000000 pa 55555000
> Oops: 0002 [#2]
> PREEMPT
> Modules linked in:
> CPU: 0
> EIP: 0061:[<c0140bec>] Not tainted VLI
> EFLAGS: 00010297 (2.6.11.12-xen0)
> EIP is at free_block+0xb7/0xda
> eax: c1521d8c ebx: c103f1b8 ecx: 000df114 edx: 00200200
> esi: c1521d80 edi: 00000001 ebp: 0000000c esp: c036de0c
> ds: 007b es: 007b ss: 0069
> Process swapper (pid: 0, threadinfo=c036c000 task=c0314b20)
> Stack: 00000069 00000000 c1521d9c c041ae10 df142000 00000000 c16b6860 c0140c5c
>        c1521d80 c041ae10 0000000c 0000000c c041ae00 df142000 00000000 c043c980
>        c014100e c1521d80 c041ae00 df37f480 df37f480 df37f480 c02815ed df142000
> Call Trace:
> [<c0140c5c>] cache_flusharray+0x4d/0xcc
> [<c014100e>] kfree+0x9d/0xa8
> [<c02815ed>] kfree_skbmem+0x10/0x26
> [<c028168a>] __kfree_skb+0x87/0x11c
> [<c02cdec9>] packet_rcv_spkt+0x179/0x27e
> [<c0107537>] __dev_alloc_skb+0x23/0x39
> [<c02877f3>] netif_receive_skb+0x20e/0x235
> [<c021438d>] tg3_rx+0x24c/0x3d0
> [<c0214607>] tg3_poll+0xf6/0x259
> [<c0287aca>] net_rx_action+0x118/0x1b7
> [<c012004c>] __do_softirq+0x6c/0xf5
> [<c0120139>] do_softirq+0x64/0x77
> [<c0120205>] irq_exit+0x36/0x38
> [<c010d2e2>] do_IRQ+0x22/0x28
> [<c0105f1d>] evtchn_do_upcall+0x60/0x86
> [<c01097cc>] hypervisor_callback+0x2c/0x34
> [<c01075b0>] xen_idle+0x32/0x6b
> [<c0107619>] cpu_idle+0x30/0x3e
> [<c036e765>] start_kernel+0x1a0/0x1f7
> [<c036e307>] unknown_bootoption+0x0/0x1bc
> Code: 43 14 66 89 44 4b 18 66 89 4b 14 8b 43 10 83 e8 01 85 c0 89 43 10
> 74 81 8d 46 0c 83 c7 01 8b 50 04 39 ef 89 58 04 89 03 89 53 04 <89> 1a
> 7c 8f 83 c4 0c 5b 5e 5f 5d c3 2b 46 3c 89 46 24 89 5c 24
> <0>Kernel panic - not syncing: Fatal exception in interrupt
> (XEN) Domain 0 shutdown: rebooting machine.
> (XEN) Reboot disabled on cmdline: require manual reset
>
> -----Original Message-----
> From: Ian Pratt [mailto:m+Ian.Pratt@xxxxxxxxxxxx]
> Sent: Sunday, July 10, 2005 12:47 PM
> To: Wolinsky, David; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: ian.pratt@xxxxxxxxxxxx
> Subject: RE: [Xen-devel] Lot's of OOPS
>
>
> > Here's the log... I'm running 1 VM in Linux domain....
>
> Do you mean dom0 plus 1 other domain?
>
> CONFIG_SMP got enabled by default recently, but I'd be slightly
> surprised if that was the problem as a bunch of people had tested it
> pretty hard. Similarly for CONFIG_HIGHMEM4G.
>
> It would be great if you could try and get a simple recipe to reproduce.
>
> Thanks,
> Ian
>
> > I have been running specjbb (cpu, memory intensive)... since it appears 3
> > applications are causing it to malfunction, I'm going to update to the
> > latest python, stop kjournald, and if that doesn't work try updating
> > X... (I also use VMX domains which cause similar issues, although I
> > don't have a log to prove it). If this is a known problem... or
> > something else... please let me know.
> >
> > Thanks,
> > David
> >
> > Unable to handle kernel paging request at virtual address 379838c5
> > printing eip:
> > c013c249
> > *pde = ma 00000000 pa 55555000
> > Oops: 0002 [#1]
> > PREEMPT
> > Modules linked in:
> > CPU: 0
> > EIP: 0061:[<c013c249>] Not tainted VLI
> > EFLAGS: 00010202 (2.6.11.12-xen0)
> > EIP is at buffered_rmqueue+0x73/0x2c4
> > eax: c10da3bb ebx: c0752000 ecx: c0354cac edx: 379838c1
> > esi: c0354c80 edi: 00000000 ebp: c0354c9c esp: c0753ccc
> > ds: 007b es: 007b ss: 0069
> > Process kjournald (pid: 813, threadinfo=c0752000 task=c17780a0)
> > Stack: 000014be c0753cf8 00000000 00000001 c10da3a3 c0354c80 00000000 00000050
> >        00000000 c013c6b8 c0354c80 00000000 00000050 00000000 00000000 00000000
> >        00000000 00000000 c17780a0 00000010 c0354fec 0000193b 00000000 00000000
> > Call Trace:
> > [<c013c6b8>] __alloc_pages+0x176/0x3de
> > [<c0137837>] find_or_create_page+0x65/0xb0
> > [<c015a017>] grow_dev_page+0x34/0x166
> > [<c015a1da>] __getblk_slow+0x91/0x135
> > [<c015a5bc>] __getblk+0x3d/0x3f
> > [<c01a736c>] journal_get_descriptor_buffer+0x43/0x92
> > [<c01a3d80>] journal_commit_transaction+0xa69/0x1319
> > [<c0130904>] autoremove_wake_function+0x0/0x4b
> > [<c0130904>] autoremove_wake_function+0x0/0x4b
> > [<c0306f66>] schedule+0x346/0x5f5
> > [<c0117479>] __wake_up+0x4f/0xaa
> > [<c01a6843>] kjournald+0xd9/0x27d
> > [<c0130904>] autoremove_wake_function+0x0/0x4b
> > [<c0130904>] autoremove_wake_function+0x0/0x4b
> > [<c01094f6>] ret_from_fork+0x6/0x14
> > [<c01a6760>] commit_timeout+0x0/0x9
> > [<c01a676a>] kjournald+0x0/0x27d
> > [<c01077a5>] kernel_thread_helper+0x5/0xb
> > Code: 0f b6 78 01 c6 40 01 01 83 6b 14 01 8b 46 1c 3b 45 04 0f 8e e3 01 00 00 85 c0 74 25 8b 45 10 8d 48 e8 89 4c 24 10 8b 48 04 8b 10 <89> 4a 04 89 11 c7 40 04 00 02 20 00 c7 00 00 01 10 00 83 6e 1c
> >
> > <6>note: kjournald[813] exited with preempt_count 1
> > Unable to handle kernel paging request at virtual address 379838c5
> > printing eip:
> > c013c249
> > *pde = ma 00000000 pa 55555000
> > Oops: 0002 [#2]
> > PREEMPT
> > Modules linked in:
> > CPU: 0
> > EIP: 0061:[<c013c249>] Not tainted VLI
> > EFLAGS: 00210202 (2.6.11.12-xen0)
> > EIP is at buffered_rmqueue+0x73/0x2c4
> > eax: c10da3bb ebx: d02f8000 ecx: c0354cac edx: 379838c1
> > esi: c0354c80 edi: 00000000 ebp: c0354c9c esp: d02f9e90
> > ds: 007b es: 007b ss: 0069
> > Process python (pid: 4450, threadinfo=d02f8000 task=d02d4a80)
> > Stack: d986d300 d0311ea0 d02d4a80 d02f9f00 c10da3a3 c0354c80 00000000 000000d0
> >        00000000 c013c6b8 c0354c80 00000000 000000d0 00000000 00000000 00000000
> >        00000000 00000000 d02d4a80 00000010 c0354fec d02e3510 00000000 d01a8080
> > Call Trace:
> > [<c013c6b8>] __alloc_pages+0x176/0x3de
> > [<c013c938>] __get_free_pages+0x18/0x31
> > [<c016a4be>] __pollwait+0x69/0x9c
> > [<c01645c4>] pipe_poll+0xa5/0xa7
> > [<c016ae79>] do_pollfd+0x86/0x8a
> > [<c016aedc>] do_poll+0x5f/0xc6
> > [<c016b116>] sys_poll+0x1d3/0x22d
> > [<c016a455>] __pollwait+0x0/0x9c
> > [<c0109637>] syscall_call+0x7/0xb
> > Code: 0f b6 78 01 c6 40 01 01 83 6b 14 01 8b 46 1c 3b 45 04 0f 8e e3 01 00 00 85 c0 74 25 8b 45 10 8d 48 e8 89 4c 24 10 8b 48 04 8b 10 <89> 4a 04 89 11 c7 40 04 00 02 20 00 c7 00 00 01 10 00 83 6e 1c
> >
> > <6>note: python[4450] exited with preempt_count 1
> > Unable to handle kernel paging request at virtual address 379838c5
> > printing eip:
> > c013c249
> > *pde = ma 00000000 pa 55555000
> > Oops: 0002 [#3]
> > PREEMPT
> > Modules linked in:
> > CPU: 0
> > EIP: 0061:[<c013c249>] Not tainted VLI
> > EFLAGS: 00210202 (2.6.11.12-xen0)
> > EIP is at buffered_rmqueue+0x73/0x2c4
> > eax: c10da3bb ebx: c0752000 ecx: c0354cac edx: 379838c1
> > esi: c0354c80 edi: 00000000 ebp: c0354c9c esp: c0753de8
> > ds: 0069 es: 0069 ss: 0069
> > Process python (pid: 6522, threadinfo=c0752000 task=c17780a0)
> > Stack: df0723ec 00000003 0000193a 00000000 c10da3a3 c0354c80 00000000 000080d2
> >        00000000 c013c6b8 c0354c80 00000000 000080d2 00000000 00000000 00000000
> >        00000000 00000000 c17780a0 00000010 c035500c 00000000 c0752000 d0169b6c
> > Call Trace:
> > [<c013c6b8>] __alloc_pages+0x176/0x3de
> > [<c014893f>] do_anonymous_page+0x83/0x221
> > [<c0148b4a>] do_no_page+0x6d/0x45e
> > [<c01491c9>] handle_mm_fault+0x197/0x227
> > [<c0114add>] do_page_fault+0x1d8/0x66c
> > [<c0306c88>] schedule+0x68/0x5f5
> > [<c0307253>] preempt_schedule+0x3e/0x55
> > [<c0116870>] deactivate_task+0x1f/0x2c
> > [<c0117b48>] sched_setscheduler+0x22e/0x258
> > [<c0307253>] preempt_schedule+0x3e/0x55
> > [<c0117c4b>] do_sched_setscheduler+0xd9/0x150
> > [<c01099ce>] page_fault+0x2e/0x34
> > Code: 0f b6 78 01 c6 40 01 01 83 6b 14 01 8b 46 1c 3b 45 04 0f 8e e3 01 00 00 85 c0 74 25 8b 45 10 8d 48 e8 89 4c 24 10 8b 48 04 8b 10 <89> 4a 04 89 11 c7 40 04 00 02 20 00 c7 00 00 01 10 00 83 6e 1c
> >
> > <6>note: python[6522] exited with preempt_count 1
> > Unable to handle kernel paging request at virtual address 379838c5
> > printing eip:
> > c013c249
> > *pde = ma 00000000 pa 55555000
> > Oops: 0002 [#4]
> > PREEMPT
> > Modules linked in:
> > CPU: 0
> > EIP: 0061:[<c013c249>] Not tainted VLI
> > EFLAGS: 00210202 (2.6.11.12-xen0)
> > EIP is at buffered_rmqueue+0x73/0x2c4
> > eax: c10da3bb ebx: d2298000 ecx: c0354cac edx: 379838c1
> > esi: c0354c80 edi: 00000000 ebp: c0354c9c esp: d2299de8
> > ds: 0069 es: 0069 ss: 0069
> > Process gnome-terminal (pid: 4423, threadinfo=d2298000 task=d28c0020)
> > Stack: c10ee0e0 00000000 00000020 c01d8bcc c10da3a3 c0354c80 00000000 000080d2
> >        00000000 c013c6b8 c0354c80 00000000 000080d2 00000000 00000000 00000000
> >        00000000 00000000 d28c0020 00000010 c035500c 00000000 d2298000 d214a080
> > Call Trace:
> > [<c01d8bcc>] n_tty_receive_buf+0xd5/0x14ae
> > [<c013c6b8>] __alloc_pages+0x176/0x3de
> > [<c014893f>] do_anonymous_page+0x83/0x221
> > [<c0148b4a>] do_no_page+0x6d/0x45e
> > [<c01491c9>] handle_mm_fault+0x197/0x227
> > [<c01579a9>] do_sync_read+0xbe/0x102
> > [<c0114add>] do_page_fault+0x1d8/0x66c
> > [<c02ae822>] sock_ioctl+0x179/0x22e
> > [<c0130904>] autoremove_wake_function+0x0/0x4b
> > [<c0169bd6>] do_ioctl+0x76/0x85
> > [<c0157aa6>] vfs_read+0xb9/0x129
> > [<c0157db3>] sys_read+0x72/0x74
> > [<c01099ce>] page_fault+0x2e/0x34
> > Code: 0f b6 78 01 c6 40 01 01 83 6b 14 01 8b 46 1c 3b 45 04 0f 8e e3 01 00 00 85 c0 74 25 8b 45 10 8d 48 e8 89 4c 24 10 8b 48 04 8b 10 <89> 4a 04 89 11 c7 40 04 00 02 20 00 c7 00 00 01 10 00 83 6e 1c
> >
> > <6>note: gnome-terminal[4423] exited with preempt_count 1
> > Unable to handle kernel paging request at virtual address 379838c5
> > printing eip:
> > c013c249
> > *pde = ma 00000000 pa 55555000
> > Oops: 0002 [#5]
> > PREEMPT
> > Modules linked in:
> > CPU: 0
> > EIP: 0061:[<c013c249>] Not tainted VLI
> > EFLAGS: 00210202 (2.6.11.12-xen0)
> > EIP is at buffered_rmqueue+0x73/0x2c4
> > eax: c10da3bb ebx: da2fc000 ecx: c0354cac edx: 379838c1
> > esi: c0354c80 edi: 00000000 ebp: c0354c9c esp: da2fde1c
> > ds: 007b es: 007b ss: 0069
> > Process X (pid: 3813, threadinfo=da2fc000 task=db22e020)
> > Stack: d9013d00 00000000 00000008 c011221f c10da3a3 c0354c80 00000000 000000d0
> >        00000000 c013c6b8 c0354c80 00000000 000000d0 00000000 00000000 00000000
> >        00000000 00000000 db22e020 00000010 c0354fec bffff160 00000000 d9011d80
> > Call Trace:
> > [<c011221f>] convert_fxsr_to_user+0x11f/0x181
> > [<c013c6b8>] __alloc_pages+0x176/0x3de
> > [<c013c938>] __get_free_pages+0x18/0x31
> > [<c016a4be>] __pollwait+0x69/0x9c
> > [<c02fdb46>] unix_poll+0xaf/0xb4
> > [<c02ae924>] sock_poll+0x26/0x2b
> > [<c016a77f>] do_select+0x1b0/0x2de
> > [<c016a455>] __pollwait+0x0/0x9c
> > [<c016ab4b>] sys_select+0x279/0x521
> > [<c011269d>] restore_i387+0x8c/0x96
> > [<c010b0b3>] do_gettimeofday+0x27/0x2bb
> > [<c01c4386>] copy_to_user+0x3c/0x50
> > [<c0109637>] syscall_call+0x7/0xb
> > Code: 0f b6 78 01 c6 40 01 01 83 6b 14 01 8b 46 1c 3b 45 04 0f 8e e3 01 00 00 85 c0 74 25 8b 45 10 8d 48 e8 89 4c 24 10 8b 48 04 8b 10 <89> 4a 04 89 11 c7 40 04 00 02 20 00 c7 00 00 01 10 00 83 6e 1c
> >
> > <6>note: X[3813] exited with preempt_count 1
> > (XEN) CPU: 0
> > (XEN) EIP: e008:[<ff10a6aa>]
> > (XEN) EFLAGS: 00210246 CONTEXT: hypervisor
> > (XEN) eax: 00000002 ebx: ff19e700 ecx: 00000000 edx: 00000000
> > (XEN) esi: ff19d000 edi: ff19d000 ebp: 00000000 esp: ff103e24
> > (XEN) cr0: 8005003b cr3: 20102000
> > (XEN) ds: e010 es: e010 fs: 0000 gs: 0000 ss: e010 cs: e008
> > (XEN) Xen stack trace from esp=ff103e24:
> > (XEN) 00000000 00000000 00000000 00000f00 00000000 00000000 00000000 000a0067
> > (XEN) fc400f00 ff196800 ff196080 [ff11e286] ff196080 ff196800 000000a0 00000000
> > (XEN) c4b40000 00000004 00e48f57 000000a1 fefa6000 ff196800 00039014 [ff11deb0]
> > (XEN) 000a0067 ff196800 00000004 [ff10ede6] c4b40000 00000004 00e48f57 20000000
> > (XEN) fc950818 20000001 00039014 [ff11e03f] fc950818 30000001 00200282 00000000
> > (XEN) 00000000 fc950818 fefa5000 [ff11df19] fc950818 00039014 ff196800 [ff11f5c8]
> > (XEN) fc9581e0 40000000 ffbe7080 47ff0000 fc9581e0 47ff0001 ff196800 [ff11e03f]
> > (XEN) fc9581e0 57ff0001 00000001 ff19f080 fc9581e0 80000003 00000001 [ff1221bd]
> > (XEN) fc9581e0 da2fdbf0 0000000c da000a63 ff103f40 ff1534e0 00200246 ff1534e0
> > (XEN) ff103f4f 00010000 ff1534e0 ff1534e0 ff153460 ff103fb4 [ff11461c] ff1534e0
> > (XEN) ff196800 ff19f080 fc9581e0 00000000 00000000 00000000 00000000 ff19f080
> > (XEN) 00000004 00039014 00004906 [ff112ca6] c0412a16 0000002d 00000000 ff19f080
> > (XEN) 00007ff0 d9013d00 00000000 [ff138e63] da2fdbf0 00000001 00000000 00007ff0
> > (XEN) d9013d00 00000000 0000001a 00490000 c0115877 00000061 00200246 da2fdbf0
> > (XEN) 00000069 00000069 00000069 00000000 00000000 00000000 ff19f080
> > (XEN) Xen call trace from esp=ff103e24:
> > (XEN) [<ff11e286>] [<ff11deb0>] [<ff10ede6>] [<ff11e03f>] [<ff11df19>] [<ff11f5c8>]
> > (XEN) [<ff11e03f>] [<ff1221bd>] [<ff11461c>] [<ff112ca6>] [<ff138e63>]
> >
> > ****************************************
> > Panic on CPU0:
> > CPU0 FATAL PAGE FAULT
> > [error_code=0000]
> > Faulting linear address: 00000004
> > ****************************************
> >
> >
>
--
Regards,
David F Barrera
Linux Technology Center
Systems and Technology Group, IBM
"The wisest men follow their own direction. "
Euripides
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel