xen-devel
Re: [Xen-devel] unable to handle kernel paging request at virtual address
The virtual addresses being faulted on are all rubbish non-kernel addresses.
How long does a VM need to run before you see this? Our own (not very
in-depth) testing hasn't turned up anything like this.
-- Keir
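
[A quick sanity check on the addresses quoted below. This is a hypothetical helper, not part of any kernel tool; it assumes the default 3G/1G split on 32-bit x86, i.e. PAGE_OFFSET = 0xc0000000, which matches the 32pae domU kernel reported here.]

```python
# Kernel virtual addresses on 32-bit x86 with the default 3G/1G split
# start at PAGE_OFFSET; anything below it is a user-space (or garbage)
# address, which is what "rubbish non-kernel addresses" means here.
PAGE_OFFSET = 0xC0000000  # assumed default CONFIG_PAGE_OFFSET

def is_kernel_address(addr: int) -> bool:
    """True if addr falls in the kernel's virtual address range."""
    return addr >= PAGE_OFFSET

# The three faulting addresses from the BUG lines quoted below:
faults = [0x915536E9, 0x0424CB05, 0x010D9559]
assert not any(is_kernel_address(a) for a in faults)

# In both iput() oopses the faulting instruction (marked <8b> 40 20 in
# the Code: line) is mov 0x20(%eax),%eax, so the fault address should
# equal eax + 0x20 -- and it does in both cases:
assert 0x915536C9 + 0x20 == 0x915536E9  # oops #1
assert 0x0424CAE5 + 0x20 == 0x0424CB05  # oops #2
```

This suggests the struct being dereferenced (via a pointer held in eax) is itself corrupt, rather than the faulting code reading a wild offset.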
On 3/3/08 00:44, "Christopher S. Aker" <caker@xxxxxxxxxxxx> wrote:
> Xen : 3.2.0 64bit
> dom0: 2.6.16.33 32pae
> domU: linux-2.6.18-xen.hg @ 416:08e85e79c65d 32pae
>
> Our server infrastructure automatically logs console output from domUs
> that panic. I have available to me another dozen or so BUG outputs
> similar but not identical to the example below, if this piques anyone's
> interest...
>
> BUG: unable to handle kernel paging request at virtual address 915536e9
> printing eip:
> 03e45000 -> *pde = 00000002:d2d46027
> 15b9a000 -> *pme = 00000000:00000000
> Oops: 0000 [#1]
> SMP
> Modules linked in:
> CPU: 0
> EIP: 0061:[<c01750f4>] Not tainted VLI
> EFLAGS: 00010286 (2.6.18.8-domU-linode7 #1)
> EIP is at iput+0xd/0x6b
> eax: 915536c9 ebx: c6e130d4 ecx: c6e130ec edx: c6e130ec
> esi: 0000002f edi: 00000000 ebp: d61b983c esp: d5e2fee4
> ds: 007b es: 007b ss: 0069
> Process kswapd0 (pid: 114, ti=d5e2e000 task=d5dc0ab0 task.ti=d5e2e000)
> Stack: d20b9214 c0173edf d20b9214 0000002f c0174031 00000080 00000050 00003d54
>        c137eac0 00000088 00013c51 c0174094 c01439a0 c012eb7b 00000000 c0513400
>        0000000c 00000000 c05c6fe0 00000080 000000d0 00000000 00000002 c0513400
> Call Trace:
> [<c0173edf>] prune_one_dentry+0x54/0x75
> [<c0174031>] prune_dcache+0x131/0x15b
> [<c0174094>] shrink_dcache_memory+0x39/0x3b
> [<c01439a0>] shrink_slab+0x111/0x186
> [<c012eb7b>] finish_wait+0x25/0x4b
> [<c0144d5b>] kswapd+0x2e9/0x3eb
> [<c012e960>] autoremove_wake_function+0x0/0x37
> [<c0144a72>] kswapd+0x0/0x3eb
> [<c012e89a>] kthread+0xde/0xe2
> [<c012e7bc>] kthread+0x0/0xe2
> [<c0102b75>] kernel_thread_helper+0x5/0xb
> Code: 06 89 d8 ff d2 5b c3 89 da a1 00 de 58 c0 5b e9 4f 52 fe ff 0f 0b
> ae 00 26 6b 4c c0 eb c5 53 89 c3 85 c0 74 58 8b 80 9c 00 00 00 <8b> 40
> 20 83 bb 40 01 00 00 20 74 48 85 c0 74 0e 8b 50 14 85 d2
> EIP: [<c01750f4>] iput+0xd/0x6b SS:ESP 0069:d5e2fee4
> <1>BUG: unable to handle kernel paging request at virtual address 0424cb05
> printing eip:
> c01750f4
> 03e45000 -> *pde = 00000003:30e43027
> 0629d000 -> *pme = 00000000:00000000
> Oops: 0000 [#2]
> SMP
> Modules linked in:
> CPU: 0
> EIP: 0061:[<c01750f4>] Not tainted VLI
> EFLAGS: 00210282 (2.6.18.8-domU-linode7 #1)
> EIP is at iput+0xd/0x6b
> eax: 0424cae5 ebx: c6e132c8 ecx: c6e132e0 edx: c6e132e0
> esi: 0000007f edi: 00000000 ebp: d61b983c esp: d09bfd1c
> ds: 007b es: 007b ss: 0069
> Process httpd (pid: 3677, ti=d09be000 task=d5ce8ab0 task.ti=d09be000)
> Stack: d20b9094 c0173edf d20b9094 0000007f c0174031 00000080 00000000 00003d54
>        c137eac0 00000090 00013d55 c0174094 c01439a0 00000003 d5c2c878 d5c2c874
>        00000018 00000000 c05c6fe0 00000100 000280d2 00000000 0000000a 00000040
> Call Trace:
> [<c0173edf>] prune_one_dentry+0x54/0x75
> [<c0174031>] prune_dcache+0x131/0x15b
> [<c0174094>] shrink_dcache_memory+0x39/0x3b
> [<c01439a0>] shrink_slab+0x111/0x186
> [<c0144f94>] try_to_free_pages+0x137/0x1f3
> [<c0140ae7>] __alloc_pages+0x12e/0x2d2
> [<c0158316>] shmem_swp_alloc+0x8b/0x277
> [<c015894f>] shmem_getpage+0x10e/0x5e1
> [<c01598fc>] shmem_nopage+0x97/0xce
> [<c014bc6c>] __handle_mm_fault+0x28a/0x15db
> [<c013de28>] __generic_file_aio_read+0x16e/0x243
> [<c013bc05>] file_read_actor+0x0/0xc7
> [<c015db64>] do_sync_read+0xc1/0x11c
> [<c0166e61>] cp_new_stat64+0xf4/0x106
> [<c01103ea>] do_page_fault+0x10f/0xc70
> [<c015daa3>] do_sync_read+0x0/0x11c
> [<c015df67>] vfs_read+0xa2/0x160
> [<c015ea2e>] sys_read+0x41/0x6a
> [<c01102db>] do_page_fault+0x0/0xc70
> [<c01052e3>] error_code+0x2b/0x30
> Code: 06 89 d8 ff d2 5b c3 89 da a1 00 de 58 c0 5b e9 4f 52 fe ff 0f 0b
> ae 00 26 6b 4c c0 eb c5 53 89 c3 85 c0 74 58 8b 80 9c 00 00 00 <8b> 40
> 20 83 bb 40 01 00 00 20 74 48 85 c0 74 0e 8b 50 14 85 d2
> EIP: [<c01750f4>] iput+0xd/0x6b SS:ESP 0069:d09bfd1c
> <1>BUG: unable to handle kernel paging request at virtual address 010d9559
> printing eip:
> c015a531
> 06139000 -> *pde = 00000003:69252027
> 0768e000 -> *pme = 00000000:00000000
> Oops: 0002 [#3]
> SMP
> Modules linked in:
> CPU: 0
> EIP: 0061:[<c015a531>] Not tainted VLI
> EFLAGS: 00210082 (2.6.18.8-domU-linode7 #1)
> EIP is at free_block+0x83/0xfe
> eax: c5883000 ebx: d5c2fa80 ecx: c6e13000 edx: 010d9555
> esi: c6e13a00 edi: d5cac080 ebp: d5e30dd8 esp: c6679d24
> ds: 007b es: 007b ss: 0069
> Process httpd (pid: 3680, ti=c6678000 task=d4c54030 task.ti=c6678000)
> Stack: c01426ea 0000001b 00000011 d5e30d94 d5cac080 c7a28618 d5cb5400 c015a6df
>        00000000 d5e30d80 0000001b d5c2fa80 d5e30d80 00000000 c7a28618 00000000
>        c015a3f4 c7a286b0 c7a286b0 c6679da0 00000029 c01750ce c7a286b8 c0175b6f
> Call Trace:
> [<c01426ea>] pagevec_lookup+0x1c/0x24
> [<c015a6df>] cache_flusharray+0x55/0xc1
> [<c015a3f4>] kmem_cache_free+0xc8/0xee
> [<c01750ce>] destroy_inode+0x2e/0x47
> [<c0175b6f>] dispose_list+0x6e/0xcf
> [<c0175db6>] shrink_icache_memory+0x1e6/0x223
> [<c01439a0>] shrink_slab+0x111/0x186
> [<c0144f94>] try_to_free_pages+0x137/0x1f3
> [<c0140ae7>] __alloc_pages+0x12e/0x2d2
> [<c014c9a5>] __handle_mm_fault+0xfc3/0x15db
> [<c013de28>] __generic_file_aio_read+0x16e/0x243
> [<c013bc05>] file_read_actor+0x0/0xc7
> [<c014e8fb>] vma_adjust+0x11a/0x412
> [<c0166e61>] cp_new_stat64+0xf4/0x106
> [<c01103ea>] do_page_fault+0x10f/0xc70
> [<c015dfc5>] vfs_read+0x100/0x160
> [<c014f5a1>] sys_brk+0xe5/0xef
> [<c01102db>] do_page_fault+0x0/0xc70
> [<c01052e3>] error_code+0x2b/0x30
> Code: 00 40 c1 ea 0c c1 e2 05 03 15 a8 2a 5e c0 8b 02 f6 c4 40 75 7c 8b
> 02 84 c0 79 7e 8b 4a 1c 8b 44 24 20 8b 5c 87 50 8b 11 8b 41 04 <89> 42
> 04 89 10 c7 01 00 01 10 00 c7 41 04 00 02 20 00 2b 71 0c
> EIP: [<c015a531>] free_block+0x83/0xfe SS:ESP 0069:c6679d24
>
>
> -Chris
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel