
To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] Dmesg log for 2.6.31-rc8 kernel been built on F12 (rawhide) vs log for same kernel been built on F11 and installed on F12
From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
Date: Wed, 9 Sep 2009 13:02:17 -0700 (PDT)
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090909190444.GB9181@xxxxxxxxxxxxxxxxxxx>
>You contradict yourself later where you say that the 2.6.31-rc8 built
>on F12 and installed on F12 has a stack-trace. Or am I misreading it?

I don't see a contradiction.

When the kernel is built on F12 and installed on F12 (dmesg.1.gz), the dmesg log contains:


======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
2.6.31-rc8 #1
------------------------------------------------------
khubd/28 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
 (&retval->lock){......}, at: [<ffffffff81126058>] dma_pool_alloc+0x46/0x312

and this task is already holding:
 (&ehci->lock){-.....}, at: [<ffffffff813cbcf8>] ehci_urb_enqueue+0xb4/0xd7c
which would create a new lock dependency:
 (&ehci->lock){-.....} -> (&retval->lock){......}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&ehci->lock){-.....}
... which became HARDIRQ-irq-safe at:
  [<ffffffff81098701>] __lock_acquire+0x256/0xc11
  [<ffffffff810991aa>] lock_acquire+0xee/0x12e
  [<ffffffff8150be97>] _spin_lock+0x45/0x8e
  [<ffffffff813ca900>] ehci_irq+0x41/0x441
  [<ffffffff813af8f1>] usb_hcd_irq+0x59/0xcc
  [<ffffffff810c7298>] handle_IRQ_event+0x62/0x148
  [<ffffffff810c982f>] handle_level_irq+0x90/0xf9
  [<ffffffff81018038>] handle_irq+0x9a/0xba
  [<ffffffff813011da>] xen_evtchn_do_upcall+0x10c/0x1bd
  [<ffffffff8101623e>] xen_do_hypervisor_callback+0x1e/0x30
  [<ffffffffffffffff>] 0xffffffffffffffff

to a HARDIRQ-irq-unsafe lock:
 (purge_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
...  [<ffffffff81098776>] __lock_acquire+0x2cb/0xc11
  [<ffffffff810991aa>] lock_acquire+0xee/0x12e
  [<ffffffff8150be97>] _spin_lock+0x45/0x8e
  [<ffffffff8111f277>] __purge_vmap_area_lazy+0x63/0x198
  [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
  [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
  [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
  [<ffffffff81114641>] __pte_alloc_kernel+0x6f/0xdd
  [<ffffffff81120082>] vmap_page_range_noflush+0x1c5/0x315
  [<ffffffff81120213>] map_vm_area+0x41/0x6b
  [<ffffffff8112036c>] __vmalloc_area_node+0x12f/0x167
  [<ffffffff81120434>] __vmalloc_node+0x90/0xb5
  [<ffffffff811206ab>] __vmalloc+0x28/0x3e
  [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
  [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
  [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
  [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
  [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
  [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by khubd/28:
 #0:  (usb_address0_mutex){+.+...}, at: [<ffffffff813aa660>] hub_port_init+0x8c/0x81e
 #1:  (&ehci->lock){-.....}, at: [<ffffffff813cbcf8>] ehci_urb_enqueue+0xb4/0xd7c

the HARDIRQ-irq-safe lock's dependencies:
-> (&ehci->lock){-.....} ops: 0 {
   IN-HARDIRQ-W at:
                        [<ffffffff81098701>] __lock_acquire+0x256/0xc11
                        [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                        [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                        [<ffffffff813ca900>] ehci_irq+0x41/0x441
                        [<ffffffff813af8f1>] usb_hcd_irq+0x59/0xcc
                        [<ffffffff810c7298>] handle_IRQ_event+0x62/0x148
                        [<ffffffff810c982f>] handle_level_irq+0x90/0xf9
                        [<ffffffff81018038>] handle_irq+0x9a/0xba
                        [<ffffffff813011da>] xen_evtchn_do_upcall+0x10c/0x1bd
                        [<ffffffff8101623e>] xen_do_hypervisor_callback+0x1e/0x30
                        [<ffffffffffffffff>] 0xffffffffffffffff
   INITIAL USE at:
                       [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                       [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                       [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                       [<ffffffff813ca900>] ehci_irq+0x41/0x441
                       [<ffffffff813af8f1>] usb_hcd_irq+0x59/0xcc
                       [<ffffffff810c7298>] handle_IRQ_event+0x62/0x148
                       [<ffffffff810c982f>] handle_level_irq+0x90/0xf9
                       [<ffffffff81018038>] handle_irq+0x9a/0xba
                       [<ffffffff813011da>] xen_evtchn_do_upcall+0x10c/0x1bd
                       [<ffffffff8101623e>] xen_do_hypervisor_callback+0x1e/0x30
                       [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key      at: [<ffffffff82684768>] __key.35412+0x0/0x8
 -> (hcd_urb_list_lock){......} ops: 0 {
    INITIAL USE at:
                         [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                         [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                         [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                         [<ffffffff813afc99>] usb_hcd_link_urb_to_ep+0x37/0xc8
                         [<ffffffff813b11d4>] usb_hcd_submit_urb+0x30f/0xa07
                         [<ffffffff813b1f1a>] usb_submit_urb+0x25a/0x2ed
                         [<ffffffff813b3852>] usb_start_wait_urb+0x71/0x1d4
                         [<ffffffff813b3c7c>] usb_control_msg+0x138/0x170
                         [<ffffffff813b5026>] usb_get_descriptor+0x83/0xc9
                         [<ffffffff813b5120>] usb_get_device_descriptor+0xb4/0xfc
                         [<ffffffff813b0b65>] usb_add_hcd+0x472/0x6a8
                         [<ffffffff813c05ce>] usb_hcd_pci_probe+0x263/0x3bd
                         [<ffffffff8128d0f3>] local_pci_probe+0x2a/0x42
                         [<ffffffff8107d667>] do_work_for_cpu+0x27/0x50
                         [<ffffffff810829a0>] kthread+0xac/0xb4
                         [<ffffffff810160ea>] child_rip+0xa/0x20
                         [<ffffffffffffffff>] 0xffffffffffffffff
  }
  ... key      at: [<ffffffff817c4018>] hcd_urb_list_lock+0x18/0x40
 ... acquired at:
   [<ffffffff81098f25>] __lock_acquire+0xa7a/0xc11
   [<ffffffff810991aa>] lock_acquire+0xee/0x12e
   [<ffffffff8150be97>] _spin_lock+0x45/0x8e
   [<ffffffff813afc99>] usb_hcd_link_urb_to_ep+0x37/0xc8
   [<ffffffff813cbd11>] ehci_urb_enqueue+0xcd/0xd7c
   [<ffffffff813b1739>] usb_hcd_submit_urb+0x874/0xa07
   [<ffffffff813b1f1a>] usb_submit_urb+0x25a/0x2ed
   [<ffffffff813b3852>] usb_start_wait_urb+0x71/0x1d4
   [<ffffffff813b3c7c>] usb_control_msg+0x138/0x170
   [<ffffffff813aa918>] hub_port_init+0x344/0x81e
   [<ffffffff813ae002>] hub_events+0x950/0x121e
   [<ffffffff813ae916>] hub_thread+0x46/0x1d0
   [<ffffffff810829a0>] kthread+0xac/0xb4
   [<ffffffff810160ea>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff


the HARDIRQ-irq-unsafe lock's dependencies:
-> (purge_lock){+.+...} ops: 0 {
   HARDIRQ-ON-W at:
                        [<ffffffff81098776>] __lock_acquire+0x2cb/0xc11
                        [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                        [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                        [<ffffffff8111f277>] __purge_vmap_area_lazy+0x63/0x198
                        [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
                        [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
                        [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
                        [<ffffffff81114641>] __pte_alloc_kernel+0x6f/0xdd
                        [<ffffffff81120082>] vmap_page_range_noflush+0x1c5/0x315
                        [<ffffffff81120213>] map_vm_area+0x41/0x6b
                        [<ffffffff8112036c>] __vmalloc_area_node+0x12f/0x167
                        [<ffffffff81120434>] __vmalloc_node+0x90/0xb5
                        [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                        [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                        [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                        [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                        [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                        [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                        [<ffffffffffffffff>] 0xffffffffffffffff
   SOFTIRQ-ON-W at:
                        [<ffffffff81098797>] __lock_acquire+0x2ec/0xc11
                        [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                        [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                        [<ffffffff8111f277>] __purge_vmap_area_lazy+0x63/0x198
                        [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
                        [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
                        [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
                        [<ffffffff81114641>] __pte_alloc_kernel+0x6f/0xdd
                        [<ffffffff81120082>] vmap_page_range_noflush+0x1c5/0x315
                        [<ffffffff81120213>] map_vm_area+0x41/0x6b
                        [<ffffffff8112036c>] __vmalloc_area_node+0x12f/0x167
                        [<ffffffff81120434>] __vmalloc_node+0x90/0xb5
                        [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                        [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                        [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                        [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                        [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                        [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                        [<ffffffffffffffff>] 0xffffffffffffffff
   INITIAL USE at:
                       [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                       [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                       [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                       [<ffffffff8111f277>] __purge_vmap_area_lazy+0x63/0x198
                       [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
                       [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
                       [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
                       [<ffffffff81114641>] __pte_alloc_kernel+0x6f/0xdd
                       [<ffffffff81120082>] vmap_page_range_noflush+0x1c5/0x315
                       [<ffffffff81120213>] map_vm_area+0x41/0x6b
                       [<ffffffff8112036c>] __vmalloc_area_node+0x12f/0x167
                       [<ffffffff81120434>] __vmalloc_node+0x90/0xb5
                       [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                       [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                       [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                       [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                       [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                       [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                       [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key      at: [<ffffffff8178a788>] purge_lock.26392+0x18/0x40
 -> (vmap_area_lock){+.+...} ops: 0 {
    HARDIRQ-ON-W at:
                          [<ffffffff81098776>] __lock_acquire+0x2cb/0xc11
                          [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                          [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                          [<ffffffff8111f5b8>] alloc_vmap_area+0x11d/0x291
                          [<ffffffff8111f880>] __get_vm_area_node+0x154/0x214
                          [<ffffffff8112041b>] __vmalloc_node+0x77/0xb5
                          [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                          [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                          [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                          [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                          [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                          [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                          [<ffffffffffffffff>] 0xffffffffffffffff
    SOFTIRQ-ON-W at:
                          [<ffffffff81098797>] __lock_acquire+0x2ec/0xc11
                          [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                          [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                          [<ffffffff8111f5b8>] alloc_vmap_area+0x11d/0x291
                          [<ffffffff8111f880>] __get_vm_area_node+0x154/0x214
                          [<ffffffff8112041b>] __vmalloc_node+0x77/0xb5
                          [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                          [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                          [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                          [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                          [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                          [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                          [<ffffffffffffffff>] 0xffffffffffffffff
    INITIAL USE at:
                         [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                         [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                         [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                         [<ffffffff8111f5b8>] alloc_vmap_area+0x11d/0x291
                         [<ffffffff8111f880>] __get_vm_area_node+0x154/0x214
                         [<ffffffff8112041b>] __vmalloc_node+0x77/0xb5
                         [<ffffffff811206ab>] __vmalloc+0x28/0x3e
                         [<ffffffff81a0422a>] alloc_large_system_hash+0x12f/0x1fb
                         [<ffffffff81a06aba>] vfs_caches_init+0xb8/0x140
                         [<ffffffff819dea69>] start_kernel+0x3ef/0x44c
                         [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                         [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                         [<ffffffffffffffff>] 0xffffffffffffffff
  }
  ... key      at: [<ffffffff8178a738>] vmap_area_lock+0x18/0x40
  -> (&rnp->lock){..-...} ops: 0 {
     IN-SOFTIRQ-W at:
                            [<ffffffff81098722>] __lock_acquire+0x277/0xc11
                            [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                            [<ffffffff8150c06a>] _spin_lock_irqsave+0x5d/0xab
                            [<ffffffff810cb655>] cpu_quiet+0x38/0xb0
                            [<ffffffff810cbdf5>] __rcu_process_callbacks+0x83/0x259
                            [<ffffffff810cc028>] rcu_process_callbacks+0x5d/0x76
                            [<ffffffff8106daca>] __do_softirq+0xf6/0x1f0
                            [<ffffffff810161ec>] call_softirq+0x1c/0x30
                            [<ffffffff81017d7f>] do_softirq+0x5f/0xd7
                            [<ffffffff8106d3e1>] irq_exit+0x66/0xbc
                            [<ffffffff8130125b>] xen_evtchn_do_upcall+0x18d/0x1bd
                            [<ffffffff8101623e>] xen_do_hypervisor_callback+0x1e/0x30
                            [<ffffffffffffffff>] 0xffffffffffffffff
     INITIAL USE at:
                           [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                           [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                           [<ffffffff8150c06a>] _spin_lock_irqsave+0x5d/0xab
                           [<ffffffff81506943>] rcu_init_percpu_data+0x3d/0x18b
                           [<ffffffff81506adb>] rcu_cpu_notify+0x4a/0xa7
                           [<ffffffff81a0074d>] __rcu_init+0x168/0x1b3
                           [<ffffffff819fd50b>] rcu_init+0x1c/0x3e
                           [<ffffffff819de8e2>] start_kernel+0x268/0x44c
                           [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                           [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                           [<ffffffffffffffff>] 0xffffffffffffffff
   }
   ... key      at: [<ffffffff8249e240>] __key.20299+0x0/0x8
  ... acquired at:
   [<ffffffff81098f25>] __lock_acquire+0xa7a/0xc11
   [<ffffffff810991aa>] lock_acquire+0xee/0x12e
   [<ffffffff8150c06a>] _spin_lock_irqsave+0x5d/0xab
   [<ffffffff810cbb94>] __call_rcu+0x9d/0x13c
   [<ffffffff810cbc99>] call_rcu+0x28/0x3e
   [<ffffffff8111f1f7>] __free_vmap_area+0x7b/0x98
   [<ffffffff8111f35d>] __purge_vmap_area_lazy+0x149/0x198
   [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
   [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
   [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
   [<ffffffff8111463f>] __pte_alloc_kernel+0x6d/0xdd
   [<ffffffff8126f1ed>] ioremap_page_range+0x1b1/0x2c8
   [<ffffffff81040ece>] __ioremap_caller+0x2b0/0x32b
   [<ffffffff8104102b>] ioremap_nocache+0x17/0x19
   [<ffffffff81a1f6e9>] pci_mmcfg_arch_init+0xb6/0x162
   [<ffffffff81a205ae>] __pci_mmcfg_init+0x2d8/0x31d
   [<ffffffff81a20611>] pci_mmcfg_late_init+0x1e/0x34
   [<ffffffff81a13b06>] acpi_init+0x1b7/0x288
   [<ffffffff8100a0b3>] do_one_initcall+0x81/0x1b9
   [<ffffffff819de394>] kernel_init+0x195/0x203
   [<ffffffff810160ea>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff

  -> (&rcu_state.onofflock){..-...} ops: 0 {
     IN-SOFTIRQ-W at:
                            [<ffffffff81098722>] __lock_acquire+0x277/0xc11
                            [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                            [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                            [<ffffffff810cb4c1>] rcu_start_gp+0xc5/0x154
                            [<ffffffff810cb5fe>] cpu_quiet_msk+0xae/0xcd
                            [<ffffffff810cb6ad>] cpu_quiet+0x90/0xb0
                            [<ffffffff810cbdf5>] __rcu_process_callbacks+0x83/0x259
                            [<ffffffff810cc009>] rcu_process_callbacks+0x3e/0x76
                            [<ffffffff8106daca>] __do_softirq+0xf6/0x1f0
                            [<ffffffff810161ec>] call_softirq+0x1c/0x30
                            [<ffffffff81017d7f>] do_softirq+0x5f/0xd7
                            [<ffffffff8106d3e1>] irq_exit+0x66/0xbc
                            [<ffffffff8130125b>] xen_evtchn_do_upcall+0x18d/0x1bd
                            [<ffffffff8101623e>] xen_do_hypervisor_callback+0x1e/0x30
                            [<ffffffffffffffff>] 0xffffffffffffffff
     INITIAL USE at:
                           [<ffffffff810987ee>] __lock_acquire+0x343/0xc11
                           [<ffffffff810991aa>] lock_acquire+0xee/0x12e
                           [<ffffffff8150be97>] _spin_lock+0x45/0x8e
                           [<ffffffff815069ec>] rcu_init_percpu_data+0xe6/0x18b
                           [<ffffffff81506adb>] rcu_cpu_notify+0x4a/0xa7
                           [<ffffffff81a0074d>] __rcu_init+0x168/0x1b3
                           [<ffffffff819fd50b>] rcu_init+0x1c/0x3e
                           [<ffffffff819de8e2>] start_kernel+0x268/0x44c
                           [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
                           [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
                           [<ffffffffffffffff>] 0xffffffffffffffff
   }
   ... key      at: [<ffffffff817850f0>] rcu_state+0x14f0/0x1580
   ... acquired at:
   [<ffffffff81098f25>] __lock_acquire+0xa7a/0xc11
   [<ffffffff810991aa>] lock_acquire+0xee/0x12e
   [<ffffffff8150be97>] _spin_lock+0x45/0x8e
   [<ffffffff81506a00>] rcu_init_percpu_data+0xfa/0x18b
   [<ffffffff81506adb>] rcu_cpu_notify+0x4a/0xa7
   [<ffffffff81a0074d>] __rcu_init+0x168/0x1b3
   [<ffffffff819fd50b>] rcu_init+0x1c/0x3e
   [<ffffffff819de8e2>] start_kernel+0x268/0x44c
   [<ffffffff819ddd70>] x86_64_start_reservations+0xbb/0xd6
   [<ffffffff819e23b7>] xen_start_kernel+0x5d5/0x5dc
   [<ffffffffffffffff>] 0xffffffffffffffff

  ... acquired at:
   [<ffffffff81098f25>] __lock_acquire+0xa7a/0xc11
   [<ffffffff810991aa>] lock_acquire+0xee/0x12e
   [<ffffffff8150be97>] _spin_lock+0x45/0x8e
   [<ffffffff810cb4c1>] rcu_start_gp+0xc5/0x154
   [<ffffffff810cbb9f>] __call_rcu+0xa8/0x13c
   [<ffffffff810cbc99>] call_rcu+0x28/0x3e
   [<ffffffff8111f1f7>] __free_vmap_area+0x7b/0x98
   [<ffffffff8111f35d>] __purge_vmap_area_lazy+0x149/0x198
   [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
   [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
   [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
   [<ffffffff8111463f>] __pte_alloc_kernel+0x6d/0xdd
   [<ffffffff8126f1ed>] ioremap_page_range+0x1b1/0x2c8
   [<ffffffff81040ece>] __ioremap_caller+0x2b0/0x32b
   [<ffffffff8104102b>] ioremap_nocache+0x17/0x19
   [<ffffffff81a1f6e9>] pci_mmcfg_arch_init+0xb6/0x162
   [<ffffffff81a205ae>] __pci_mmcfg_init+0x2d8/0x31d
   [<ffffffff81a20611>] pci_mmcfg_late_init+0x1e/0x34
   [<ffffffff81a13b06>] acpi_init+0x1b7/0x288
   [<ffffffff8100a0b3>] do_one_initcall+0x81/0x1b9
   [<ffffffff819de394>] kernel_init+0x195/0x203
   [<ffffffff810160ea>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff

 ... acquired at:
   [<ffffffff81098f25>] __lock_acquire+0xa7a/0xc11
   [<ffffffff810991aa>] lock_acquire+0xee/0x12e
   [<ffffffff8150be97>] _spin_lock+0x45/0x8e
   [<ffffffff8111f346>] __purge_vmap_area_lazy+0x132/0x198
   [<ffffffff81120b55>] vm_unmap_aliases+0x18f/0x1b2
   [<ffffffff8100e3de>] xen_alloc_ptpage+0x47/0x75
   [<ffffffff8100e449>] xen_alloc_pte+0x13/0x15
   [<ffffffff8111463f>] __pte_alloc_kernel+0x6d/0xdd
   [<ffffffff8126f1ed>] ioremap_page_range+0x1b1/0x2c8
   [<ffffffff81040ece>] __ioremap_caller+0x2b0/0x32b
   [<ffffffff8104102b>] ioremap_nocache+0x17/0x19
   [<ffffffff81a1f6e9>] pci_mmcfg_arch_init+0xb6/0x162
   [<ffffffff81a205ae>] __pci_mmcfg_init+0x2d8/0x31d
   [<ffffffff81a20611>] pci_mmcfg_late_init+0x1e/0x34
   [<ffffffff81a13b06>] acpi_init+0x1b7/0x288
   [<ffffffff8100a0b3>] do_one_initcall+0x81/0x1b9
   [<ffffffff819de394>] kernel_init+0x195/0x203
   [<ffffffff810160ea>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff


stack backtrace:
Pid: 28, comm: khubd Not tainted 2.6.31-rc8 #1
Call Trace:
 [<ffffffff8109839d>] check_usage+0x29a/0x2bf
 [<ffffffff8109801d>] ? check_noncircular+0xa1/0xe8
 [<ffffffff81098432>] check_irq_usage+0x70/0xe9
 [<ffffffff81098e1f>] __lock_acquire+0x974/0xc11
 [<ffffffff810186e9>] ? dump_trace+0x25c/0x27f
 [<ffffffff81095a00>] ? find_usage_backwards+0xb6/0x14f
 [<ffffffff810991aa>] lock_acquire+0xee/0x12e
 [<ffffffff81126058>] ? dma_pool_alloc+0x46/0x312
 [<ffffffff81126058>] ? dma_pool_alloc+0x46/0x312
 [<ffffffff8150c06a>] _spin_lock_irqsave+0x5d/0xab
 [<ffffffff81126058>] ? dma_pool_alloc+0x46/0x312
 [<ffffffff81126058>] dma_pool_alloc+0x46/0x312
 [<ffffffff8100e9b0>] ? xen_force_evtchn_callback+0x20/0x36
 [<ffffffff8100f3e2>] ? check_events+0x12/0x20
 [<ffffffff813c9c1e>] ehci_qh_alloc+0x37/0xfe
 [<ffffffff8100f3cf>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff813cb812>] qh_append_tds+0x4c/0x47e
 [<ffffffff8150bc6e>] ? _spin_unlock+0x3a/0x55
 [<ffffffff813cbd36>] ehci_urb_enqueue+0xf2/0xd7c
 [<ffffffff813b0e00>] ? dma_map_single_attrs.clone.1+0x2e/0xf3
 [<ffffffff8109730b>] ? mark_lock+0x36/0x241
 [<ffffffff813b1739>] usb_hcd_submit_urb+0x874/0xa07
 [<ffffffff81097a49>] ? debug_check_no_locks_freed+0x13d/0x16a
 [<ffffffff8109789a>] ? trace_hardirqs_on_caller+0x139/0x175
 [<ffffffff810965df>] ? lockdep_init_map+0xad/0x138
 [<ffffffff813b1f1a>] usb_submit_urb+0x25a/0x2ed
 [<ffffffff81083297>] ? __init_waitqueue_head+0x4d/0x76
 [<ffffffff813b3852>] usb_start_wait_urb+0x71/0x1d4
 [<ffffffff813b3c7c>] usb_control_msg+0x138/0x170
 [<ffffffff813aa918>] hub_port_init+0x344/0x81e
 [<ffffffff813ae002>] hub_events+0x950/0x121e
 [<ffffffff8100f3e2>] ? check_events+0x12/0x20
 [<ffffffff813ae916>] hub_thread+0x46/0x1d0
 [<ffffffff81082df3>] ? autoremove_wake_function+0x0/0x5f
 [<ffffffff813ae8d0>] ? hub_thread+0x0/0x1d0
 [<ffffffff810829a0>] kthread+0xac/0xb4
 [<ffffffff810160ea>] child_rip+0xa/0x20
 [<ffffffff81015a50>] ? restore_args+0x0/0x30
 [<ffffffff810160e0>] ? child_rip+0x0/0x20
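
For reference, the situation lockdep describes above can be collapsed into a toy kernel-module sketch like the one below. It is hypothetical and not taken from this thread: irq_safe_lock plays the role of &ehci->lock, irq_unsafe_lock plays the role of purge_lock (the intermediate &retval->lock link is left out), and all function and module names are invented for illustration.

/*
 * Hypothetical sketch of the "HARDIRQ-safe -> HARDIRQ-unsafe lock order"
 * pattern in the report above.  All names are made up; only the lock
 * roles correspond to the trace.
 */
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(irq_safe_lock);    /* role of &ehci->lock */
static DEFINE_SPINLOCK(irq_unsafe_lock);  /* role of purge_lock  */

/* (1) Taken from hardirq context, so lockdep classifies it HARDIRQ-safe.
 *     In the report this is ehci_irq() taking &ehci->lock. */
static irqreturn_t demo_irq(int irq, void *dev_id)
{
        spin_lock(&irq_safe_lock);
        spin_unlock(&irq_safe_lock);
        return IRQ_NONE;
}

/* (2) Taken in process context with interrupts enabled, so lockdep
 *     classifies it HARDIRQ-unsafe.  In the report this is
 *     __purge_vmap_area_lazy() taking purge_lock. */
static void take_unsafe_lock(void)
{
        spin_lock(&irq_unsafe_lock);
        spin_unlock(&irq_unsafe_lock);
}

/* (3) The new dependency: the HARDIRQ-safe lock is held (irqs off) while
 *     the HARDIRQ-unsafe one is acquired.  In the report the new link is
 *     &ehci->lock -> &retval->lock (ehci_urb_enqueue() holds the former
 *     while dma_pool_alloc() takes the latter), and &retval->lock already
 *     connects onward to the HARDIRQ-unsafe purge_lock in lockdep's
 *     dependency graph.  If another CPU held the unsafe lock with irqs on
 *     and the interrupt fired there, both CPUs could spin forever. */
static void create_safe_to_unsafe_dependency(void)
{
        unsigned long flags;

        spin_lock_irqsave(&irq_safe_lock, flags);
        spin_lock(&irq_unsafe_lock);
        spin_unlock(&irq_unsafe_lock);
        spin_unlock_irqrestore(&irq_safe_lock, flags);
}

static int __init lockdep_demo_init(void)
{
        /* A real reproducer would register demo_irq with request_irq() so
         * it actually runs in hardirq context; it is only referenced here
         * so the sketch builds without warnings. */
        (void)demo_irq;

        take_unsafe_lock();
        create_safe_to_unsafe_dependency();
        return 0;
}

static void __exit lockdep_demo_exit(void)
{
}

module_init(lockdep_demo_init);
module_exit(lockdep_demo_exit);
MODULE_LICENSE("GPL");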




--- On Wed, 9/9/09, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:

From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] Dmesg log for 2.6.31-rc8 kernel been built on F12 (rawhide) vs log for same kernel been built on F11 and installed on F12
To: "Boris Derzhavets" <bderzhavets@xxxxxxxxx>
Cc: "Jeremy Fitzhardinge" <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, fedora-xen@xxxxxxxxxx
Date: Wednesday, September 9, 2009, 3:04 PM

On Wed, Sep 09, 2009 at 07:55:34AM -0700, Boris Derzhavets wrote:
> >I am not sure if I understand you correctly. Are you saying that 2.6.31-rc8
> >boots without the stack-trace failure? Can you attach the dmesg please
> >and also 'lspci -vvv' output?
>
> If I build rc8 on F11 (dual-booting with F12) and install the kernel and modules via
**************************************************************************
This is done on F12:

 # mount /dev/mapper/serverfedora11-lv_root /mnt
 # cd /mnt/usr/src/linux-2.6-xen

F11's folder /mnt/usr/src/linux-2.6-xen contains the prepared kernel and
modules compiled on F11.
Here we copy them from F11's folder to the F12 filesystem:

# make modules_install install

(the mount was done on F12), I get a stable rc8 kernel on F12.

View dmesg.log; it's clean.

*************************************************************************
You contradict yourself later where you say that the 2.6.31-rc8 built
on F12 and installed on F12 has a stack-trace. Or am I misreading it?

>
>   If I compile and install on F12, the kernel has a stack trace and is pretty unstable at
> runtime.

Is the kernel you built on F11 (the one called ServerXen35) producing the
same dmesg with and without Xen? That is, a dmesg that does not have a
stack trace?

>
> Now I am sending two dmesg reports:
> 1. Kernel 2.6.31-rc8 built on F11 and installed on F12:
> dmesg.log.gz (clean)

>  2. Kernel 2.6.31-rc8 built on F12 and installed on F12:
> dmesg.1.gz (stack trace here)
> The "lspci -vvv" report has also been sent to you per your request, but I will repeat it. This one is for the C2D E8400 box.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
