[Xen-bugs] [Bug 876] New: xvd kernel thread panic in dom0: Unable to handle kernel paging request

To: xen-bugs@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-bugs] [Bug 876] New: xvd kernel thread panic in dom0: Unable to handle kernel paging request
From: bugzilla-daemon@xxxxxxxxxxxxxxxxxxx
Date: Fri, 26 Jan 2007 10:05:14 -0800
Delivery-date: Fri, 26 Jan 2007 10:05:52 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-bugs-request@lists.xensource.com?subject=help>
List-id: Xen Bugzilla <xen-bugs.lists.xensource.com>
List-post: <mailto:xen-bugs@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=unsubscribe>
Reply-to: bugs@xxxxxxxxxxxxxxxxxx
Sender: xen-bugs-bounces@xxxxxxxxxxxxxxxxxxx

           Summary: xvd kernel thread panic in dom0: Unable to handle kernel
                    paging request
           Product: Xen
           Version: unspecified
          Platform: x86
        OS/Version: Linux
            Status: NEW
          Severity: major
          Priority: P2
         Component: Unspecified
        AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
        ReportedBy: ian@xxxxxxxxxx
                CC: ian@xxxxxxxxxx

Software: Custom-maintained Debian fork (a mixture of woody/sarge/etch/sid),
Xen 3.0.3 32-bit (non-PAE), hand-built vanilla Linux Xen kernel, LVM2
logical volume block devices.

Hardware: AMD X2 4200+, ASUS A8V motherboard, 4G RAM, 4 PATA IDE drives.

Problem: This dom0 panic killed off the kernel thread "xvd" (pid 22975), which was
serving one of 5 backend devices for a domain that is now zombied. The "xm" tool was
choking on the xenstore-listed dom0 backend devices until manually cleared out with

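For anyone left with the same zombie-domain leftovers, here is a minimal sketch of
that kind of cleanup against the libxenstore C API (xs_daemon_open, xs_directory,
xs_rm, as shipped with the Xen 3.0.x tools). The backend path layout and the idea
of passing the zombie's domid on the command line are my assumptions, not details
taken from this report, and the nodes should only be removed once the backend
devices are confirmed dead:

/* Sketch: list and then remove the stale vbd backend nodes that dom0 still
 * holds for a zombie domU.  The path layout below is an assumption based on
 * the usual /local/domain/0/backend/vbd/<domid>/<devid> convention. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <xs.h>                         /* libxenstore from the Xen tools */

int main(int argc, char **argv)
{
    struct xs_handle *xs;
    char path[256];
    char **entries;
    unsigned int num, i;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }

    xs = xs_daemon_open();              /* connect to the local xenstored */
    if (!xs) {
        fprintf(stderr, "cannot connect to xenstored\n");
        return 1;
    }

    snprintf(path, sizeof(path), "/local/domain/0/backend/vbd/%s", argv[1]);

    /* Show what is still hanging around before touching anything. */
    entries = xs_directory(xs, XBT_NULL, path, &num);
    if (entries) {
        for (i = 0; i < num; i++)
            printf("stale backend node: %s/%s\n", path, entries[i]);
        free(entries);
    }

    /* xenstored removes the node together with its children. */
    if (!xs_rm(xs, XBT_NULL, path))
        fprintf(stderr, "failed to remove %s\n", path);

    xs_daemon_close(xs);
    return 0;
}
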
At the time this happened, dirvish would have been in the middle of rsync
backups of the dom0 from an external box, so memory pressure in dom0 would have
been rather high, though total VM exhaustion should not have been a problem as
there was more than enough swap.

 Unable to handle kernel paging request at virtual address c9216000
  printing eip:
 *pde = ma 03021067 pa 00021067
 *pte = ma 00000000 pa fffff000
 Oops: 0000 [#1]
 Modules linked in: xt_physdev dm_snapshot loop tun ipt_LOG ipt_ah ipt_esp
xt_tcpudp xt_state iptable_nat iptable_filter xt_tcpmss ip_nat_ftp ip_nat
ip_conntrack_ftp ip_conntrack nfnetlink ip_tables x_tables ipv6 bridge
i2c_viapro i2c_core ehci_hcd uhci_hcd usbcore skge shpchp pci_hotplug amd64_agp
agpgart serial_core vfat fat nls_cp437 nls_iso8859_1 BusLogic ide_cd cdrom
isofs dm_zero dm_mirror dm_mod sata_promise sata_sil sata_nv tg3 e1000
via_velocity crc_ccitt 8139too 8139cp eepro100 pcnet32 sk98lin forcedeth af_key
ipcomp xfrm4_tunnel via_rhine tulip dmfe 3c59x e100 mii genrtc
 CPU:    0
 EIP:    0061:[__bio_clone+48/176]    Not tainted VLI
 EFLAGS: 00010216   ( #1)
 EIP is at __bio_clone+0x30/0xb0
 eax: 000000c0   ebx: c9215ec0   ecx: 00000002   edx: 000000c0
 esi: c9216000   edi: f3ca2978   ebp: c09afb40   esp: de4efa6c
 ds: 007b   es: 007b   ss: 0069
 Process xvd 17 fd:02 (pid: 22975, threadinfo=de4ee000 task=f3e9e030)
 Stack: <0>00000010 000000c0 f382f7c4 c09afb40 c9215ec0 d26570c0 00000000
        c09afb40 c9215ec0 c6ed0200 00000000 d26570c0 c03bb3a9 c9215ec0 00000010
        f3ca28c0 f3ca2680 c0f816c0 00000100 00000000 00000000 00000800 000000ff
 Call Trace:
  [bio_clone+68/96] bio_clone+0x44/0x60
  [make_request+761/1152] make_request+0x2f9/0x480
  [make_request+1072/1152] make_request+0x430/0x480
  [generic_make_request+223/336] generic_make_request+0xdf/0x150
  [bio_clone+68/96] bio_clone+0x44/0x60
  [<f4e0f4be>] __map_bio+0x4e/0xd0 [dm_mod]
  [<f4e0f793>] __clone_and_map+0x103/0x390 [dm_mod]
  [mempool_alloc+51/224] mempool_alloc+0x33/0xe0
  [<f4e0fae8>] __split_bio+0xc8/0x100 [dm_mod]
  [hypervisor_callback+61/72] hypervisor_callback+0x3d/0x48
  [<f4e0fbdc>] dm_request+0xbc/0xf0 [dm_mod]
  [cache_alloc_refill+190/528] cache_alloc_refill+0xbe/0x210
  [generic_make_request+223/336] generic_make_request+0xdf/0x150
  [mempool_alloc+51/224] mempool_alloc+0x33/0xe0
  [do_IRQ+31/48] do_IRQ+0x1f/0x30
  [submit_bio+99/256] submit_bio+0x63/0x100
  [bio_add_page+63/80] bio_add_page+0x3f/0x50
  [plug_queue+49/64] plug_queue+0x31/0x40
  [dispatch_rw_block_io+1167/1280] dispatch_rw_block_io+0x48f/0x500
  [__switch_to+301/1008] __switch_to+0x12d/0x3f0
  [schedule+986/1808] schedule+0x3da/0x710
  [do_block_io_op+183/208] do_block_io_op+0xb7/0xd0
  [blkif_schedule+96/640] blkif_schedule+0x60/0x280
  [autoremove_wake_function+0/96] autoremove_wake_function+0x0/0x60
  [autoremove_wake_function+0/96] autoremove_wake_function+0x0/0x60
  [blkif_schedule+0/640] blkif_schedule+0x0/0x280
  [kthread+183/192] kthread+0xb7/0xc0
  [kthread+0/192] kthread+0x0/0xc0
  [kernel_thread_helper+5/16] kernel_thread_helper+0x5/0x10
 Code: ec 0c 8b 5c 24 24 8b 6c 24 20 8b 43 0c 8b 40 58 8b 40 34 89 44 24 08 8b
73 30 8b 7d 30 8b 43 2c 8d 04 40 c1 e0 02 89 c1 c1 e9 02 <f3> a5 89 c1 83 e1 03
74 02 f3 a4 8b 53 04 8b 03 89 55 04 8b 55

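For what it is worth, the fault looks like it is in the rep-movs that copies the
source bio's bi_io_vec array at the top of __bio_clone(): the faulting address
c9216000 is in %esi (the source pointer of the copy, with DF clear in EFLAGS) and
sits exactly on a page boundary, with the unmapped PTE shown above. Reading %eax
as the total copy length and %ecx as the dwords still to go is my interpretation
of the Code: bytes, not something stated in the oops; under that reading, the
numbers work out as in this small stand-alone sketch:

/* Stand-alone sketch decoding the register dump above.  The 12-byte 32-bit
 * struct bio_vec size and the meaning given to %eax/%ecx are assumptions. */
#include <stdio.h>

int main(void)
{
    unsigned long fault_va    = 0xc9216000UL; /* faulting address, == %esi       */
    unsigned long copy_bytes  = 0x000000c0UL; /* %eax: total length of the copy  */
    unsigned long dwords_left = 0x00000002UL; /* %ecx: dwords still to copy      */
    unsigned long src_bio     = 0xc9215ec0UL; /* %ebx: the source bio            */

    unsigned long copied  = copy_bytes - 4 * dwords_left;  /* bytes already moved  */
    unsigned long src_vec = fault_va - copied;             /* where the copy began */

    printf("fault sits on a page boundary        : %s\n",
           (fault_va & 0xfff) ? "no" : "yes");
    printf("source vector appears to start at    : %#lx (%#lx bytes past the bio)\n",
           src_vec, src_vec - src_bio);
    printf("copy length %lu bytes = %lu bio_vec entries of 12 bytes each\n",
           copy_bytes, copy_bytes / 12);
    return 0;
}

If that reading is right, the copy started just past the source bio at c9215ec0
and ran off the end of that page into an unmapped one.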