[Xen-devel] kernel oops with virtual block device driver

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] kernel oops with virtual block device driver
From: Chris Bainbridge <chris.bainbridge@xxxxxxxxx>
Date: Mon, 19 Sep 2005 17:31:35 +0100
Hi,

Using an unstable snapshot from today (changeset 6940), I'm trying to
use an LVM2 logical volume as the root block device of domU (in the
config: disk = ['phy:vg/img,hda1,w']).
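
For reference, the rest of the config is unremarkable; a minimal
sketch, with the values filled in from the xend log below:

  kernel = "/boot_domU/vmlinuz-2.6.12.5-20050919-xenU"
  memory = 128
  name   = "cbc0"
  vif    = ['mac=aa:00:00:04:9e:24, bridge=xen-br0']
  # 'phy:' hands an existing dom0 block device to the guest; vg/img is
  # the LVM2 logical volume (/dev/vg/img), exported as hda1, writable
  disk   = ['phy:vg/img,hda1,w']
  root   = "/dev/hda1 ro"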

Trying to boot domU, I get:

Xen virtual console successfully installed as tty1
Event-channel device installed.
xen_blk: Initialising virtual block device driver
xen_blk: Timeout connecting to device!
xen_net: Initialising virtual ethernet driver.
NET: Registered protocol family 2
.....
VFS: Cannot open root device "hda1" or unknown-block(0,0)

Dom0 spits this onto its console:

Oops: 0000 [#1]
SMP
CPU:    1
EIP:    0061:[<c0146282>]    Not tainted VLI
EFLAGS: 00010202   (2.6.12.5-xen)
EIP is at generic_page_range+0x23/0x16a
eax: 00000000   ebx: c8c7c000   ecx: 00000000   edx: 00000323
esi: c8c7c000   edi: 05bf3000   ebp: c7ad5684   esp: c7ad5628
ds: 007b   es: 007b   ss: 0069
Process kxbwatch (pid: 2025, threadinfo=c7ad4000 task=c12d2020)
Stack: aa55aa55 aa55aa55 00000000 00000000 00000000 00000000 77dd77dd c8c7d000
       00000000 00001000 c8c7c000 05bf3000 c7ad5684 c0115887 c0115714 c7ad5680
       00000000 c7ad5694 00001000 0001056f 00001000 00000000 c7ad5684 f8181818
Call Trace:
 [<c0115887>] __direct_remap_pfn_range+0x10c/0x14d
 [<c0115714>] direct_remap_area_pte_fn+0x0/0x67
 [<c01a4654>] search_by_key+0x41c/0xc48
 [<c0158126>] bh_lru_install+0xc8/0x125
 [<c0136eb0>] find_get_page+0x39/0x4a
 [<c0158230>] __find_get_block+0xad/0x10c
 [<c01a4654>] search_by_key+0x41c/0xc48
 [<c01a4fd6>] search_for_position_by_key+0x156/0x37a
 [<c0204023>] __make_request+0x3c6/0x46a
 [<c01a4ee0>] search_for_position_by_key+0x60/0x37a
 [<c01a4fd6>] search_for_position_by_key+0x156/0x37a
 [<c01900a9>] make_cpu_key+0x4b/0x5b
 [<c01a3ecd>] pathrelse+0x1e/0x2c
 [<c01907c8>] _get_block_create_0+0x4c6/0x698
 [<c013a52f>] mempool_alloc+0x72/0x100
 [<c01a0bb0>] leaf_copy_items_entirely+0x191/0x1de
 [<c01a2467>] leaf_paste_entries+0x108/0x1b5
 [<c018b2b9>] balance_leaf+0xe50/0x2e89
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c0117c73>] try_to_wake_up+0x2a4/0x2f5
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c021f18a>] wake_waiting+0x1e/0x27
 [<c01357af>] handle_IRQ_event+0x58/0x97
 [<c01358f1>] __do_IRQ+0x103/0x132
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de1d0>] schedule+0x3e4/0xc34
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c0118b08>] find_busiest_group+0x1bc/0x301
 [<c0118e9b>] load_balance_newidle+0x30/0x8c
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de1d0>] schedule+0x3e4/0xc34
 [<c02de203>] schedule+0x417/0xc34
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de203>] schedule+0x417/0xc34
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c021f40e>] xs_input_avail+0x28/0x35
 [<c021f59f>] xb_read+0x184/0x1c4
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c021f761>] read_reply+0x63/0xa2
 [<c014d19d>] __get_vm_area+0x1b7/0x1ec
 [<c02249eb>] map_frontend_pages+0x33/0x75
 [<c0224a86>] netif_map+0x58/0x133
 [<c022429e>] frontend_changed+0x190/0x222
 [<c0220456>] watch_thread+0xf2/0x168
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c01198d3>] complete+0x40/0x55
 [<c0220364>] watch_thread+0x0/0x168
 [<c0130a6e>] kthread+0xa0/0xd4
 [<c01309ce>] kthread+0x0/0xd4
 [<c0106c81>] kernel_thread_helper+0x5/0xb
Code: b8 2e c0 e8 74 70 fd ff 55 8d 0c 0a 57 56 53 89 d3 83 ec 24 39
ca 89 44 24 20 89 4c 24 1c 0f 83 19 01 00 00 8b 4c 24 20 c1 ea 16 <8b>
41 1c 83 c1 3c 8d 2c 90 89 c8 89 4c 24 10 e8 dd 96 19 00 8b

Following an older post with a similar oops, I increased dom0 memory
from 128MB to 256MB and got a slightly different trace:

Oops: 0000 [#1]
SMP
CPU:    1
EIP:    0061:[<c0146282>]    Not tainted VLI
EFLAGS: 00010202   (2.6.12.5-xen)
EIP is at generic_page_range+0x23/0x16a
eax: 00000000   ebx: d0c7c000   ecx: 00000000   edx: 00000343
esi: d0c7c000   edi: 0dbf3000   ebp: cfa01684   esp: cfa01628
ds: 007b   es: 007b   ss: 0069
Process kxbwatch (pid: 2026, threadinfo=cfa00000 task=cf42da20)
Stack: cd29da20 00000000 c120ac20 00000000 cfa01684 c0117c73 00000000 d0c7d000
       00000000 00001000 d0c7c000 0dbf3000 cfa01684 c0115887 c0115714 cfa01680
       00000000 cfa01694 00001000 00010001 00001000 00000000 cfa01684 cfa016b8
Call Trace:
 [<c0117c73>] try_to_wake_up+0x2a4/0x2f5
 [<c0115887>] __direct_remap_pfn_range+0x10c/0x14d
 [<c0115714>] direct_remap_area_pte_fn+0x0/0x67
 [<c0130f21>] autoremove_wake_function+0x1b/0x43
 [<c0119797>] __wake_up_common+0x35/0x55
 [<c013a63b>] mempool_free+0x7e/0x8e
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c015a148>] end_bio_bh_io_sync+0x0/0x4f
 [<c013a63b>] mempool_free+0x7e/0x8e
 [<c0158126>] bh_lru_install+0xc8/0x125
 [<c0136eb0>] find_get_page+0x39/0x4a
 [<c0158230>] __find_get_block+0xad/0x10c
 [<c01a4654>] search_by_key+0x41c/0xc48
 [<c013a63b>] mempool_free+0x7e/0x8e
 [<c0208384>] as_update_arq+0x1e/0x5e
 [<c0209125>] as_add_request+0x1a6/0x22d
 [<c0209c78>] as_set_request+0x14/0x63
 [<c0209c64>] as_set_request+0x0/0x63
 [<c020053e>] __elv_add_request+0x90/0xc7
 [<c0204023>] __make_request+0x3c6/0x46a
 [<c01a4ee0>] search_for_position_by_key+0x60/0x37a
 [<c02043a6>] generic_make_request+0x9f/0x218
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c01900a9>] make_cpu_key+0x4b/0x5b
 [<c01a3ecd>] pathrelse+0x1e/0x2c
 [<c01907c8>] _get_block_create_0+0x4c6/0x698
 [<c0130f0f>] autoremove_wake_function+0x9/0x43
 [<c01917e5>] reiserfs_get_block+0xbad/0x11b6
 [<c02de1d0>] schedule+0x3e4/0xc34
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c0108c48>] hypervisor_callback+0x2c/0x34
 [<c0117c73>] try_to_wake_up+0x2a4/0x2f5
 [<c0104866>] force_evtchn_callback+0xc/0xe
 [<c02de203>] schedule+0x417/0xc34
 [<c01358f1>] __do_IRQ+0x103/0x132
 [<c0130f21>] autoremove_wake_function+0x1b/0x43
 [<c0119797>] __wake_up_common+0x35/0x55
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de1d0>] schedule+0x3e4/0xc34
 [<c01358f1>] __do_IRQ+0x103/0x132
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c0118b08>] find_busiest_group+0x1bc/0x301
 [<c0118e9b>] load_balance_newidle+0x30/0x8c
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de1d0>] schedule+0x3e4/0xc34
 [<c02de203>] schedule+0x417/0xc34
 [<c010af8a>] monotonic_clock+0x55/0x95
 [<c02de203>] schedule+0x417/0xc34
 [<c01197ef>] __wake_up+0x38/0x4e
 [<c021f40e>] xs_input_avail+0x28/0x35
 [<c021f59f>] xb_read+0x184/0x1c4
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c021f761>] read_reply+0x63/0xa2
 [<c014d19d>] __get_vm_area+0x1b7/0x1ec
 [<c02249eb>] map_frontend_pages+0x33/0x75
 [<c0224a86>] netif_map+0x58/0x133
 [<c022429e>] frontend_changed+0x190/0x222
 [<c0220456>] watch_thread+0xf2/0x168
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c0130f06>] autoremove_wake_function+0x0/0x43
 [<c01198d3>] complete+0x40/0x55
 [<c0220364>] watch_thread+0x0/0x168
 [<c0130a6e>] kthread+0xa0/0xd4
 [<c01309ce>] kthread+0x0/0xd4
 [<c0106c81>] kernel_thread_helper+0x5/0xb
Code: b8 2e c0 e8 74 70 fd ff 55 8d 0c 0a 57 56 53 89 d3 83 ec 24 39
ca 89 44 24 20 89 4c 24 1c 0f 83 19 01 00 00 8b 4c 24 20 c1 ea 16 <8b>
41 1c 83 c1 3c 8d 2c 90 89 c8 89 4c 24 10 e8 dd 96 19 00 8b

xend-debug.log has:

ERROR: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1
ERROR: do_evtchn_op: HYPERVISOR_event_channel_op failed: -1

xend.log has:

[2005-09-19 17:15:14 xend] DEBUG (image:164) initDomain: cpu=-1
mem_kb=131072 ssidref=0 dom=1
[2005-09-19 17:15:14 xend] DEBUG (XendDomainInfo:702) init_domain>
Created domain=1 name=cbc0 memory=128
[2005-09-19 17:15:14 xend] INFO (image:202) buildDomain os=linux dom=1 vcpus=1
[2005-09-19 17:15:14 xend] DEBUG (image:246) dom            = 1
[2005-09-19 17:15:14 xend] DEBUG (image:247) image          =
/boot_domU/vmlinuz-2.6.12.5-20050919-xenU
[2005-09-19 17:15:14 xend] DEBUG (image:248) store_evtchn   = 1
[2005-09-19 17:15:14 xend] DEBUG (image:249) console_evtchn = 2
[2005-09-19 17:15:14 xend] DEBUG (image:250) cmdline        = 
ip=192.168.1.2:1.2.3.4::::eth0:off root=/dev/hda1 ro
[2005-09-19 17:15:14 xend] DEBUG (image:251) ramdisk        =
[2005-09-19 17:15:14 xend] DEBUG (image:252) flags          = 0
[2005-09-19 17:15:14 xend] DEBUG (image:253) vcpus          = 1
[2005-09-19 17:15:15 xend] DEBUG (blkif:24) exception looking up
device number for hda1: [Errno 2] No such file or directory:
'/dev/hda1'
[2005-09-19 17:15:15 xend] DEBUG (DevController:181) DevController:
writing {'virtual-device': '769', 'backend-id': '0', 'backend':
'/domain/d5de7c20-2e56-4412-aeda-97c8d480c2f1/backend/vbd/0063c554-9062-4268-8dc9-704f4bc26a94/769'}
to /domain/0063c554-9062-4268-8dc9-704f4bc26a94/device/vbd/769.
[2005-09-19 17:15:15 xend] DEBUG (DevController:183) DevController:
writing {'params': 'vg/cbc0', 'domain': 'cbc0', 'type': 'phy',
'frontend': '/domain/0063c554-9062-4268-8dc9-704f4bc26a94/device/vbd/769',
'frontend-id': '1'} to
/domain/d5de7c20-2e56-4412-aeda-97c8d480c2f1/backend/vbd/0063c554-9062-4268-8dc9-704f4bc26a94/769.
[2005-09-19 17:15:15 xend] DEBUG (DevController:181) DevController:
writing {'backend-id': '0', 'mac': 'aa:00:00:04:9e:24', 'handle': '1',
'backend': 
'/domain/d5de7c20-2e56-4412-aeda-97c8d480c2f1/backend/vif/0063c554-9062-4268-8dc9-704f4bc26a94/1'}
to /domain/0063c554-9062-4268-8dc9-704f4bc26a94/device/vif/1.
[2005-09-19 17:15:15 xend] DEBUG (DevController:183) DevController:
writing {'bridge': 'xen-br0', 'mac': 'aa:00:00:04:9e:24', 'handle':
'1', 'script': '/etc/xen/scripts/vif-bridge', 'frontend-id': '1',
'domain': 'cbc0', 'frontend':
'/domain/0063c554-9062-4268-8dc9-704f4bc26a94/device/vif/1'} to
/domain/d5de7c20-2e56-4412-aeda-97c8d480c2f1/backend/vif/0063c554-9062-4268-8dc9-704f4bc26a94/1.
[2005-09-19 17:15:15 xend] INFO (XendRoot:141) EVENT>
xend.domain.create ['cbc0', 1]
[2005-09-19 17:15:15 xend] INFO (XendRoot:141) EVENT>
xend.domain.unpause ['cbc0', 1]
[2005-09-19 17:15:25 xend] DEBUG (XendDomain:232) domain died name=cbc0 domid=1
[2005-09-19 17:15:25 xend] INFO (XendRoot:141) EVENT> xend.domain.died
['cbc0', None]
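
Incidentally, the 'virtual-device': '769' that xend writes looks sane:
it's just hda1's Linux device number (major 3, minor 1) packed in the
classic 16-bit dev_t encoding, so presumably xend derived it from the
name after the /dev/hda1 lookup failed. A rough sketch of the encoding
(not xend's actual code):

  def vbd_number(major, minor):
      # classic 16-bit Linux dev_t: high byte major, low byte minor
      return major * 256 + minor

  print(vbd_number(3, 1))   # hda is major 3, hda1 is minor 1 -> 769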

I've seen bugs #220 and #176, which both look similar, though #176 is
supposed to be fixed now, which is why I updated to today's unstable.

Has anyone got an LVM2 backend working with current unstable?

Thanks,
Chris
