[Xen-users] kernel oops with GFS in domU

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] kernel oops with GFS in domU
From: Kyler Laird <Kyler@xxxxxxxxxxxxxxx>
Date: Thu, 22 Dec 2005 11:45:28 -0500
Delivery-date: Sun, 25 Dec 2005 14:44:26 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: nn/6.7.2
This is with Xen 3.0-testing (changeset 8259).  SMP was disabled in the
domU kernel to allow GFS to compile.
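
For reference, turning SMP off is just the usual 2.6 config dance
(generic steps sketched below; the exact make targets for building a
xenU kernel from the Xen tree may differ):

   make menuconfig     # Processor type and features ->
                       #   disable "Symmetric multi-processing support"
   grep CONFIG_SMP .config     # should show: # CONFIG_SMP is not set
   make && make modules_install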

I'm not sure this is a Xen issue, but I thought some people were
running GFS under Xen, so it might be worth posting.

Update: I tried OCFS2 and FUSE (also in unstable/8438) with the same
result.  I've heard that others have been able to run GFS2 in domU, so
maybe it just requires the right versions and configs?  I'd be happy
to pay for help.
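
(For anyone trying to reproduce: judging from the log below -- lock_gulm,
cluster "eng:00", journals jid=0 through jid=4 -- the filesystem would
have been created and mounted with something like this; the device name
is illustrative:

   gfs_mkfs -p lock_gulm -t eng:00 -j 5 /dev/sdb1
   mount -t gfs /dev/sdb1 /mnt/gfs

The oops hits during that mount, right after journal recovery finishes.)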

--kyler

===============================================================
Lock_Harness 1.01.00 (built Dec 22 2005 16:27:41) installed
GFS 1.01.00 (built Dec 22 2005 16:28:33) installed
Gulm 1.01.00 (built Dec 22 2005 16:27:59) installed
GFS: Trying to join cluster "lock_gulm", "eng:00"
GFS: fsid=eng:00.0: Joined cluster. Now mounting FS...
GFS: fsid=eng:00.0: jid=0: Trying to acquire journal lock...
GFS: fsid=eng:00.0: jid=0: Looking at journal...
GFS: fsid=eng:00.0: jid=0: Done
GFS: fsid=eng:00.0: jid=1: Trying to acquire journal lock...
GFS: fsid=eng:00.0: jid=1: Looking at journal...
GFS: fsid=eng:00.0: jid=1: Done
GFS: fsid=eng:00.0: jid=2: Trying to acquire journal lock...
GFS: fsid=eng:00.0: jid=2: Looking at journal...
GFS: fsid=eng:00.0: jid=2: Done
GFS: fsid=eng:00.0: jid=3: Trying to acquire journal lock...
GFS: fsid=eng:00.0: jid=3: Looking at journal...
GFS: fsid=eng:00.0: jid=3: Done
GFS: fsid=eng:00.0: jid=4: Trying to acquire journal lock...
GFS: fsid=eng:00.0: jid=4: Looking at journal...
GFS: fsid=eng:00.0: jid=4: Done
Unable to handle kernel paging request at ffff8f000db49000 RIP:
<ffffffff88072ee5>{:gfs:gfs_meta_header_out+4}
PGD 0
Oops: 0002 [1]
CPU 0
Modules linked in: lock_gulm gfs lock_harness md5 ipv6 dm_mod
Pid: 490, comm: mount Not tainted 2.6.12.6-xenU
RIP: e030:[<ffffffff88072ee5>] <ffffffff88072ee5>{:gfs:gfs_meta_header_out+4}
RSP: e02b:ffff88000e54d840  EFLAGS: 00010296
RAX: 0000000070191601 RBX: ffff88000e54d888 RCX: 0000000000001000
RDX: 0000000000000008 RSI: ffff8f000db49000 RDI: ffff88000e54d888
RBP: ffff88000e54d9e8 R08: 0000000000000010 R09: 0000000000000000
R10: ffff88000e54d888 R11: 0000000000000068 R12: ffff8f000db49000
R13: 0000000000000000 R14: ffffc20000084718 R15: 0000000000000001
FS:  00002aaaab00e6d0(0000) GS:ffffffff8036a300(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process mount (pid: 490, threadinfo ffff88000e54c000, task ffff88000ed0a170)
Stack: ffffffff88072f3e ffff88000e54d9e8 0000000000000190 ffff88000db26b00
       ffffffff88070aa4 0000000801161970 ffff88000e54da20 ffffc2000005c000
       0000000000000001 0000000901161970
Call Trace: <ffffffff88072f3e>{:gfs:gfs_desc_out+18}
 <ffffffff88070aa4>{:gfs:unlinked_build_bhlist+230}
 <ffffffff8014f3b2>{__alloc_pages+190}
 <ffffffff8806f007>{:gfs:disk_commit+161}
 <ffffffff80151a4d>{cache_alloc_refill+769}
 <ffffffff880701cd>{:gfs:gfs_log_dump+945}
 <ffffffff88081782>{:gfs:gfs_make_fs_rw+235}
 <ffffffff88077900>{:gfs:gfs_get_sb+2759}
 <ffffffff80151727>{kmem_cache_alloc+57}
 <ffffffff8016ea27>{do_kern_mount+86}
 <ffffffff80181d3d>{do_mount+1488}
 <ffffffff8015a9d0>{do_no_page+1316}
 <ffffffff8014f273>{buffered_rmqueue+558}
 <ffffffff8014f3b2>{__alloc_pages+190}
 <ffffffff8014f8f4>{__get_free_pages+30}
 <ffffffff80181e43>{sys_mount+133}
 <ffffffff801111d1>{system_call+117}
 <ffffffff8011115c>{system_call+0}


Code: 89 06 8b 47 04 0f c8 89 46 04 8b 47 10 0f c8 89 46 10 8b 47
RIP <ffffffff88072ee5>{:gfs:gfs_meta_header_out+4} RSP <ffff88000e54d840>
CR2: ffff8f000db49000
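
In case it helps anyone decode this: "Oops: 0002" means a write to a
not-present page, and the first two Code: bytes (89 06 = mov %eax,(%rsi))
write through RSI, which holds the faulting address ffff8f000db49000
(same as CR2).  RSI also differs by a few bits from every other kernel
pointer in the dump (ffff8800...), so the buffer pointer itself may be
corrupt.  To disassemble the Code: bytes yourself, something like this
works (the /tmp path is just an example):

   echo '89 06 8b 47 04 0f c8 89 46 04 8b 47 10 0f c8 89 46 10 8b 47' \
       | xxd -r -p > /tmp/code.bin
   objdump -D -b binary -m i386:x86-64 /tmp/code.bin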



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
