WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-users

RE: [Xen-users] XEN and lvm

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] XEN and lvm
From: "Matthias Pfafferodt" <mapfa@xxxxxx>
Date: Tue, 24 May 2005 08:32:40 +0200 (CEST)
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Delivery-date: Tue, 24 May 2005 06:32:39 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
Importance: Normal
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D1E41A6@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E41A6@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: SquirrelMail/1.4.0
Hello Ian,

>> I use the XEN packages from SuSE 9.3 (xen 2.0.5c). The hard
>> disc has one partition for /boot and dom0/domU on lvm. I can
>> boot into dom0 without problems but if I try to start domU I
>> get the kernel error message listed in message.kernel.
>>
>> I added another disk so dom0 is not on lvm. With this
>> configuration XEN is working (I can boot domU without
>> problems). Is this a known issue? Are there solutions?
>
> Hmm, I haven't seen this before, and it looks like a bit of a nasty
> crash. Odd that it's a GPF rather than a page fault. Would you mind
> trying again with a kernel from the xen 2.0.6 release?

Sorry, at the moment I don't have the time to compile a new kernel. Are
there any xen-2.0.6 RPM packages for SuSE 9.3? I can try to compile a new
kernel in 1-2 weeks.

There is also some strange behavior of the LVM backend. If I start Xen dom0
(on LVM) and then restart the computer into Linux without Xen, LVM can't
mount the volumes. The boot process stops in runlevel 1 and I have to mount
all partitions/logical volumes manually. The next boot without Xen then
works without problems. If I boot Xen again, the same problem occurs.
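
For reference, the manual recovery I do in runlevel 1 looks roughly like
this (a sketch; whether vgscan is needed depends on the distribution's
init scripts, and the volume/mount layout is system-specific):

```shell
# Rough sketch of manually bringing up LVM volumes in runlevel 1.
vgscan          # rescan block devices for LVM volume groups
vgchange -ay    # activate all volume groups that were found
mount -a        # mount the remaining entries from /etc/fstab
```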

All partitions/volumes are formatted with reiserfs 3.6.

Best regards

Matthias

>
> Thanks,
> Ian
>
> May 22 20:51:35 server kernel: general protection fault: 0000 [#1]
> May 22 20:51:35 server kernel: Modules linked in: bridge subfs usbserial
> serial_core floppy nvram nfsd exportfs ipv6 video1394 ohci1394 raw1394
> ieee1394 capability edd evdev joydev sg st sd_mod sr_mod scsi_mod
> via_agp agpgart i2c_viapro i2c_core epic100 mii uhci_hcd usbcore
> parport_pc lp parport reiserfs ide_cd cdrom ide_disk dm_snapshot dm_mod
> via82cxxx ide_core
> May 22 20:51:35 server kernel: CPU:    0
> May 22 20:51:35 server kernel: EIP:    0061:[<c013cddc>]    Not tainted
> VLI
> May 22 20:51:35 server kernel: EFLAGS: 00011286   (2.6.11.4-20a-xen)
> May 22 20:51:35 server kernel: EIP is at set_page_dirty+0x1c/0x50
> May 22 20:51:35 server kernel: eax: ffffff5c   ebx: 0000b0e0   ecx:
> c100b0e0   edx: c0245630
> May 22 20:51:35 server kernel: esi: 00000000   edi: 00000000   ebp:
> c40a5bb4   esp: c5f59e80
> May 22 20:51:35 server kernel: ds: 007b   es: 007b   ss: 0069
> May 22 20:51:35 server kernel: Process python (pid: 5872,
> threadinfo=c5f58000 task=c737f080)
> May 22 20:51:35 server kernel: Stack: c014496f 00000020 00000020
> 00000000 00000000 01987067 c100b0e0 00000000
> May 22 20:51:35 server kernel:        00001000 b76ed000 c03894e0
> b7aed000 c1dcdb78 b76ee000 c03894e0 c0144abb
> May 22 20:51:35 server kernel:        00001000 00000000 00000000
> 00000000 b76ed000 c1dcdb78 b76ee000 c03894e0
> May 22 20:51:35 server kernel: Call Trace:
> May 22 20:51:35 server kernel:  [<c014496f>] zap_pte_range+0x24f/0x350
> May 22 20:51:35 server kernel:  [<c0144abb>] zap_pmd_range+0x4b/0x70
> May 22 20:51:35 server kernel:  [<c0144b1d>] zap_pud_range+0x3d/0x70
> May 22 20:51:35 server kernel:  [<c0147d17>] vma_link+0x77/0xa0
> May 22 20:51:35 server kernel:  [<c0144bb7>] unmap_page_range+0x67/0x80
> May 22 20:51:35 server kernel:  [<c0144cc4>] unmap_vmas+0xf4/0x230
> May 22 20:51:36 server kernel:  [<c01493f6>] unmap_region+0x76/0xf0
> May 22 20:51:36 server kernel:  [<c01496c6>] do_munmap+0xe6/0x130
> May 22 20:51:36 server kernel:  [<c0149750>] sys_munmap+0x40/0x70
> May 22 20:51:36 server kernel:  [<c0109590>] syscall_call+0x7/0xb
> May 22 20:51:36 server kernel: Code: 40 10 89 d0 e9 16 ff ff ff 8d b6 00
> 00 00 00 89 c1 8b 50 10 8b 00 c1 e8 10 83 e0 01 75 26 f6 c2 01 0f 45 d0
> 85 d2 74 23 8b 42 30 <8b> 50 10 85 d2 74 05 89 c8 ff d2 c3 89 c8 8d b6
> 00 00 00 00 e9
>
>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
