WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] Xen BUG in mm / Xen 4.0.1 with 2.6.32.18/21 pvops Kernel?

To: JBeulich <JBeulich@xxxxxxxxxx>
Subject: AW: Re: [Xen-devel] Xen BUG in mm / Xen 4.0.1 with 2.6.32.18/21 pvops Kernel?
From: "Carsten Schiers" <carsten@xxxxxxxxxx>
Date: Wed, 8 Sep 2010 22:15:57 +0200
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 08 Sep 2010 13:17:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C87A2E60200007800014ED5@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi,

no change when I set e.g. dom0_mem=3000M (see below and in the log; in
the log I also tried 196M, same result).

> you'd have to look at (or
> provide) your DSDT and SSDT(s) to see where this reference comes
> from.

Sorry, my OS knowledge is at the level of Andrew's Minix book: how do I provide them?


BR,
Carsten.
 
[    3.830424] ACPI: Power Button [PWRF]
[    3.897249] ACPI: SSDT 00000000bbfb00b0 00235 (v01 DpgPmm  P001Ist 00000011 INTL 20051117)
[    3.947683] ACPI: SSDT 00000000bbfb02f0 00235 (v01 DpgPmm  P002Ist 00000012 INTL 20051117)
(XEN) mm.c:860:d0 Error getting mfn 80000 (pfn 7c9ec) from L1 entry 8000000080000473 for l1e_owner=0, pg_owner=32753
[    3.998293] BUG: unable to handle kernel paging request at ffffc90000062000
[    3.998293] IP: [<ffffffff81258492>] acpi_ex_system_memory_space_handler+0x16d/0x1df
[    3.998293] PGD bb80d067 PUD bb80e067 PMD bb80f067 PTE 0
[    3.998293] Oops: 0000 [#1] SMP
[    3.998293] last sysfs file:
[    3.998293] CPU 0
[    3.998293] Modules linked in:
[    3.998293] Pid: 1, comm: swapper Not tainted 2.6.32.21 #1 To Be Filled By O.E.M.
[    3.998293] RIP: e030:[<ffffffff81258492>]  [<ffffffff81258492>] acpi_ex_system_memory_space_handler+0x16d/0x1df
[    3.998293] RSP: e02b:ffff8800bb871970  EFLAGS: 00010246
[    3.998293] RAX: ffffc90000062000 RBX: ffff8800bb89c040 RCX: 0000000000000000
[    3.998293] RDX: ffff880002de90a0 RSI: 0000000000000001 RDI: ffffffff8100f22f
[    3.998293] RBP: ffff8800bb8719b0 R08: ffffffff8169e270 R09: 0000000000001000
[    3.998293] R10: dead000000100100 R11: ffffffff8100f22f R12: ffffc90000062000
[    3.998293] R13: 0000000000000000 R14: 0000000000000008 R15: ffff8800bb871a68
[    3.998293] FS:  0000000000000000(0000) GS:ffff880002dde000(0000) knlGS:0000000000000000
[    3.998293] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[    3.998293] CR2: ffffc90000062000 CR3: 0000000001001000 CR4: 0000000000000660
[    3.998293] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    3.998293] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[    3.998293] Process swapper (pid: 1, threadinfo ffff8800bb870000, task ffff8800bb868000)
[    3.998293] Stack:
[    3.998293]  ffff8800bb8719a0 0000000000001000 ffff880000000000 ffff8800ba29b5a0
[    3.998293] <0> ffffffff81258325 ffff8800bb864b88 ffff8800ba29b240 0000000000000000
[    3.998293] <0> ffff8800bb871a20 ffffffff81250f08 ffff8800ba32b800 ffffffff81253f53
[    3.998293] Call Trace:
[    3.998293]  [<ffffffff81258325>] ? acpi_ex_system_memory_space_handler+0x0/0x1df
[    3.998293]  [<ffffffff81250f08>] acpi_ev_address_space_dispatch+0x16b/0x1b9
[    3.998293]  [<ffffffff81253f53>] ? acpi_os_allocate+0x33/0x35


-----Original Message-----
From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
Sent: Wednesday, 8 September 2010 14:51
To: Carsten Schiers
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Xen BUG in mm / Xen 4.0.1 with 2.6.32.18/21
pvops Kernel?

>>> On 08.09.10 at 14:15, Carsten Schiers <carsten@xxxxxxxxxx> wrote:
> (XEN) mm.c:860:d0 Error getting mfn 80000 (pfn 5555555555555555) from L1
> entry 8000000080000473 for l1e_owner=0, pg_owner=32753

DOMID_IO seen here generally means that Dom0 tried to map a page
it doesn't own (likely because of your use of dom0_mem=). As the
page really is a RAM one, Xen doesn't allow the access. Given that
this apparently happens in the context of
acpi_ex_system_memory_space_handler() you'd have to look at (or
provide) your DSDT and SSDT(s) to see where this reference comes
from. Very likely this is just a bogus reference that you get away
with on native, perhaps because this code in ioremap.c

        last_pfn = last_addr >> PAGE_SHIFT;
        for (pfn = phys_addr >> PAGE_SHIFT; pfn <= last_pfn; pfn++) {
                int is_ram = page_is_ram(pfn);

                if (is_ram && pfn_valid(pfn) &&
                    !PageReserved(pfn_to_page(pfn)))
                        return NULL;
                WARN_ON_ONCE(is_ram);
        }

should result in returning NULL there, while it wouldn't cover the
situation under Xen. (While the code is meaningless under Xen in
its current shape anyway, using dom0_mem= with a value above
2G should get you around the issue, as then PFN 0x80000 would
be considered RAM there too.)

Jan




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel