Re: [Xen-devel] PaX Security w/ Kernel.org DomU 2.6.31.7
On Mon, Jan 4, 2010 at 1:35 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Mon, 2010-01-04 at 13:26 +0000, Keir Fraser wrote:
>> Looks like NULL page is being accessed, which will not have a kernel
>> mapping. Then Xen complains the fault is not handled, probably because this
>> is too early in guest kernel boot for fault handlers to have been registered
>> with Xen. The pgd is clearly not empty, as Xen is able to dump the guest
>> kernel stack contents, which accesses guest kernel address space.
>>
>> Dumping stuff early may be a bit tricky. If you do a debug build of Xen
>> itself then even as domU guest you can print stuff to Xen's own console via
>> the HYPERVISOR_console_io() hypercall (which is what usually only dom0 uses
>> for kernel console writes). This might work in concert with earlyprintk? I'm
>> sure Jeremy will be able to say if I'm wrong, although he might be off email
>> for the next week or so.
>
> earlyprintk=xen works but sometimes even that isn't available early
> enough (IIRC earlyprintk takes a spinlock which isn't yet initialised
> during very early boot, or something like that), in which case you can
> use xen_raw_printk (defined in <xen/hvc-console.h>). This also needs a
> debug=y hypervisor.
>
> Ian.
Daft question: how can I check whether the Xen I am running was built
with debug=y or not?
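
To make Keir's point concrete for anyone following along, here is a
trivial, illustrative userspace calculation (not from this tree) of
which L4 slot the two addresses from the crash dump below go through;
the values are taken straight from the register dump:

#include <stdio.h>

int main(void)
{
	unsigned long fault  = 0x28;               /* cr2 from the crash */
	unsigned long kstack = 0xffffffff81601f50; /* guest rsp from the crash */

	/* On x86_64, the L4 (pgd) index is bits 47:39 of the address. */
	printf("L4 index of faulting address: 0x%03lx\n", (fault >> 39) & 0x1ff);
	printf("L4 index of kernel stack:     0x%03lx\n", (kstack >> 39) & 0x1ff);
	return 0;
}

This prints 0x000 and 0x1ff: the walk below fails in L4[0x000] because
the NULL page has no mapping, while kernel-space addresses go through
L4[0x1ff], which must be populated since Xen can dump the guest stack,
so the pgd is not literally empty.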
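
And on the early-printing question: a minimal, untested sketch of the
two options Keir and Ian describe, assuming the declarations that
2.6.31-era pvops trees carry in <xen/hvc-console.h> and
<asm/xen/hypercall.h> (the helper name early_xen_debug is just mine):

#include <linux/init.h>
#include <linux/string.h>
#include <xen/hvc-console.h>     /* xen_raw_printk() */
#include <xen/interface/xen.h>   /* CONSOLEIO_write */
#include <asm/xen/hypercall.h>   /* HYPERVISOR_console_io() */

/* Hypothetical helper: print to Xen's own console very early in guest
 * boot, before the normal kernel console (or even earlyprintk) is up.
 * The output only appears with a debug=y hypervisor. */
static void __init early_xen_debug(const char *msg)
{
	/* Option 1: the raw wrapper Ian mentions; it avoids the locks
	 * that earlyprintk may touch during very early boot. */
	xen_raw_printk("early: %s\n", msg);

	/* Option 2: issue the console-io hypercall directly, as Keir
	 * suggests (the path dom0 normally uses for console writes). */
	HYPERVISOR_console_io(CONSOLEIO_write, strlen(msg), (char *)msg);
}

Dropping calls to something like this before and after the pgd switch
in xen_setup_kernel_pagetable should show how far the PaX-patched
kernel gets.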
>
>>
>> -- Keir
>>
>> On 31/12/2009 21:39, "John Anderson" <johna@xxxxxxxxxx> wrote:
>>
>> > Greetings Xen Team,
>> >
>> > I am trying to help the PaX Team (http://pax.grsecurity.net/) integrate
>> > their PaX product into Kernel.org's domU kernel for 2.6.31.7. It seems,
>> > however, that we've run into a wall in the process. The GRSecurity/PaX
>> > patch applies and compiles cleanly, but at early boot we get the page
>> > fault below. The PaX Team has narrowed the cause of the error down to
>> > xen_setup_kernel_pagetable while establishing the new pgd. It seems as
>> > if during the initial page table setup the pgd had become completely
>> > empty, and on return from the hypervisor everything triggers various
>> > page faults and kills the guest kernel. Can anyone describe what
>> > happens to the pgd during this phase? Also, does anyone know how to
>> > get printk or print any information from the guest kernel at this
>> > early stage?
>> >
>> > Thanks in advance for any help you can offer.
>> >
>> > John A.
>> >
>> > Page Fault Follows:
>> >
>> >
>> > (XEN) Unhandled page fault in domain 26 on VCPU 0 (ec=0000)
>> > (XEN) Pagetable walk from 0000000000000028:
>> > (XEN) L4[0x000] = 0000000000000000 ffffffffffffffff
>> > (XEN) domain_crash_sync called from entry.S
>> > (XEN) Domain 26 (vcpu#0) crashed on cpu#4:
>> > (XEN) ----[ Xen-3.1.3 x86_64 debug=y Not tainted ]----
>> > (XEN) CPU: 4
>> > (XEN) RIP: e033:[<ffffffff81018496>]
>> > (XEN) RFLAGS: 0000000000000282 CONTEXT: guest
>> > (XEN) rax: 0000000000521109 rbx: 0000000000000000 rcx: 0000000000000020
>> > (XEN) rdx: ffffffff82ba6000 rsi: 00000000deadbeef rdi: 0000000000000000
>> > (XEN) rbp: 0000000000000000 rsp: ffffffff81601f50 r8: 0000000000000000
>> > (XEN) r9: ffffffff81817283 r10: ffffffff8102f528 r11: ffffffff81004280
>> > (XEN) r12: 0000000000000000 r13: 0000000000000000 r14: 0000000000000000
>> > (XEN) r15: 0000000000000000 cr0: 000000008005003b cr4: 00000000000006b0
>> > (XEN) cr3: 0000000503189000 cr2: 0000000000000028
>> > (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: e02b cs: e033
>> > (XEN) Guest stack trace from rsp=ffffffff81601f50:
>> > (XEN) 0000000000000020 ffffffff81004280 0000000000000000 ffffffff81018496
>> > (XEN) 000000010000e030 0000000000010082 ffffffff81601f98 000000000000e02b
>> > (XEN) 0000000000000007 ffffffff81004890 ffffffff8181719e 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) ffffffff81816c47 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
>> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel