Re: [PATCH] x86/mkelf32: Actually pad load segment to 2 MiB boundary
On 4/17/26 12:03 PM, Andrew Cooper wrote:
> On 17/04/2026 11:54 am, Ross Lagerwall wrote:
>> Fix the code which tries to pad the load segment to 2 MiB but only
>> pads it to a 1 MiB boundary. This manifested itself as a page fault
>> while scrubbing RAM during boot. Xen failed to mark its location as
>> reserved in the E820 because the last 2 MiB superpage overlapped a
>> reserved region, which meant the memory was given to the allocator
>> despite being RO.
>
> Do you have the relevant snippet of the E820?
>
> AIUI, you're saying that Xen was placed immediately below an E820
> reserved region (a valid layout at 1M alignment), where said region
> was inside the 2M-aligned boundary that Xen was expecting.
>
> But I don't quite follow what happened next. Where does
> read-only-ness come into it?

Relevant E820:

(XEN) [00000063469ff02c] [000000003f2df000, 000000003f31efff] (ACPI NVS)
(XEN) [00000063519dc9f2] [000000003f31f000, 000000004cfebfff] (usable)
(XEN) [000000635c504aff] [000000004cfec000, 000000004d07bfff] (ACPI data)
(XEN) [00000063677372dc] [000000004d07c000, 000000004d09bfff] (ACPI NVS)

With a load size of 0x900000 (padded to a 1 MiB boundary), Xen was
placed at 4c600000-4cefffff.

In __start_xen(), there is a call...

reserve_e820_ram(&boot_e820, __pa(_stext), __pa(__2M_rwdata_end));

... which tries to reserve the region 4c600000-4cffffff (size
0xa00000), padded to a 2 MiB boundary since Xen maps itself with
superpages. reserve_e820_ram() doesn't reserve anything because the
requested region doesn't fall within a single RAM region. Therefore,
the pages get treated as normal RAM and will get scrubbed later.
However, __start_xen() also calls modify_xen_mappings() to mark all of
.text and .rodata as RO in the direct map, so when Xen actually tries
to scrub them it takes a page fault instead (which is, I suppose,
slightly better than just zeroing Xen's .text).

Ross