Re: [PATCH] xen/arm: skip holes in physical address space when setting up frametable
On 20/04/2026 16:06, Luca Fancellu wrote:
> Hi Michal,
>
>>>> +void __init init_frametable(paddr_t ram_start)
>>>> +{
>>>> + unsigned int sidx, nidx, max_idx;
>>>>
>>>> /*
>>>> * The size of paddr_t should be sufficient for the complete range of
>>>> @@ -26,24 +47,34 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>>> BUILD_BUG_ON((sizeof(paddr_t) * BITS_PER_BYTE) < PADDR_BITS);
>>>> BUILD_BUG_ON(sizeof(struct page_info) != PAGE_INFO_SIZE);
>>>>
>>>> - if ( frametable_size > FRAMETABLE_SIZE )
>>>> - panic("The frametable cannot cover the physical region %#"PRIpaddr" - %#"PRIpaddr"\n",
>>>> - ps, pe);
>>>> + max_idx = DIV_ROUND_UP(max_pdx, PDX_GROUP_COUNT);
>>>> + frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ram_start));
>>>>
>>>> - frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>>>> - /* Round up to 2M or 32M boundary, as appropriate. */
>>>> - frametable_size = ROUNDUP(frametable_size, mapping_size);
>>>> - base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
>>>> + /*
>>>> + * pdx_to_page(pdx_s) in init_frametable_chunk must be page-aligned
>>>> + * for map_pages_to_xen(). Aligning to PDX_GROUP_COUNT guarantees this
>>>> + * because PDX_GROUP_COUNT * sizeof(page_info) is always a multiple of
>>>> + * PAGE_SIZE by construction.
>>>> + */
>>>> + frametable_base_pdx = ROUNDDOWN(frametable_base_pdx, PDX_GROUP_COUNT);
>>>
>>> We are now rounding down frametable_base_pdx, which before this patch
>>> was the start of RAM. But in xen/arch/arm/include/asm/mm.h,
>>> mfn_valid(mfn) uses frametable_base_pdx to check MFN validity. This
>>> means we could pass an MFN below the start of RAM, and if __mfn_valid
>>> is happy, we are getting a regression.
>>>
>>> Can this happen, or am I missing something?
>> mfn_valid() can indeed return true for an MFN below ram_start that falls
>> in the same PDX group, but this is safe. init_frametable_chunk() maps
>> and zeroes the frametable for that range, so mfn_to_page() won't fault.
>> The zeroed page_info has count_info == 0 and no owner, so any get_page()
>> call on it will fail — the page is effectively inert.
>
> Yes, I’ve checked, and most paths relying on mfn_valid() also go through
> mfn_to_page() and/or get_page(). There is only one, in process_shm(),
> that could potentially add a shared memory page, given that we are
> relaxing mfn_valid() now.
We are not relaxing mfn_valid() now. It was never meant to mean RAM. It means
the MFN has a corresponding frametable entry with a struct page_info. This is
what its comment says.
Static shmem is safe for ghost MFNs: prepare_staticmem_pages() checks
count_info, which for a ghost MFN is 0.
>
> I’m also trying to follow is_iomem_page(), to check whether subsequent
> mfn_to_page() calls fail safely. I think that, depending on mfn_valid(),
> the page will only be treated differently; I’m not sure whether leaving
> mfn_valid() as it is would be a latent bug.
On Arm, is_iomem_page() is just !mfn_valid(), so yes, ghost MFNs would be
classified as not IOMEM. But this is harmless, because is_iomem_page() on Arm
is only called from grant table paths that operate on pages obtained via
get_page(), so no Arm code can reach is_iomem_page() with a ghost MFN.
>
> Would it be valid to have something like mfn_to_page() != 0 as part of
> mfn_valid(), to ensure it’s a real host RAM page?
> I’m truly asking here because I didn’t check whether it’s doable.
As already said, mfn_valid() was never supposed to mean a RAM page. It is just
that on Arm the two happened to coincide until this patch.
>
> Otherwise, maybe we could split the rounding down and the
> frametable_base_pdx assignment in this way?
I don't think we need this extra complexity for something that is safe and has
always been in use. We've never had an mfn_valid == RAM guarantee.
~Michal