Re: [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
On 9/20/25 1:57 AM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> @@ -342,11 +354,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
>>      return P2M_TABLE_MAP_NONE;
>>  }
>>
>> +static void p2m_put_foreign_page(struct page_info *pg)
>> +{
>> +    /*
>> +     * It's safe to call put_page() here because arch_flush_tlb_mask()
>> +     * will be invoked if the page is reallocated before the end of
>> +     * this loop, which will trigger a flush of the guest TLBs.
>> +     */
>> +    put_page(pg);
>> +}
>
> What is "this loop" referring to in the comment? There's no loop here.

The loop is inside the caller (p2m_put_2m_superpage()):
    ...
    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(1); i++, pg++ )
        p2m_put_foreign_page(pg);

Agree, that comment is pretty confusing. I am not sure it is necessary to
mention a specific loop; the comment would still be correct without
referring to "this loop". So I will rewrite the comment as:

    /*
     * It's safe to call put_page() here because arch_flush_tlb_mask()
     * will be invoked if the page is reallocated, which will trigger a
     * flush of the guest TLBs.
     */

>> +/* Put any references on the page referenced by pte. */
>> +static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
>> +{
>> +    mfn_t mfn = pte_get_mfn(pte);
>> +
>> +    ASSERT(pte_is_valid(pte));
>> +
>> +    /*
>> +     * TODO: Currently we don't handle level 2 super-page, Xen is not
>> +     * preemptible and therefore some work is needed to handle such
>> +     * superpages, for which at some point Xen might end up freeing memory
>> +     * and therefore for such a big mapping it could end up in a very long
>> +     * operation.
>> +     */
>> +    switch ( level )
>> +    {
>> +    case 1:
>> +        return p2m_put_2m_superpage(mfn, p2mt);
>> +
>> +    case 0:
>> +        return p2m_put_4k_page(mfn, p2mt);
>> +
>> +    default:
>> +        assert_failed("Unsupported level");
>
> I don't think assert_failed() is supposed to be used directly. What's
> wrong with using ASSERT_UNREACHABLE() here?

Nothing, I just wanted to have some custom message. I am okay with
ASSERT_UNREACHABLE(); anyway, it will still print where the assertion
occurred.

>> --- a/xen/arch/riscv/paging.c
>> +++ b/xen/arch/riscv/paging.c
>> @@ -107,6 +107,14 @@ int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
>>      return 0;
>>  }
>>
>> +void paging_free_page(struct domain *d, struct page_info *pg)
>> +{
>> +    spin_lock(&d->arch.paging.lock);
>> +    page_list_add_tail(pg, &d->arch.paging.freelist);
>> +    ACCESS_ONCE(d->arch.paging.total_pages)++;
>
> More a question to other REST maintainers than to you: Is this kind of
> use of ACCESS_ONCE() okay? By the wording, one might assume a single
> memory access, yet only x86 can actually carry out an increment (or
> alike) of an item in memory in a single insn.

I thought that ACCESS_ONCE() is more about preventing compiler
optimizations than about ensuring atomicity. In this specific case, I
don't think ACCESS_ONCE() is really needed, since a spin lock is already
being used.

~ Oleksii
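P.S. To spell out what I have in mind, a sketch only: assuming total_pages
is only ever updated while d->arch.paging.lock is held (and the unlock and
the rest of the function, which are not visible in the hunk above, take the
straightforward shape), a plain increment should be sufficient:

    void paging_free_page(struct domain *d, struct page_info *pg)
    {
        spin_lock(&d->arch.paging.lock);

        page_list_add_tail(pg, &d->arch.paging.freelist);
        /* Serialized by the paging lock, so no ACCESS_ONCE() needed. */
        d->arch.paging.total_pages++;

        spin_unlock(&d->arch.paging.lock);
    }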