On 11/01/2011 11:00, "Olaf Hering" <olaf@xxxxxxxxx> wrote:
> On Tue, Jan 11, Keir Fraser wrote:
>
>> Could we do this in free_heap_pages() instead? That definitely catches
>> everything that gets placed in Xen's free pool.
>
> Yes, this is a compile-tested patch.
Thanks. I fixed a subtle bug (rather disgustingly, set_gpfn_from_mfn()
depends on page_get_owner(mfn_to_page(mfn))), and checked it in as c/s
22706.
-- Keir
>
> The machine_to_phys_mapping[] array needs updating during page
> deallocation. If that page is allocated again, a call to
> get_gpfn_from_mfn() will still return an old gfn from another guest.
> This will cause trouble because that gfn has either no meaning or a
> different meaning in the context of the current guest.
>
> This happens when the entire guest ram is paged-out before
> xen_vga_populate_vram() runs. Then XENMEM_populate_physmap is called
> with gfn 0xff000. A new page is allocated with alloc_domheap_pages.
> This new page does not have a gfn yet. However, in
> guest_physmap_add_entry() the passed mfn still maps to an old gfn
> (perhaps from another, old guest). This old gfn is in a paged-out state
> in this guest's context and has no mfn anymore. As a result, the
> ASSERT() triggers because p2m_is_ram() is true for p2m_ram_paging* types.
> If the machine_to_phys_mapping[] array is updated properly, both loops
> in guest_physmap_add_entry() turn into no-ops for the new page and the
> mfn/gfn mapping will be done at the end of the function.
>
> If XENMEM_add_to_physmap is used with XENMAPSPACE_gmfn,
> get_gpfn_from_mfn() will return an apparently valid gfn. As a result,
> guest_physmap_remove_page() is called. The ASSERT in p2m_remove_page
> triggers because the passed mfn does not match the old mfn for the
> passed gfn.
>
>
> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
>
> ---
> v2:
> move from free_domheap_pages() to free_heap_pages() as suggested by Keir
>
> xen/common/page_alloc.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> --- xen-unstable.hg-4.1.22697.orig/xen/common/page_alloc.c
> +++ xen-unstable.hg-4.1.22697/xen/common/page_alloc.c
> @@ -527,6 +527,7 @@ static int reserve_offlined_page(struct
> static void free_heap_pages(
> struct page_info *pg, unsigned int order)
> {
> + unsigned long mfn;
> unsigned long mask;
> unsigned int i, node = phys_to_nid(page_to_maddr(pg)), tainted = 0;
> unsigned int zone = page_to_zone(pg);
> @@ -536,6 +537,11 @@ static void free_heap_pages(
>
> spin_lock(&heap_lock);
>
> + /* this page is not a gfn anymore */
> + mfn = page_to_mfn(pg);
> + for ( i = 0; i < (1 << order); i++ )
> + set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
> +
> for ( i = 0; i < (1 << order); i++ )
> {
> /*
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel