On Mon, Oct 3, 2011 at 3:43 PM, Olaf Hering <olaf@xxxxxxxxx> wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@xxxxxxxxx>
> # Date 1317652812 -7200
> # Node ID b05ede64aaf5f5090fdb844c3a58f1f92d9b3588
> # Parent 13872c432c3807e0f977d9c1311801179807ece2
> xenpaging: handle paged pages in p2m_pod_decrease_reservation
>
> As suggested by <hongkaixing@xxxxxxxxxx>, handle paged pages in PoD code.
>
> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
>
> diff -r 13872c432c38 -r b05ede64aaf5 xen/arch/x86/mm/p2m-pod.c
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -567,6 +567,21 @@ p2m_pod_decrease_reservation(struct doma
> BUG_ON(p2m->pod.entry_count < 0);
> pod--;
> }
> + else if ( steal_for_cache && p2m_is_paging(t) )
> + {
> + struct page_info *page;
> + /* alloc a new page to compensate the pod list */
This can't be right.  The whole point of populate-on-demand is to
pre-allocate a fixed amount of memory and never need to allocate any
more.  What happens if this allocation fails?

It seems like a better approach might be this: if we get a request to
swap out a page while we still have PoD entries present, we "swap
out" that page as a zero page.

Hmm -- this will take some careful thought...
> + page = alloc_domheap_page(d, 0);
> + if ( !page )
> + goto out_entry_check;
> +            set_p2m_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
> +                          p2m->default_access);
> + p2m_mem_paging_drop_page(d, gpfn+i);
> + p2m_pod_cache_add(p2m, page, 0);
> + steal_for_cache = ( p2m->pod.entry_count > p2m->pod.count );
> + nonpod--;
> + ram--;
> + }
> + /* for other ram types */
> else if ( steal_for_cache && p2m_is_ram(t) )
> {
> struct page_info *page;
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>