xen-devel

Re: [Xen-devel] Error restoring DomU when using GPLPV

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Error restoring DomU when using GPLPV
From: ANNIE LI <annie.li@xxxxxxxxxx>
Date: Wed, 26 Aug 2009 19:04:24 +0800
Cc: Joshua West <jwest@xxxxxxxxxxxx>, James Harper <james.harper@xxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 26 Aug 2009 04:06:14 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A8E1E85.6020902@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Oracle Corporation
References: <C6B2FFE1.12753%keir.fraser@xxxxxxxxxxxxx> <4A8E1E85.6020902@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.23 (Windows/20090812)
Hi,

> Yes, that's weird. Do you know what condition causes guest memory allocation
> failure on xc_domain_restore? Is it due to hitting the guest maxmem limit in
> Xen? If so, is maxmem the same value across multiple iterations of
> save/restore or migration?
Sorry, I have no idea about that. Maybe I need to add more logging in the for(;;) loop in xc_domain_restore to see what differs between runs with and without ballooning down pages.
I did some migration tests with Linux and Windows PV-on-HVM guests on Xen 3.4.

* I printed the value of "pfn = region_pfn_type[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK;" in xc_domain_restore.c. When the restore fails with the error "Failed allocation for dom 2: 33 extents of order 0", the pfn values are lower than those seen when the restore succeeds. So I think the failure is not due to hitting the guest maxmem limit in Xen. Is that correct?

* After comparing runs with and without ballooning down the (gnttab+shinfo) pages, I find that:

If the Windows PV driver balloons down those pages, there are more pages of type XEN_DOMCTL_PFINFO_XTAB in the save process, and correspondingly more bogus/unmapped pages are skipped in the restore process. If the winpv driver does not balloon down those pages, only a few pages of type XEN_DOMCTL_PFINFO_XTAB are processed during save/restore.

* Another result with the winpv driver ballooning down those pages:
When doing save/restore a second time, I find that p2m_size in the restore process becomes 0xfefff, which is less than the normal size 0x100000.

Any suggestions about these test results? Or any ideas on how to resolve this problem in winpv or Xen?

Thanks
Annie.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel