This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Error restoring DomU when using GPLPV

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Error restoring DomU when using GPLPV
From: ANNIE LI <annie.li@xxxxxxxxxx>
Date: Fri, 21 Aug 2009 12:11:49 +0800
Cc: Joshua West <jwest@xxxxxxxxxxxx>, James Harper <james.harper@xxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 20 Aug 2009 21:13:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C6B2FFE1.12753%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Oracle Corporation
References: <C6B2FFE1.12753%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (Windows/20090605)

>> It seems that a restored vm lost those pages ballooned down. For
>> migration, the destination does not have those pages which were
>> ballooned down on the source.

> Right, that's the correct behaviour, isn't it? Pages freed on the source
> VM do not magically reappear on the destination VM.

Yes, so this method cannot fix this problem.

>> But if I balloon down those pages every time (not only on first driver
>> load), I tested save/restore/migration several times and all work fine.
>> But the domU wastes a lot of memory in this situation.

> Yes, that's weird. Do you know what condition causes guest memory
> allocation failure in xc_domain_restore? Is it due to hitting the guest
> maxmem limit in Xen? If so, is maxmem the same value across multiple
> iterations of save/restore or migration?
Sorry, I have no idea about it. Maybe I need to add more logging to the for(;;) loop in xc_domain_restore to see what the difference is between runs with and without ballooning down pages.


Xen-devel mailing list