Re: [Xen-devel] live migration can fail due to XENMEM_maximum_gpfn

To: John Levon <levon@xxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] live migration can fail due to XENMEM_maximum_gpfn
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 07 Oct 2008 08:19:58 +0100
In-reply-to: <20081006164753.GA7589@xxxxxxxxxxxxxxxxxxxxxxx>

On 6/10/08 17:47, "John Levon" <levon@xxxxxxxxxxxxxxxxx> wrote:

> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> ....
> dom 11 max gpfn 985087
> 
> (1Gb Solaris HVM domU).
> 
> I'm not sure how this should be fixed?

You are correct that there is a general issue here if the guest arbitrarily
increases max_mapped_pfn. However, yours is more likely a specific problem
-- mappings being added in the 'I/O hole' 0xF0000000-0xFFFFFFFF by PV
drivers. That case is strictly easier, because we can fix it by assuming
that no new mappings will be created above 4GB after the domain
starts/resumes running. A simple fix, then, is for xc_domain_restore() to
map something at page 0xFFFFF (e.g., shared_info) if max_mapped_pfn is
smaller than that. This will bump max_mapped_pfn as high as necessary. Note
that a newly-built HVM guest will always have a max_mapped_pfn of at least
0xFFFFF, since xc_hvm_build() maps shared_info at 0xFFFFF to initialise it
(arguably xc_domain_restore() should be doing the same!).
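
A minimal sketch of the fix described above, assuming the libxc interfaces
of this era (an int xc_handle, xc_memory_op(), struct xen_add_to_physmap);
the helper name is hypothetical, and the "is max_mapped_pfn already at
least 0xFFFFF?" check is only noted in a comment rather than implemented:

#include <stdint.h>
#include <xenctrl.h>        /* xc_memory_op() */
#include <xen/memory.h>     /* struct xen_add_to_physmap, XENMAPSPACE_* */

/*
 * Hypothetical helper for xc_domain_restore(): map shared_info at gpfn
 * 0xFFFFF, mirroring what xc_hvm_build() does, so that max_mapped_pfn is
 * raised to at least 0xFFFFF before the restored domain starts running.
 * A complete fix would first query XENMEM_maximum_gpfn and skip this when
 * the domain's maximum gpfn is already >= 0xFFFFF.
 */
static int bump_max_mapped_pfn(int xc_handle, uint32_t dom)
{
    struct xen_add_to_physmap xatp = {
        .domid = dom,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,
        .gpfn  = 0xfffff,   /* last pfn below the 4GB boundary */
    };

    return xc_memory_op(xc_handle, XENMEM_add_to_physmap, &xatp);
}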

 -- Keir