This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Design question for PV superpage support

To: Mick.Jordan@xxxxxxx
Subject: Re: [Xen-devel] Design question for PV superpage support
From: Dave McCracken <dcm@xxxxxxxx>
Date: Mon, 2 Mar 2009 13:14:33 -0600
Cc: Xen Developers List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 02 Mar 2009 11:15:12 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49AC220E.9030000@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <200903020754.23534.dcm@xxxxxxxx> <200903021200.09999.dcm@xxxxxxxx> <49AC220E.9030000@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9
On Monday 02 March 2009, Mick Jordan wrote:
> Ok. So I want to reiterate my question from a previous post. After the
> patch allowing mixed mappings, what exactly went wrong on save/restore?
> And would my special case of 1-1 physical/virtual mappings with
> additional 2MB VM mappings added after domain start suffer in that case?

My understanding of save/restore is that it will save your carefully selected 
2M pages, cheerfully restore them onto a random set of mfns, then expect your 
guest to continue running.  I haven't studied it enough to know whether your 
guest at least gets a chance to intervene and fix things after the restore.
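
To make the failure mode concrete, here is a rough sketch (my own illustration,
not code from the tree; pfn_to_mfn() is just a stand-in for whatever
pseudo-physical-to-machine lookup the guest OS exposes) of the check a guest
would have to run after restore to decide whether a 2M region is still backed
by an aligned, contiguous run of machine frames:

  #include <stdbool.h>

  #define PAGE_SHIFT       12
  #define SUPERPAGE_SHIFT  21                                        /* 2MB superpage */
  #define SUPERPAGE_PAGES  (1UL << (SUPERPAGE_SHIFT - PAGE_SHIFT))   /* 512 frames */

  /* Stand-in for the guest's pfn -> mfn lookup. */
  extern unsigned long pfn_to_mfn(unsigned long pfn);

  /* After restore, a 2M mapping is only usable if the first frame landed on
   * a 2MB-aligned mfn and the remaining 511 frames follow it contiguously. */
  static bool superpage_still_intact(unsigned long first_pfn)
  {
      unsigned long base_mfn = pfn_to_mfn(first_pfn);
      unsigned long i;

      if (base_mfn & (SUPERPAGE_PAGES - 1))
          return false;

      for (i = 1; i < SUPERPAGE_PAGES; i++)
          if (pfn_to_mfn(first_pfn + i) != base_mfn + i)
              return false;

      return true;
  }

If that check fails for a region, the guest would presumably have to demote the
mapping back to 4K pages, which is exactly the kind of fix-up it would need a
chance to perform after the restore.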

Dave McCracken
Oracle Corp.

