This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: revisit the super page support in HVM restore

To: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Subject: [Xen-devel] Re: revisit the super page support in HVM restore
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 19 Aug 2009 10:04:50 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 19 Aug 2009 02:05:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A8BB00B.6050000@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcogooGx5nQrvvtCT2GMjuX5dlOMVwACZm4X
Thread-topic: revisit the super page support in HVM restore
User-agent: Microsoft-Entourage/
On 19/08/2009 08:55, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:

>> You will fail to restore a guest which has ballooned down its memory, as
>> there will be 4k holes in its memory map.
> I see. But the current PV guest has the same issue. If superpages are set
> for a PV guest, allocate_mfn in xc_domain_restore.c will try to allocate a
> 2M page for each pfn regardless of the holes. As I understand it, this is
> a more serious issue for PV guests, as they use the balloon driver more
> frequently.

I don't think this has been addressed yet for PV guests. But then again,
hardly anyone is using the PV superpage support, whereas this HVM superpage
logic will always be on. So it needs to work reliably!

> If we have to use this algorithm, back to my complicated code -- do you
> have any suggestion to simplify the logic?

I wasn't clear on where your pseudocode fits into xc_domain_restore. My view
is that we would probably put the logic inside allocate_physmem(), or near
the call to allocate_physmem(). The added logic would look for the start of
a superpage, then look for a straight run of pages to the end of the
superpage (or until we hit the end of the batch, which would need special
treatment).
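The run-detection step described above can be sketched roughly as follows.
This is a minimal illustration, not the actual xc_domain_restore code: the
helper name is_superpage_run and the array layout are assumptions made for
the example, and the real code would also have to handle a run that crosses
a batch boundary.

```c
#include <stdint.h>

#define SUPERPAGE_NR_PFNS 512  /* 2M superpage = 512 x 4k pages */

/* Hypothetical helper: return 1 if pfns[i] starts a superpage-aligned
 * run of 512 consecutive pfns lying entirely within this batch. */
static int is_superpage_run(const uint64_t *pfns, int i, int batch_len)
{
    int j;
    if (pfns[i] & (SUPERPAGE_NR_PFNS - 1))
        return 0;                       /* not superpage-aligned */
    if (i + SUPERPAGE_NR_PFNS > batch_len)
        return 0;                       /* run crosses end of batch:
                                         * needs the special treatment
                                         * mentioned above */
    for (j = 1; j < SUPERPAGE_NR_PFNS; j++)
        if (pfns[i + j] != pfns[i] + j)
            return 0;                   /* 4k hole: fall back to
                                         * ordinary page allocation */
    return 1;
}
```

A ballooned-down guest would trip the hole check, which is exactly the
failure mode raised earlier in the thread.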

As for other points:
 * "Need tell if it's a super page or not" -- superpages in the guest
physmap are only an optimisation. We can introduce them where possible,
regardless of which regions were or weren't superpage-backed in the original
source domain.
 * "Need know if page has already been populated, and if populated as a
normal page or superpage" -- the p2m[] array tells us what is already
populated. And we do not need to care, after the allocation has happened,
whether it was a superpage or not: a superpage will simply fill 512 entries
in the p2m[]. Our try-to-allocate-superpage logic will simply bail if it
detects that any entry in the p2m[] range of interest is already populated.
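The bail-out check described above amounts to a single scan of the p2m[]
range. A minimal sketch, assuming a flat p2m[] array and an
INVALID_P2M_ENTRY sentinel for unpopulated entries (the helper name
superpage_range_free is hypothetical):

```c
#include <stdint.h>

#define SUPERPAGE_NR_PFNS 512
#define INVALID_P2M_ENTRY (~0UL)   /* assumed unpopulated marker */

/* Hypothetical check: only attempt a superpage allocation when no pfn
 * in the 512-entry range is already populated in p2m[]. */
static int superpage_range_free(const unsigned long *p2m,
                                unsigned long base_pfn)
{
    unsigned long i;
    for (i = 0; i < SUPERPAGE_NR_PFNS; i++)
        if (p2m[base_pfn + i] != INVALID_P2M_ENTRY)
            return 0;   /* bail: fall back to 4k allocation */
    return 1;
}
```

If the allocation then succeeds, the caller would fill all 512 p2m[]
entries for the range, so no separate "was this a superpage" flag is
needed afterwards.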

Basically all we need is a "good-enough" heuristic for allocating
superpages, as they are only an optimisation. If measurement tells us our
heuristic is failing too often, then we can get more sophisticated.

 -- Keir
