WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-devel] Re: PoD issue

To: <george.dunlap@xxxxxxxxxxxxx>
Subject: [Xen-devel] Re: PoD issue
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Sun, 31 Jan 2010 17:48:14 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 31 Jan 2010 09:48:35 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> George Dunlap  01/29/10 7:30 PM >>>
>PoD is not critical to balloon out guest memory.  You can boot with mem 
>== maxmem and then balloon down afterwards just as you could before, 
>without involving PoD.  (Or at least, you should be able to; if you 
>can't then it's a bug.)  It's just that with PoD you can do something 
>you've always wanted to do but never knew it: boot with 1GiB with the 
>option of expanding up to 2GiB later. :-)

Oh, no, that's not what I meant. What I really wanted to say is that
with PoD, a properly functioning balloon driver in the guest is crucial
for it to stay alive: if the driver never balloons down to the target,
the guest will sooner or later exhaust the PoD cache.

>With the 54 megabyte difference: It's not like a GiB vs GB thing, is 
>it?  (i.e., 2^30 vs 10^9?)  The difference between 1GiB (2^30) and 1 GB 
>(10^9) is about 74 megs, or 18,000 pages.

No, that's not the problem. As I understand it now, the problem is
that totalram_pages (which the balloon driver bases its calculations
on) reflects only the memory left after all boot-time allocations were
done, i.e. it includes neither the static kernel image nor any memory
allocated early on (before or from the bootmem allocator).

>I guess that is a weakness of PoD in general: we can't control the guest 
>balloon driver, but we rely on it to have the same model of how to 
>translate "target" into # pages in the balloon as the PoD code.

I think this isn't a weakness of PoD, but a design issue in the balloon
driver's xenstore interface: while a target value shown in or obtained
from the /proc and /sys interfaces can naturally be based on (and
reflect) any internal kernel state, the xenstore interface should only
use numbers expressed in terms of the full memory amount given to the
guest. Hence a target value read from the memory/target node should be
adjusted before being put in relation to totalram_pages. And I think
this is a general misconception in the current implementation (i.e. it
should be corrected not only for the HVM case, but for the pv one as
well).

The bad aspect of this is that it will require a fixed balloon driver
in any HVM guest that has maxmem>mem when the underlying Xen
gets updated to a version that supports PoD. I cannot, however,
see an OS and OS-version independent alternative (i.e. something
to be done in the PoD code or the tools).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
