WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] RE: Ballooning up

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: Re: [Xen-devel] RE: Ballooning up
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Tue, 14 Sep 2010 09:34:26 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, Konrad Wilk <konrad.wilk@xxxxxxxxxx>, Daniel Kiper <dkiper@xxxxxxxxxxxx>
Delivery-date: Tue, 14 Sep 2010 01:35:03 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <54eebb3a-f539-43be-8134-a969a4f671c4@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <4C85F973.2030007@xxxxxxxx> <54eebb3a-f539-43be-8134-a969a4f671c4@default>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, 2010-09-13 at 22:17 +0100, Dan Magenheimer wrote:
> > From: Jeremy Fitzhardinge [mailto:jeremy@xxxxxxxx]
> > Cc: Dan Magenheimer; Daniel Kiper; Stefano Stabellini; Konrad Rzeszutek
> > Wilk
> > Subject: Ballooning up
> > 
> >  I finally got around to implementing "ballooning up" in the pvops
> > kernels.  Now if you start a domain with "memory=X maxmem=Y", the
> > domain
> > will start with X MB of memory, but you can use "x[ml] mem-set" to
> > expand the domain up to Y.
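
For anyone wanting to try this, a minimal sketch (the guest name and
sizes below are made up for illustration):

```
# guest.cfg -- start at 512M, allow ballooning up to 2048M
memory = 512
maxmem = 2048
```

then, once the domain is running, something like `xl mem-set guest 1024`
(or `xm mem-set` with the older toolstack) to balloon it up towards
maxmem.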
> 
> Nice!
> 
> > As a side-effect, it also works for dom0.  If you set dom0_mem on the
> > Xen command line, then nr_pages is limited to that value, but the
> > kernel
> > can still see the system's real E820 map, and therefore adds all the
> > system's memory to its own balloon driver, potentially allowing dom0 to
> > expand up to take all physical memory.
> > 
> > However, this may cause bad side-effects if your system memory is much
> > larger than your dom0_mem, especially if you use a 32-bit dom0.  I may
> > need to add a kernel command line option to limit the max initial
> > balloon size to mitigate this...
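
For reference, the hypervisor option in question is just dom0_mem on
Xen's own command line, e.g. in a grub entry (paths and the size here
are illustrative only):

```
multiboot /boot/xen.gz dom0_mem=752M
module /boot/vmlinuz-2.6-xen ...
```

the proposed limit would be a separate option on the Linux (module)
line, not another hypervisor argument.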
> 
> I would call this dom0 functionality a bug.  I think both Citrix
> and Oracle use dom0_mem as a normal boot option for every
> installation and, while I think both employ heuristics to choose
> a larger dom0_mem for larger physical memory, I don't think it
> grows large enough for, say, >256GB physical memory, to accommodate
> the necessarily large number of page tables.

FWIW XenServer statically uses dom0_mem=752M and then balloons down on
smaller systems where so much domain 0 memory is not required; the
minimum is 128M or 256M or something.

A 32on64 domain 0 kernel fails to boot if dom0_mem is more than around
56G because it runs out of lowmem for the page array. I suspect that
for some period before that the system isn't terribly usable due to low
amounts of available lowmem, even if it does manage to boot.
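
The arithmetic behind that 56G figure, as a rough sketch (the 64-byte
sizeof(struct page) and the ~896 MiB lowmem ceiling are assumptions for
illustration, not measured values):

```python
# Why a 32on64 dom0 runs out of lowmem around 56G: the kernel keeps one
# struct page per 4K page of RAM in lowmem (the mem_map array), and
# 32-bit lowmem is only about 896 MiB.
PAGE_SIZE = 4096            # bytes per page
STRUCT_PAGE = 64            # assumed sizeof(struct page), bytes
LOWMEM = 896 * 1024 * 1024  # assumed lowmem on 32-bit, bytes

def mem_map_bytes(ram_gib):
    """Size of the page array needed for ram_gib GiB of RAM."""
    pages = ram_gib * 1024 * 1024 * 1024 // PAGE_SIZE
    return pages * STRUCT_PAGE

print(mem_map_bytes(56) / (1024 * 1024))  # MiB consumed by mem_map at 56G
print(mem_map_bytes(56) >= LOWMEM)        # True: mem_map alone fills lowmem
```

With these assumptions the page array for 56G of RAM comes to 896 MiB,
i.e. the whole of lowmem, which matches the observed boot failure point.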

> So, I'd vote for NOT allowing dom0 to balloon up to physical
> memory if dom0_mem is specified, and possibly a kernel command
> line option that allows it to grow beyond.  Or, possibly, no
> option and never allow dom0 memory to grow beyond dom0_mem
> unless (possibly) it grows with hot-plug.
> 
> Dan
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel



