RE: [Xen-devel] xen: memory initialization/balloon fixes (#3)

To: David Vrabel <david.vrabel@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] xen: memory initialization/balloon fixes (#3)
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Wed, 21 Sep 2011 15:29:38 -0700 (PDT)
Cc: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
In-reply-to: <63997331-e53b-48e5-bf7c-87141aae49d6@default>
References: <1316089768-22461-1-git-send-email-david.vrabel@xxxxxxxxxx> <63997331-e53b-48e5-bf7c-87141aae49d6@default>

> From: Dan Magenheimer
> Sent: Tuesday, September 20, 2011 10:58 AM
> To: David Vrabel; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Konrad Wilk
> Subject: RE: [Xen-devel] xen: memory initialization/balloon fixes (#3)
> 
> > From: David Vrabel [mailto:david.vrabel@xxxxxxxxxx]
> > Sent: Thursday, September 15, 2011 6:29 AM
> > To: xen-devel@xxxxxxxxxxxxxxxxxxx
> > Cc: Konrad Rzeszutek Wilk
> > Subject: [Xen-devel] xen: memory initialization/balloon fixes (#3)
> >
> > This set of patches fixes some bugs in the memory initialization under
> > Xen and in Xen's memory balloon driver.  They can make 100s of MB of
> > additional RAM available (depending on the system/configuration).
> 
> Hi David --
> 
> Thanks for your patches!  I am looking at a memory capacity/ballooning
> weirdness that I hoped your patchset might fix, but so far it has not.
> I'm wondering if there was an earlier fix that you are building upon
> and that I am missing.
> 
> My problem occurs in a PV domU with an upstream-variant kernel based
> on 3.0.5.  The problem is that the total amount of memory as seen
> from inside the guest is always substantially less than the amount
> of memory seen from outside the guest.  The difference seems to
> be fixed within a given boot, but assigning a different vm.cfg mem=
> changes the amount.  (For example, the difference D is about 18MB on
> a mem=128 boot and about 36MB on a mem=1024 boot.)
> 
> Part B of the problem (and the one most important to me) is that
> setting /sys/devices/system/xen_memory/xen_memory0/target_kb
> to X results in a MemTotal inside the domU (as observed by
> "head -1 /proc/meminfo") of X-D.  This can be particularly painful
> when X is aggressively small as X-D may result in OOMs.
> To use kernel function/variable names (and I observed this with
> some debugging code), when balloon_set_new_target(X) is called
> totalram_pages gets driven to X-D.
> 
> I am using xm, but I don't think this is a toolchain problem because
> the problem can be provoked and observed entirely within the guest...
> though I suppose it is possible that the initial "mem=" is the
> origin of the problem and the balloon driver just perpetuates
> the initial difference.  (I tried xl... same problem... my
> Xen/toolset version is 4.1.2-rc1-pre cset 23102)
> 
> The descriptions in your patchset sound exactly as if you are
> attacking the same problem, but I'm not seeing any improved
> result.  Any thoughts or ideas?

Hi David (and Konrad) --

Don't know if you are looking at this or not (or if your patchset
was intended to fix this problem or not).  Looking into Part B
of the problem, it appears that in balloon_init() the initial
value of balloon_stats.current_pages may be set incorrectly.
I'm finding that (for a PV domain), both nr_pages and max_pfn
match mem=, but totalram_pages is substantially less.
Current_pages should never be higher than the actual number
of pages of RAM seen by the kernel, should it?
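
For reference, the debugging behind the observation above boils down
to dumping the three values from balloon_init() -- something like the
fragment below.  This is just an illustrative sketch against the
3.0-era drivers/xen/balloon.c, not the exact code in my tree:

    /* Illustrative debug only (PV case; xen_start_info is not
     * meaningful for HVM): dump the three quantities discussed
     * above from balloon_init().  balloon.c already pulls in the
     * headers declaring max_pfn, totalram_pages and xen_start_info,
     * so no extra includes should be needed.
     */
    printk(KERN_INFO "xen/balloon: nr_pages=%lu max_pfn=%lu "
           "totalram_pages=%lu\n",
           xen_start_info->nr_pages, max_pfn, totalram_pages);

On my mem=128 PV guest this shows nr_pages and max_pfn matching mem=
while totalram_pages is substantially lower (the ~18MB difference
mentioned earlier).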

By changing max_pfn to totalram_pages in the initialization of
balloon_stats.current_pages in balloon_init(), my problem goes
away... almost.  With that fix, setting the balloon target to N kB
results in MemTotal (i.e. totalram_pages, as reported by
"head -1 /proc/meminfo") going to (N+6092) kB.  The 6092 kB offset
appears to be constant regardless of mem=.  Being off by 6092 kB is
annoying, but since the result is higher rather than lower, and the
offset is fixed, it is not nearly as dangerous IMHO.
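
To be concrete, the experimental change is essentially the one-liner
below (written from memory against the 3.0-era drivers/xen/balloon.c,
so treat it as a sketch of the idea rather than a formal patch; it
also ignores the HVM side, per the P.S. below):

    /* In balloon_init(), PV case only: */

    /* before: current_pages is seeded from the domain's PFN
     * ceiling, which can be higher than what the kernel itself
     * accounts as usable RAM */
    balloon_stats.current_pages = max_pfn;

    /* after (experimental): seed it from what the kernel accounts
     * as RAM, so the balloon target and MemTotal line up */
    balloon_stats.current_pages = totalram_pages;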

Since I'm not as sure of the RAM-ifications (pun intended) of
this change, I'd appreciate any comments you might have.

Also, this doesn't fix the large difference between the MEM(K)
reported for the domain in xentop (which matches mem=) and
totalram_pages; that is also annoying, but it is not such a
big problem IMHO.  I'm guessing this may be space taken
for PV pagetables or something like that, though the amount
of RAM that "disappears" on a small-RAM guest (e.g. mem=128)
is very high (e.g. ~18MB).  But for my purposes (selfballooning),
this doesn't matter (much) so I don't plan to pursue this
right now.

Thanks for any feedback!
Dan

P.S. I also haven't looked at the HVM code in balloon_init.
