Re: [Xen-devel] Populate-on-demand memory problem
Sorry, I've been trying to test all of the p2m/PoD patches on a machine
with HAP (since some patches, like the one that enables replacing 4k
pages with a superpage, can only be tested on HAP), and I've been running
into a bunch of problems.
But this patch can clearly stand on its own, so I'll post it later today.
-George
On 09/08/10 10:29, Keir Fraser wrote:
On 09/08/2010 09:48, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
Keir,
with Dietmar having tested this successfully, is there anything that
keeps this from being applied to -unstable (and perhaps also 4.0.1)?
George needs to resubmit it for inclusion, with a proper changeset comment
and a signed-off-by line.
-- Keir
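(For reference, the expected form is roughly the following; the summary and
description lines are placeholders, only the Signed-off-by convention and the
address already used in this thread are taken as given:)

    <subsystem>: <one-line summary of the change>

    <Description of the problem and of how the patch fixes it.>

    Signed-off-by: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>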
Jan
On 27.07.10 at 15:10, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
Hmm, looks like I neglected to push a fix upstream. Can you test it
with the attached patch, and tell me if that fixes your problem?
-George
On Tue, Jul 27, 2010 at 8:48 AM, Dietmar Hahn
<dietmar.hahn@xxxxxxxxxxxxxx> wrote:
Hi list,
we ported our system from Novell SLES11 using xen-3.3 to SLES11 SP1 using
xen-4.0 and ran into some trouble with the PoD (populate-on-demand) code.
We have an HVM guest and already used target_mem < max_mem on startup of
the guest.
With the new Xen version we get:
(XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 792792 pod_entries 800
I did some code reading and looked at the PoD patches
(http://lists.xensource.com/archives/html/xen-devel/2008-12/msg01030.html)
to understand the behavior. We use the following configuration:
maxmem = 4096
memory = 3096
What I see is:
- our guest boots with an e820 map showing maxmem.
- reading xenstore memory/target returns '3170304', i.e. 3096 MB or 792576
  4k pages (see the quick check below).
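(Nothing Xen-specific in that conversion; memory/target is in KiB, so the
arithmetic is simply:)

    /* Quick sanity check of the numbers above (plain arithmetic). */
    #include <stdio.h>

    int main(void)
    {
        unsigned long target_kib   = 3170304;           /* memory/target, in KiB */
        unsigned long target_mib   = target_kib / 1024; /* -> 3096 MiB           */
        unsigned long target_pages = target_kib / 4;    /* 4 KiB pages -> 792576 */

        printf("%lu MiB, %lu pages\n", target_mib, target_pages);
        return 0;
    }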
Now our guest uses the target memory and gives back 1000 MB to the
hypervisor via the XENMEM_decrease_reservation hypercall (roughly as
sketched below).
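(A minimal sketch of what such a call looks like from the guest side, modelled
on the Linux pvops balloon driver; give_back_pages, frame_list and nr_pages are
made-up names, and the header paths assume a Linux guest:)

    #include <xen/interface/xen.h>      /* DOMID_SELF                    */
    #include <xen/interface/memory.h>   /* struct xen_memory_reservation */
    #include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op()        */

    /* Hand nr_pages GFNs listed in frame_list back to Xen.  In a PoD
     * guest this only drops the p2m entries; the PoD cache is untouched. */
    static long give_back_pages(xen_pfn_t *frame_list, unsigned long nr_pages)
    {
        struct xen_memory_reservation reservation = {
            .nr_extents   = nr_pages,
            .extent_order = 0,          /* 4k pages */
            .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, frame_list);

        /* Returns the number of extents actually released. */
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
    }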
Later I try to map the complete domU memory into dom0 kernel space, and
here I get the 'Out of populate-on-demand memory' crash.
As far as I understand (ignoring the p2m_pod_emergency_sweep):
- on populating a page:
  - the page is taken from the PoD cache
  - p2md->pod.count--
  - p2md->pod.entry_count--
  - the page gets type p2m_ram_rw
- on decreasing a page:
  - p2md->pod.entry_count--
  - the page gets type p2m_invalid
So if the guest uses all of the target memory and gives back all of the
(maxmem - target) memory, both p2md->pod.count and p2md->pod.entry_count
should be zero.
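(To make that reasoning concrete, here is a deliberately simplified model of
the two paths; pod_state, demand_populate and decrease_reservation are made-up
stand-ins that only mirror the field names, not the real Xen code:)

    /* Simplified stand-in for the PoD bookkeeping described above. */
    struct pod_state {
        long count;        /* pages sitting in the PoD cache              */
        long entry_count;  /* p2m entries still marked populate-on-demand */
    };

    /* Guest touches a PoD entry: back it with a page from the cache. */
    static void demand_populate(struct pod_state *pod)
    {
        pod->count--;        /* one page leaves the cache    */
        pod->entry_count--;  /* the entry becomes p2m_ram_rw */
    }

    /* Guest gives an entry back via XENMEM_decrease_reservation. */
    static void decrease_reservation(struct pod_state *pod)
    {
        pod->entry_count--;  /* the entry becomes p2m_invalid; cache untouched */
    }

If the cache starts with 'target' pages and there are 'maxmem' PoD entries,
then populating target pages and decreasing (maxmem - target) entries drives
both counters to zero; the crash is what you get when demand_populate() runs
with the cache already empty while entries remain.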
I added some tracing in the hypervisor and see on start of the guest:
p2m_pod_set_cache_target: p2md->pod.count: 791264 tot_pages: 791744
This pod.count is lower than the target seen in the guest!
On the first call of p2m_pod_demand_populate() I can see:
p2m_pod_demand_populate: p2md->pod.entry_count: 1048064 p2md->pod.count: 791264 tot_pages: 792792
So pod.entry_count=1048064 (4096MB) matches maxmem, but
pod.count=791264 is lower than the target memory in xenstore.
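(The tracing was roughly of the following form; this is a sketch rather than
the exact diff, it assumes p2md and d are in scope as they are in those
functions, and the casts to long only keep the format string simple:)

    /* Added near the top of p2m_pod_set_cache_target() and
     * p2m_pod_demand_populate() in xen/arch/x86/mm/p2m.c. */
    gdprintk(XENLOG_INFO, "%s: entry_count %ld count %ld tot_pages %u\n",
             __func__, (long)p2md->pod.entry_count, (long)p2md->pod.count,
             d->tot_pages);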
Any help is welcome!
Thanks.
Dietmar.
--
Company details: http://ts.fujitsu.com/imprint.html
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel