
To: Tobias Diedrich <ranma+xen@xxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
Subject: [Xen-users] Re: 3.0.0-rc2: Xen: High amount of kernel "reserved" memory, about 33% in 256MB DOMU [workaround included]
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Tue, 14 Jun 2011 15:48:11 -0400
Cc:
Delivery-date: Tue, 14 Jun 2011 12:49:50 -0700
In-reply-to: <20110614001055.GB7417@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <20110613205003.GD20616@xxxxxxxxxxxxxxxxx> <20110613215900.GB19117@xxxxxxxxxxxx> <20110614001055.GB7417@xxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Jun 14, 2011 at 02:10:55AM +0200, Tobias Diedrich wrote:
> Konrad Rzeszutek Wilk wrote:
> > On Mon, Jun 13, 2011 at 10:50:03PM +0200, Tobias Diedrich wrote:
> > > Hi,
> > > 
> > > another issue I'm seeing with 3.0-rc2 and Xen is that there is an
> > > unexpectedly high amount of kernel reserved memory.
> > 
> > > 
> > > I suspect that Linux allocates page table entries and corresponding
> > > data structures for the whole 6GB area of the provided 'physical
> > > RAM map', even though it has rather big unusable holes in it.
> > 
> > Can you run it with 'memblock=debug debug loglevel=8 initcall_debug'?
> > It should tell you where (and for how much space) it tries to allocate
> > the pagetables.
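
For a PV domU those options can be appended to the guest kernel command
line, e.g. via the extra= line of an xm/xl-style guest config; a minimal
sketch:

    extra = "memblock=debug debug loglevel=8 initcall_debug"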

Ugh. In the meantime I would suggest you use the combination of:

"dom0_mem=max:512M" on the Xen hypervisor line and "mem=512M" on the Linux
line, to cut down on the extra pagetable creation.
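
A rough sketch of how that combination could look in a GRUB (legacy)
menu.lst entry, with placeholder kernel/initrd names and root device:

    title   Xen / Linux 3.0.0-rc2
    root    (hd0,0)
    kernel  /boot/xen.gz dom0_mem=max:512M
    module  /boot/vmlinuz-3.0.0-rc2 root=/dev/sda1 ro mem=512M
    module  /boot/initrd.img-3.0.0-rc2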

.. snip ..
> [    0.000000] init_memory_mapping: 0000000000000000-0000000010000000
> [    0.000000]  0000000000 - 0010000000 page 4k
> [    0.000000] kernel direct mapping tables up to 10000000 @ ff7e000-10000000
> [    0.000000]     memblock_x86_reserve_range: [0x0ff7e000-0x0ffe9fff]        
>   PGTABLE
> [    0.000000] xen: setting RW the range ffea000 - 10000000

So pages ff7e->ffea, i.e. 108 pages or 432 kB of pagetables.
> [    0.000000] init_memory_mapping: 0000000100000000-000000016fef0000
> [    0.000000]  0100000000 - 016fef0000 page 4k
> [    0.000000] kernel direct mapping tables up to 16fef0000 @ f3f7000-ff7e000
> [    0.000000]     memblock_x86_reserve_range: [0x0f3f7000-0x0f778fff]        
>   PGTABLE
> [    0.000000] xen: setting RW the range f779000 - ff7e000

And here pages f3f7 through f779, i.e. 898 pages or 3592 kB (~3.5 MB) of
pagetables for unused potential balloon memory.
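
For reference, both figures can be re-derived from the
memblock_x86_reserve_range lines above; a quick back-of-the-envelope check
in Python (addresses copied from the log, nothing Xen-specific):

    # Page-table reservations, copied from the boot log above.
    low  = 0xffea000 - 0xff7e000   # tables for the 0x0 - 0x10000000 mapping
    high = 0xf779000 - 0xf3f7000   # tables for the 0x100000000 - 0x16fef0000 mapping
    print(low // 1024, "kB")       # -> 432 kB
    print(high // 1024, "kB")      # -> 3592 kB, roughly 3.5 MB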

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users