xen-devel

Re: [Xen-devel] numa=on broken

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] numa=on broken
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Sun, 1 Apr 2007 13:53:28 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 01 Apr 2007 19:53:50 +0100
Envelope-to: Keir.Fraser@xxxxxxxxxxxx
In-reply-to: <C235938E.5398%Keir.Fraser@xxxxxxxxxxxx>
References: <20070401134629.GB28736@xxxxxxxxxx> <C235938E.5398%Keir.Fraser@xxxxxxxxxxxx>
User-agent: Mutt/1.5.6+20040907i
* Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> [2007-04-01 10:50]:
> On 1/4/07 14:46, "Ryan Harper" <ryanh@xxxxxxxxxx> wrote:
> 
> >> I don't think that auto-ballooning is a particularly sensible setting for
> >> serious use of Xen. I'd always advise to work out how much memory your dom0
> >> actually needs and make that a static allocation at boot time. But it is our
> >> out-of-the-box default: another thing that needs explicit changing (via
> >> dom0_mem= in this case).
> > 
> > Right.  It looks, then, like it would make sense to leave numa off by
> > default, leaving the admin to specify both numa=on and a sensible
> > dom0_mem, in the absence of a mechanism for dom0 to hand back memory
> > from a specific node, or some page migration mechanism.
> 
> That's my thinking. I'll see about getting some numa=on testing mixed into
> our regression tests, however. There's no reason not to run some proportion
> of them with numa=on, although actually most of our test systems are not
> NUMA (a few are though).

Thanks.  I should have a patchset for exposing the topology and heap
information cooked up this week for post 3.0.5 consideration.
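
For anyone reading this in the archives later: the boot-time setup being
discussed is just the two hypervisor options on the Xen line of the GRUB
entry.  A minimal sketch (the 512M figure, device paths, and kernel/initrd
names here are illustrative, not recommendations) would be:

    title Xen / XenLinux
        # Fixed dom0 allocation instead of auto-ballooning, plus the
        # NUMA-aware allocator in the hypervisor.
        kernel /boot/xen.gz dom0_mem=512M numa=on
        module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro console=tty0
        module /boot/initrd-2.6-xen.img

With dom0_mem= pinned like this, dom0 never balloons up into memory that
would otherwise stay free on the other nodes, which is the interaction
described above.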

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

