

Re: [XenPPC] Out of memory issues with xen

To: butrico@xxxxxxxxxxxxxx, Jimi Xenidis <jimix@xxxxxxxxxxxxxx>
Subject: Re: [XenPPC] Out of memory issues with xen
From: Hollis Blanchard <hollisb@xxxxxxxxxx>
Date: Mon, 28 Aug 2006 18:19:02 -0500
Cc: xen-ppc-devel <xen-ppc-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 28 Aug 2006 16:18:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <1156532442.656.27.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ppc-devel-request@lists.xensource.com?subject=help>
List-id: Xen PPC development <xen-ppc-devel.lists.xensource.com>
List-post: <mailto:xen-ppc-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: IBM Linux Technology Center
References: <44EF3EF9.8010304@xxxxxxxxxxxxxx> <1156532442.656.27.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ppc-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, 2006-08-25 at 14:00 -0500, Hollis Blanchard wrote:
> > The second problem is that when we change xen to give dom0 128M we
> > do not have enough memory to make another partition even though the
> > machine has 512M.  This is a bug that Jimi is currently looking into
> > (I think).
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=744

This should be fixed with the large commit I just made. However, re-reading
the bug report, I'm not sure what it's trying to say. Jimi, since you opened
it, can you verify that it's fixed?
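
(For anyone reproducing the setup described in the quoted report: the usual
way to cap dom0's memory is the hypervisor's dom0_mem boot option, with
further domains started from an xm config file. The sketch below is only
illustrative; the sizes, domain name, and kernel path are placeholders, not
values from this thread or the bug report, and the exact boot mechanism on
PPC may differ from x86.)

    # Hypervisor boot option (illustrative) -- cap dom0 at 128M so the
    # rest of a 512M machine stays in the free pool for other domains:
    #   dom0_mem=128M

    # Minimal xm domain config sketch (Python syntax); the name, kernel
    # path, and sizes are placeholders:
    name   = "domU1"
    memory = 256                      # MB requested for the new partition
    kernel = "/boot/vmlinux-domU"     # placeholder guest kernel image
    vcpus  = 1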

Hollis Blanchard
IBM Linux Technology Center

