This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH] Fix auto-ballooning of dom0 for HVM domains

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Fix auto-ballooning of dom0 for HVM domains
From: "Charles Coffing" <ccoffing@xxxxxxxxxx>
Date: Thu, 18 May 2006 12:11:23 -0600
Delivery-date: Thu, 18 May 2006 11:12:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <446C5554.D169.003C.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <446C5554.D169.003C.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Sorry, I forgot the Signed-off-by line.  Consider this my sign-off on
the previous email:

Signed-off-by: Charles Coffing <ccoffing@xxxxxxxxxx>

>>> On Thu, May 18, 2006 at 11:04 AM, in message
"Charles Coffing" <ccoffing@xxxxxxxxxx> wrote: 
> Hi,
> I've been trying to make the auto-ballooning of domain 0 work
> for HVM domains.
> Patch #1 is a simple bug fix, in preparation for patch #2.  Patch #2
> tries to calculate how much memory is needed for HVM domains
> (bug #521) and is certainly open for discussion.  Patches apply
> to both 3.0-testing and unstable.
> Patch #1 (xen-hvm-auto-balloon.diff):
> When a domain (whether para- or fully-virtualized) reports how much
> overhead memory it requires (via getDomainMemory in image.py), all
> memory was immediately allocated to the domain itself.  This is
> certainly incorrect for HVM domains, since additional
> increase_reservation calls are made later in qemu.  Since all
> memory is already taken, qemu will fail.  The fix is to treat the
> requested memory size and the overhead size as separate values.  The
> requested memory size is immediately allocated to the new domain;
> overhead is left unallocated for whatever else might need it later.
> Patch #2 (xen-get-dom-memory.diff):
> This patch calculates the overhead needed for HVM domains.  If HVM
> is supported by the hardware, I add a little ballooning overhead to
> paravirtualized VMs also, to avoid low-memory situations.  (There are
> various unchecked alloc_domheap_pages calls in shadow*.c that I am
> trying to avoid tripping over for now...)  The values in this patch
> are fine on 32 bit; I may update them later based on feedback and/or
> testing on 64 bit.
> Thanks,
> Chuck
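The fix described in patch #1 can be sketched roughly as follows. This is an illustrative model only, not the actual xend/image.py code: the function name `balloon_for_new_domain` and its parameters are hypothetical. The point is the accounting split: free enough host memory for the requested size plus the overhead, but allocate only the requested size to the new domain, leaving the overhead available for qemu's later increase_reservation calls.

```python
def balloon_for_new_domain(requested_kb, overhead_kb, free_memory_kb,
                           balloon_dom0):
    """Sketch of the memory split from patch #1 (hypothetical names).

    Frees enough host memory for requested + overhead, but hands only
    `requested_kb` to the new domain; the overhead stays unallocated so
    that later increase_reservation calls (e.g. from qemu for an HVM
    guest) can still succeed.
    """
    needed_kb = requested_kb + overhead_kb
    if free_memory_kb < needed_kb:
        # Balloon dom0 down by the shortfall to free the difference.
        balloon_dom0(needed_kb - free_memory_kb)
    # Only the requested size is allocated to the domain itself;
    # the overhead is deliberately left unallocated.
    return requested_kb
```

Under the pre-patch behavior, the domain would instead receive the full `needed_kb`, leaving nothing for qemu's later reservations.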
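The shape of the overhead calculation in patch #2 might look something like the sketch below. The function name and all of the constants here are purely illustrative assumptions (the real values are in xen-get-dom-memory.diff and, as noted above, may change based on feedback and 64-bit testing); the sketch only shows the structure: HVM guests get overhead that scales with guest memory (for qemu and shadow pagetables), and PV guests get a small cushion when the hardware is HVM-capable.

```python
def estimated_overhead_kb(mem_mb, is_hvm, hvm_capable):
    """Illustrative sketch of patch #2's overhead estimate.

    All constants are made up for illustration; the actual values
    come from the patch itself.
    """
    if is_hvm:
        # Fixed base for qemu mappings plus a per-MiB estimate for
        # shadow pagetables (illustrative: 4 MiB base + 8 KiB/MiB).
        return 4096 + 8 * mem_mb
    if hvm_capable:
        # Small cushion for PV guests on HVM-capable hardware, to
        # avoid tripping the unchecked allocations in shadow*.c.
        return 1024
    return 0
```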

Xen-devel mailing list