This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH] fix zone-over-node preference when allocating memory

To: Andre Przywara <andre.przywara@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] fix zone-over-node preference when allocating memory
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Thu, 28 Feb 2008 13:38:15 +0000
In-reply-to: <476C3DEF.7090306@xxxxxxx>
You have no guarantee that the DMA pool memory belongs to the allocating
node either (although it happens to be the case in the scenario you are
trying to fix). Instead, I suggest that the default dma_bitsize should depend
on the NUMA characteristics of the system. For example, we could specify that
dma_bitsize should not cover more than 25% of the memory of any one NUMA
node. In your example this would give you dma_bitsize=30.

I was going to suggest we get rid of dma_bitsize now that we have the
per-bitwidth zones, but actually it probably is needed specifically for NUMA
systems. If one NUMA node has all its memory below 4GB, we'd probably like
allocations to fall back to other nodes before that node's below-4GB memory
is exhausted.

 -- Keir

On 21/12/07 22:27, "Andre Przywara" <andre.przywara@xxxxxxx> wrote:

> When Xen allocates the guest's memory, it will try to use non-DMA-able
> zones first (probably because they are less precious). If there are no
> such pages available on a certain node, Xen will fall back to allocating
> low pages from another node, thus ignoring the node preference. This
> patch fixes this by first checking whether non-DMA pages are available on
> the node and falling back to DMA-able pages on that node if not. This
> fixes incorrect NUMA memory allocation on nodes whose memory lies below
> the DMA border (4GB on x86-64); it affects, for instance, dual-node
> machines with 4GB on each node.
> Andre.
> P.S. This fix was already part of my NUMA guest patches back in August,
> this is just an extract of these.
> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>

Xen-devel mailing list
