When Xen allocates a guest's memory, it tries to use non-DMA-able
zones first (presumably because they are less precious). If no such
pages are available on a given node, Xen falls back to allocating
low pages from another node, thereby ignoring the node preference.
This patch fixes that by first checking whether non-DMA pages are
available on the node and falling back to DMA-able pages on the same
node if not. This fixes incorrect NUMA memory allocation on nodes
whose memory lies entirely below the DMA border (4 GB on x86-64);
it affects, for instance, dual-node machines with 4 GB on each node.
Andre.
P.S. This fix was already part of my NUMA guest patches back in
August; this is just an extract from those.
Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
diff -r 1f4b29eaf7f4 xen/common/page_alloc.c
--- a/xen/common/page_alloc.c Thu Dec 20 17:30:27 2007 +0000
+++ b/xen/common/page_alloc.c Fri Dec 21 22:58:58 2007 +0100
@@ -797,7 +797,12 @@ struct page_info *__alloc_domheap_pages(
     if ( (zone_hi + PAGE_SHIFT) >= dma_bitsize )
     {
-        pg = alloc_heap_pages(dma_bitsize - PAGE_SHIFT, zone_hi, cpu, order);
+        if ( avail_heap_pages(dma_bitsize - PAGE_SHIFT, zone_hi,
+                              cpu_to_node(cpu)) >= (1UL << order) )
+        {
+            pg = alloc_heap_pages(dma_bitsize - PAGE_SHIFT, zone_hi,
+                                  cpu, order);
+        }
 
         /* Failure? Then check if we can fall back to the DMA pool. */
         if ( unlikely(pg == NULL) &&