
To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 21 Oct 2009 01:25:12 -0700
Delivery-date: Wed, 21 Oct 2009 01:28:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1256113261 -3600
# Node ID 87bc0d49137bb1d66758766b39dbaf558aabd043
# Parent  9ead82c46efd7f95428a186e3dd3e8587ec9d811
xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.

This was happening for xmalloc request sizes between 3921 and 3951
bytes. The reason is that xmem_pool_alloc() may add extra padding to
the requested size, making the total block size greater than a page.
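
To make the failure mode concrete, here is a minimal, self-contained
sketch (not the Xen code). The 64-byte size-class granularity and the
16-byte header overhead below are assumptions chosen for illustration;
the real rounding in MAPPING_SEARCH and the real BHDR_OVERHEAD in
xen/common/xmalloc_tlsf.c are what produce the observed 3921-3951 byte
range.

    /* Toy model only: the granularity, header overhead, and rounding rule
     * below are assumptions for illustration, not Xen's real constants. */
    #include <stdio.h>

    #define PAGE_SIZE   4096u
    #define HDR         16u   /* assumed per-block header overhead */
    #define GRAN        64u   /* assumed TLSF size-class granularity */
    #define POOL_USABLE (PAGE_SIZE - 2u * HDR)  /* space in one page region */

    int main(void)
    {
        unsigned int r;

        for ( r = 4000; r < POOL_USABLE; r += 8 )
        {
            /* Round the request up to its size class, then add the header. */
            unsigned int padded = ((r + GRAN - 1u) & ~(GRAN - 1u)) + HDR;

            if ( padded > POOL_USABLE )
                printf("request %u: padded block %u exceeds %u, pool alloc fails\n",
                       r, padded, POOL_USABLE);
        }
        return 0;
    }

With these toy constants, requests slightly above 4000 bytes already
round up past what a single page-sized pool region can hold, which is
the same shape of failure, even though the exact range differs.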

Rather than add yet more smarts about TLSF to _xmalloc(), we just
dumbly attempt any request smaller than a page via xmem_pool_alloc()
first, then fall back on xmalloc_whole_pages() if this fails.
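
A minimal sketch of that fallback path, using stand-in allocators in
place of xmem_pool_alloc() and xmalloc_whole_pages():

    /* Illustrative sketch only, not the Xen code.  pool_alloc() and
     * whole_pages_alloc() are stand-ins for xmem_pool_alloc() and
     * xmalloc_whole_pages(). */
    #include <stdlib.h>

    #define PAGE_SIZE 4096ul

    /* Stand-in pool allocator: may fail even for sub-page sizes once its
     * internal padding is taken into account. */
    static void *pool_alloc(unsigned long size)
    {
        return (size <= PAGE_SIZE - 128) ? malloc(size) : NULL;
    }

    /* Stand-in whole-page allocator: rounds the request up to whole pages. */
    static void *whole_pages_alloc(unsigned long size)
    {
        return malloc(((size + PAGE_SIZE - 1) / PAGE_SIZE) * PAGE_SIZE);
    }

    static void *alloc_with_fallback(unsigned long size)
    {
        void *p = NULL;

        /* Dumbly try the pool for anything smaller than a page... */
        if ( size < PAGE_SIZE )
            p = pool_alloc(size);

        /* ...and fall back to whole pages if the pool cannot satisfy it. */
        if ( p == NULL )
            p = whole_pages_alloc(size);

        return p;
    }

    int main(void)
    {
        void *p = alloc_with_fallback(3940);  /* may land on either path */
        free(p);
        return 0;
    }

The free path must still be able to tell the two cases apart; in the
patch below xfree() does so by checking whether the stored block size
is at least PAGE_SIZE.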

Based on bug diagnosis and initial patch by John Byrne <john.l.byrne@xxxxxx>

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
---
 xen/common/xmalloc_tlsf.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff -r 9ead82c46efd -r 87bc0d49137b xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c Wed Oct 21 08:51:10 2009 +0100
+++ b/xen/common/xmalloc_tlsf.c Wed Oct 21 09:21:01 2009 +0100
@@ -553,7 +553,7 @@ static void tlsf_init(void)
 
 void *_xmalloc(unsigned long size, unsigned long align)
 {
-    void *p;
+    void *p = NULL;
     u32 pad;
 
     ASSERT(!in_irq());
@@ -566,10 +566,10 @@ void *_xmalloc(unsigned long size, unsig
     if ( !xenpool )
         tlsf_init();
 
-    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( size < PAGE_SIZE )
+        p = xmem_pool_alloc(size, xenpool);
+    if ( p == NULL )
         p = xmalloc_whole_pages(size);
-    else
-        p = xmem_pool_alloc(size, xenpool);
 
     /* Add alignment padding. */
     if ( (pad = -(long)p & (align - 1)) != 0 )
@@ -603,7 +603,7 @@ void xfree(void *p)
         ASSERT(!(b->size & 1));
     }
 
-    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( b->size >= PAGE_SIZE )
         free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
