Re: [Xen-devel] [RFC] Replacing Xen's xmalloc engine and(?) API

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] Replacing Xen's xmalloc engine and(?) API
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Sun, 12 Oct 2008 18:42:59 +0100
Cc: Diwaker Gupta <dgupta@xxxxxxxxxxx>, nitingupta910@xxxxxxxxx, kurt.hackel@xxxxxxxxxx
Delivery-date: Sun, 12 Oct 2008 10:43:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <f8f630e6-00ca-4fae-9356-8481d65556e1@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckskfdwNdDXDpiFEd22twAWy6hiGQ==
Thread-topic: [Xen-devel] [RFC] Replacing Xen's xmalloc engine and(?) API
User-agent: Microsoft-Entourage/11.4.0.080122
On 12/10/08 18:31, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

>> This wastage should be empirically measured by instrumentation and then
>> optimisations made where worthwhile. That is somewhat orthogonal to the
>> issue of what represents a sensible interface to xmalloc(), except to say
>> that I would generally prefer a more sophisticated and efficient mechanism
>> behind a simple interface, rather than punt complexity into callers (by
>> making the costs hidden by the simple interface excessive and hence
>> unusable; or by complicating the interface with weird constraints).
> 
> Fair enough.  I will mimic the xmalloc API then.  However, I *would*
> like to export (via #define in xmalloc.h or function call or something)
> the definition of DELTA (i.e. the xmalloc space overhead) so my
> caller-side code can avoid the wastage.  I never want to accidentally
> xmalloc two pages when heap-alloc'ing one page will do.

A better xmalloc() implementation would allocate the necessary two pages
from alloc_heap_pages() and put the remaining just-less-than-a-page region
on its free lists, rather than waste it. Or it could put the allocation
metadata out-of-band (e.g., in page_info, like SLUB) so that there is no
DELTA at all.
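Something along these lines, as a rough sketch (xmalloc_add_to_freelist()
and MIN_BLOCK_SIZE are made-up names standing in for whatever free-list
plumbing the real common/xmalloc.c would need, alignment rounding is
ignored, and alloc_xenheap_pages() is used instead of raw alloc_heap_pages()
for brevity):

/*
 * Rough sketch only: xmalloc_add_to_freelist() and MIN_BLOCK_SIZE are
 * hypothetical, not existing Xen code.
 */
static void *xmalloc_whole_pages(unsigned long size)
{
    unsigned int order = get_order_from_bytes(size);
    void *p = alloc_xenheap_pages(order);       /* 2^order pages */
    unsigned long spare;

    if ( p == NULL )
        return NULL;

    /* Hand the just-less-than-a-page tail back to the free lists
     * instead of wasting it. */
    spare = (PAGE_SIZE << order) - size;
    if ( spare >= MIN_BLOCK_SIZE )
        xmalloc_add_to_freelist((char *)p + size, spare);

    return p;
}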

*However*, exposing DELTA for those callers who care about it, given the
limitations of the current xmalloc() implementation, is a reasonable way to
go as far as I'm concerned.
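For concreteness, the export and its caller-side use could look roughly
like this (XMALLOC_DELTA is a hypothetical name; the value shown assumes
the overhead is just the in-band struct xmalloc_hdr, which would have to
be made visible to the header, and it ignores alignment padding):

/* Hypothetical addition to include/xen/xmalloc.h. */
#define XMALLOC_DELTA  (sizeof(struct xmalloc_hdr))

/* Caller side: avoid spilling a just-under-a-page request into two pages. */
void *buf;

if ( size + XMALLOC_DELTA > PAGE_SIZE )
    buf = alloc_xenheap_pages(get_order_from_bytes(size));
else
    buf = xmalloc_bytes(size);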

>> My point in bringing up SLUB is that I assume it's an allocator designed
>> to work reasonably well across a range of allocation-request-size
>> distributions, including those containing requests of size
>> x-pages-minus-a-bit. I'd rather have a more complicated allocator than a
>> more complicated xmalloc() interface.
> 
> I'm no slab/slub expert but I think the interface only works
> well with fixed-size objects and when several of the fixed-size
> objects can be crammed into a single page.  I have a large set
> of objects that are essentially random in size (but all less
> than or equal to a page).

Well, you'd end up using the power-of-two-sized caches. And since SLUB
doesn't put metadata in the data pages, the 4096-byte object cache will
serve up true single pages. But yes, this is an aspect I hadn't fully
considered. I'm not wedded to the use of SLUB; it was just my arguing
stick. :-)
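For reference, the size-class selection being assumed here is just
round-up-to-the-next-power-of-two, along these lines (a sketch, not code
lifted from Linux's SLUB):

/*
 * Sketch of a SLUB-style size-class lookup, assuming a 32-byte minimum
 * class. A 2100-byte request lands in the 4096-byte cache (so randomly
 * sized objects can waste up to nearly half a slot), while a 4096-byte
 * request gets a true single page because the metadata lives out-of-band
 * (in page_info / struct page) rather than in the data page.
 */
static unsigned int size_class_order(unsigned long size)
{
    unsigned int order = 5;           /* 2^5 = 32-byte minimum class */

    while ( (1UL << order) < size )
        order++;

    return order;                     /* object size is 1UL << order */
}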

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel