WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] RE: [RFC] transcendent memory for Linux

> > It is documented currently at:
> > 
> > http://oss.oracle.com/projects/tmem/documentation/api/
> > 
> > (just noticed I still haven't posted version 0.0.2 which
> > has a few minor changes).
> > 
> > I will add a briefer description of this API in Documentation/
> 
> Please do.

OK, will do.

> At least TMEM_NEW_POOL() looks quite ugly. Why uuid? Mixing flags into
> size argument is strange.

The uuid is only used for shared pools.  If two different
"tmem clients" (guests) agree on a 128-bit "shared secret",
they can share a tmem pool.  For ocfs2, the 128-bit uuid in
the on-disk superblock is used for this purpose to implement
shared precache.  (Pages evicted by one cluster node
can be used by another cluster node that co-resides on
the same physical system.)
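The uuid-keyed sharing described above can be sketched roughly as follows. This is a minimal illustration, not the actual tmem implementation: the structure, table, and function name `shared_pool_join` are all hypothetical, showing only that two clients presenting the same 128-bit uuid end up attached to the same pool.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: shared pools are keyed by a 128-bit uuid
 * (here carried as two 64-bit halves).  Clients that agree on the
 * same "shared secret" uuid are joined to the same pool. */
struct shared_pool {
	uint64_t uuid_lo;
	uint64_t uuid_hi;
	int refcount;
};

#define MAX_POOLS 16
static struct shared_pool pools[MAX_POOLS];
static int npools;

/* Return a pool id; reuse an existing pool when the uuid matches. */
static int shared_pool_join(uint64_t uuid_lo, uint64_t uuid_hi)
{
	int i;

	for (i = 0; i < npools; i++) {
		if (pools[i].uuid_lo == uuid_lo &&
		    pools[i].uuid_hi == uuid_hi) {
			pools[i].refcount++;
			return i;	/* same uuid -> same pool */
		}
	}
	if (npools == MAX_POOLS)
		return -1;		/* no space for a new pool */
	pools[npools].uuid_lo = uuid_lo;
	pools[npools].uuid_hi = uuid_hi;
	pools[npools].refcount = 1;
	return npools++;
}
```

In the ocfs2 case, both cluster nodes would derive the uuid arguments from the same on-disk superblock, so both calls return the same pool id.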

The (page)size argument is always fixed (at PAGE_SIZE) for
any given kernel, but the underlying implementation may be
capable of supporting multiple page sizes.

So for the basic precache and preswap uses, "new pool"
has a very simple interface.
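To make the "flags mixed into the size argument" point concrete, here is one way such an encoding could look. The bit layout below is an assumption for illustration only (the macro names and field positions are hypothetical): the page size is carried as a log2 shift in a small field of the flags word, next to property bits such as persistent and shared.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag layout: pool property bits in the low bits,
 * page size carried as a shift (log2 of the size in bytes) in a
 * 4-bit field.  Field positions are assumptions, not the spec. */
#define TMEM_POOL_PERSIST	(1u << 0)	/* e.g. preswap pools */
#define TMEM_POOL_SHARED	(1u << 1)	/* uuid-keyed shared pools */
#define TMEM_POOL_PAGESIZE_SHIFT	4
#define TMEM_POOL_PAGESIZE_MASK		0xfu

/* Pack property bits and a page-size shift into one flags word. */
static uint32_t tmem_pool_flags(uint32_t prop_bits, uint32_t page_shift)
{
	return prop_bits |
	       ((page_shift & TMEM_POOL_PAGESIZE_MASK)
		<< TMEM_POOL_PAGESIZE_SHIFT);
}

/* Recover the page size in bytes from the flags word. */
static uint32_t tmem_pool_pagesize(uint32_t flags)
{
	return 1u << ((flags >> TMEM_POOL_PAGESIZE_SHIFT)
		      & TMEM_POOL_PAGESIZE_MASK);
}
```

Since the kernel-side page size never varies at runtime, the caller would always pass the same shift (12 for a 4KB PAGE_SIZE), which is why the interface stays simple for precache and preswap despite the packed argument.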

> > It is in-kernel only because some of the operations have
> > a parameter that is a physical page frame number.
> 
> In-kernel API is probably better described as function prototypes.

Good idea.  I will do that.
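As a sketch of what such in-kernel prototypes might look like, consider the following. Every name, argument order, and type here is an assumption for illustration (the published API may differ); the point is only that several operations take a physical page frame number, which is why the interface is in-kernel only. Stub bodies are included so the sketch compiles.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t pfn_t;	/* physical page frame number (assumed typedef) */

/* Illustrative prototypes only -- names and signatures are guesses,
 * not the published tmem API.  A real backend would act on the pfn;
 * these stubs just return plausible status codes. */
static int tmem_new_pool(uint64_t uuid_lo, uint64_t uuid_hi, uint32_t flags)
{
	(void)uuid_lo; (void)uuid_hi; (void)flags;
	return 0;	/* stub: pool id */
}

static int tmem_put_page(int pool_id, uint64_t object, uint32_t index,
			 pfn_t pfn)
{
	(void)pool_id; (void)object; (void)index; (void)pfn;
	return 0;	/* stub: page accepted */
}

static int tmem_get_page(int pool_id, uint64_t object, uint32_t index,
			 pfn_t pfn)
{
	(void)pool_id; (void)object; (void)index; (void)pfn;
	return -1;	/* stub: miss (ephemeral pages may vanish) */
}

static int tmem_flush_page(int pool_id, uint64_t object, uint32_t index)
{
	(void)pool_id; (void)object; (void)index;
	return 0;	/* stub */
}

static int tmem_destroy_pool(int pool_id)
{
	(void)pool_id;
	return 0;	/* stub */
}
```

Because the pfn argument only makes sense to code that can address physical frames directly, a userspace variant would need a different calling convention entirely.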

Thanks,
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel