> The Xen patch is currently based on 3.3.0+
> and I am in the process of updating it and cleaning it up, so
> will post it in the near future, but can provide it to anyone
> who is very interested in seeing/trying it now. I could
> use some help on the "control plane" python software,
> in performance evaluation, and in "porting".
For those interested in tmem, I have ported the Xen 3.3
patch to xen-unstable (cset 19043) and the monolithic patch
(plus the Linux patch) can be viewed at:
http://oss.oracle.com/projects/tmem/files/
After I've completed the control plane, I'll post the patch
more formally and in less monolithic form.
Note that I am still trying to track down an ASSERT bug
that wasn't present before the port, but I don't think
anyone is going to apply this to a production system
anyway :-) I also haven't tested it recently on
a 32-bit hypervisor, but it's a bit of a toy on 32-bit
anyway because of the 12MB limit on the xenheap.
More patch and usage documentation will be forthcoming, but
a quick run-through of the patch is below for those who don't
want to dig through 2500 lines of code.
Comments and questions are very welcome!
Thanks,
Dan
P.S. If you reply-all to this message, ignore bounces from
tmem-devel; I am the moderator and will approve your post.
If you are interested in other (e.g. non-Xen) discussion of
tmem, please feel free to subscribe to tmem-devel via:
http://oss.oracle.com/projects/tmem/mailman/
Direct link to Xen patch:
http://oss.oracle.com/projects/tmem/dist/files/xen-unstable/tmem-xen-unstable-19043-090115.patch
Core functionality:
===================
common/tmem.c: implementation of transcendent memory
common/radix-tree.c: heavily leveraged from Linux
(see comment near beginning for differences)
include/xen/tmem.h: defines and declarations for tmem
include/xen/radix-tree.h: heavily leveraged from Linux
common/Makefile: add tmem.o and radix-tree.o
New hypercall: (only one new hypercall!)
========================================
include/public/xen.h: new tmem hypercall and interface
include/xen/hypercall.h: ditto
various/entry.S: ditto
Misc interface stuff:
=====================
arch/x86/setup.c: call init_tmem()
common/domain.c: destroy a domain's pool when it dies
common/page_alloc.c: comment out an annoying printk
common/xmalloc_tlsf.c: use domheap instead of xenheap
for xmalloc/xfree and add some useful measurements
(metrics will be removed in final patch)
include/xen/hash.h: identical to Linux version
include/xen/sched.h: add per-domain tmem container pointer
> -----Original Message-----
> From: Dan Magenheimer
> Sent: Thursday, January 08, 2009 10:27 AM
> To: Xen-Devel (E-mail)
> Subject: [RFC] Transcendent Memory ("tmem"): a new approach
> to physical
> memory management
>
>
> At last year's Xen North America Summit in Boston, I gave a talk
> about memory overcommitment in Xen. I showed that the basic
> mechanisms for moving memory between domains were already present
> in Xen and that, with a few scripts, it was possible to roughly
> load-balance memory between domains. During this effort, I
> discovered that "ballooning" had a lot of weaknesses, even
> though it is the foundation for "time-sharing" physical
> memory in every major virtualization system existing today.
> These weaknesses have led in many cases to unacceptable performance
> issues when VMs are densely packed; as a result, memory is becoming
> the bottleneck in many deployments of virtualization.
>
> Transcendent Memory -- or "tmem" for short -- is phase II of that
> work and it essentially augments ballooning and "fixes" many of
> its weaknesses. It requires paravirtualization, but the changes
> (to Linux) are fairly small and minimally-invasive. The changes
> to Xen are larger, but also fairly non-invasive. (No shell scripts
> this time! :-) The concept and code are modular and may easily
> port to Windows, as well as KVM. It may even be useful in
> containers and in a native physical operating system. And,
> yes, it is machine-independent so should be easily portable
> to ia64 too!
>
> Basically, instead of moving the ownership of all physical memory
> between one domain and another, tmem instead collects system-wide
> underutilized memory into a "pool" in the hypervisor and provides
> indirect access to that memory so that it can serve the needs
> of domains without necessarily being directly addressable by the
> domains it serves. It is implemented with a small set of
> (hyper)calls that enable pages to be copied between a domain
> and Xen, controlled by a carefully-crafted set of semantics that
> make it easy in most cases for memory to be reclaimed
> by Xen as memory needs vary (as they often do -- rapidly and
> unpredictably). As a result, physical memory is utilized more
> efficiently, reducing unnecessary paging and the likelihood
> of thrashing and thus increasing performance and/or allowing
> greater VM density.
>
> If you are interested in this topic, please see:
>
> http://oss.oracle.com/projects/tmem
> (note, site is sometimes slow)
>
> for more information. This site will be updated frequently,
> with patches, documentation, and FAQs. The site also
> supports mailing lists, though I'd prefer to have all
> Xen-related discussions start on xen-devel.
>
> Linux patches based on 2.6.18-xen, 2.6.27-xen, and 2.6.28
> are available. The Xen patch is currently based on 3.3.0+
> and I am in the process of updating it and cleaning it up, so
> will post it in the near future, but can provide it to anyone
> who is very interested in seeing/trying it now. I could
> use some help on the "control plane" python software,
> in performance evaluation, and in "porting".
>
> Comments and questions welcome. I also plan to submit an
> abstract for the upcoming Xen summit and, if accepted, give
> a talk about tmem there.
>
> Thanks,
> Dan
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel