> Subject: Queries on Tmem and Difference Engine.
> I have gone through the presentation on "Transcendent Memory on Xen",
> read some papers on tmem, and have a good idea about tmem. But I still
> have a few questions on it.
Hi Ashwin --
Thank you for your interest in tmem!
> In a tmem pool, deduplication is performed only on pages in
> ephemeral pools. Why is it not performed on the
> persistent pool, since deduplication saves memory?
First, there is an accounting issue. Persistent pages "owned" by
a domain count against each domain's maxmem allocation. If a
domain attempts to put a persistent page and the domain has
already used up its maxmem, the put fails. This is important for
avoiding denial-of-service attacks. So if persistent
pages are deduplicated, what happens in the following:
- domX puts a persistent page with contents ABC
- domY puts a persistent page with contents ABC, but domY
is already at maxmem... but since the page can be deduplicated
and takes no additional memory it is accepted by tmem
- domX flushes the page containing ABC
- who owns the persistent ABC page in tmem? Has domY exceeded
maxmem or not?
and there are other similar scenarios.
Second, I wasn't sure that there would be many opportunities
for deduplication in swap pages, which are, by definition, dirty.
Deduplication takes some additional memory for data structures
and may take a great deal of additional CPU time, even if
no deduplication occurs. So it is important to use it only
if it is fairly certain that there will be some value.
This is something you could measure for your project since,
in a test environment, you do not need to worry about
denial-of-service attacks.
Interestingly, if the accounting problem were solved, the
flexibility tmem has defined for handling "duplicate puts"
nicely avoids the CoW-overcommitment problem seen by Difference
Engine, Satori, and VMware. If memory is exhausted and
a domain attempts a persistent put that would cause a CoW,
the put can be simply rejected by tmem. So host swapping
is never required.
> In Difference Engine by Diwaker Gupta, guest VMs share pages.
> www.usenix.org/publications/login/2009-04/openpdfs/gupta.pdf This
> seems like a kind of over-committing of RAM
> for the guest OS. There are many discussions going on about VMware's
> memory-overcommit feature using sharing.
> Over-committing exists in XenServer 5.0 for HVM. Why is such a feature
> not provided for PV? And what is the
> status of Difference Engine? Is it included in Xen?
You can find my opinion of host swapping on the Linux kernel mailing
list here: http://lkml.org/lkml/2010/5/2/49
(and you might find the entire thread interesting).
The Difference Engine code was never submitted to Xen. A
version of the Satori code was submitted to Xen in December 2009
and is in Xen 4.0.
> In the above presentation, it is mentioned that "inter-guest shared
> memory" is under investigation and fragmentation
> is an outstanding issue. What is the status of the implementation? I
> would like to carry out this project.
> What are your suggestions on it?
Shared persistent pools have never been implemented in tmem,
although most of the code is already there, since shared ephemeral
pools and non-shared persistent pools are both supported.
I am not a networking expert, but I believe they would be useful
for networking between two guests: If two guests discover they are
on the same host, tmem can serve as a transport layer. If you
are very interested in networking, exploring this might make
a good project.
Fragmentation: since tmem absorbs all free memory in the system one
page at a time, if Xen attempts to allocate memory of order>0 (order==1
means two consecutive physical pages, order==2 means four consecutive
physical pages, order==3 means eight, etc), the allocation will fail.
The worst problem may be fixed soon, though others still must be fixed.
This might also make a good Xen-related project.
I hope this answers your questions!
Xen-devel mailing list