[Xen-devel] RE: Queries on Tmem and Difference Engine.

To: ashwin wasani <vasani.ashwin@xxxxxxxxx>, Xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] RE: Queries on Tmem and Difference Engine.
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Tue, 31 Aug 2010 07:50:26 -0700 (PDT)
Cc: Konrad Wilk <konrad.wilk@xxxxxxxxxx>
Delivery-date: Tue, 31 Aug 2010 07:51:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTinzPpiPy2ivYApxu3iGm5cWpKfw7R77NjJ-cRPB@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTinzPpiPy2ivYApxu3iGm5cWpKfw7R77NjJ-cRPB@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> Subject: Queries on Tmem and Difference Engine.
> 
> Hi,
>     I have gone through the presentation "Transcendent Memory on Xen"
> 
> http://oss.oracle.com/projects/tmem/dist/documentation/presentations/TranscendentMemoryXenSummit2010.pdf
> 
> and read some papers on tmem, so I have a fairly good idea of it, but
> I still have a few questions.

Hi Ashwin --

Thank you for your interest in tmem!

>     In a tmem pool, deduplication is performed only on pages in
> ephemeral pools. Why is it not also performed on persistent pools,
> since deduplication saves memory?

First, there is an accounting issue.  Persistent pages "owned" by
a domain count against that domain's maxmem allocation.  If a
domain attempts to put a persistent page and the domain has
already used up its maxmem, the put fails.  This is important for
avoiding denial-of-service attacks.  So if persistent pages are
deduplicated, what happens in the following scenario:

- domX puts a persistent page with contents ABC
- domY puts a persistent page with contents ABC, but domY
  is already at maxmem... but since the page can be deduplicated
  and takes no additional memory it is accepted by tmem
- domX flushes the page containing ABC
- who owns the persistent ABC page in tmem?  Has domY exceeded
  maxmem or not?
and there are other similar scenarios.
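
To make the accounting problem concrete, here is a minimal C sketch
(not Xen code; all names here are hypothetical) of the simple rule
that works without deduplication, with a note on where deduplication
breaks it:

#include <stdbool.h>
#include <stdint.h>

struct dom_account {
    uint64_t persistent_pages;   /* pages charged to this domain */
    uint64_t maxmem_pages;       /* the domain's maxmem limit    */
};

/* Without deduplication the rule is simple: charge the putting domain,
 * and fail the put if it is already at maxmem. */
bool persistent_put_allowed(struct dom_account *d)
{
    if (d->persistent_pages >= d->maxmem_pages)
        return false;            /* put fails; avoids DoS via tmem */
    d->persistent_pages++;       /* page is charged to this domain */
    return true;
}

/* With deduplication there is no obvious answer: if domX and domY both
 * put page ABC and only one physical copy is kept, which dom_account is
 * charged?  And when domX later flushes its reference, does domY
 * suddenly exceed maxmem?  Any policy (charge both, charge the first
 * putter, re-charge on flush, ...) has accounting or DoS corner cases. */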

Second, I wasn't sure that there would be many opportunities
for deduplication in swap pages, which are, by definition, dirty.
Deduplication takes some additional memory for data structures
and may take a great deal of additional CPU time, even if
no deduplication occurs.  So it is important to use it only
if it is fairly certain that there will be some value.
This is something you could measure for your project since,
in a test environment, you do not need to worry about
denial-of-service.
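
If you want to measure the opportunity, something like the following
standalone C sketch (not Xen code; the swap image file name and the
FNV-1a hash are assumptions for illustration) gives a rough estimate
by counting pages in a swap image that hash alike; a real measurement
would compare page contents on each hash hit rather than trusting the
hash alone:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096
#define BUCKETS   (1u << 20)

static uint64_t fnv1a(const unsigned char *p, size_t n)
{
    uint64_t h = 1469598103934665603ULL;
    while (n--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

int main(void)
{
    static uint32_t seen[BUCKETS];      /* crude hash-bucket counters  */
    unsigned char page[PAGE_SIZE];
    unsigned long total = 0, dups = 0;
    FILE *f = fopen("swap.img", "rb");  /* assumed test swap image     */

    if (!f) { perror("swap.img"); return 1; }
    while (fread(page, 1, PAGE_SIZE, f) == PAGE_SIZE) {
        uint64_t h = fnv1a(page, PAGE_SIZE) & (BUCKETS - 1);
        if (seen[h]++)                  /* likely duplicate (hash hit) */
            dups++;
        total++;
    }
    fclose(f);
    printf("%lu of %lu pages look like duplicates\n", dups, total);
    return 0;
}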

Interestingly, if the accounting problem were solved, the
flexibility tmem has defined for handling "duplicate puts"
nicely avoids the CoW-overcommitment problem seen in Difference
Engine, Satori, and VMware.  If memory is exhausted and
a domain attempts a persistent put that would cause a CoW,
the put can be simply rejected by tmem.  So host swapping
is never required.
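
In code terms the decision is as simple as this sketch (not the
actual tmem code; the names here are hypothetical):

#include <stdbool.h>

enum put_result { PUT_OK, PUT_REJECTED };

/* If satisfying a persistent put would require a copy-on-write copy
 * that the host cannot spare, tmem can simply reject the put; the
 * guest keeps the page itself, so the host never has to swap. */
enum put_result persistent_put(bool needs_cow_copy, bool memory_exhausted)
{
    if (needs_cow_copy && memory_exhausted)
        return PUT_REJECTED;
    return PUT_OK;
}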
 
>     In Difference Engine by Diwaker Gupta, guest VMs share pages:
> www.usenix.org/publications/login/2009-04/openpdfs/gupta.pdf
> This seems to be a kind of over-committing of RAM for the guest OS.
> There are many discussions going on about VMware's memory-overcommit
> feature using sharing. Over-committing exists in XenServer 5.0 for
> HVM; why is such a feature not provided for PV? And what is the
> status of Difference Engine? Is it included in Xen?

You can find my opinion of host swapping in the Linux kernel mailing
list here: http://lkml.org/lkml/2010/5/2/49 
(and you might find the entire thread interesting).

The Difference Engine code was never submitted to Xen.  A
version of the Satori code was submitted to Xen in December 2009
and is in Xen 4.0.

>     In the above presentation it is mentioned that "inter-guest
> shared memory" is under investigation and that fragmentation is an
> outstanding issue. What is the status of the implementation? I would
> like to carry out this project. What are your suggestions on it?

Shared persistent pools have never been implemented in tmem,
although most of the code is already there, because shared ephemeral
pools and non-shared persistent pools are both supported.

I am not a networking expert, but I believe they would be useful
for networking between two guests:  If two guests discover they are
on the same host, tmem can serve as a transport layer.  If you
are very interested in networking, exploring this might make
a good project.
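
Conceptually the transport would look something like the sketch
below.  Note that shared persistent pools were not implemented at the
time of writing, and the guest-side wrappers used here
(tmem_new_shared_pool, tmem_put_page, tmem_get_page) are hypothetical
stand-ins for whatever interface would eventually exist:

#include <stdint.h>

/* Hypothetical guest-side wrappers around the tmem hypercall. */
int tmem_new_shared_pool(uint64_t uuid_lo, uint64_t uuid_hi, int persistent);
int tmem_put_page(int pool, uint64_t object, uint32_t index, void *page);
int tmem_get_page(int pool, uint64_t object, uint32_t index, void *page);

/* Sender (guest A): place one page of data into the shared pool
 * under an agreed (object, index) key. */
int send_page(int pool, void *page)
{
    return tmem_put_page(pool, /* object */ 1, /* index */ 0, page);
}

/* Receiver (guest B): join the same pool by its UUID and read the
 * page back, completing a simple same-host transfer. */
int recv_page(uint64_t uuid_lo, uint64_t uuid_hi, void *page)
{
    int pool = tmem_new_shared_pool(uuid_lo, uuid_hi, /* persistent */ 1);
    if (pool < 0)
        return pool;
    return tmem_get_page(pool, /* object */ 1, /* index */ 0, page);
}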

Fragmentation: since tmem absorbs all free memory in the system one
page at a time, if Xen attempts to allocate memory of order>0 (order==1
means two consecutive physical pages, order==2 means four consecutive
physical pages, order==3 means eight, etc.), the allocation will fail.
The worst problem may be fixed soon, though others still remain:
http://lists.xensource.com/archives/html/xen-devel/2010-08/msg01350.html 
This might also make a good Xen-related project.
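
For reference, the "order" terminology works like this (a trivial
standalone illustration, not Xen code):

#include <stdio.h>

/* An order-n allocation requests 2^n physically contiguous pages, so
 * tmem absorbing free memory one page (order 0) at a time can leave
 * nothing behind for order>0 requests. */
int main(void)
{
    for (int order = 0; order <= 3; order++)
        printf("order %d -> %d contiguous pages\n", order, 1 << order);
    return 0;
}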

I hope this answers your questions!
Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
