WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-devel

Re: [Xen-devel] About resource reclamation

To: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] About resource reclamation
From: Xuxian Jiang <jiangx@xxxxxxxxxxxxx>
Date: Sat, 11 Oct 2003 23:13:29 -0500 (EST)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx, Xuxian Jiang <jiangx@xxxxxxxxxxxxx>
Delivery-date: Sun, 12 Oct 2003 05:14:14 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <E1A8SS4-0006jQ-00@xxxxxxxxxxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <E1A8SS4-0006jQ-00@xxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
On Sat, 11 Oct 2003, Ian Pratt wrote:

>
> > Another idea related to this is to provide one memory pool, which can be
> > shared across multiple domains according to some policies, like
> > proportional sharing. A quick check of Xen shows that Xen provides
> > *very* strong memory protection via prior reservation. Such clear-cut
> > *slicing* of memory may not achieve overall optimal performance.
>
> Our intention is to expose real resources to domains and have
> them optimise their behaviour accordingly. If you are `charging'
> domains for the memory they use, it's in their interests to use
> the balloon driver to try to reduce their memory footprint and
> give pages back to the system, or buy more pages if they can
> usefully exploit them. Our intention would be to run the system at
> something like 75% memory utilization, making the remaining
> memory available for use as shared buffer cache and swap cache
> (on a proportional share basis).

This is exactly what I want. It can create strong resource isolation
when required, yet still has the flexibility to accommodate peak load
through dynamic allocation from the shared pool. Good job, Xen! :-)
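To make the proportional-share idea concrete, here is a rough sketch in
plain C (not actual Xen code; the structures, numbers, and the
share_spare_pool() function are invented purely for illustration) of how a
spare pool might be divided among domains by weight, on top of each
domain's private reservation:

/* Rough sketch, not actual Xen code: divide a shared spare-memory pool
 * among domains in proportion to per-domain weights, on top of each
 * domain's fixed private reservation. All names and numbers are made up. */
#include <stdio.h>

struct dom {
    int           id;
    unsigned long reserved_kb;   /* hard, private reservation */
    unsigned int  weight;        /* share of the spare pool   */
};

static void share_spare_pool(struct dom *doms, int n, unsigned long spare_kb)
{
    unsigned long total_weight = 0;

    for (int i = 0; i < n; i++)
        total_weight += doms[i].weight;

    for (int i = 0; i < n; i++) {
        unsigned long extra = spare_kb * doms[i].weight / total_weight;
        printf("dom%d: %lu kB reserved + %lu kB from shared pool\n",
               doms[i].id, doms[i].reserved_kb, extra);
    }
}

int main(void)
{
    /* e.g. running the machine at ~75% utilisation leaves a spare pool
     * of roughly 190 MB on a 750 MB host */
    struct dom doms[] = {
        { 1, 60 * 1024, 1 },
        { 2, 60 * 1024, 2 },
        { 3, 60 * 1024, 1 },
    };

    share_spare_pool(doms, 3, 190 * 1024);
    return 0;
}

A domain that is not using its share could hand pages back via the balloon
driver, and a busy one could ask for more from the spare pool.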

>
> > Another question relates to the scalability of the Xen approach. I believe
> > the most limiting factor for scalability would be the amount of memory
> > available. Given a fixed memory size, would it be possible to emulate
> > GuestOS memory with disk files via an mmap-like mechanism? The whole
> > GuestOS memory need not be emulated; even partial emulation could provide
> > a *nice* and *desirable* tradeoff between performance and scalability.
>
> Our goal was to have Xen comfortably support 100 domains. Results
> in the SOSP paper show that, using the balloon driver to swap out
> all pageable memory, it's possible to get the memory footprint of
> quiescent domains down to just 2-3MB. Hence, running 1000 domains
> is already a possibility.

In some sense, swapping all pageable memory out to disk is the flip side
of mmap'ing a disk file in as part of memory (UML adopted this approach?).
Based on mmap, every GuestOS could be assigned some amount of exclusive
memory and still create some *memory on disk* as a complement. The
mmap-based approach has the potential to increase the memory available to
a GuestOS (effectively unlimited, since it is virtual memory backed by
disk?), though that memory is not uniform in terms of access latency and
throughput. The non-uniformity may not be desirable, it might complicate
the implementation, and performance is certain to suffer. But it may
satisfy some *unreasonable* memory demands; honestly, I don't know whether
such applications exist. The motivation is similar to the original virtual
memory idea - emulate memory with a disk file - though here there is
potentially a *two-level* virtual memory. Anyway, the balloon driver has
been proven to be quite effective and achieves the design goal. The
mmap-based idea may seem weird and I might be totally wrong, but I just
want to bring it up. Any criticism and comments are welcome.
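To show what I mean by the mmap idea, here is a toy illustration in plain C
(this is not how Xen or UML actually does it; the file name and size are
arbitrary): back a region of "guest memory" with an ordinary disk file so
the kernel pages it in and out on demand.

/* Toy illustration of file-backed "memory": the kernel pages the region
 * in and out of the backing file on demand, so access latency is
 * non-uniform compared with real RAM. File name and size are arbitrary. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t size = 64 * 1024 * 1024;   /* 64 MB of file-backed "memory" */
    int fd = open("guest-mem.img", O_RDWR | O_CREAT, 0600);

    if (fd < 0 || ftruncate(fd, (off_t)size) < 0) {
        perror("backing file");
        return 1;
    }

    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touching the region faults pages in from the file; dirty pages are
     * eventually written back to disk. */
    memset(mem, 0, size);

    munmap(mem, size);
    close(fd);
    return 0;
}

The two-level behaviour shows up as soon as the working set exceeds what
the host is willing to cache: memory accesses turn into disk I/O.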

> > I have experimented with Xen, creating 10 domains each with 60M on top of
> > one host node with 750M, and afterwards failed to create a new one without
> > destroying existing ones. In some cases, we may want to degrade the
> > performance of newly created domains rather than flatly *reject* the
> > creation of new domains.
>
> Within Xen, it's our view that we want strong admission control
> rather than going down the rat hole of implementing paging within
> the VMM. It's down to domain0 to request other domains to free up
> some memory if it wants to create a new domain.

Just as you mentioned above, memory can be partitioned into two parts, a
private pool and a shared pool. The private pool is reserved for each
individual domain, while the shared pool can be used for shared buffer
cache and swap cache. A best-effort or proportional-share policy can be
applied to the shared pool. Another *weird* thought would be to back the
shared pool with seemingly *unlimited* disk space, at degraded performance.
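As a hypothetical sketch of what such admission control over the private
pool might look like (the struct, the admit_domain() function, and the
numbers are all invented; this is not Xen's actual code or policy), a new
domain would only be admitted while its private reservation still fits,
leaving the shared pool untouched:

/* Hypothetical sketch, not Xen code: admit a new domain only if its
 * private reservation fits in the memory not already reserved, while
 * always keeping a minimum shared pool for buffer/swap cache. */
#include <stdbool.h>
#include <stdio.h>

struct host {
    unsigned long total_kb;       /* physical memory                      */
    unsigned long reserved_kb;    /* sum of private reservations          */
    unsigned long shared_min_kb;  /* shared pool that is never given away */
};

static bool admit_domain(struct host *h, unsigned long want_kb)
{
    unsigned long free_kb = h->total_kb - h->reserved_kb;

    if (want_kb + h->shared_min_kb > free_kb)
        return false;             /* reject rather than page in the VMM */

    h->reserved_kb += want_kb;
    return true;
}

int main(void)
{
    /* roughly the configuration from my experiment: 750 MB host,
     * 60 MB domains, with ~150 MB kept back as a shared pool */
    struct host h = { 750 * 1024, 0, 150 * 1024 };

    for (int i = 1; i <= 12; i++)
        printf("domain %2d (60 MB): %s\n", i,
               admit_domain(&h, 60 * 1024) ? "admitted" : "rejected");
    return 0;
}

On top of that, domain0 could ask existing domains to balloon down before
retrying, instead of rejecting outright.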

The above ideas are just for your reference; they may not be practical and
could even be invalid. I really like the Xen work! It is so solid and cool :-)

Thanks!

Xuxian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
