xen-devel

Re: [Xen-devel] About resource reclamation

To: Xuxian Jiang <jiangx@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] About resource reclamation
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Sat, 11 Oct 2003 23:43:08 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
In-reply-to: Your message of "Sat, 11 Oct 2003 16:35:35 CDT." <Pine.GSO.4.58.0310111617330.19481@xxxxxxxxxxxxxxxxxx>
> Another idea related to this is to provide one memory pool, which
> can be shared across multiple domains according to some policy,
> like proportional sharing. A quick check of Xen shows that it
> provides *very* strong memory protection via prior reservation.
> Such clear-cut *slicing* of memory may not achieve overall
> optimal performance.

Our intention is to expose real resources to domains and have
them optimise their behaviour accordingly. If you are `charging'
domains for the memory they use, it's in their interests to use
the balloon driver to try to reduce their memory footprint and
give pages back to the system, or to buy more pages if they can
usefully exploit them. Our intention would be to run the system
at something like 75% memory utilization, making the remaining
memory available for use as a shared buffer cache and swap cache
(on a proportional-share basis).
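
Roughly, the guest side of this looks like the following sketch
in C. The two hypercalls and the guest allocator hooks here are
hypothetical stand-ins rather than the real interface; the point
is just the mechanism: the guest pins pages it can spare and
trades the underlying machine frames with the system.

#include <stddef.h>

#define BALLOON_MAX 4096

extern void *guest_alloc_page(void);      /* guest page allocator (assumed)           */
extern void  guest_free_page(void *page);
extern int   hyp_return_page(void *page); /* hypothetical: give frame back to system  */
extern int   hyp_claim_page(void *page);  /* hypothetical: buy a frame from system    */

static void  *balloon[BALLOON_MAX];       /* pages currently pinned by the balloon    */
static size_t balloon_size;

/* Shrink the domain's footprint by up to n pages. */
static size_t balloon_inflate(size_t n)
{
    size_t done = 0;
    while (done < n && balloon_size < BALLOON_MAX) {
        void *page = guest_alloc_page();  /* take a page away from the guest */
        if (page == NULL)
            break;
        if (hyp_return_page(page) != 0) { /* hand its frame to the system    */
            guest_free_page(page);
            break;
        }
        balloon[balloon_size++] = page;
        done++;
    }
    return done;
}

/* Grow the footprint again when the domain can usefully exploit it. */
static size_t balloon_deflate(size_t n)
{
    size_t done = 0;
    while (done < n && balloon_size > 0) {
        void *page = balloon[balloon_size - 1];
        if (hyp_claim_page(page) != 0)    /* buy the frame back from the system */
            break;
        balloon_size--;
        guest_free_page(page);            /* page is usable by the guest again  */
        done++;
    }
    return done;
}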
 
> Another question relates to the scalability of the Xen approach.
> I believe the most limiting factor for scalability would be the
> amount of memory available. Given a fixed memory size, would it
> be possible to emulate GuestOS memory with disk files via an
> mmap-like mechanism? It is not necessary that the whole GuestOS
> memory be emulated; even partial emulation could provide a
> *nice* and *desirable* tradeoff between performance and
> scalability.

Our goal was to have Xen comfortably support 100 domains. Results
in the SOSP paper show that by using the balloon driver to swap
out all pageable memory, it's possible to get the memory
footprint of quiescent domains down to just 2-3MB. Hence, running
1000 domains is already a possibility.
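
As a back-of-envelope check (the 2-3MB per-domain figure is from
the paper; the host size and overhead below are made-up
illustrative numbers):

#include <stdio.h>

int main(void)
{
    unsigned host_mb = 4096;  /* assumed total host memory (illustrative)      */
    unsigned xen_mb  = 48;    /* assumed Xen + domain0 overhead (illustrative) */
    unsigned dom_mb  = 3;     /* quiescent-domain footprint, per the paper     */

    /* ~1349 quiescent domains at these numbers */
    printf("~%u quiescent domains\n", (host_mb - xen_mb) / dom_mb);
    return 0;
}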

> I have experimented with Xen, creating 10 domains each with 60MB
> on one host node with 750MB, and afterwards failed to create a
> new domain without destroying existing ones. In some cases, we
> may want to degrade the performance of newly created domains
> rather than simply *rejecting* the creation of new domains.

Within Xen, it's our view that we want strong admission control
rather than going down the rat hole of implementing paging within
the VMM. It's down to domain0 to request that other domains free
up some memory if it wants to create a new domain.
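
In pseudo-C, the domain0 policy amounts to something like the
sketch below. Every function name here is a hypothetical stand-in
for the control interface; the point is that a shortfall is
resolved by asking existing domains to balloon down, and that
creation is refused outright rather than papered over with paging
in the VMM.

extern unsigned long vmm_free_kb(void);     /* hypothetical: free machine memory  */
extern int ndomains(void);                  /* hypothetical: running domain count */
extern int ask_domain_to_balloon(int dom, unsigned long kb);  /* hypothetical     */
extern int create_domain(unsigned long kb); /* hypothetical domain builder        */

int admit_new_domain(unsigned long need_kb)
{
    unsigned long free_kb = vmm_free_kb();

    if (free_kb < need_kb) {
        /* Ask each existing domain to free an equal share of the
         * shortfall via its balloon driver, then re-check.
         * domain0 always exists, so doms >= 1. */
        unsigned long shortfall = need_kb - free_kb;
        int doms = ndomains();
        for (int d = 1; d <= doms; d++)
            ask_domain_to_balloon(d, (shortfall + doms - 1) / doms);
        free_kb = vmm_free_kb();
    }

    if (free_kb < need_kb)
        return -1;  /* strong admission control: reject, don't page */

    return create_domain(need_kb);
}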


Ian



