WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Re: [XEN] using shmfs for swapspace

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: [XEN] using shmfs for swapspace
From: Puer Triste <sadlittleboy@xxxxxxxxx>
Date: Sat, 22 Jan 2005 09:49:24 -0500
Delivery-date: Sun, 23 Jan 2005 09:05:21 +0000
Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:reply-to:to:subject:in-reply-to:mime-version:content-type:content-transfer-encoding:references; b=hWCYaabV8CLMYnDSjg/CSPbNg0+dC4oyoDE/rVcYjhm6pOB19FBjXrV9ZUSdnwOI2bMpuXqU6rBYjzIMrD2QfCSbDSpSfV2exO7fmg4QRRGT9rPr1YEGdOG4Rv1WW0Yq5FbTyEvLUHFWiQPLopijxn6ggKU2gbQ6U8o1f+QQmdY=
Envelope-to: xen+James.Bulpin@xxxxxxxxxxxx
In-reply-to: <Pine.LNX.4.61.0501211634380.15744@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <20050102162652.GA12268@xxxxxxxx> <1104785749.13302.26.camel@xxxxxxxxxxxxxxxxxxxxx> <200501040304.10128.maw48@xxxxxxxxxxxx> <200501050111.59072.arnd@xxxxxxxx> <Pine.LNX.4.61.0501211634380.15744@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Reply-to: Puer Triste <sadlittleboy@xxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
I could be wrong, but I think the significance was that on the s390,
the kernel (periodically) gave pages back to the hypervisor, and
requested memory back via the balloon driver only when needed.

I don't know how the balloon driver is implemented here, but in the
past I had wondered whether it would be possible for the kernel to try
to increase memory via the balloon driver before calling the oom
killer. It seems to me that giving memory to the hypervisor when it
isn't needed could be handled in userspace by monitoring
/proc/meminfo, but requesting memory would have to happen within the
kernel in order to make the attempt when there is no memory free but
before the oom killer kicks in.  I was considering trying to implement
a daemon like that in userspace, but I don't think it would be
reliable; it would depend a lot on guesswork to pull in memory before
it was needed.
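To make the idea concrete, here is a rough sketch of the userspace
half of such a daemon: parse /proc/meminfo and decide how much memory
to request from (or return to) the hypervisor. The thresholds and the
parsing helper are my own invention for illustration, not anything in
the actual balloon driver, and a real daemon would write the result to
the balloon driver's control interface rather than print it:

```python
# Sketch of the userspace policy half of a balloon daemon.
# The low/high watermarks (low_kb, high_kb) are hypothetical
# tunables, not values from any real Xen interface.

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])  # first field is kB
    return info

def balloon_adjustment(meminfo, low_kb=16384, high_kb=65536):
    """Return kB to request from (+) or give back to (-) the hypervisor."""
    free = meminfo.get("MemFree", 0)
    if free < low_kb:
        return low_kb - free       # ask the balloon driver for more memory
    if free > high_kb:
        return -(free - high_kb)   # release surplus pages to the hypervisor
    return 0                       # within watermarks, do nothing

sample = "MemTotal: 131072 kB\nMemFree: 4096 kB\nBuffers: 2048 kB"
print(balloon_adjustment(parse_meminfo(sample)))
```

This also illustrates the reliability problem I mentioned: the daemon
only sees MemFree after the fact, so it is guessing at future demand,
whereas a kernel hook could act exactly at the point where an
allocation is about to fail.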

On Fri, 21 Jan 2005 16:37:09 -0500 (EST), Rik van Riel <riel@xxxxxxxxxx> wrote:
> On Wed, 5 Jan 2005, Arnd Bergmann wrote:
> > - Ballooning:
> 
> Xen already has this.  I wonder if it makes sense to
> consolidate the various balloon approaches into a single
> driver, and keep the amount of ballooned memory into
> account when reporting statistics in /proc/meminfo.

-- 
Puer Misellus Triste


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel