xen-users

Re: [Xen-users] Swap space for DomU

To: Luciano Rocha <strange@xxxxxxxxxxxxx>
Subject: Re: [Xen-users] Swap space for DomU
From: Stefan de Konink <skinkie@xxxxxxxxx>
Date: Fri, 28 Dec 2007 15:43:37 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 28 Dec 2007 06:45:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20071228143945.GA19928@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4D47FF57-0316-4049-9198-FC432FD0EDF1@xxxxxxxxxxx> <20071228143945.GA19928@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.9 (X11/20071225)
Luciano Rocha wrote:
> On Fri, Dec 28, 2007 at 08:44:11AM -0500, Matthew Crocker wrote:
>> Should I create a file on Dom0 and assign it to a DomU for use as swap
>> space, or should I just overcommit my system RAM and have Dom0 handle
>> all of the swap?
>
> There is no memory overcommit in Xen.
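(For the quoted question, a minimal sketch of the first option: a file-backed
swap image created on Dom0 and handed to the DomU as its own swap device. The
paths, device names, and sizes below are only examples; a 'phy:' LVM volume
would work the same way.)

  # on Dom0: create and format a 1 GiB swap image
  dd if=/dev/zero of=/var/lib/xen/images/domu1-swap.img bs=1M count=1024
  mkswap /var/lib/xen/images/domu1-swap.img

  # hand it to the guest in its domain config, e.g.:
  #   disk = [ 'file:/var/lib/xen/images/domu1.img,xvda1,w',
  #            'file:/var/lib/xen/images/domu1-swap.img,xvda2,w' ]

  # inside the DomU: enable it (and add it to /etc/fstab)
  swapon /dev/xvda2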

There is in Linux. And if you don't believe it :) try running Bonnie++ in a DomU that has twice as much memory as Dom0, with swap disabled on Dom0 and the filesystem 'hosted' on NFS. Fun for everyone!
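
(One reading of that scenario, roughly sketched: a 1024 MiB DomU against a
512 MiB Dom0 with no swap, and the guest's file-backed disk image sitting on
an NFS mount on Dom0. Host names, paths, and sizes are only placeholders.)

  # on Dom0: drop swap, shrink Dom0 to half the guest's size, and put
  # the guest's disk image on NFS
  swapoff -a
  xm mem-set Domain-0 512
  mount nfsserver:/export/xen /var/lib/xen/images

  # in the DomU's config (illustrative):
  #   memory = 1024
  #   disk   = [ 'file:/var/lib/xen/images/test.img,xvda1,w' ]

  # inside the DomU: bonnie++ defaults to a dataset of twice the guest's
  # RAM, so Dom0 ends up buffering it all over NFS with nowhere to swap
  bonnie++ -d /tmp -u nobody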


Stefan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
