xen-devel

RE: [Xen-devel] [RFC][PATCH] 0/9 Populate-on-demand memory

To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC][PATCH] 0/9 Populate-on-demand memory
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Wed, 24 Dec 2008 07:54:24 -0800 (PST)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <de76405a0812240713q4bf99ce4p50bd27aab29ca537@xxxxxxxxxxxxxx>
> On Wed, Dec 24, 2008 at 2:32 PM, Dan Magenheimer
> <dan.magenheimer@xxxxxxxxxx> wrote:
> > Yes, it's just that with your fix, Windows VM users are much more
> > likely to use memory overcommit and will need to be "trained" to
> > always configure a swap disk to ensure bad things don't happen.
> > And this swap disk had better be on a network-based medium or
> > live migration won't work.
> 
> You mean they may be much more likely to under-provision memory to
> their VMs, booting with (say) 64M on the assumption that they can
> balloon it up to 512M if they want to?  That seems rather unlikely to
> me... if they're not likely to start a Windows VM with 64M normally,
> why would they be more likely to start with 64M now?  I'd've thought
> it would be likely to go the other way: if they normally boot a guest
> with 256M, they can now start with maxmem=1G and memory=256M, and
> balloon it up if they want.

What I mean is that now that they CAN start with memory=256M and
maxmem=1G, it is much more likely that ballooning and memory
overcommit will be used, possibly hidden by vendors' tools.

Once ballooning is used at all, memory can not only go above
the starting memory= threshold but can also drop below it.

Thus, your patch will make it more likely that "memory pressure"
will be dynamically applied to Windows VMs, which means swapping
is more likely to occur, and that in turn means there had better
be a properly-sized swap disk.

For example, on a 2GB system, a reasonable configuration might be:

Windows VM1: memory=256M maxmem=1G
Windows VM2: memory=256M maxmem=1G
Windows VM3: memory=256M maxmem=1G
Windows VM4: memory=256M maxmem=1G
(dom0_mem=256M, Xen+heap=256M for the sake of argument)
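
In actual xm config-file terms, each of those guests would look
roughly like the sketch below (the file name, disk paths and the
non-memory settings are made up here, just to illustrate the
memory=/maxmem= split and the swap disk on shared storage):

  # /etc/xen/winvm1.cfg -- illustrative sketch only
  name    = "WindowsVM1"
  builder = "hvm"
  kernel  = "/usr/lib/xen/boot/hvmloader"
  memory  = 256     # starting allocation (MB)
  maxmem  = 1024    # upper bound the guest can be ballooned up to (MB)
  vcpus   = 1
  boot    = "c"
  vif     = [ "bridge=xenbr0" ]
  # both disks on shared storage so live migration still works;
  # the second one is the properly-sized swap disk
  disk    = [ "phy:/dev/shared-vg/winvm1-root,hda,w",
              "phy:/dev/shared-vg/winvm1-swap,hdb,w" ]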

Assume that VM1 and VM2 are heavily loaded and VM3 and VM4
are idle (or nearly so).  So VM1 and VM2 are ballooned up
towards 1G by taking memory away from VM3 and VM4.  Say
VM3 and VM4 are ballooned down to about 128M each.  Now
VM3 and VM4 suddenly get loaded and need more memory.
But VM1 and VM2 are hesitant to surrender memory because what
they have is fully utilized.  SOME VM is going to have to start
swapping!
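
Spelling out the arithmetic (rounded figures, purely for
illustration):

  total RAM                        2048M
  dom0 + Xen/heap                 - 512M
  left for the four guests         1536M

  VM3 + VM4 ballooned to 128M     - 256M
  left for VM1 + VM2               1280M  (~640M each)

If VM3 and VM4 then need even their original 256M apiece back,
that memory has to come out of VM1 and VM2, or out of somebody's
swap disk.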

So, I'm just saying that your patch makes this kind of
scenario more likely, and that listing the need for a swap disk
in your README would be a good idea.
