

RE: [Xen-API] [RFC] Ballooning and live migration

To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>, Jonathan Knowles <Jonathan.Knowles@xxxxxxxxxxxxx>
Subject: RE: [Xen-API] [RFC] Ballooning and live migration
From: Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
Date: Tue, 15 Jun 2010 18:15:56 +0100
Accept-language: en-US
Acceptlanguage: en-US
Delivery-date: Tue, 15 Jun 2010 10:16:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C0E29CF.10407@xxxxxxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <4C0E29CF.10407@xxxxxxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsHFGRfzQC3wFYASBm5vx7THs5czQFl8Q9A
Thread-topic: [Xen-API] [RFC] Ballooning and live migration
Hi George,

> At the moment (and as of the latest XenServer release), when migrating a
> VM, it is first ballooned down to static-min before the migration
> happens.  I don't think this is the right behavior; my sense is that
> there's little or no benefit, and a big potential cost. So I'd like to
> open up a discussion on the topic.
> I don't know the reason this behavior was decided, so I have to guess.
> Two reasons that come to mind are:
> 1. To reduce the network cost of migrating (lower memory -> fewer pages
> transferred)
> 2. To simplify the memory logic

Yeah, I think it was a mix of these.

> #1 I don't think is actually true.  Migration can only happen when the
> disk is on shared storage across a network.  When the balloon driver
> inflates the balloon, dirty pages will be written to disk; and assuming
> the data was somehow useful, they'll be read back in on the other side
> when the balloon is deflated.  So on the whole, the total number of
> pages over the network won't decrease; they'll just switch from
> migration traffic to disk traffic.

I think we hoped that less useful data would be discarded... but I take your 
point that, if the data is useful, then it doesn't save anything :)
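
To put rough numbers on that worst case (purely illustrative figures of my own,
assuming every ballooned-out page turns out to be useful on the destination):

# Back-of-envelope sketch in Python; the sizes are made up, not measurements.
GiB = 1 << 30
target, static_min = 4 * GiB, 1 * GiB

# Balloon-to-static-min-first (current behaviour):
swapped_out = target - static_min   # dirty pages pushed out to shared storage
migrated    = static_min            # pages copied by the migration itself
swapped_in  = swapped_out           # worst case: all of it is read back afterwards
balloon_first = swapped_out + migrated + swapped_in   # 7 GiB over the network

# Migrate as-is (proposed behaviour):
as_is = target                      # 4 GiB over the network

print(balloon_first // GiB, as_is // GiB)   # -> 7 4

So in the useful-data case we can end up moving more bytes, not fewer, just split
between the migration stream and disk traffic.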

> As for #2, I think that the following is simple enough:
> * If free memory on migration target host >= VM memory target, just
> transfer as is.
> * If free memory on migration target host < VM memory target, balloon
> down to free memory.

I think I agree that if the memory is completely free on the target then it's 
sensible to just transfer as-is.
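
Just to check I've understood the proposal, here's a rough sketch (not xapi
code; the helper parameters and the static-min floor are my own additions):

def plan_premigration_target(vm_target, host_free, static_min):
    """Return the memory target (bytes) to balloon the VM to before migrating."""
    if host_free >= vm_target:
        # Enough free memory on the destination: transfer the VM as-is.
        return vm_target
    # Otherwise balloon down to what the destination can take,
    # but never below static-min (my assumption, matching today's floor).
    return max(host_free, static_min)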

If the memory isn't free on the target then we might have a choice about which VMs
to balloon: the VMs already running on the target, the VM being migrated, or both.
Do you have an opinion as to which is better? In the case of VM.start we figure
out what the VM's target would be after it has started and host memory has been
rebalanced, assuming no other VMs appear -- perhaps we could do something similar
for migrate.
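
For illustration, a VM.start-style calculation could look roughly like this (a
sketch only, not the xapi implementation; I'm using the static-min..static-max
range purely for illustration, and it squeezes every VM on the destination,
including the incoming one, proportionally):

def rebalanced_targets(host_total, vms):
    """vms is a list of (static_min, static_max) pairs, including the incoming VM.
    Returns one memory target per VM, summing to at most host_total."""
    total_min = sum(lo for lo, hi in vms)
    total_max = sum(hi for lo, hi in vms)
    if total_max <= host_total:
        return [hi for lo, hi in vms]        # room for everyone's static-max
    if total_min > host_total:
        raise ValueError("host cannot fit these VMs even at static-min")
    # Grant each VM the same fraction of its (static-max - static-min) range.
    frac = (host_total - total_min) / (total_max - total_min)
    return [lo + int(frac * (hi - lo)) for lo, hi in vms]

That would answer the "which VMs to balloon" question by squeezing everything a
little rather than one VM a lot, in the spirit of the existing per-host rebalance.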


> As for the cost: of course ballooning down has a cost, both in terms of
> cpu cycles spent, and dirty pages transferred over the network; and
> there's the cost on the other side of re-filling any pages that were
> swapped out.  Furthermore, if the working set is greater than
> static-min, then ballooning down will cause the VM to thrash for the
> period of time while it's migrating, which is much worse than having a
> slightly longer migration time.
> Thoughts?
>  -George
> _______________________________________________
> xen-api mailing list
> xen-api@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/mailman/listinfo/xen-api
