This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Live migration?

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Live migration?
From: Daniel Nielsen <djn@xxxxxxxxxx>
Date: Fri, 15 Sep 2006 14:10:59 +0200
Delivery-date: Fri, 15 Sep 2006 05:11:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbYwAETP10DSkSzEduXXwAUUWMFRg==
Thread-topic: Live migration?
User-agent: Microsoft-Entourage/

We are currently migrating our production servers to Xen, version
3.0.2-2, but we are having problems with the live-migration feature.

Our setup is this:

We run Debian stable (sarge), with selected packages from backports.org. Our
glibc is patched to be "Xen-friendly". In our test setup, we have two dom0s,
both netbooting from a central NFS/tftpboot server, i.e. not storing anything
locally. Both dom0s have two Ethernet ports: eth0 is used by the dom0, and
eth1 is bridged to Xen.

Our domUs also use an NFS root, also Debian sarge, and they use the same
kernel. They have no ties to the local machine except for network access;
they do not mount any local drives or files as drives. Everything runs
exclusively over NFS and in RAM.

When migrating a machine (our dom0s are named after fictional planets, and
virtual machines after fictional spaceships):

geonosis:/ root# xm migrate --live serenity lv426

the command just hangs.

A machine called serenity pops up on lv426:

lv426:/ root# xm list
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      128     4 r----- 21106.6
serenity                           8     2048     1 --p---     0.0
lv426:/ root# 
But nothing happens.

If we migrate a domU with less memory, e.g. 256 MiB, it works without a
hitch. If we migrate a domU with e.g. 512 MiB, it sometimes works and other
times it doesn't. But for domUs with 2 GiB of RAM, it consistently fails.

In the above example, if we wait several hours, serenity will stop
responding, and geonosis will be left with:

geonosis:/ root# xm list
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      128     4 r----- 21106.6
Zombie-serenity                    8      2048    2 -----d  3707.8
geonosis:/ root# 
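For anyone hitting the same thing, a small sketch of how we spot these
leftovers (just parsing the "xm list" output shown above; the cleanup
advice is our assumption from Xen 3.0.x behaviour, not something we have
verified against every version):

```shell
# List leftover zombie domains after a failed migration.
# "xm list" prints one row per domain: Name, ID, Mem(MiB), VCPUs, State, Time.
# Zombies keep the "Zombie-" name prefix and show "d" (dying) in the State column.
xm list | awk '$1 ~ /^Zombie-/ {print $1, "id=" $2}'
# In our experience "xm destroy <id>" does not always reclaim a zombie;
# sometimes only a dom0 reboot does (assumption, not confirmed for 3.0.2-2).
```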

I have attached the relevant entries from the xend.log files from both
geonosis and lv426.

I hope somebody is able to clear up what we are missing.

I noticed in geonosis.log that it wants 2057 MiB; I'm unsure what that
figure refers to.
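One guess on our side (an assumption, not a confirmed diagnosis): 2057 MiB
would be the 2048 MiB domU plus some per-domain overhead, and the migration
might stall if the destination dom0 cannot free that much. A quick check we
could run on lv426 before migrating; "xm info" on Xen 3.x reports
free_memory in MiB:

```shell
# Check whether the destination has enough free memory for the incoming domU.
# "needed" is the figure from geonosis.log (2048 MiB domU plus overhead).
needed=2057
free=$(xm info | awk '/^free_memory/ {print $3}')
if [ "$free" -lt "$needed" ]; then
    echo "only ${free} MiB free; ${needed} MiB needed -- migration may stall"
fi
```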

Attachment: lv426.log
Description: Binary data

Attachment: geonosis.log
Description: Binary data

Xen-users mailing list