xen-devel

Re: [Xen-devel] xm migrate --live fails

On Wed, Jul 27, 2011 at 08:25:24PM +0200, Walter Robert Ditzler wrote:
> pasi,
> 
> why ask about the big problem when even the basics don't work :-)
> save/restore failed too! the output is below.
> 
> thanks walter
> 
> 
> ***
> root@srv-ldeb-xen001:/etc/xen# xm list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1024     4     r-----  39313.5
> server01                                     8  2047     1     -b----    799.8
> server03                                     9   511     1     -b----     55.9
> root@srv-ldeb-xen001:/etc/xen#
> ***
> 
> ***
> root@srv-ldeb-xen001:/etc/xen# xl save 9
> Unable to get config file


You shouldn't mix xm and xl! They cannot be used at the same time.
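
Since the guests here show up under xm, they are managed by xend, so do the
whole save/restore round trip with xm. A minimal sketch (the checkpoint path
is just an example):

***
xm save server03 /var/tmp/server03.chkpt    # suspend the domain and write its state to the file
xm restore /var/tmp/server03.chkpt          # recreate the domain from the saved state
***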


> root@srv-ldeb-xen001:/etc/xen#
> ***
> 
> ***
> root@srv-ldeb-xen001:/etc/xen# xm safe 9
> Error: Subcommand safe not found!

It's "save", not "safe".
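
Also note that "xm save" expects a checkpoint file as well, so the corrected
command would be something like this (the path is just an example):

***
xm save 9 /var/tmp/server03.chkpt
***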


> Usage: xm <subcommand> [args]
> 
> <Domain> can either be the Domain Name or Id.
> For more help on 'xm' see the xm(1) man page.
> For more help on 'xm create' see the xmdomain.cfg(5)  man page.
> 
> For a complete list of subcommands run 'xm help'.
> root@srv-ldeb-xen001:/etc/xen#
> ***
> 


-- Pasi


> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@xxxxxx] 
> Sent: Wednesday, 27 July 2011 18:58
> To: Walter Robert Ditzler
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] xm migrate --live fails
> 
> On Wed, Jul 27, 2011 at 03:21:53PM +0200, Walter Robert Ditzler wrote:
> > hi all,
> > 
> > i would like to do a live migration to another host, which fails.
> > below is the console output.
> > 
> > host    : debian squeeze 6.0.2.1 amd64
> > kernel  : 3.0.0
> > hw      : hp dl 320g5
> > 
> > any clue about this behavior?
> > 
> 
> Does simple save/restore work? 
> 
> -- Pasi
> 
> 
> > by the way, I use it because it is meant to be a test for remus. remus
> > doesn't work either! that console output is below as well.
> > 
> > thanks a lot guys ...
> > 
> > walter
> > 
> > ***
> > root@srv-ldeb-xen001:/# xm list
> > Name                                        ID   Mem VCPUs      State   Time(s)
> > Domain-0                                     0  1024     4     r-----  35269.3
> > server01                                     8  2047     1     -b----    663.5
> > server03                                     9   511     1     -b----     39.8
> > ***
> > 
> > ***
> > root@srv-ldeb-xen001:/# xm migrate --live 9 10.255.255.2
> > Error: timed out
> > Usage: xm migrate <Domain> <Host>
> > 
> > Migrate a domain to another machine.
> > 
> > Options:
> > 
> > -h, --help           Print this help.
> > -l, --live           Use live migration.
> > -p=portnum, --port=portnum
> >                      Use specified port for migration.
> > -n=nodenum, --node=nodenum
> >                      Use specified NUMA node on target.
> > -s, --ssl            Use ssl connection for migration.
> > -c, --change_home_server
> >                      Change home server for managed domains.
> > 
> > root@srv-ldeb-xen001:/# xm migrate --live server03 10.255.255.2
> > Error: timed out
> > Usage: xm migrate <Domain> <Host>
> > 
> > Migrate a domain to another machine.
> > 
> > Options:
> > 
> > -h, --help           Print this help.
> > -l, --live           Use live migration.
> > -p=portnum, --port=portnum
> >                      Use specified port for migration.
> > -n=nodenum, --node=nodenum
> >                      Use specified NUMA node on target.
> > -s, --ssl            Use ssl connection for migration.
> > -c, --change_home_server
> >                      Change home server for managed domains.
> > 
> > root@srv-ldeb-xen001:/#
> > ***
> > 
> > ***
> > root@srv-ldeb-xen001:/# remus --no-net server03 10.41.10.42
> > qemu logdirty mode: enable
> > xc: error: Error when writing to state file (4a) (errno 104) (104 = Connection reset by peer): Internal error
> > qemu logdirty mode: disable
> > PROF: resumed at 1311772859.060909
> > resuming QEMU
> > root@srv-ldeb-xen001:/#
> > ***
> > 
> > ***
> > root@srv-ldeb-xen001:/# xl info
> > host                   : srv-ldeb-xen001
> > release                : 3.0.0
> > version                : #1 SMP Mon Jul 25 03:34:26 CEST 2011
> > machine                : x86_64
> > nr_cpus                : 4
> > nr_nodes               : 1
> > cores_per_socket       : 4
> > threads_per_core       : 1
> > cpu_mhz                : 2128
> > hw_caps                : bfebfbff:20000800:00000000:00000940:0000e3bd:00000000:00000001:00000000
> > virt_caps              : hvm
> > total_memory           : 8190
> > free_memory            : 4477
> > free_cpus              : 0
> > xen_major              : 4
> > xen_minor              : 2
> > xen_extra              : -unstable
> > xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> > xen_scheduler          : credit
> > xen_pagesize           : 4096
> > platform_params        : virt_start=0xffff800000000000
> > xen_changeset          : Fri Jul 22 08:55:19 2011 +0100 23734:42edf1481c57
> > xen_commandline        : placeholder dom0_mem=1024M
> > cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
> > cc_compile_by          : root
> > cc_compile_domain      : local.net
> > cc_compile_date        : Tue Jul 26 03:07:32 CEST 2011
> > xend_config_format     : 4
> > root@srv-ldeb-xen001:/#
> > ***
> > 
> > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
