Hi, all!
To start with: 'xm save' is supposed to work for 32-bit HVM domUs on a
64-bit dom0, right?
I'm having serious problems doing 'xm save' for a 32-bit HVM domU
running RHEL4.
I've tried on two different servers (Xen 3.0.3 and 3.3.1) that have no
problems saving other domUs.
Running the command 'xm save xenLinux10 /opt/backup/xenLinux10.save'
just hangs until interrupted by CTRL-C.
Output in xend.log:
[2009-04-17 10:34:10 4070] DEBUG (XendDomainInfo:1359) Storing domain
details: {'console/port': '3', 'name': 'migrating-xenLinux10',
'console/limit': '1048576', 'store/port': '2', 'vm':
'/vm/67e1abcc-b41e-486e-ef29-4b7fac2386cd', 'domid': '4',
'image/suspend-cancel': '1', 'cpu/0/availability': 'online',
'memory/target': '3145728',
'control/platform-feature-multiprocessor-suspend': '1',
'store/ring-ref': '786429', 'console/type': 'ioemu'}
[2009-04-17 10:34:10 4070] DEBUG (XendCheckpoint:103) [xc_save]:
/usr/lib64/xen/bin/xc_save 50 4 0 0 4
[2009-04-17 10:34:10 4070] DEBUG (XendCheckpoint:374) suspend
[2009-04-17 10:34:10 4070] DEBUG (XendCheckpoint:106) In
saveInputHandler suspend
[2009-04-17 10:34:10 4070] DEBUG (XendCheckpoint:108) Suspending 4 ...
[2009-04-17 10:34:10 4070] DEBUG (XendDomainInfo:494)
XendDomainInfo.shutdown(suspend)
[2009-04-17 10:34:10 4070] INFO (XendCheckpoint:403) xc_save: could not
read suspend event channel
[2009-04-17 10:34:10 4070] INFO (XendCheckpoint:403) xc_save: suspend
event channel initialization failed, using slow path
[2009-04-17 10:34:10 4070] DEBUG (XendDomainInfo:1443)
XendDomainInfo.handleShutdownWatch
[2009-04-17 10:34:10 4070] DEBUG (XendDomainInfo:1443)
XendDomainInfo.handleShutdownWatch
Final lines from an strace of the 'xm save xenLinux10
/opt/backup/xenLinux10.save' command:
access("/opt/backup", W_OK) = 0
futex(0x1e32970, FUTEX_WAKE, 1) = 0
futex(0x1e32970, FUTEX_WAKE, 1) = 0
futex(0x1e32970, FUTEX_WAKE, 1) = 0
socket(PF_FILE, SOCK_STREAM, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/xend/xmlrpc.sock"}, 27) = 0
sendto(3, "POST /RPC2 HTTP/1.0\r\nHost: \r\nUse"..., 132, 0, NULL, 0) = 132
sendto(3, "<?xml version='1.0'?>\n<methodCal"..., 300, 0, NULL, 0) = 300
recvfrom(3,
Output in xend-debug.log that appears *after* the domU is rebooted:
(Rebooting it is the only way I've found to get it out of the
'migrating-domU' state.)
Traceback (most recent call last):
File "/usr/lib64/python2.4/SocketServer.py", line 463, in
process_request_thread
self.finish_request(request, client_address)
File "/usr/lib64/python2.4/SocketServer.py", line 254, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib64/python2.4/site-packages/xen/util/xmlrpclib2.py", line
105, in <lambda>
(lambda x, y, z:
File "/usr/lib64/python2.4/site-packages/xen/util/xmlrpclib2.py", line
65, in __init__
server)
File "/usr/lib64/python2.4/SocketServer.py", line 521, in __init__
self.handle()
File "/usr/lib64/python2.4/BaseHTTPServer.py", line 316, in handle
self.handle_one_request()
File "/usr/lib64/python2.4/BaseHTTPServer.py", line 310, in
handle_one_request
method()
File "/usr/lib64/python2.4/site-packages/xen/util/xmlrpclib2.py", line
82, in do_POST
self.send_response(200)
File "/usr/lib64/python2.4/BaseHTTPServer.py", line 367, in send_response
self.wfile.write("%s %d %s\r\n" %
File "/usr/lib64/python2.4/socket.py", line 256, in write
self.flush()
File "/usr/lib64/python2.4/socket.py", line 243, in flush
self._sock.sendall(buffer)
error: (32, 'Broken pipe')
It looks like xm is hanging while waiting for input on a socket. (There is
working socket communication earlier in the strace, but I left that out here.)
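The final "Broken pipe" in the traceback is consistent with the CTRL-C: once the xm client's end of /var/run/xend/xmlrpc.sock is gone, xend's later send_response() write fails with EPIPE. A minimal stand-alone sketch of that failure mode (plain Python sockets, not xend code; the socketpair merely stands in for the xm/xend UNIX socket):

```python
import errno
import socket

# Stand-in for the xm <-> xend UNIX socket: close the "client" end
# to simulate xm being interrupted with CTRL-C before the reply.
server_end, client_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
client_end.close()

raised = None
try:
    # A first write can be absorbed by kernel buffers, so write a few
    # times until the kernel reports the peer is gone.
    for _ in range(10):
        server_end.sendall(b"HTTP/1.0 200 OK\r\n")
except OSError as exc:
    raised = exc
finally:
    server_end.close()

assert raised is not None
assert raised.errno in (errno.EPIPE, errno.ECONNRESET)
print("write failed with:", errno.errorcode[raised.errno])
```

So the traceback itself is only fallout from interrupting the hung xm, not the root cause of the hang.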
Does anyone have any ideas why this is happening? Should I file a bug report?
I have tried the following combinations:
Server1: CentOS5.2 x86_64 kernel 2.6.18-92.1.10.el5xen (Xen 3.0.3,
default CentOS version)
Server2: CentOS5.2 x86_64 kernel 2.6.18-128.1.6.el5xen (Xen 3.3.1,
installed from Gitco repo)
domU1: RHEL4 i386 HVM. 'xm save' hangs on both servers.
domU2: CentOS5.3 i386 HVM. 'xm save' works on Server2. Not tested on
Server1.
domU3: CentOS5.2 x86_64 PV. 'xm save' works on both servers.
Regards,
Patrik
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users