xen-devel
Re: [Xen-devel] HVM restore broken?
To: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Subject: Re: [Xen-devel] HVM restore broken?
From: Stefan Berger <stefanb@xxxxxxxxxx>
Date: Sat, 27 Jan 2007 11:57:09 -0500
Cc: "Petersson, Mats" <Mats.Petersson@xxxxxxx>, Xen Development Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>, xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20070127135307.GZ15711@xxxxxxxxxxxxxxxxxxxxxx>
I tried suspend/resume with a single-processor Linux guest, and I get the following errors in xend's log file upon resume.
[...]
[2007-01-27 11:44:06 3629] DEBUG (XendDomainInfo:775) Storing domain details: {'console/port': '3', 'name': 'TCG-TEST', 'console/limit': '1048576', 'vm': '/vm/b0aaf4ff-ede8-994d-551c-23da391edbf2', 'domid': '4', 'cpu/0/availability': 'online', 'memory/target': '196608', 'store/port': '2'}
[2007-01-27 11:44:06 3629] INFO (XendCheckpoint:207) restore hvm domain 4, mem=192, apic=1, pae=1
[2007-01-27 11:44:06 3629] DEBUG (XendCheckpoint:226) restore:shadow=0x3, _static_max=0xd0, _static_min=0xc0, nr_pfns=0xc000.
[2007-01-27 11:44:06 3629] DEBUG (balloon:127) Balloon: 258044 KiB free; need 199680; done.
[2007-01-27 11:44:06 3629] DEBUG (XendCheckpoint:236) [xc_restore]: /usr/lib/xen/bin/xc_restore 22 4 49152 2 3 192 1 1
[2007-01-27 11:44:06 3629] INFO (XendCheckpoint:340) xc_hvm_restore:dom=4, nr_pfns=0xc000, store_evtchn=2, *store_mfn=192, console_evtchn=3, *console_mfn=-1208080352, pae=1, apic=1.
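(For reference, the eight arguments handed to xc_restore above line up with the values this xc_hvm_restore line echoes back; treating the first as the save-file descriptor and the sixth as the memory size in MB is my guess:)

    # Apparent mapping of the xc_restore argv logged above; the fd and
    # MB readings are assumptions, the rest match the echoed values.
    argv = ['22',     # file descriptor of the checkpoint image (assumed)
            '4',      # domid          -> dom=4
            '49152',  # nr_pfns        -> 0xc000 (49152 x 4 KiB = 192 MiB)
            '2',      # store_evtchn   -> store_evtchn=2
            '3',      # console_evtchn -> console_evtchn=3
            '192',    # memory in MB (assumed; matches mem=192)
            '1',      # pae            -> pae=1
            '1']      # apic           -> apic=1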
[2007-01-27 11:44:06 3629] INFO (XendCheckpoint:340) xc_hvm_restore start: max_pfn = c000, max_mfn = 3f700, hvirt_start=f5800000, pt_levels=3
[2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) hvm restore:calculate new store_mfn=0xbffe,v_end=0xc000000..
[2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) hvm restore:get nr_vcpus=1.
[2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) Restore exit with rc=0
[2007-01-27 11:44:07 3629] DEBUG (XendCheckpoint:311) store-mfn 49150
[2007-01-27 11:44:07 3629] DEBUG (XendCheckpoint:311) console-mfn -1208080352
[2007-01-27 11:44:07 3629] DEBUG (XendDomainInfo:1558) XendDomainInfo.destroy: domid=4
[2007-01-27 11:44:07 3629] DEBUG (XendDomainInfo:1566) XendDomainInfo.destroyDomain(4)
[2007-01-27 11:44:07 3629] ERROR (XendDomain:1030) Restore failed
Traceback (most recent call last):
File "//usr/lib/python/xen/xend/XendDomain.py", line 1025,
in domain_restore_fd
return XendCheckpoint.restore(self, fd, paused=paused)
File "//usr/lib/python/xen/xend/XendCheckpoint.py", line
243, in restore
raise XendError('Could not read store/console MFN')
XendError: Could not read store/console MFN
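Note that the store MFN came through fine (the recalculated 0xbffe is 49150, matching the "store-mfn 49150" line), while the console MFN is the same -1208080352 that was already printed as *console_mfn before the restore started. Reinterpreted as an unsigned 32-bit value that is 0xb7fe2820, which sits in typical 32-bit Linux userspace address territory, so it looks as if an uninitialized value (or the pointer itself) got logged instead of a real machine frame number, and xend's check then fails. A quick sketch of that reinterpretation (my own snippet, not xend code; max_mfn is the 0x3f700 from the restore log):

    console_mfn = -1208080352          # "console-mfn" from the xend log
    max_mfn = 0x3f700                  # "max_mfn = 3f700" from the same log

    unsigned = console_mfn & 0xFFFFFFFF
    print(hex(unsigned))               # 0xb7fe2820: userspace-address range
    print(unsigned <= max_mfn)         # False -- not a plausible frame number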
In the qemu-dm log file I see the following --- note the 'error 22':
domid: 3
qemu: the number of cpus is 1
qemu_map_cache_init nr_buckets = c00
shared page at pfn:bfff
buffered io page at pfn:bffd
xs_read(): vncpasswd get error. /vm/b0aaf4ff-ede8-994d-551c-23da391edbf2/vncpasswd.
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
suspend sig handler called with requested=0!
device model received suspend signal!
set maxmem returned error 22
cirrus_stop_acc:unset_vram_mapping.
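Error 22 on Linux is EINVAL ("Invalid argument"), i.e. the set-maxmem call is rejecting one of its arguments; the log alone doesn't say which one. A one-liner to confirm the errno mapping:

    import errno, os

    print(errno.errorcode[22])   # 'EINVAL'
    print(os.strerror(22))       # 'Invalid argument'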
'xm dmesg' shows the following (after trying to resume multiple times):
(XEN) HVM S/R Loading "xen_hvm_i8259" instance
0x20
(XEN) HVM S/R Loading "xen_hvm_i8259" instance 0xa0
(XEN) HVM S/R Loading "xen_hvm_ioapic" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_cpu" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_lapic" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_i8254" instance 0x40
(XEN) HVM S/R Loading "xen_hvm_shpage" instance 0x10
(XEN) HVM S/R Loading "xen_hvm_i8259" instance 0x20
(XEN) HVM S/R Loading "xen_hvm_i8259" instance 0xa0
(XEN) HVM S/R Loading "xen_hvm_ioapic" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_cpu" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_lapic" instance 0x0
(XEN) HVM S/R Loading "xen_hvm_i8254" instance 0x40
(XEN) HVM S/R Loading "xen_hvm_shpage" instance 0x10
Stefan
xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 01/27/2007 08:53:07 AM:
> Mats,
>
> at least 32-bit UP Windows on 32-bit HV save/restore works here.
>
> What's your configuration? 32- or 64-bit Windows/HV? UP or SMP?
>
> BTW, can you try with a Linux guest?
>
>
> On Fri, Jan 26, 2007 at 06:41:13PM +0100, Petersson, Mats wrote:
> > I got the latest (13601) yesterday evening. Restore doesn't seem to
> > work with it (at least not with the Windows test image that I've been
> > using for testing previously).
> >
> > The VM restores reasonably OK, but it jumps to an invalid address
> > shortly after restoring, giving a D1 blue-screen error
> > (DRIVER_IRQL_LESS_OR_EQUAL), which turns out to be "page-fault in
> > driver" after I looked at the memory dump in windbg. (The address it
> > jumps to is consistently a0000ca5, if that's of any meaning to anyone.)
> >
> > I've compared the svm.c that I had previously with the current one that
> > I got from mercurial, and they are identical.
> >
> > I went back to my 13568 build of the hypervisor, and it works there...
> > There are no obvious changes in between...
> >
> > Has anyone else tried this? Does anyone have an idea of what's going
> > wrong?
> >
> > --
> > Mats
>
> --
> best rgds,
> edwin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel