My Windows problem is definitely with a 64-bit HV + 32-bit guest, and
none of Stefan's reported error messages appear in my log either.
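
For what it's worth, the console-mfn in Stefan's log (-1208080352,
i.e. 0xb7fe2820 viewed as an unsigned 32-bit value) is far above his
max_mfn of 0x3f700, so it looks like a value the restore path never
wrote rather than a real frame number. A trivial sanity check along
those lines (hypothetical helper, not anything in the tree):

def plausible_mfn(mfn, max_mfn):
    # view the (possibly sign-extended) value as unsigned 32-bit
    return 0 < (mfn & 0xffffffff) < max_mfn

print(plausible_mfn(-1208080352, 0x3f700))  # -> False
print(plausible_mfn(49150, 0x3f700))        # -> True (the store-mfn)
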
--
Mats
> -----Original Message-----
> From: Zhai, Edwin [mailto:edwin.zhai@xxxxxxxxx]
> Sent: 30 January 2007 11:33
> To: Stefan Berger
> Cc: Zhai, Edwin; Petersson, Mats; Xen Development Mailing List;
> xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] HVM restore broken?
>
> Stefan,
>
> You are running 32-bit Linux on a 64-bit HV, right?
>
> I can reproduce this bug with that combination on the latest
> changeset, but without the console MFN and "set maxmem" error
> messages.
>
> I suspect a memory restore issue here.
>
> thanks,
>
> On Sat, Jan 27, 2007 at 11:57:09AM -0500, Stefan Berger wrote:
> > I tried suspend/resume with a Linux guest with 1 processor.
> >
> > I get the following errors in xend's log file upon resume.
> >
> > [...]
> > [2007-01-27 11:44:06 3629] DEBUG (XendDomainInfo:775) Storing domain details: {'console/port': '3', 'name': 'TCG-TEST', 'console/limit': '1048576', 'vm': '/vm/b0aaf4ff-ede8-994d-551c-23da391edbf2', 'domid': '4', 'cpu/0/availability': 'online', 'memory/target': '196608', 'store/port': '2'}
> > [2007-01-27 11:44:06 3629] INFO (XendCheckpoint:207) restore hvm domain 4, mem=192, apic=1, pae=1
> > [2007-01-27 11:44:06 3629] DEBUG (XendCheckpoint:226) restore:shadow=0x3, _static_max=0xd0, _static_min=0xc0, nr_pfns=0xc000.
> > [2007-01-27 11:44:06 3629] DEBUG (balloon:127) Balloon: 258044 KiB free; need 199680; done.
> > [2007-01-27 11:44:06 3629] DEBUG (XendCheckpoint:236) [xc_restore]: /usr/lib/xen/bin/xc_restore 22 4 49152 2 3 192 1 1
> > [2007-01-27 11:44:06 3629] INFO (XendCheckpoint:340) xc_hvm_restore:dom=4, nr_pfns=0xc000, store_evtchn=2, *store_mfn=192, console_evtchn=3, *console_mfn=-1208080352, pae=1, apic=1.
> > [2007-01-27 11:44:06 3629] INFO (XendCheckpoint:340) xc_hvm_restore start: max_pfn = c000, max_mfn = 3f700, hvirt_start=f5800000, pt_levels=3
> > [2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) hvm restore:calculate new store_mfn=0xbffe,v_end=0xc000000..
> > [2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) hvm restore:get nr_vcpus=1.
> > [2007-01-27 11:44:07 3629] INFO (XendCheckpoint:340) Restore exit with rc=0
> > [2007-01-27 11:44:07 3629] DEBUG (XendCheckpoint:311) store-mfn 49150
> > [2007-01-27 11:44:07 3629] DEBUG (XendCheckpoint:311) console-mfn -1208080352
> > [2007-01-27 11:44:07 3629] DEBUG (XendDomainInfo:1558) XendDomainInfo.destroy: domid=4
> > [2007-01-27 11:44:07 3629] DEBUG (XendDomainInfo:1566) XendDomainInfo.destroyDomain(4)
> > [2007-01-27 11:44:07 3629] ERROR (XendDomain:1030) Restore failed
> > Traceback (most recent call last):
> >   File "//usr/lib/python/xen/xend/XendDomain.py", line 1025, in domain_restore_fd
> >     return XendCheckpoint.restore(self, fd, paused=paused)
> >   File "//usr/lib/python/xen/xend/XendCheckpoint.py", line 243, in restore
> >     raise XendError('Could not read store/console MFN')
> > XendError: Could not read store/console MFN
> >
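> > The console-mfn above is negative, which would explain the
> > XendError: if xend matches the restore helper's output against an
> > unsigned-only pattern, that line never matches and the MFN looks
> > as if it was never reported. A minimal sketch of that failure mode
> > (hypothetical code, not the actual XendCheckpoint.py source):
> >
> > import re
> >
> > PATTERN = re.compile(r"^(store-mfn|console-mfn) (\d+)$")
> >
> > mfns = {}
> > for line in ["store-mfn 49150", "console-mfn -1208080352"]:
> >     m = PATTERN.match(line)
> >     if m:  # the negative console-mfn never gets here
> >         mfns[m.group(1)] = int(m.group(2))
> >
> > if "console-mfn" not in mfns:
> >     print("Could not read store/console MFN")
> >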
> > In the qemu-dm log file I see the following --- note the 'error 22':
> >
> > domid: 3
> > qemu: the number of cpus is 1
> > qemu_map_cache_init nr_buckets = c00
> > shared page at pfn:bfff
> > buffered io page at pfn:bffd
> > xs_read(): vncpasswd get error. /vm/b0aaf4ff-ede8-994d-551c-23da391edbf2/vncpasswd.
> > I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
> > suspend sig handler called with requested=0!
> > device model received suspend signal!
> > set maxmem returned error 22
> > cirrus_stop_acc:unset_vram_mapping.
> >
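> > Error 22 is EINVAL, presumably returned by the hypercall behind
> > qemu-dm's attempt to raise the domain's maxmem for the Cirrus VRAM
> > mapping (I'm guessing at the call site). Quick check of the errno
> > value:
> >
> > import errno
> > print(errno.errorcode[22])  # -> 'EINVAL' on Linux
> >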
> > 'xm dmesg' shows the following (after trying to resume multiple times):
> >
> > (XEN) HVM S/R Loading "xen_hvm_i8259" instance 0x20
> > (XEN) HVM S/R Loading "xen_hvm_i8259" instance 0xa0
> > (XEN) HVM S/R Loading "xen_hvm_ioapic" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_cpu" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_lapic" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_i8254" instance 0x40
> > (XEN) HVM S/R Loading "xen_hvm_shpage" instance 0x10
> > (XEN) HVM S/R Loading "xen_hvm_i8259" instance 0x20
> > (XEN) HVM S/R Loading "xen_hvm_i8259" instance 0xa0
> > (XEN) HVM S/R Loading "xen_hvm_ioapic" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_cpu" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_lapic" instance 0x0
> > (XEN) HVM S/R Loading "xen_hvm_i8254" instance 0x40
> > (XEN) HVM S/R Loading "xen_hvm_shpage" instance 0x10
> >
> > Stefan
> >
> > xen-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 01/27/2007 08:53:07 AM:
> >
> > > Mats,
> > >
> > > At least 32-bit UP Windows on a 32-bit HV save/restore works here.
> > >
> > > What's your configuration? 32- or 64-bit Windows/HV? UP or SMP?
> > >
> > > BTW, can you try with a Linux guest?
> > >
> > >
> > > On Fri, Jan 26, 2007 at 06:41:13PM +0100, Petersson, Mats wrote:
> > > > I got the latest (13601) yesterday evening. This doesn't seem
> > > > to work for Restore (at least of the Windows test-image that
> > > > I've been using for testing previously).
> > > >
> > > > The VM restores reasonably OK, but it jumps to an invalid
> > > > address shortly after restoring, giving a D1 blue-screen error
> > > > (DRIVER_IRQL_NOT_LESS_OR_EQUAL), which turns out to be
> > > > "page-fault in driver" after I looked at the memory dump in
> > > > windbg. (The address it jumps to is consistently a0000ca5, if
> > > > that's of any meaning to anyone.)
> > > >
> > > > I've compared the svm.c that I had previously with the current
> > > > one that I got from mercurial, and they are identical.
> > > >
> > > > I went back to my 13568 build of the hypervisor, and it works
> > > > there... There are no obvious changes in between...
> > > >
> > > > Has anyone else tried this? Does anyone have an idea of what's
> > > > going wrong?
> > > >
> > > > --
> > > > Mats
> > > >
> > >
> > > --
> > > best rgds,
> > > edwin
> > >
>
> --
> best rgds,
> edwin
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel