On Thu, May 5, 2011 at 7:42 AM, Shriram Rajagopalan <rshriram@xxxxxxxxx> wrote:
> On Tue, May 3, 2011 at 2:17 PM, AP Xen <apxeng@xxxxxxxxx> wrote:
>>
>> On Tue, May 3, 2011 at 7:09 AM, Shriram Rajagopalan <rshriram@xxxxxxxxx> wrote:
>> > On Tue, May 3, 2011 at 6:01 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> >>
>> >> On Fri, 2011-04-29 at 20:28 +0100, AP Xen wrote:
>> >> > I am trying to do an "xl save -c" on a Windows 7 Ultimate domain that
>> >> > will leave the domain running at the end of the save operation.
>> >>
>> >> Do you have PV drivers installed that support checkpoint suspends? I'm
>> >> not sure such a thing even exists for Windows.
>> >>
>> >> I'm also not entirely sure that checkpointing was ever supported for
>> >> HVM
>> >> domains without PV drivers (e.g. via emulated hibernation). Perhaps the
>> >> Remus guys know?
>> >>
>> > Remus works with HVM domains via the normal xenstore-based suspend/resume.
>> > Only PV-HVM support is "disabled" for the moment.
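>> > As a very rough sketch of the two suspend paths (this is not the
>> > actual Remus or libxl code, and it assumes you already have libxc and
>> > xenstore handles plus the domid), the contrast looks roughly like:
>> >
>> >   #include <stdio.h>
>> >   #include <string.h>
>> >   #include <stdbool.h>
>> >   #include <xenctrl.h>
>> >   #include <xen/sched.h>   /* SHUTDOWN_suspend */
>> >   #include <xs.h>
>> >
>> >   /* PV (or PV-on-HVM) guest: ask it to suspend itself by writing
>> >    * "suspend" to its xenstore control node; the real toolstack then
>> >    * waits for the guest to acknowledge and actually suspend. */
>> >   static bool suspend_via_xenstore(struct xs_handle *xsh, uint32_t domid)
>> >   {
>> >       char path[64];
>> >       snprintf(path, sizeof(path),
>> >                "/local/domain/%u/control/shutdown", (unsigned int)domid);
>> >       return xs_write(xsh, XBT_NULL, path, "suspend", strlen("suspend"));
>> >   }
>> >
>> >   /* HVM domain without PV drivers: ask Xen to suspend the domain
>> >    * directly instead of going through xenstore. */
>> >   static int suspend_hvm(xc_interface *xch, uint32_t domid)
>> >   {
>> >       return xc_domain_shutdown(xch, domid, SHUTDOWN_suspend);
>> >   }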
>> >>
>> >> [...]
>> >> > At the end of this the domain is frozen. Is this a known issue? Any
>> >> > pointers as to how to debug this? Where does xl pipe its debug
>> >> > messages to?
>> >>
>> >> /var/log/xen/xl-<domname>.log. You can also do "xl -vvv <command>" to
>> >> get some additional debug output.
>> >>
>> > Yes, the logs would be great. Also, by "frozen", do you mean the domain
>> > remains in the "suspended" state, or is Windows itself hung?
>>
>> I'm not sure what the difference is between Windows being suspended and
>> Windows being hung. Here is the xl list output:
>> Name                              ID   Mem VCPUs      State   Time(s)
>> Domain-0                           0  2914     4     r-----    1259.0
>> win7                              15  1019     2     ---ss-       0.3
>>
>> Here is the log:
>> Saving to win7chk new xl format (info 0x0/0x0/255)
>> libxl: debug: libxl_dom.c:378:libxl__domain_suspend_common_callback
>> Calling xc_domain_shutdown on HVM domain
>> libxl: debug: libxl_dom.c:438:libxl__domain_suspend_common_callback
>> wait for the guest to suspend
>> libxl: debug: libxl_dom.c:450:libxl__domain_suspend_common_callback
>> guest has suspended
>> xc: debug: outbuf_write: 4194304 > 90092@16687124
>> xc: debug: outbuf_write: 4194304 > 4169716@12607500
>> [previous line repeated 82 more times]
>> xc: detail: delta 9991ms, dom0 27%, target 0%, sent 863Mb/s, dirtied 0Mb/s 0 pages
>> xc: detail: Total pages sent= 263168 (0.25x)
>> xc: detail: (of which 0 were fixups)
>> xc: detail: All memory is saved
>> xc: detail: Save exit rc=0
>> libxl: debug: libxl_dom.c:534:libxl__domain_save_device_model Saving device model state to /var/lib/xen/qemu-save.15
>> libxl: debug: libxl_dom.c:546:libxl__domain_save_device_model Qemu state is 7204 bytes
>>
>>
> OK, I see an HVM shutdown, but where is the resume?
> Going through the libxl code, one obvious difference I see between xl's
> implementation of save and the old xm implementation is that xl calls
> xc_domain_unpause() while the xm implementation calls xc_domain_resume().
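> To make the difference concrete, here is a minimal sketch of the two
> calls (not the actual libxl or xend code; it assumes an xc_interface
> handle and the domid are already at hand):
>
>   #include <xenctrl.h>
>
>   /* The xm/xend path: xc_domain_resume() completes the suspend, so the
>    * guest's suspend request finishes and the domain keeps running.
>    * fast=1 selects the cooperative resume meant for a domain that has
>    * not actually been moved to another host. */
>   static int resume_after_checkpoint(xc_interface *xch, uint32_t domid)
>   {
>       return xc_domain_resume(xch, domid, 1 /* fast */);
>   }
>
>   /* By contrast, xc_domain_unpause(xch, domid) only clears the pause
>    * flag; a domain still sitting in the suspended shutdown state would
>    * stay frozen, which would be consistent with the ---ss- state in the
>    * xl list output above. */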
>
> Just in case, have you tried the same with "xm save -c"?
> shriram