Now I'm seeing the same thing, but on vector 0x93 instead. There is
nothing registered on that vector. It appears that when Xen restores my
domain, an interrupt line is getting 'stuck' somehow, as the hang occurs
as soon as I re-enable interrupts after the restore... any suggestions?
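
For what it's worth, this is roughly how I'm tallying the injections
(a quick and dirty sketch; it assumes the plain-text output of
xentrace_format on stdin, one record per line, with the vector printed
as "vector = 0xNN"):

/* inj_count.c - tally INJ_VIRQ vectors in xentrace_format output.
 * Build: gcc -o inj_count inj_count.c
 * Use:   xentrace_format formats < trace.raw | ./inj_count */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void)
{
    char line[512];
    unsigned long counts[256] = { 0 };
    int v;

    while (fgets(line, sizeof(line), stdin)) {
        char *p;

        if (strstr(line, "INJ_VIRQ") == NULL)
            continue;
        p = strstr(line, "vector =");
        if (p == NULL)
            continue;
        /* strtoul with base 0 accepts the 0x prefix. */
        counts[strtoul(p + 8, NULL, 0) & 0xff]++;
    }

    for (v = 0; v < 256; v++)
        if (counts[v] != 0)
            printf("vector 0x%02x: %lu injections\n", v, counts[v]);

    return 0;
}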
Can anyone confirm that "INJ_VIRQ [ dom:vcpu = 0x00000062, vector =
0x83, fake = 0 ]" does actually mean that an interrupt is being injected
into my DomU, and that the vector is the actual offset into the guest's
vector table?
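
For reference, my (possibly wrong) mental model of what that event
amounts to: both VMX's VM-entry interruption-information field and
SVM's EVENTINJ have the same basic shape, and the low byte is the guest
IDT vector, which is why I'm assuming it really is an offset into the
vector table:

/* Sketch only - not the actual Xen code, just an illustration of the
 * injection field layout from the Intel/AMD manuals: bits 7:0 = vector,
 * bits 10:8 = type (0 == external interrupt), bit 31 = valid. */
#define EVENT_VALID        (1u << 31)
#define EVENT_TYPE_EXTINT  (0u << 8)

extern void write_injection_field(unsigned int event); /* hypothetical */

static void inject_guest_interrupt(unsigned int vector)
{
    unsigned int event = EVENT_VALID | EVENT_TYPE_EXTINT | (vector & 0xff);

    /* write_injection_field() is a hypothetical stand-in for the
     * vmwrite/VMCB store.  On the next VMENTRY the CPU delivers the
     * event through the guest's IDT at 'vector', exactly as if the
     * line had fired in hardware. */
    write_injection_field(event);
}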
Thanks
James
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of James Harper
> Sent: Tuesday, 10 February 2009 12:46
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-devel] hang on restore in 3.3.1
>
> I am having problems with save/restore under 3.3.1 in the GPLPV
> drivers. I call hvm_shutdown(xpdd, SHUTDOWN_suspend), but as soon as I
> lower IRQL (enabling interrupts), qemu goes to 100% CPU and the DomU
> load goes right up too.
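>
> The failing sequence, stripped to a sketch (as I understand it,
> hvm_shutdown() ends up issuing a SCHEDOP_shutdown hypercall with
> reason SHUTDOWN_suspend, per Xen's public/sched.h):
>
>     KIRQL old_irql;
>
>     /* Quiesce, suspend, resume: hvm_shutdown() only returns once
>      * the domain has been restored on the far side. */
>     KeRaiseIrql(HIGH_LEVEL, &old_irql);
>     hvm_shutdown(xpdd, SHUTDOWN_suspend);
>
>     /* Re-enabling interrupt delivery is the point where qemu goes
>      * to 100% CPU and the DomU load climbs. */
>     KeLowerIrql(old_irql);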
>
> Xentrace is showing a whole lot of this going on:
>
>
> CPU0  200130258143212 (+    770)  hypercall  [ rip = 0x000000008020632a, eax = 0xffffffff ]
> CPU0  200130258151107 (+   7895)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258156293 (+   5186)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258161233 (+   4940)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258165467 (+   4234)  hypercall  [ rip = 0x000000008020640a, eax = 0xffffffff ]
> CPU0  200130258167202 (+   1735)  domain_wake  [ domid = 0x00000062, edomid = 0x00000000 ]
> CPU0  200130258168511 (+   1309)  switch_infprev  [ old_domid = 0x00000000, runtime = 31143 ]
> CPU0  200130258168716 (+    205)  switch_infnext  [ new_domid = 0x00000062, time = 786, r_time = 30000000 ]
> CPU0  200130258169338 (+    622)  __enter_scheduler  [ prev<domid:edomid> = 0x00000000 : 0x00000000, next<domid:edomid> = 0x00000062 : 0x00000000 ]
> CPU0  200130258175532 (+   6194)  VMENTRY  [ dom:vcpu = 0x00000062 ]
> CPU0  200130258179633 (+   4101)  VMEXIT  [ dom:vcpu = 0x00000062, exitcode = 0x0000004e, rIP = 0x0000000080a562b9 ]
> CPU0  0 (+ 0)  MMIO_AST_WR  [ address = 0xfee000b0, data = 0x00000000 ]
> CPU0  0 (+ 0)  PF_XEN  [ dom:vcpu = 0x00000062, errorcode = 0x0b, virt = 0xfffe00b0 ]
> CPU0  0 (+ 0)  INJ_VIRQ  [ dom:vcpu = 0x00000062, vector = 0x00, fake = 1 ]
> CPU0  200130258185932 (+   6299)  VMENTRY  [ dom:vcpu = 0x00000062 ]
> CPU0  200130258189737 (+   3805)  VMEXIT  [ dom:vcpu = 0x00000062, exitcode = 0x00000064, rIP = 0x0000000080a560ad ]
> CPU0  0 (+ 0)  INJ_VIRQ  [ dom:vcpu = 0x00000062, vector = 0x83, fake = 0 ]
> CPU0  200130258190990 (+   1253)  VMENTRY  [ dom:vcpu = 0x00000062 ]
> CPU0  200130258194791 (+   3801)  VMEXIT  [ dom:vcpu = 0x00000062, exitcode = 0x0000007b, rIP = 0x0000000080a5a29e ]
> CPU0  0 (+ 0)  IO_ASSIST  [ dom:vcpu = 0x0000c202, data = 0x0000 ]
> CPU0  200130258198944 (+   4153)  switch_infprev  [ old_domid = 0x00000062, runtime = 17087 ]
> CPU0  200130258199132 (+    188)  switch_infnext  [ new_domid = 0x00000000, time = 17087, r_time = 30000000 ]
> CPU0  200130258199702 (+    570)  __enter_scheduler  [ prev<domid:edomid> = 0x00000062 : 0x00000000, next<domid:edomid> = 0x00000000 : 0x00000000 ]
> CPU0  200130258206470 (+   6768)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258210964 (+   4494)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258214767 (+   3803)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258218019 (+   3252)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
> CPU0  200130258227419 (+   9400)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
>
> It looks like vector 0x83 is being injected over and over, which
> would explain why things hang once I enable interrupts again. I will
> look into what vector 0x83 is attached to (see the snippet below),
> but does anyone have any ideas?
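>
> One way to check is to dump the translated interrupt resources at
> start-device time and match them against the trace. A rough sketch
> (hypothetical helper; 'translated' is the CM_RESOURCE_LIST from
> IRP_MN_START_DEVICE):
>
>     #include <ntddk.h>
>
>     static void log_interrupt_vectors(PCM_RESOURCE_LIST translated)
>     {
>         ULONG i;
>         PCM_PARTIAL_RESOURCE_DESCRIPTOR desc;
>
>         for (i = 0; i < translated->List[0].PartialResourceList.Count; i++) {
>             desc = &translated->List[0].PartialResourceList.PartialDescriptors[i];
>             if (desc->Type != CmResourceTypeInterrupt)
>                 continue;
>             /* For translated resources, u.Interrupt.Vector is the
>              * IDT vector the ISR is connected on - this is what
>              * should line up with the 'vector =' field in the
>              * xentrace output. */
>             KdPrint(("ISR vector = 0x%x, irql = %d\n",
>                      (unsigned int)desc->u.Interrupt.Vector,
>                      (int)desc->u.Interrupt.Level));
>         }
>     }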
>
> Thanks
>
> James
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel