[Xen-devel] Need help with fixing the Xen waitqueue feature
The patch 'mem_event: use wait queue when ring is full' I just sent out
makes use of the waitqueue feature. There are two issues I run into with
the change applied:

I think I got the logic right, and in my testing vcpu->pause_count drops
to zero in p2m_mem_paging_resume(). But for some reason the vcpu does
not make progress after the first wakeup. In my debugging there is one
wakeup, the ring is still full, but further wakeups don't happen.
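
To make clear what I expect to happen, the pattern is roughly this (a
simplified sketch, not the actual mem_event.c code; ring_has_space(),
put_request_when_space(), ring_space_made() and the wq field are
placeholder names):

    #include <xen/sched.h>   /* struct domain */
    #include <xen/wait.h>    /* wait_event(), wake_up_all() */

    /* Placeholder: true once the mem_event ring has a free slot again. */
    static int ring_has_space(struct domain *d)
    {
        return 0; /* the real check compares the ring producer/consumer indexes */
    }

    /* Producer side, roughly what mem_event_put_request() should do: a vcpu
     * that finds the ring full sleeps on the domain's wait queue.
     * wait_event() re-checks the condition after every wakeup and parks the
     * vcpu again if the ring is still full. */
    static void put_request_when_space(struct domain *d)
    {
        /* 'wq' stands for whatever waitqueue field the patch adds to
         * struct mem_event_domain. */
        wait_event(d->mem_event.wq, ring_has_space(d));
        /* ... now copy the request into the ring ... */
    }

    /* Consumer side, called from p2m_mem_paging_resume() after responses
     * have been taken out of the ring: wake the sleepers so they
     * re-evaluate ring_has_space().  Without a wakeup per consumed batch,
     * the remaining sleepers never run again. */
    static void ring_space_made(struct domain *d)
    {
        wake_up_all(&d->mem_event.wq);  /* exact wake_up variant may differ */
    }
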
The fully decoded xentrace output may provide some hints about the
underlying issue. But it's hard to get due to the second issue.

Another thing is that sometimes the host suddenly reboots without any
message. I think the reason for this is that a vcpu whose stack was put
aside, and that was later resumed, may find itself on another physical
cpu. If that happens, wouldn't that invalidate some of the local
variables back in the call chain? If some of them point to the old
physical cpu, how could this be fixed? Perhaps a few "volatiles" are
needed in some places.
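
To illustrate the kind of breakage I have in mind (a made-up example,
not an actual code path in the tree; example_path() and foo_state are
placeholder names):

    #include <xen/percpu.h>  /* DEFINE_PER_CPU(), per_cpu() */
    #include <xen/smp.h>     /* smp_processor_id() */
    #include <xen/wait.h>    /* wait_event(), struct waitqueue_head */

    static DEFINE_PER_CPU(unsigned long, foo_state);  /* made-up per-cpu data */

    static void example_path(struct waitqueue_head *wq, int *done)
    {
        /* Locals derived from the physical cpu we are on right now. */
        unsigned int cpu = smp_processor_id();
        unsigned long *state = &per_cpu(foo_state, cpu);

        /* The stack is put aside here and the vcpu goes to sleep ... */
        wait_event(*wq, *done);

        /* ... and after the wakeup it may be running on a different
         * physical cpu.  'cpu' and 'state' still refer to the old pcpu,
         * so this update hits the wrong (possibly in-use) per-cpu data. */
        (*state)++;
    }
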
I will check whether pinning the guest's vcpus to physical cpus actually
avoids the sudden reboots.

Olaf