On 16/04/2010 09:16, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
> Oops, yes, this should be there already, without c_h_o_c.
> So, if another vcpu is trying to vcpu_pause() this vcpu, it will deadlock
> because this vcpu can't be paused, right? (I assume the same reason applies
> to hvmop_flush_tlb_all.)
Yes.
> If this is true, then how about checking !vcpu_runnable(current) in
> stop_machine_run()'s stopmachine_set_state()? If this check fails, it means
> someone is trying to change our state and a potential deadlock may happen, so
> we can cancel this stop_machine_run(). That would be at least a bit cleaner
> than the timeout mechanism.
Interesting... That could work. But...
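For concreteness, before getting to the "but": I assume you mean something
along these lines? Rough sketch only; I'm quoting stopmachine_set_state() from
memory, so the field names and the way stop_machine_run() would have to notice
the error and let the other CPUs out of the rendezvous both need checking
against the tree:

    /* Sketch, not a patch: while spinning for the other CPUs, notice that
     * somebody is trying to pause us and bail out instead of deadlocking
     * against vcpu_pause().  Field names from memory. */
    static int stopmachine_set_state(enum stopmachine_state state)
    {
        atomic_set(&stopmachine_data.done, 0);
        smp_wmb();
        stopmachine_data.state = state;

        while ( atomic_read(&stopmachine_data.done) !=
                stopmachine_data.nr_cpus )
        {
            if ( !vcpu_runnable(current) )
                return -EBUSY; /* someone wants us paused: cancel s_m_r() */
            cpu_relax();
        }

        return 0;
    }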
>>> Maybe the cleaner way is your previous suggestion, that is, to put the
>>> stop_machine_run() in the idle vcpu; another way is to turn back to the
>>> original method, i.e. do it in schedule_tail().
>>
>> Argh. That would be annoying!
>
> It seems that doing it in schedule_tail will not help with this deadlock
> issue, though.
Yes, but now I think about it, scheduling c_h_o_c() and s_m_r() in idle-vcpu
context instead of softirq context might not be *that* hard to do, and it
avoids all these subtle deadlock possibilities. I think I'm warming to it
compared with the alternatives.
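Roughly the shape I have in mind; a sketch only, with invented names
(choc_work, choc_work_list, do_choc_work) and with locking, list
initialisation and cpu-offline handling omitted. The point is just that the
continuation gets queued on the target CPU and run from the idle loop, in
idle-vcpu context, rather than from a softirq handler:

    #include <xen/list.h>
    #include <xen/percpu.h>
    #include <xen/smp.h>

    struct choc_work {
        struct list_head list;
        long (*fn)(void *data);
        void *data;
    };

    static DEFINE_PER_CPU(struct list_head, choc_work_list);

    /* Called by c_h_o_c()/s_m_r() to hand work to the target CPU. */
    static void queue_choc_work(unsigned int cpu, struct choc_work *work)
    {
        list_add_tail(&work->list, &per_cpu(choc_work_list, cpu));
        smp_send_event_check_cpu(cpu); /* kick the target out of halt */
    }

    /* Called from idle_loop(), i.e. in idle-vcpu context.  Nobody ever
     * needs to pause or sync the idle vcpu, so the deadlocks above
     * simply cannot arise here. */
    void do_choc_work(void)
    {
        struct list_head *q = &this_cpu(choc_work_list);

        while ( !list_empty(q) )
        {
            struct choc_work *w = list_entry(q->next, struct choc_work, list);
            list_del(&w->list);
            w->fn(w->data);
        }
    }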
> Yes, c_h_o_c does not introduce any more deadlocks. My concern is that its
> name may be confusing if one tries to use it for other hypercalls. After all,
> if a hypercall, or a function used by the hypercall, depends on current, it
> should not use c_h_o_c.
Well, I think there are two issues: (1) we run the continuation as a
softirq; (2) we don't run the continuation in the context of the original
vcpu. I think (1) is a bigger problem than (2) as it introduces the
possibility of all these nasty subtle deadlocks. (2) is obviously not
desirable, *but* I don't think any callers much care about the context of
the original caller, *and* if they do the resulting bug is generally going
to be pretty obvious. That is, the hypercall just won't work, ever -- it's
much less likely than (1) to result in very rare, very subtle bugs.
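To illustrate (2) with a made-up continuation (frob_domain() is a
hypothetical helper, not anything in the tree):

    /* A continuation that wrongly relies on 'current'.  Run via c_h_o_c
     * it executes in whatever context the softirq interrupts on the
     * target CPU, so 'current' is simply the wrong vcpu every time: an
     * obvious, repeatable failure rather than a rare race. */
    static long bad_continuation(void *arg)
    {
        struct domain *d = current->domain; /* not the original caller! */
        return frob_domain(d, arg);         /* hypothetical helper */
    }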
-- Keir