Re: [Xen-devel] [PATCH][1/3] evtchn race condition
On 25 Jan 2006, at 15:11, Woller, Thomas wrote:
> The patch is an attempt to combine the local domain spinlock and the
> remote domain spinlock, similar to the evtchn_bind_interdomain() logic.
> The patch removes the initial ld spin_lock() from protecting the
> port_is_valid() and the evtchn_from_port() functions, which allows using
> the "ld < rd" spinlock ordering technique... Not sure if this is really
> ok or even necessary. Opinions are welcome here. Another solution might
> be to move the remote domain (rd) spinlock exclusively around the
> ECS_INTERDOMAIN case, and the ld spinlock exclusively around the ECS_IPI
> case - without any nested spinlocks.
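The "ld < rd" ordering mentioned above is the standard address-ordered locking trick: when two locks must be held at once, every path acquires them in the same global order so two CPUs sending in opposite directions cannot deadlock. A minimal sketch, with pthread mutexes standing in for Xen spinlocks and all names hypothetical:

```c
/* Illustrative sketch of address-ordered double locking; not Xen code. */
#include <assert.h>
#include <pthread.h>

struct domain {
    pthread_mutex_t evtchn_lock;
};

/* Acquire both domains' event-channel locks, lower address first,
 * so concurrent sends in opposite directions cannot deadlock. */
static void double_evtchn_lock(struct domain *ld, struct domain *rd)
{
    if (ld == rd) {
        pthread_mutex_lock(&ld->evtchn_lock);   /* IPI case: one lock */
    } else if (ld < rd) {
        pthread_mutex_lock(&ld->evtchn_lock);
        pthread_mutex_lock(&rd->evtchn_lock);
    } else {
        pthread_mutex_lock(&rd->evtchn_lock);
        pthread_mutex_lock(&ld->evtchn_lock);
    }
}

static void double_evtchn_unlock(struct domain *ld, struct domain *rd)
{
    pthread_mutex_unlock(&ld->evtchn_lock);
    if (ld != rd)
        pthread_mutex_unlock(&rd->evtchn_lock);
}
```

Since both senders compute the same acquisition order, the nested locking is safe regardless of which domain is "local".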
The problem is that the hvm/io.c code is quite simply broken. A
correctly-implemented event recipient does not need to be serialised
w.r.t. evtchn_send() to work correctly. After all, in the case of a
paravirtualised guest, the recipient is not in Xen at all!
The correct ordering for the recipient to clear an event is:
 1. clear evtchn_upcall_pending
 2. clear bits in evtchn_pending_sel before acting on them
 3. clear bits in evtchn_pending before acting on them
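The point of this ordering is that each summary flag is cleared *before* the state it summarises is consumed, so a concurrent evtchn_send() re-setting a flag can at worst cause a spurious re-scan, never a lost event. A simplified, single-word sketch (field names echo the shared-info layout, but the struct and handler are hypothetical):

```c
/* Illustrative sketch of the safe clearing order; not Xen's actual
 * shared_info layout. One selector bit / one pending word for brevity. */
#include <assert.h>
#include <stdatomic.h>

struct vcpu_info {
    atomic_uchar  evtchn_upcall_pending;
    atomic_ulong  evtchn_pending_sel;
};

struct shared_info {
    struct vcpu_info vcpu;
    atomic_ulong     evtchn_pending[1];  /* one word of ports */
};

/* Handle events in the safe order: clear each flag before acting on
 * the state beneath it, so a racing sender is never lost. */
static void process_events(struct shared_info *s,
                           void (*handle_port)(int port))
{
    /* 1. Clear the top-level flag first. */
    atomic_store(&s->vcpu.evtchn_upcall_pending, 0);

    /* 2. Atomically fetch-and-clear the selector word. */
    unsigned long sel = atomic_exchange(&s->vcpu.evtchn_pending_sel, 0);
    if (!(sel & 1UL))
        return;

    /* 3. Atomically fetch-and-clear the pending word, then act. */
    unsigned long pending = atomic_exchange(&s->evtchn_pending[0], 0);
    for (int port = 0; port < 64; port++)
        if (pending & (1UL << port))
            handle_port(port);
}

/* Simple recording handler, for demonstration. */
static int handled[64], nhandled;
static void record_port(int port) { handled[nhandled++] = port; }
```

If evtchn_send() fires between steps 2 and 3, it re-sets the pending bit, the selector bit, and evtchn_upcall_pending, so the recipient is re-notified rather than losing the event.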
So, for example, the code that checks evtchn_pending[] and then clears
a bit in evtchn_pending_sel is totally screwed. It races evtchn_send()
re-setting the evtchn_pending[] bit! Fortunately, the comment that
'evtchn_pending_sel is shared by other event channels' is actually
false right now. The *only* event channel a VMX domain cares about is
its iopacket_port.
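The lost-event interleaving described here can be replayed deterministically in a single thread: the recipient checks evtchn_pending[] and only afterwards clears its selector bit, and if a send lands in between, the fresh event's pending bit survives while its selector bit is wiped. A sketch with plain variables standing in for the shared-info fields (illustrative only, not the hvm/io.c code itself):

```c
/* Deterministic replay of the check-then-clear-selector race;
 * plain globals stand in for shared-info fields. Not Xen code. */
#include <assert.h>

static unsigned long pending, pending_sel;
static int upcall_pending;

/* What evtchn_send() does on the sender side (simplified). */
static void evtchn_send_sim(int port)
{
    pending      |= 1UL << port;
    pending_sel  |= 1UL;        /* port lives in word 0 */
    upcall_pending = 1;
}

/* The broken recipient: consume the event, THEN clear the selector.
 * interleave_send simulates a sender racing in at the worst moment. */
static int broken_recipient(int port, int interleave_send)
{
    int fired = (pending >> port) & 1;   /* check evtchn_pending[] */
    pending &= ~(1UL << port);           /* consume the event */

    if (interleave_send)
        evtchn_send_sim(port);           /* sender re-sets the bit here */

    pending_sel &= ~1UL;   /* too late: wipes the new event's selector */
    upcall_pending = 0;
    return fired;
}
```

After the racy run, the re-sent event is stranded: its pending bit is set but both the selector bit and the upcall flag are clear, so nothing will ever notice it.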
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel