xen-devel

Re: [Xen-devel] Re: [PATCH 09/14] xen/pvticketlock: Xen implementation for PV ticket locks

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH 09/14] xen/pvticketlock: Xen implementation for PV ticket locks
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 17 Nov 2010 09:41:07 -0800
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxx>, Nick Piggin <npiggin@xxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>, Eric Dumazet <dada1@xxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Avi Kivity <avi@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, xiyou.wangcong@xxxxxxxxx, Linux Virtualization <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 17 Nov 2010 09:42:03 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4CE3BDCF0200007800022BF1@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <cover.1289940821.git.jeremy.fitzhardinge@xxxxxxxxxx> <aa32da076143b8e13c23c1f589d7e6cbedb22907.1289940821.git.jeremy.fitzhardinge@xxxxxxxxxx> <4CE39C3C0200007800022AE2@xxxxxxxxxxxxxxxxxx> <4CE397E7.2010107@xxxxxxxx> <4CE3A714.9010509@xxxxxxxx> <4CE3BDCF0200007800022BF1@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Fedora/3.1.6-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.6
On 11/17/2010 02:34 AM, Jan Beulich wrote:
>> Actually, on second thoughts, maybe it doesn't matter so much.  The main
>> issue is making sure that the interrupt will make the VCPU drop out of
>> xen_poll_irq() - if it happens before xen_poll_irq(), it should leave
>> the event pending, which will cause the poll to return immediately.  I
>> hope.  Certainly disabling interrupts for some of the function will make
>> it easier to analyze with respect to interrupt nesting.
> That's not my main concern. Instead, what if you get interrupted
> anywhere here and the interrupt handler tries to acquire another
> spinlock and also has to go into the slow path? It'll overwrite part
> or all of the outer context's state.

That doesn't matter if the outer context doesn't end up blocking.  If it
has already blocked then it will unblock as a result of the interrupt;
if it hasn't yet blocked, then the inner context will leave the event
pending and cause it to not block.  Either way, it no longer uses or
needs that per-cpu state: it will return to the spin loop and (maybe)
get re-entered, setting it all up again.

I think there is a problem with the code as posted because it sets up
the per-cpu data before clearing the pending event, so it can end up
blocking with bad per-cpu data.
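
Concretely, I'd expect the fixed slow path to look something like the
sketch below.  The names here (xen_lock_spinning, xen_lock_waiting,
lock_waiting, lock_kicker_irq) are invented for illustration and the
lock layout (lock->tickets.head) is assumed from the rest of the series;
this is only a sketch of the ordering, not the patch as posted.

#include <linux/percpu.h>
#include <linux/compiler.h>	/* ACCESS_ONCE() */
#include <xen/events.h>		/* xen_poll_irq(), xen_clear_irq_pending() */

struct xen_lock_waiting {
	struct arch_spinlock *lock;	/* the lock this vcpu is spinning on */
	__ticket_t want;		/* the ticket it is waiting for */
};

static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;

static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
	int irq = __get_cpu_var(lock_kicker_irq);
	struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);

	/*
	 * Publish (want, lock) so xen_unlock_kick() can find us.  The
	 * barrier stops a kicker from seeing the new lock paired with
	 * a stale want.
	 */
	w->want = want;
	smp_wmb();
	w->lock = lock;

	/*
	 * Clear any pending kick only *after* the per-cpu state is set
	 * up; clearing first is the bug described above.
	 */
	xen_clear_irq_pending(irq);

	/*
	 * The clear may have eaten a kick that arrived after we
	 * published, so re-check the lock before actually blocking.
	 */
	if (ACCESS_ONCE(lock->tickets.head) != want)
		xen_poll_irq(irq);	/* returns once the event is pending */

	/* Done waiting; unpublish so we don't get stray kicks. */
	w->lock = NULL;
}

If the interrupt fires anywhere after the clear, it leaves the event
pending and xen_poll_irq() returns immediately, which is exactly the
property the first paragraph above relies on.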

>> Another issue may be making sure the writes and reads of "w->want" and
>> "w->lock" are ordered properly to make sure that xen_unlock_kick() never
>> sees an inconsistent view of the (lock,want) tuple.  The risk being that
>> xen_unlock_kick() sees a random, spurious (lock,want) pairing and sends
>> the kick event to the wrong VCPU, leaving the deserving one hung.
> Yes, proper operation sequence (and barriers) is certainly
> required here. If you allowed nesting, this may even become
> simpler (as you'd have a single write making visible the new
> "head" pointer, after having written all relevant fields of the
> new "head" structure).

Yes, simple nesting should be quite straightforward (i.e. allowing an
interrupt handler to take some other lock than the one the outer context
is waiting on).
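
Something along these lines, say, replacing the single per-cpu record
of the earlier sketch with a chain; again all names are invented, the
kick side is included only to show the barrier pairing, and the
lifetime of a popped record versus a concurrent kicker would need more
care than shown here:

struct xen_lock_waiting {
	struct xen_lock_waiting *prev;	/* interrupted outer wait, if any */
	struct arch_spinlock *lock;
	__ticket_t want;
};

static DEFINE_PER_CPU(struct xen_lock_waiting *, waiting_head);

static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
	/* One record per nesting level, living on this vcpu's stack. */
	struct xen_lock_waiting w = {
		.prev = __get_cpu_var(waiting_head),
		.lock = lock,
		.want = want,
	};

	smp_wmb();			/* all fields visible before ... */
	__get_cpu_var(waiting_head) = &w;	/* ... the single publishing store */

	/* ... clear pending event, re-check lock, xen_poll_irq() ... */

	__get_cpu_var(waiting_head) = w.prev;	/* pop on the way out */
}

static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
{
	int cpu;

	for_each_online_cpu(cpu) {
		const struct xen_lock_waiting *w = per_cpu(waiting_head, cpu);

		smp_rmb();		/* pairs with the waiter's smp_wmb() */
		for (; w != NULL; w = w->prev) {
			/*
			 * A stale read here can at worst send a spurious
			 * kick; the waiter re-checks the lock when
			 * xen_poll_irq() returns, so that is harmless.
			 */
			if (w->lock == lock && w->want == next) {
				xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
				return;
			}
		}
	}
}

Because the record is fully initialized before the single store that
publishes it, the kicker can never observe a half-written (lock, want)
pair, which addresses the spurious-kick worry above.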

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
