To: Don Zickus <dzickus@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 06 Sep 2011 11:07:26 -0700
Cc: Marcelo Tosatti <mtosatti@xxxxxxxxxx>, Nick Piggin <npiggin@xxxxxxxxx>, KVM <kvm@xxxxxxxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Andi Kleen <andi@xxxxxxxxxxxxxx>, Avi Kivity <avi@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>, Xen Devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 06 Sep 2011 11:09:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110906151408.GA7459@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <cover.1314922370.git.jeremy.fitzhardinge@xxxxxxxxxx> <38bb37e15f6e5056d5238adac945bc1837a996ec.1314922370.git.jeremy.fitzhardinge@xxxxxxxxxx> <1314974826.1861.1.camel@twins> <4E612EA1.20007@xxxxxxxx> <1314996468.8255.0.camel@twins> <4E614FBD.2030509@xxxxxxxx> <20110906151408.GA7459@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:6.0) Gecko/20110816 Thunderbird/6.0
On 09/06/2011 08:14 AM, Don Zickus wrote:
> On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
>> On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
>>> On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
>>>>> I know that it's generally considered bad form, but there's at least one
>>>>> spinlock that's only taken from NMI context and thus hasn't got any
>>>>> deadlock potential.
>>>> Which one? 
>>> arch/x86/kernel/traps.c:nmi_reason_lock
>>>
>>> It serializes NMI access to the NMI reason port across CPUs.
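
(For reference, the code in question looks roughly like this.  Paraphrased
from memory rather than quoted, so names and details may be off:)

    /* arch/x86/kernel/traps.c, roughly.  The raw spinlock keeps two
     * CPUs from reading and acting on the NMI reason port (0x61) at
     * the same time. */
    static DEFINE_RAW_SPINLOCK(nmi_reason_lock);

    static void default_do_nmi(struct pt_regs *regs)
    {
            unsigned char reason;

            raw_spin_lock(&nmi_reason_lock);
            reason = get_nmi_reason();      /* effectively inb(0x61) */

            if (reason & NMI_REASON_SERR)
                    pci_serr_error(reason, regs);
            else if (reason & NMI_REASON_IOCHK)
                    io_check_error(reason, regs);

            raw_spin_unlock(&nmi_reason_lock);
    }
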
>> Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
>> ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
>> so I guess there's at least some chance there could be a virtual
>> emulated NMI.  Maybe?  Does qemu do that kind of thing?
>>
>> But, erm, does that even make sense?  I'm assuming the NMI reason port
>> tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
>> there's only a single reason port, then doesn't that mean that either 1)
>> they all got the NMI for the same reason, or 2) having a single port is
>> inherently racy?  How does the locking actually work there?
> The reason port is for an external/system NMI.  The IPI-based NMIs (perf,
> for example) don't need to access this register to process their handlers.
> I think in general the IOAPIC is configured to deliver the external NMI to
> one cpu, usually the bsp cpu.  However, there has been a slow movement to
> free the bsp cpu from exceptions like this, to allow one to eventually
> hot-swap the bsp cpu.  The spin locks in that code were an attempt to be
> more abstract about who really gets the external NMI.  Of course SGI's box
> is set up to deliver an external NMI to all cpus to dump the stack when
> the system isn't behaving.
>
> This is a very low-usage NMI (in fact almost all cases lead to loud
> console messages).
>
> Hope that clears up some of the confusion.

Hm, not really.

What does it mean if two CPUs go down that path?  Should one do some NMI
processing while the other waits around for it to finish, and then do
some NMI processing on its own?

It sounds like that could only happen if you reroute NMI from one CPU to
another while the first CPU is actually in the middle of processing an
NMI - in which case, shouldn't the code doing the re-routing be taking
the spinlock?

Or perhaps a spinlock isn't the right primitive to use at all?  Couldn't
the second CPU just set a flag/counter (using something like an atomic
add/cmpxchg/etc) to make the first CPU process the second NMI?
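
Something along these lines is what I mean.  This is only a sketch of the
idea, untested, and process_nmi_reason() is a made-up stand-in for whatever
the real reason-port handling would be:

    /* Sketch: rather than spinning, a CPU that arrives while another
     * CPU is already handling an external NMI just posts its NMI and
     * leaves; the current owner processes it on its behalf. */
    static atomic_t nmi_posted = ATOMIC_INIT(0);
    static atomic_t nmi_owner_busy = ATOMIC_INIT(0);

    static void handle_external_nmi(struct pt_regs *regs)
    {
            int n;

            atomic_inc(&nmi_posted);
    again:
            if (atomic_xchg(&nmi_owner_busy, 1))
                    return;         /* owner will see our increment */

            /* Drain everything posted while we own the handler.  Note
             * that regs here are ours, not the posting CPU's, which is
             * one reason this is only a sketch. */
            while ((n = atomic_xchg(&nmi_posted, 0)) != 0)
                    while (n--)
                            process_nmi_reason(regs);

            atomic_set(&nmi_owner_busy, 0);

            /* An NMI posted after our last drain, but before we gave
             * up ownership, must not be lost; re-check for it. */
            if (atomic_read(&nmi_posted))
                    goto again;
    }

The xchg hand-off means nobody ever waits in NMI context; the cost is the
re-check at the end to close the lost-wakeup window.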

But on the other hand, I don't really care, provided you can say that this
path will never be called in a virtual machine.

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
