xen-devel

Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

To: Stephan Diestelhorst <stephan.diestelhorst@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 28 Sep 2011 09:44:25 -0700
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Nick Piggin <npiggin@xxxxxxxxx>, KVM <kvm@xxxxxxxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Marcelo Tosatti <mtosatti@xxxxxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Andi Kleen <andi@xxxxxxxxxxxxxx>, Avi Kivity <avi@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 28 Sep 2011 10:20:36 -0700
In-reply-to: <33782147.oLTY4kzH1r@chlor>
References: <cover.1315878463.git.jeremy.fitzhardinge@xxxxxxxxxx> <3300108.XQUp9Wrktc@chlor> <4E81FD52.50106@xxxxxxxx> <33782147.oLTY4kzH1r@chlor>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:6.0.2) Gecko/20110906 Thunderbird/6.0.2

On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
> I have tested this and have not seen it fail on publicly released AMD
> systems. But as I have tried to point out, this does not mean it is
> safe to do in software, because future microarchitectures may have more
> capable forwarding engines.

Sure.

>> Have you tested this, or is this just from code analysis (which I
>> agree with after reviewing the ordering rules in the Intel manual)?
> We have found a similar issue in Novell's PV ticket lock implementation
> during internal product testing.

Jan may have picked it up from an earlier set of my patches.

>>> Since you want to get that addb out to global memory before the second
>>> read, either use a LOCK prefix for it, add an MFENCE between addb and
>>> movzwl, or use a LOCKed instruction that will have a fencing effect
>>> (e.g., to top-of-stack) between addb and movzwl.
>> Hm.  I don't really want to do any of those because it will probably
>> have a significant effect on the unlock performance; I was really trying
>> to avoid adding any more locked instructions.  A previous version of the
>> code had an mfence in here, but I hit on the idea of using aliasing to
>> get the ordering I want - but overlooked the possible effect of store
>> forwarding.
> Well, I'd be curious about the actual performance impact. If the store
> needs to commit to memory due to aliasing anyway, this would slow down
> execution, too. After all, it is better to write working code than fast code,
> no? ;-)

My rule of thumb is that AMD tends to implement things like locked
operations and fences more efficiently than Intel - at least historically.
I don't know if that's still true for current Intel microarchitectures.

>> I guess it comes down to throwing myself on the efficiency of some kind
>> of fence instruction.  I guess an lfence would be sufficient; is that
>> any more efficient than a full mfence?
> An lfence should not be sufficient, since that essentially is a NOP on
> WB memory. You really want a full fence here, since the store needs to
> be published before reading the lock with the next load.

The Intel manual reads:

    Reads cannot pass earlier LFENCE and MFENCE instructions.
    Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
    LFENCE instructions cannot pass earlier reads.

I interpreted that as meaning that an lfence would prevent the forwarding.
But I guess it doesn't say "LFENCE instructions cannot pass earlier
writes", which means the lfence could logically happen before the
write, thereby still allowing forwarding.  Or should I be reading this
some other way?
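
For what it's worth, the pattern I care about is the classic
store-buffering one: a store followed by a load of a different location,
where the store has to be globally visible before the load.  A small
C11-style sketch (hypothetical, not from the patch series) of what does
and does not forbid the bad outcome:

    #include <stdatomic.h>

    /*
     * Store-buffering litmus sketch.  Each CPU stores to its own flag
     * and then loads the other one.  On x86 the stores can sit in the
     * store buffer past the later load, so without a StoreLoad barrier
     * both r0 and r1 may end up 0.  LFENCE does not add that ordering;
     * MFENCE or any LOCKed instruction does, which is what the seq_cst
     * fence compiles to on x86.
     */
    _Atomic int flag0, flag1;
    int r0, r1;

    void cpu0(void)
    {
            atomic_store_explicit(&flag0, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);  /* mfence or LOCKed op */
            r0 = atomic_load_explicit(&flag1, memory_order_relaxed);
    }

    void cpu1(void)
    {
            atomic_store_explicit(&flag1, 1, memory_order_relaxed);
            atomic_thread_fence(memory_order_seq_cst);
            r1 = atomic_load_explicit(&flag0, memory_order_relaxed);
    }

    /* With the fences, r0 == 0 && r1 == 0 cannot happen; with only an
     * lfence (or no fence) in their place, it can. */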

>> Could you give me a pointer to AMD's description of the ordering rules?
> They should be in "AMD64 Architecture Programmer's Manual Volume 2:
> System Programming", Section 7.2 Multiprocessor Memory Access Ordering.
>
> http://developer.amd.com/documentation/guides/pages/default.aspx#manuals
>
> Let me know if you have some clarifying suggestions. We are currently
> revising these documents...

I find the English descriptions of these kinds of things frustrating to
read because of ambiguities in the precise meaning of words like "pass",
"ahead", and "behind" in these contexts.  The prose is useful for getting
an overview, but when I have a specific question I wonder whether
something more formal would be useful.

I guess it's implied that anything not prohibited by the ordering rules
is allowed, but it wouldn't hurt to say so explicitly.  That said, the
AMD description seems clearer and more explicit than the Intel manual
(especially since it specifically discusses the problem here).

Thanks,
    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
