[Xen-devel] Re: Could you reinstate ticketlock cleanups in tip.git

To: Ingo Molnar <mingo@xxxxxxx>
Subject: [Xen-devel] Re: Could you reinstate ticketlock cleanups in tip.git
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 27 Sep 2011 23:40:41 -0700
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>
Delivery-date: Tue, 27 Sep 2011 23:41:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110928063115.GA24510@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <alpine.LFD.2.02.1109280209430.2711@ionos> <20110928063115.GA24510@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:6.0.2) Gecko/20110906 Thunderbird/6.0.2
On 09/27/2011 11:31 PM, Ingo Molnar wrote:
> * Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
>> Hi Thomas,
>>
>> Could you reinstate the ticketlock cleanup series in tip.git (I think it
>> was in x86/spinlocks) from
>>     git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup
>>
>> This branch is the final result of all the discussions and revisions and
>> is ready for the next merge window.  It's the one that's been in
>> linux-next for a couple of weeks (at least), and won't cause any
>> conflicts there.
>>
>> Or, if you prefer, I can submit it myself for the next merge window.
>>
>> Thanks,
>>     J
> I'd prefer if this went via x86/spinlocks. Could you please send a 
> proper pull request with diffstat, shortlog, etc?
>

Sure, here you are:

The following changes since commit c6a389f123b9f68d605bb7e0f9b32ec1e3e14132:

  Linux 3.1-rc4 (2011-08-28 21:16:01 -0700)

are available in the git repository at:
  git://github.com/jsgf/linux-xen upstream/ticketlock-cleanup
(commit 4a7f340c6a75ec5fca23d9c80a59f3f28cc4a61e)

Jeremy Fitzhardinge (12):
      x86, cmpxchg: <linux/alternative.h> has LOCK_PREFIX
      x86, cmpxchg: Move 32-bit __cmpxchg_wrong_size to match 64 bit.
      x86, cmpxchg: Move 64-bit set64_bit() to match 32-bit
      x86, cmpxchg: Unify cmpxchg into cmpxchg.h
      x86: Add xadd helper macro
      x86: Use xadd helper more widely
      x86, ticketlock: Clean up types and accessors
      x86, ticketlock: Convert spin loop to C
      x86, ticketlock: Convert __ticket_spin_lock to use xadd()
      x86, ticketlock: Make __ticket_spin_trylock common
      x86, cmpxchg: Use __compiletime_error() to make usage messages a bit nicer
      x86, ticketlock: remove obsolete comment

 arch/x86/include/asm/atomic.h         |    8 +-
 arch/x86/include/asm/atomic64_64.h    |    6 +-
 arch/x86/include/asm/cmpxchg.h        |  205 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/cmpxchg_32.h     |  114 ------------------
 arch/x86/include/asm/cmpxchg_64.h     |  131 ---------------------
 arch/x86/include/asm/rwsem.h          |    8 +-
 arch/x86/include/asm/spinlock.h       |  114 +++++--------------
 arch/x86/include/asm/spinlock_types.h |   22 +++-
 arch/x86/include/asm/uv/uv_bau.h      |    6 +-
 9 files changed, 257 insertions(+), 357 deletions(-)
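
As a quick illustration of what the "Add xadd helper macro" and "Convert
__ticket_spin_lock to use xadd()" patches above amount to, here is a rough
user-space sketch of a packed-word ticket lock built on a C11 fetch-and-add.
The struct, bit layout and function names below are illustrative assumptions
for this mail only, not the arch/x86 code in the branch:

/* Sketch only: a ticket lock whose acquire path is a single
 * fetch-and-add (the role xadd plays on x86).  Not the kernel code. */
#include <stdatomic.h>
#include <stdio.h>

#define TICKET_SHIFT 16

struct ticket_lock {
        /* low 16 bits: head ("now serving"); high 16 bits: tail (next ticket) */
        _Atomic unsigned int val;
};

static void ticket_lock(struct ticket_lock *lock)
{
        /* One atomic add claims a ticket *and* returns the old word, so
         * the current head is read in the same operation. */
        unsigned int old = atomic_fetch_add_explicit(&lock->val,
                                                     1u << TICKET_SHIFT,
                                                     memory_order_acquire);
        unsigned int ticket = old >> TICKET_SHIFT;

        /* Spin until "now serving" reaches our ticket. */
        while ((atomic_load_explicit(&lock->val, memory_order_acquire)
                & 0xffff) != ticket)
                ;       /* a real lock would use a pause/cpu_relax() here */
}

static void ticket_unlock(struct ticket_lock *lock)
{
        /* The kernel bumps only the head halfword so a wrap cannot carry
         * into the tail; adding 1 to the packed word here is a
         * simplification that ignores 16-bit head overflow. */
        atomic_fetch_add_explicit(&lock->val, 1, memory_order_release);
}

int main(void)
{
        struct ticket_lock lock = { .val = 0 };

        ticket_lock(&lock);
        puts("lock held");
        ticket_unlock(&lock);
        return 0;
}

The point of the single fetch-and-add is that claiming a ticket and observing
the current head happen in one atomic operation, which is what lets the
open-coded asm in __ticket_spin_lock collapse into the common xadd() helper.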

Thanks,
        J


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
