Re: [Xen-devel] Performance overhead of paravirt_ops on native identified
 
To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Performance overhead of paravirt_ops on native identified
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Fri, 15 May 2009 11:50:48 -0700
Cc: Nick Piggin <npiggin@xxxxxxx>, Xiaohui Xin <xiaohui.xin@xxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Xin Li <xin.li@xxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>
Delivery-date: Fri, 15 May 2009 11:52:56 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A0D3F8C02000078000010A7@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4A0B62F7.5030802@xxxxxxxx> <4A0BED040200007800000DB0@xxxxxxxxxxxxxxxxxx> <4A0C58BB.3090303@xxxxxxxx> <4A0D3F8C02000078000010A7@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.21 (X11/20090320)
 
 
Jan Beulich wrote:
 
> A patch for the pv-ops kernel would require some time. What I can give you
> right away - just for reference - are the sources we currently use in our
> kernel: attached.
 
 
Hm, I see.  Putting a call out to a pv-ops function in the ticket lock
slow path looks pretty straightforward.  The need for an extra lock on
the contended unlock side is a bit unfortunate; have you measured what
hit that adds?  It seems to me you could avoid the problem by using
per-cpu storage rather than stack storage (though you'd need to copy
the per-cpu data to the stack when handling a nested spinlock).
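Roughly what I have in mind, as a completely untested sketch (the struct
layout and the xen_spin_lock_slow() name here are mine, not from your
patch):

static DEFINE_PER_CPU(struct spinning, spinning_state);

struct spinning {
        raw_spinlock_t  *lock;          /* lock this cpu is spinning on */
        unsigned int    token;          /* our ticket for that lock */
};

static void xen_spin_lock_slow(raw_spinlock_t *lock, unsigned int token)
{
        struct spinning *state = &__get_cpu_var(spinning_state);
        struct spinning outer = *state; /* stack copy, in case we're
                                           spinning inside a nested lock */

        state->lock = lock;
        state->token = token;
        /* ... block until the unlocker kicks this vcpu ... */
        *state = outer;                 /* restore the outer lock's state */
}

The contended unlock path could then just scan each cpu's
spinning_state for a matching lock/token to decide who to kick, with no
extra lock needed.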
What's the thinking behind the xen_spin_adjust() stuff?
static __always_inline void __ticket_spin_lock(raw_spinlock_t *lock)
{
        unsigned int token, count;
        bool free;

        __ticket_spin_lock_preamble;
        if (unlikely(!free))
                token = xen_spin_adjust(lock, token);
        do {
                count = 1 << 10;
                __ticket_spin_lock_body;
        } while (unlikely(!count) && !xen_spin_wait(lock, token));
}
 
 How does this work?  Doesn't it always go into the slowpath loop even if 
the preamble got the lock with no contention?
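I'd have expected the uncontended case to return early instead,
something like this (again just a sketch, assuming the preamble leaves
"free" set when it got the lock outright):

static __always_inline void __ticket_spin_lock(raw_spinlock_t *lock)
{
        unsigned int token, count;
        bool free;

        __ticket_spin_lock_preamble;
        if (likely(free))
                return;         /* no contention; skip the slow path */

        token = xen_spin_adjust(lock, token);
        do {
                count = 1 << 10;
                __ticket_spin_lock_body;
        } while (unlikely(!count) && !xen_spin_wait(lock, token));
}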
   J
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
 