
spinlock requests (was RE: [Xen-devel] [Patch] don't spin with irq disabled)



Keir (and/or others) --

If you are messing around in the spinlock code anyway,
I have a couple of requests.  Tmem needs:

rw_is_write_locked(rwlock_t *lock)

and

write_trylock(rwlock_t *lock)

I implemented the latter by grabbing the C code from Linux,
and it seems to work, but it would be nice if it were
consistent with the other lock code in Xen; my understanding
of asm statements is too poor to attempt that myself.

For rw_is_write_locked(), I had a devil of a time
because of what appeared to be a weird code-generation
race: the obvious simple implementation failed
periodically... apparently due to racing against
read_trylock attempts, which transiently decrement the
lock word before backing off on failure.  (I use it in
an ASSERT, so it was a rather noticeable and spectacular
failure!)  The workaround I used below is a horrible
hack, but I haven't had problems since.

Thanks,
Dan

(include/asm-x86/spinlock.h)
+/* Try to take the write lock without spinning: atomically subtract
+ * RW_LOCK_BIAS from the lock word.  The result is zero only if there
+ * was no writer and no readers, in which case we now hold the write
+ * lock.  Otherwise, undo the subtraction and report failure. */
+static inline int write_trylock(rwlock_t *lock)
+{
+       atomic_t *count = (atomic_t *)lock;
+       if (atomic_sub_and_test(RW_LOCK_BIAS, count))
+               return 1;
+       atomic_add(RW_LOCK_BIAS, count);
+       return 0;
+}
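
For reference, the read-side counterpart in Linux of the same
era looks roughly like this (paraphrased from memory, not part
of my patch); note the transient atomic_dec/atomic_inc pair on
the failure path, which is my best guess for the perturbation
that bites rw_is_write_locked() below:

static inline int read_trylock(rwlock_t *lock)
{
        atomic_t *count = (atomic_t *)lock;
        atomic_dec(count);              /* optimistically take a read ref */
        if (atomic_read(count) >= 0)
                return 1;               /* no writer held the bias */
        atomic_inc(count);              /* writer present: back off */
        return 0;
}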

(include/asm-x86/spinlock.h)
+/* Write-locked iff the bias has been subtracted: lock word <= 0. */
+#define _raw_rw_is_write_locked(x) (((x)->lock) <= 0)

(common/spinlock.c)
+int _rw_is_write_locked(rwlock_t *lock)
+{
+#if 0
+    /* The obvious implementation, which periodically misfires (see above). */
+    check_lock(&lock->debug);
+    return _raw_rw_is_write_locked(&lock->raw);
+#else
+    int val;
+
+    /* Some weird code-generation race: reading the lock word into a
+     * local exactly once works; the version above doesn't. */
+    check_lock(&lock->debug);
+    val = (&lock->raw)->lock;
+    if (val > 0)  /* FIXME: remove if ever called when caller expects unlocked */
+        printk("*** val=%d\n", val);
+    return val <= 0;
+#endif
+}
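
To show how the two primitives fit together, here is a minimal
usage sketch (hypothetical lock and function names throughout,
not the actual tmem code; it also assumes the usual
rw_is_write_locked() wrapper macro around _rw_is_write_locked()):

static rwlock_t obj_rwlock;                    /* hypothetical lock */

static void obj_update_locked(void)            /* hypothetical helper;
                                                  caller must hold the
                                                  write lock */
{
    ASSERT(rw_is_write_locked(&obj_rwlock));   /* the ASSERT mentioned above */
    /* ...update protected state here... */
}

static void obj_update(void)                   /* hypothetical caller */
{
    if ( !write_trylock(&obj_rwlock) )
        return;                                /* contended: retry later */
    obj_update_locked();
    write_unlock(&obj_rwlock);
}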


> -----Original Message-----
> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
> Sent: Friday, March 27, 2009 3:10 AM
> To: Jan Beulich; Juergen Gross
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [Patch] don't spin with irq disabled
> 
> 
> On 27/03/2009 07:41, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
> 
> >> spin_lock_irq always disables IRQs. spin_unlock_irq always enables
> >> IRQs. They are always used in pairs, so IRQs should always be
> >> enabled on entry to spin_lock_irq.
> >
> > No, I wouldn't suggest making an assumption like this - some code
> > could allow interrupts to be disabled when acquiring the lock, but
> > intentionally enable them when releasing it. (Personally, I think
> > there shouldn't be any users of this function pair in the first
> > place, as I don't think forcibly enabling interrupts is a correct
> > thing to do in all but *very* few cases.)
> 
> It seems to me a perfectly reasonable function pair to use when taking
> an IRQ-safe lock from code that you know cannot possibly already have
> IRQs disabled. Anyone using the function pair to implicitly enable
> interrupts is an evil person, and I already put assertions in to
> prevent that kind of thing, in _{spin,read,write}_lock_irq(). The
> assertions only fired in a few cases, and all of them looked like
> genuine bugs.
> 
> Anyway, as to the more general question of whether this is all
> worthwhile, I'm not sure. I actually like the potential advantage of
> needing fewer and simpler asm stubs in arch-specific spinlock.h, since
> Juergen has accidentally fallen on an opportunity to get rid of
> _raw_spin_lock. I'm not sure whether _raw_trylock is more expensive
> than _raw_lock, though. On x86 it's the difference between lock;dec
> and xchg -- I don't see any reason why one would be more expensive
> than the other.
> 
>  -- Keir
> 
> 
> 
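
(For anyone curious about the lock;dec vs. xchg remark above, the
two x86 sequences look roughly like this; a simplified sketch with
hypothetical "demo_" names, assuming a byte lock where 1 means
free, not the exact Xen asm:)

typedef struct { volatile char lock; } demo_spinlock_t;  /* hypothetical */

static inline void demo_spin_lock(demo_spinlock_t *lock)
{
    asm volatile (
        "1:  lock; decb %0  \n\t"   /* 1 -> 0: acquired */
        "    jns 3f         \n\t"   /* non-negative: we hold the lock */
        "2:  rep; nop       \n\t"   /* spin (pause) while it looks taken */
        "    cmpb $0, %0    \n\t"
        "    jle 2b         \n\t"
        "    jmp 1b         \n\t"   /* looks free: retry the atomic dec */
        "3:"
        : "+m" (lock->lock) : : "memory" );
}

static inline int demo_spin_trylock(demo_spinlock_t *lock)
{
    char oldval;
    asm volatile (
        "xchgb %b0, %1"             /* xchg is implicitly locked */
        : "=q" (oldval), "+m" (lock->lock)
        : "0" (0) : "memory" );
    return oldval > 0;              /* old value was 1: we took it */
}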

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

