
Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype


  • To: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  • From: Dulloor <dulloor@xxxxxxxxx>
  • Date: Thu, 28 Jan 2010 18:27:44 -0500
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 28 Jan 2010 15:28:11 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

George,

With your patches applied and sched=credit2, Xen crashes on a failed assertion:
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion '_spin_is_locked(&(*({ unsigned long __ptr; __asm__ ("" : "=r"(*
(XEN)

Is this version expected to work, or is it just reference code?

thanks
dulloor


On Wed, Jan 13, 2010 at 11:43 AM, George Dunlap
<george.dunlap@xxxxxxxxxxxxx> wrote:
> Keir Fraser wrote:
>>
>> On 13/01/2010 16:05, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>
>>
>>>
>>> [NB that the current global lock will eventually be replaced with
>>> per-runqueue locks.]
>>>
>>> In particular, one of the races without the first flag looks like this
>>> (brackets indicate physical cpu):
>>> [0] lock cpu0 schedule lock
>>> [0] lock credit2 runqueue lock
>>> [0] Take vX off runqueue; vX->processor == 1
>>> [0] unlock credit2 runqueue lock
>>> [1] vcpu_wake(vX) lock cpu1 schedule lock
>>> [1] finds vX->running false, adds it to the runqueue
>>> [1] unlock cpu1 schedule_lock
>>>
>>
>> Actually, hang on. Doesn't this issue, and the one that your second patch
>> addresses, go away if we change the schedule_lock granularity to match
>> runqueue granularity? That would seem pretty sensible, and could be
>> implemented with a schedule_lock(cpu) scheduler hook, returning a
>> spinlock_t*, and some easy scheduler code changes.
>>
>> If we do that, do you then even need separate private per-runqueue locks?
>> (Just an extra thought).
>>
>
> Hmm.... can't see anything wrong with it.  It would make the whole locking
> discipline thing a lot simpler.  It would, AFAICT, remove the need for
> private per-runqueue locks, which make it a lot harder to avoid deadlock
> without these sorts of strange tricks. :-)
>
> I'll think about it, and probably give it a spin to see how it works out.
>
> -George
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>



 

