Since it's an assertion, I assume you ran it with debug=y?
I'm definitely changing some assumptions with this, so it's not a
surprise that some assertions trigger.
I'm working on a modified version based on the discussion we had here;
I'll post a patch (tested with debug=y) when I'm done.
-George
On Thu, Jan 28, 2010 at 11:27 PM, Dulloor <dulloor@xxxxxxxxx> wrote:
> George,
>
> With your patches and sched=credit2, Xen crashes on a failed assertion:
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) Assertion '_spin_is_locked(&(*({ unsigned long __ptr; __asm__ ("" :
> "=r"(*
> (XEN)
>
> Is this version supposed to work (or is it just some reference code)?
>
> thanks
> dulloor
>
>
> On Wed, Jan 13, 2010 at 11:43 AM, George Dunlap
> <george.dunlap@xxxxxxxxxxxxx> wrote:
>> Keir Fraser wrote:
>>>
>>> On 13/01/2010 16:05, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>
>>>
>>>>
>>>> [NB that the current global lock will eventually be replaced with
>>>> per-runqueue locks.]
>>>>
>>>> In particular, one of the races without the first flag looks like this
>>>> (brackets indicate physical cpu):
>>>> [0] lock cpu0 schedule lock
>>>> [0] lock credit2 runqueue lock
>>>> [0] Take vX off runqueue; vX->processor == 1
>>>> [0] unlock credit2 runqueue lock
>>>> [1] vcpu_wake(vX) lock cpu1 schedule lock
>>>> [1] finds vX->running false, adds it to the runqueue
>>>> [1] unlock cpu1 schedule_lock
>>>>
>>>
>>> Actually, hang on. Doesn't this issue, and the one that your second patch
>>> addresses, go away if we change the schedule_lock granularity to match
>>> runqueue granularity? That would seem pretty sensible, and could be
>>> implemented with a schedule_lock(cpu) scheduler hook, returning a
>>> spinlock_t*, and some easy scheduler code changes.
>>>
>>> If we do that, do you then even need separate private per-runqueue locks?
>>> (Just an extra thought).
>>>
>>
>> Hmm.... can't see anything wrong with it. It would make the whole locking
>> discipline thing a lot simpler. It would, AFAICT, remove the need for
>> private per-runqueue locks, which make it a lot harder to avoid deadlock
>> without these sorts of strange tricks. :-)
>>
>> I'll think about it, and probably give it a spin to see how it works out.
>>
>> -George
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>>
>
>