>>> On 24.08.10 at 11:09, George Dunlap <dunlapg@xxxxxxxxx> wrote:
> On Tue, Aug 24, 2010 at 9:48 AM, Jan Beulich <JBeulich@xxxxxxxxxx> wrote:
>>>> I thought the
>>>> solution he had was interesting: when yielding due to a spinlock,
>>>> rather than going to the back of the queue, just go behind one person.
>>>> I think an implementation of "yield_to" that might make sense in the
>>>> credit scheduler is:
>>>> * Put the yielding vcpu behind one vcpu
>>
>> Which clearly has the potential of burning more cycles without
>> allowing the vCPU to actually make progress.
>
> I think you may misunderstand; the yielding vcpu goes behind at least
> one vcpu on the runqueue, even if the next vcpu is lower priority. If
> there's another vcpu on the runqueue, the other vcpu always runs.
No, I understood it that way. What I was referring to is (as an
example) the case where two vCPU-s on the same pCPU's run queue
both yield: They will each move behind the other in the run queue in
close succession, but neither will really make progress, and neither
will really increase the likelihood that the respective lock holder
gets a chance to run.
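To make the mechanism under discussion concrete, here is a minimal
sketch of the yield-behind-one idea (all names are hypothetical; this
is not the actual credit scheduler code, which keeps per-pCPU
runqueues with credits and priorities):

#include <stddef.h>

struct vcpu_entry {
    int id;
    struct vcpu_entry *next;
};

/* On yield, insert the yielding vcpu immediately behind the head of
 * the runqueue instead of at the tail. */
static void yield_behind_one(struct vcpu_entry **runq,
                             struct vcpu_entry *v)
{
    if (*runq == NULL) {         /* empty runqueue: v just runs again */
        v->next = NULL;
        *runq = v;
        return;
    }
    v->next = (*runq)->next;     /* v goes behind exactly one entry */
    (*runq)->next = v;
}

With two spinning vCPU-s alternating at the head of such a queue,
each call merely swaps them past one another, which is exactly the
ping-pong described above.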
> I posted some scheduler patches implementing this yield a week or two
> ago, and included some numbers. The numbers were with Windows Server
> 2008, which has queued spinlocks (equivalent of ticketed spinlocks).
> The throughput remained high even when highly over-committed. So a
> simple yield does have a significant effect. In the unlikely event
> that it is scheduled again, it will simply yield again when it sees
> that it's still waiting for the spinlock.
Immediately, or after a few (hundred) spin cycles?
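For reference, the distinction I mean, as a sketch of a guest
spinlock slow path that spins a bounded number of times before
yielding (a plain test-and-set lock for brevity, the threshold value
arbitrary; SCHEDOP_yield is from Xen's public sched interface):

/* Provided by the guest's Xen hypercall layer (assumed here). */
extern long HYPERVISOR_sched_op(int cmd, void *arg);
#define SCHEDOP_yield 0        /* from xen/include/public/sched.h */

#define SPIN_THRESHOLD 1024    /* arbitrary example, not measured */

static void spin_lock_slowpath(volatile int *lock)
{
    unsigned int spins = 0;

    /* GCC builtin: atomically set *lock to 1, return the old value. */
    while (__sync_lock_test_and_set(lock, 1)) {
        if (++spins >= SPIN_THRESHOLD) {
            /* Still contended: give up the pCPU instead of burning
             * the remainder of the time slice. */
            HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
            spins = 0;
        }
    }
}

Yielding immediately on the first failed attempt would hit the
scheduler far more often than yielding only after the threshold.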
> In fact, undirected-yield is one of yield-to's competitors: I don't
> think we should accept a "yield-to" patch unless it has significant
> performance gains over undirected-yield.
This position I agree with.
>> At the risk of compromising fairness wrt other domains, or even within the
>> domain. As said above, I think it would be better to temporarily
>> merge the priorities and location in the run queue of the yielding
>> and yielded-to vCPU-s, to have the yielded-to one get the
>> better of both (with a way to revert to the original settings
>> under the control of the guest, or enforced when the borrowed
>> time quantum expires).
>
> I think doing tricks with priorities is too complicated. Complicated
> mechanisms are very difficult to predict and prone to nasty,
> hard-to-debug corner cases. I don't think it's worth exploring this
> kind of solution until it's clear that a simple solution cannot get
> reasonable performance. And I would oppose accepting any
> priority-inheritance solution into the tree unless there were
> repeatable measurements that showed that it had significant
> performance gain over a simpler solution.
And I agree with this as well. Apart from suspecting fairness issues
with your yield_to proposal (as I wrote), my point is simply that we
won't know whether a "complicated" solution outperforms a "simple"
one unless we try it.
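To be concrete about what I mean by merging, a minimal sketch (all
types and fields hypothetical; credit scheduler specifics and the
runqueue-position handling are only indicated):

struct sched_vcpu {
    int prio;          /* lower value = better priority */
    int saved_prio;    /* original priority, kept for the revert */
    int borrowed;      /* nonzero while running on donated time */
};

/* On yield_to, the target temporarily takes the better of the two
 * priorities; it would likewise take the better runqueue position. */
static void yield_to(struct sched_vcpu *self, struct sched_vcpu *target)
{
    if (!target->borrowed) {
        target->saved_prio = target->prio;
        target->borrowed = 1;
    }
    if (self->prio < target->prio)
        target->prio = self->prio;
    /* ...deschedule self and move target in the runqueue (omitted)... */
}

/* Enforced revert when the borrowed quantum expires; the guest could
 * also trigger this voluntarily, e.g. on releasing the lock. */
static void quantum_expired(struct sched_vcpu *v)
{
    if (v->borrowed) {
        v->prio = v->saved_prio;
        v->borrowed = 0;
    }
}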
Jan