>>> On 24.08.10 at 10:20, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 24/08/2010 09:08, "George Dunlap" <dunlapg@xxxxxxxxx> wrote:
> It seems to me that Jeremy's spinlock implementation provides all the info a
> scheduler would require: vcpus trying to acquire a lock are blocked, the
> lock holder wakes just the next vcpu in turn when it releases the lock. The
> scheduler at that point may have a decision to make as to whether to run the
> lock releaser, or the new lock holder, or both, but how can the guest help
> with that when it's a system-wide scheduling decision? Obviously the guest
> would presumably like all its runnable vcpus to run all of the time!
Blocking on an unavailable lock is somewhat different imo: If the blocked
vCPU didn't exhaust its time slice, I think it is entirely valid for it
to expect not to penalize the whole VM, and rather to donate (part of)
its remaining time slice to the lock holder. That keeps other domains
unaffected, while allowing the subject domain to make better use of
its resources.
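
To make that concrete, here is a minimal sketch of what such a donation
could look like. All names (vcpu_sketch, slice_remaining_ns, borrowed_ns,
and the two helpers) are hypothetical stand-ins, not Xen's actual
scheduler interfaces:

#include <stdint.h>

struct vcpu_sketch {
    uint64_t slice_remaining_ns;   /* unused part of the current time slice */
    uint64_t borrowed_ns;          /* time donated to us by a yielding vCPU */
};

/* Called when 'yielder' blocks on a lock currently held by 'holder':
 * hand the remainder of the yielder's slice to the holder instead of
 * returning it to the system, so only the subject domain pays. */
void donate_remaining_slice(struct vcpu_sketch *yielder,
                            struct vcpu_sketch *holder)
{
    holder->borrowed_ns += yielder->slice_remaining_ns;
    yielder->slice_remaining_ns = 0;
}

/* Once the borrowed quantum has been consumed (or expires), the holder
 * falls back to its own accounting. */
void revoke_donation(struct vcpu_sketch *holder)
{
    holder->borrowed_ns = 0;
}

The point is merely that the unused part of the slice moves around within
the domain rather than back into the system-wide pool.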
>> I thought the
>> solution he had was interesting: when yielding due to a spinlock,
>> rather than going to the back of the queue, just go behind one person.
>> I think an implementation of "yield_to" that might make sense in the
>> credit scheduler is:
>> * Put the yielding vcpu behind one cpu
Which clearly has the potential to burn more cycles without
allowing the vCPU to actually make progress.
>> * If the yield-to vcpu is not running, pull it to the front within its
>> priority. (I.e., if it's UNDER, put it at the front so it runs next;
>> if it's OVER, make it the first OVER cpu.)
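
(For illustration, a rough sketch of those two placement rules on a
simplified, priority-sorted singly linked run queue; this is not the
real credit scheduler code, and all names are made up:)

#include <stddef.h>

enum prio { PRIO_UNDER, PRIO_OVER };

struct rq_vcpu {
    struct rq_vcpu *next;
    enum prio prio;
};

/* First rule: on yield, reinsert the yielder right behind the queue
 * head instead of at the tail. */
void yield_behind_one(struct rq_vcpu **head, struct rq_vcpu *yielder)
{
    if (*head == NULL || *head == yielder) {
        yielder->next = NULL;
        *head = yielder;
        return;
    }
    yielder->next = (*head)->next;
    (*head)->next = yielder;
}

/* Second rule: move the yield-to vCPU to the front of its priority
 * class - ahead of everything with the same or worse priority, but
 * behind anything with a better one (UNDER sorts before OVER). */
void pull_to_front_of_prio(struct rq_vcpu **head, struct rq_vcpu *target)
{
    struct rq_vcpu **pp;

    /* Unlink 'target' if it is already queued. */
    for (pp = head; *pp != NULL; pp = &(*pp)->next)
        if (*pp == target) {
            *pp = target->next;
            break;
        }

    /* Skip entries with strictly better priority, then insert. */
    for (pp = head; *pp != NULL && (*pp)->prio < target->prio; pp = &(*pp)->next)
        ;
    target->next = *pp;
    *pp = target;
}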
At the risk of compromising fairness wrt other domains, or even
within the domain. As said above, I think it would be better to
temporarily merge the priorities and run queue locations of the
yielding and yielded-to vCPU-s, letting the yielded-to one get the
better of both (with a way to revert to the original settings
under the control of the guest, or enforced when the borrowed
time quantum expires).
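
In sketch form (again with made-up names rather than Xen's real
scheduler structures), the borrow/revert pair could look roughly like
this:

struct sched_state {
    int      prio;          /* lower value = better priority */
    unsigned rq_pos;        /* position in the run queue, 0 = front */
};

struct borrow {
    struct sched_state saved;   /* yielded-to vCPU's own settings */
    int                active;  /* non-zero while the borrow is in effect */
};

/* Let the yielded-to vCPU take the better of the two priorities and
 * the better run queue position, remembering its own settings. */
void borrow_better(struct sched_state *target,
                   const struct sched_state *yielder,
                   struct borrow *b)
{
    b->saved  = *target;
    b->active = 1;

    if (yielder->prio < target->prio)
        target->prio = yielder->prio;
    if (yielder->rq_pos < target->rq_pos)
        target->rq_pos = yielder->rq_pos;
}

/* Reverted either under guest control or when the borrowed quantum
 * expires. */
void revert_borrow(struct sched_state *target, struct borrow *b)
{
    if (b->active) {
        *target = b->saved;
        b->active = 0;
    }
}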
The one more difficult case I see in this model is what needs to
happen when the yielding vCPU has event delivery enabled and
receives an event, making it runnable again: In that situation, the
swapping of priority and/or run queue placement might need to be
forcibly reversed immediately, not so much for fairness reasons as
to keep event servicing latency reasonable. After all, in such a
case the vCPU wouldn't immediately do whatever it intended to do
with the waited-for lock once acquired, but would rather run the
event handling code first anyway, and hence the need for boosting
the lock holder has gone away.
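
Continuing the sketch from above (the referenced types and helper are
the hypothetical ones defined there), the reversal would simply be
triggered from the event delivery path:

#include <stdbool.h>

/* Types and helper from the earlier sketch (hypothetical). */
struct sched_state;
struct borrow;
void revert_borrow(struct sched_state *target, struct borrow *b);

struct vcpu_event_flags {
    bool event_delivery_enabled;
    bool runnable;
};

/* Called when an event becomes pending for the vCPU that yielded. */
void on_event_pending(struct vcpu_event_flags *yielder,
                      struct sched_state *holder_state,
                      struct borrow *b)
{
    if (yielder->event_delivery_enabled) {
        yielder->runnable = true;
        /* The yielder will run its event handlers first anyway, so the
         * justification for boosting the lock holder is gone: undo the
         * priority/run-queue swap right away. */
        revert_borrow(holder_state, b);
    }
}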
Jan