On Mon, Oct 15, 2007 at 01:26:06PM +0100, George Dunlap wrote:
> Part of the problem is that for the credit scheduler, the "priority"
> is used a bit differently. It changes over time, and it bears no
> fundamental relationship to how important the work is; it's
> just a mechanism for implementing time allocations. (And a very clever
> way, I might add.)
>
> It's clear that "yield-I-really-mean-it" is useful for smp
> synchronization issues (like yielding when waiting for a spinlock held
> by scheduled-out vcpus, or waiting for a scheduled-out processor to
> ACK an IPI). But I can't really think of a situation where
> "yield-to-other-cpus-that-haven't-used-all-their-credits-yet" is
> particularly useful. Can you think of an example?
>
> Perhaps a better implementation of "yield-I-really-mean-it" would be:
> * Reduce the priority only if there are no vcpus of the same priority
> in the queue; and perhaps, only if there are no vcpus in the queue and
> no work to steal.
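The demotion rule George proposes could be sketched roughly like this. This is not actual Xen scheduler code; priorities are plain ints here (lower value = more urgent) and all names are invented for illustration:

```c
/* Hedged sketch of the proposed "yield-I-really-mean-it" rule:
 * demote the yielder's priority only when no runnable vcpu of the
 * same priority is already waiting in the queue. */
#include <assert.h>
#include <stddef.h>

/* Returns the priority the yielding vcpu should run at next, given
 * the priorities of the other runnable vcpus in prios[0..n-1]. */
static int yield_new_prio(int self_prio, const int *prios, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (prios[i] == self_prio)
            /* A peer at our priority will be picked anyway; keep our
             * priority so we round-robin rather than starve ourselves. */
            return self_prio;

    /* Nothing at our priority is queued: demote one level so that
     * lower-priority runnable work (or work stolen from another
     * physical cpu) gets a chance to run. */
    return self_prio + 1;
}
```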
Isn't this the opposite of what our case needs? That is, we yield, and
we want to schedule another VCPU, whether it's the same priority or not.
> > Arguably, a number of things need to be done in
> > the Xen scheduler and synchronization primitives to improve
> > the performance of SMP guests. It may be worthwhile to have
> > a generic discussion about that on top of the specific problem
> > you're encountering.
>
> Here are some random ideas:
> * Expose to the guest, via the shared-info page, which vcpus are
> actively scheduled or not.
That information is already available via the runstate (although we don't
use it, and it wouldn't help us here: the problem is that the 'other'
VCPU doesn't get scheduled when we yield, not that we don't know whether
to yield).
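For what it's worth, the spin-vs-yield decision that such information would feed could look something like the following. The runstate layout here is a stand-in, not the real Xen runstate_info structure, and worth_spinning() is an invented name:

```c
/* Hedged sketch of consulting per-vcpu run state before spinning on a
 * lock: if the lock holder's vcpu is not currently on a physical cpu,
 * spinning is wasted time and the guest should yield instead. */
#include <assert.h>
#include <stdbool.h>

enum runstate { RUNSTATE_RUNNING, RUNSTATE_RUNNABLE, RUNSTATE_BLOCKED };

/* One entry per vcpu, imagined to be mirrored from shared info. */
static enum runstate vcpu_runstate[4];

/* Keep spinning for a lock held by 'holder', or give up and yield? */
static bool worth_spinning(unsigned holder)
{
    /* A descheduled holder cannot release the lock until it runs
     * again, so only spin while the holder is actually running. */
    return vcpu_runstate[holder] == RUNSTATE_RUNNING;
}
```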
> * Implement some kind of a yield or block primitive, like:
> + yield to a specific vcpu (i.e., the one holding the lock you want)
> + block with a vcpu mask. The vcpu will then be blocked until each of
> the vcpus in the mask has been scheduled at least once.
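The bookkeeping for that block-with-a-mask primitive seems simple enough to sketch: the waiter records the mask and is released once every vcpu in it has been scheduled in at least once. Structure and function names below are invented for illustration, not a proposed interface:

```c
/* Hedged sketch of the "block with a vcpu mask" semantics: the
 * caller blocks until each vcpu in the mask has been scheduled at
 * least once. Models only the bookkeeping, not the actual blocking. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct mask_waiter {
    uint64_t pending;   /* vcpus we are still waiting on */
};

/* Called when the waiter blocks. */
static void waiter_init(struct mask_waiter *w, uint64_t mask)
{
    w->pending = mask;
}

/* Called by the scheduler each time a vcpu is scheduled in.
 * Returns true once the waiter should be unblocked. */
static bool waiter_vcpu_scheduled(struct mask_waiter *w, unsigned vcpu)
{
    w->pending &= ~(UINT64_C(1) << vcpu);
    return w->pending == 0;
}
```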
Possible, if the scheduler can't instead be fixed along the lines
discussed above.
regards,
john
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel