This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: performance of credit2 on hybrid workload

To: David Xu <davidxu06@xxxxxxxxx>
Subject: Re: [Xen-devel] Re: performance of credit2 on hybrid workload
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Wed, 8 Jun 2011 11:36:36 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 08 Jun 2011 03:37:10 -0700
In-reply-to: <BANLkTikGiK+HFAgzVF1pPObwGi55FeAW-g@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <BANLkTik9+a64cm6YPgnL0sTaXbEWCqYJcA@xxxxxxxxxxxxxx> <1306340309.21026.8524.camel@elijah> <BANLkTi=57gDitoq7-T7n9Zh0_ZrCMuxfRg@xxxxxxxxxxxxxx> <1306401493.21026.8526.camel@elijah> <BANLkTikU0KqN_yd1J3_HtCaAN0LrF6qBXQ@xxxxxxxxxxxxxx> <BANLkTimaUs=pnBV3sEd0c_KsNeEF4SjSDQ@xxxxxxxxxxxxxx> <BANLkTikGiK+HFAgzVF1pPObwGi55FeAW-g@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, Jun 7, 2011 at 8:28 PM, David Xu <davidxu06@xxxxxxxxx> wrote:
> Hi George,
> Could you share some ideas about how to addressed the  "mixed workload"
> problem,  where a single VM does both
> cpu-intensive and latency-sensitive workloads, even though you haven't
> implemented it yet?  I am also working on it, maybe I can try some methods
> and give you feedback. Thanks.

Well, the main thing to remember is that you can't give the VM any
*more* time.  The amount of time it's allowed is defined by the
scheduler parameters (and by the other VMs running).  So all you can do
is change *when* the VM gets its time.  What you want the scheduler
to do is give the VM shorter timeslices *so that* it can get time more
frequently.

For example, the credit1 scheduler will let a VM burn through 30ms of
credit.  That means if its "fair share" is (say) 50%, then it has to
wait at least 30ms before being allowed to run again in order to
maintain fairness.  If its "fair share" is 33%, then the VM has to
wait at least 60ms.  If the scheduler were to preempt it after 5ms,
then the VM would only have to be delayed for 5ms or 10ms,
respectively; and if it were preempted after 1ms, it would only have
to be delayed 1ms or 2ms.
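The arithmetic above can be sketched in a few lines (this is just an illustration of the fairness math, not Xen code; the function name is mine):

```python
# Minimal sketch of the fairness delay described above: a VM that runs
# for `timeslice_ms` at a long-run fair share `fair_share` must then
# wait until its credit catches back up before it may run again.

def min_delay_ms(timeslice_ms, fair_share):
    """Wait required after running timeslice_ms so the VM's long-run
    CPU fraction stays at fair_share."""
    # Running t ms out of every (t + wait) ms gives fraction t / (t + wait);
    # solving t / (t + wait) = share  ->  wait = t * (1 - share) / share
    return timeslice_ms * (1 - fair_share) / fair_share

print(min_delay_ms(30, 0.50))  # 30 ms wait, the 50% example above
print(min_delay_ms(30, 1 / 3))  # 60 ms wait, the 33% example
print(min_delay_ms(5, 0.50))   # 5 ms
print(min_delay_ms(1, 1 / 3))   # 2 ms
```

Shrinking the timeslice shrinks the worst-case delay proportionally, which is exactly the lever being discussed.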

So the real key to giving a VM with a mixed workload better latency
characteristics is not to wake it up sooner, but to preempt it sooner.

The problem is, of course, that preempting workloads which are *not*
latency sensitive too soon adds scheduling overhead, and reduces cache
effectiveness.  So the question becomes, how do I know how long to let
a VM run for?

One solution would be to introduce a scheduling parameter that will
tell the scheduler how long to set the preemption timer for.  Then if
an administrator knows he's running a mixed-workload VM, he can
shorten it down; or if he knows he's running a cpu-cruncher, he can
make it longer.  This would also be useful in verifying the logic of
"shorter timeslices -> less latency for mixed workloads"; i.e., we
could vary this number and observe the effects.
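A toy model of that knob might look like the following (all names here are illustrative assumptions, not actual Xen scheduler fields or tools):

```python
# Hypothetical sketch of the per-VM parameter described above: when
# arming the preemption timer, the scheduler consults a per-domain
# timeslice setting and falls back to a global default.

DEFAULT_TSLICE_MS = 30  # credit1's historical 30 ms slice

class Domain:
    def __init__(self, name, tslice_ms=None):
        self.name = name
        self.tslice_ms = tslice_ms  # None -> use the global default

def preemption_timer_ms(dom):
    """Length of slice to grant before forcibly preempting dom."""
    return dom.tslice_ms if dom.tslice_ms is not None else DEFAULT_TSLICE_MS

mixed = Domain("web-and-batch", tslice_ms=1)  # admin knows it's mixed
cruncher = Domain("hpc", tslice_ms=100)       # known CPU-cruncher
print(preemption_timer_ms(mixed))     # 1
print(preemption_timer_ms(cruncher))  # 100
```

Sweeping the per-domain value while measuring latency would be one way to test the "shorter timeslices -> less latency" hypothesis experimentally.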

One issue with adding this to the credit1 scheduler is that credit1
has only 3 priorities (BOOST, UNDER, and OVER), and scheduling is
round-robin within each priority.  It's a known issue with round-robin
scheduling that tasks which yield (or are preempted soon) are
discriminated against compared to tasks which use up their full
timeslice (or are preempted later).  So the results may not be
representative.

The next step would be to try to get the scheduler to determine the
latency characteristics of a VM automatically.  The key observation
here is that most of the time, latency-sensitive operations are
initiated with an interrupt; or to put it the other way, a pending
interrupt generally means that there is a latency sensitive operation
waiting to happen.  My idea was to have the scheduler look at the
historical rate of interrupts and determine a preemption timeslice
based on those, such that on average, the VM's credit would be enough
to run just when the next interrupt arrived for it to handle.
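One way the interrupt-rate idea might be sketched (an assumption on my part, not an actual implementation): keep an exponential moving average of the gap between a VM's interrupts, and pick a slice such that the fairness delay ends just as the next interrupt is expected. Running for t = share * interval costs a delay of t * (1 - share) / share, so t + delay = interval, i.e. credit is back exactly on time.

```python
# Sketch of the heuristic above: estimate a VM's interrupt arrival
# rate and size the preemption timeslice so its credit replenishes
# roughly in step with the next expected interrupt.

class IrqRateEstimator:
    def __init__(self, alpha=0.2, initial_interval_ms=30.0):
        self.alpha = alpha                      # EMA smoothing factor
        self.interval_ms = initial_interval_ms  # estimated gap between irqs
        self.last_irq_ms = None

    def on_interrupt(self, now_ms):
        """Fold the latest inter-interrupt gap into the moving average."""
        if self.last_irq_ms is not None:
            gap = now_ms - self.last_irq_ms
            self.interval_ms += self.alpha * (gap - self.interval_ms)
        self.last_irq_ms = now_ms

    def timeslice_ms(self, fair_share, floor_ms=0.5, cap_ms=30.0):
        # Run for share * interval: the resulting fairness delay of
        # interval * (1 - share) then ends just as the next irq arrives.
        t = fair_share * self.interval_ms
        return max(floor_ms, min(cap_ms, t))

est = IrqRateEstimator()
for t in (0, 10, 20, 30, 40):  # a VM taking an interrupt every 10 ms
    est.on_interrupt(t)
print(round(est.timeslice_ms(fair_share=0.5), 1))  # shrinks toward 5 ms
```

The floor and cap keep a pathological interrupt storm (or silence) from driving the slice to something unusable; a CPU-bound VM with no interrupts simply keeps the default slice.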

It occurs to me now that after a certain point, interrupts themselves
become inefficient and drivers sometimes go into "polling" mode, which
would look to the scheduler the same as cpu-bound.  Hmm... bears
thinking about. :-)

Anyway, that's where I got in my thinking on this. Let me know what
you think. :-)

