WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: David Xu <davidxu06@xxxxxxxxx>
Subject: Re: [Xen-devel] determine the latency characteristics of a VM automatically
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Mon, 5 Sep 2011 15:45:38 +0100
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 05 Sep 2011 07:46:31 -0700
In-reply-to: <CAGjowiQ5bBU9cxdP0wPDTt4YUD-01_AAw7PWiKS6Re1ovp9FRQ@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <CAGjowiQ5bBU9cxdP0wPDTt4YUD-01_AAw7PWiKS6Re1ovp9FRQ@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, Sep 2, 2011 at 5:27 PM, David Xu <davidxu06@xxxxxxxxx> wrote:
> Hi George,
> Two months ago, we talked about how to reduce the scheduling latency for a
> specific VM which runs a mixed workload, where the boost mechanism cannot
> work well. I have tried some methods to reduce the scheduling latency for
> some assumed latency-sensitive VMs and made some progress on it. Now I hope
> to make it work on demand. That is to say, I hope to get the scheduler to
> determine the latency characteristics of a VM automatically. Since most
> latency-sensitive operations are initiated with an interrupt, a pending
> interrupt generally means that there is a latency-sensitive operation
> waiting to happen. I remember you said your idea was to have the scheduler
> look at the historical rate of interrupts and determine a preemption
> timeslice based on those. I know your general idea, but could you talk more
> about it? What's more, I wonder whether interrupts alone can identify the
> workload type. In my opinion, a pending interrupt indicates there is an
> operation to handle, but that operation may not be latency sensitive. Some
> common I/O operations, e.g. an http request for a web page or a file
> transfer, would also result in pending interrupts if the destination VM is
> not scheduled at the moment, yet they are not latency sensitive. Of course,
> if we can directly get some information that distinguishes latency-sensitive
> workloads from common workloads, that would be powerful and highly
> efficient. I am looking forward to your opinions and I hope I will not
> disturb your work. Thanks.

Cong,

Thanks for your interest in this.  My first comment is about
latency-sensitivity.  When I say "latency sensitive", I don't mean that
the *user* cares about the time the operation takes to complete.
(Although I would argue that the user does care about how long a web
page takes to load.)  I mean that the *algorithm* is affected by
delays in processing.  In the case of TCP for example (which will be
involved both in the http fetch and the file transfer), the throughput
will be significantly lower if the VM is not allowed to handle packets
in a timely fashion.

Regarding interrupts: the general idea was this.  Suppose a VM gets
interrupts every 6ms.  And suppose that right now the system is busy
enough that it can only get 50% of the cpu.  Ideally, we want the VM
to be able to run every time it gets an interrupt -- every 6ms.  So in
order to get 50% but still be able to run every 6ms, it needs to run
for 3ms each time.  So Xen should let it run for 3ms, then pre-empt
it, and then when it gets another interrupt, pull it to the front of
the queue and let it run again.

That would be the desired "emergent" behavior of the algorithm (that
is, how we would like the algorithm to behave on a large scale).  But
how to make a particular scheduling algorithm do that is the
challenging part, and it would depend on the algorithm.  Were
you thinking about trying to do this in credit2?

The simple formula would be:  runtime = (average interval between
interrupts) * (%age of cpu the VM can expect to get).  This might work
just fine, or it might need refinement based on experience (as
interrupts seem to me unlikely to come at nice even intervals).
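To make the arithmetic concrete, the formula can be sketched in a couple of lines (the function name and units here are mine for illustration, not anything in the Xen tree):

```python
# Hedged sketch of the timeslice formula above; names and units
# are illustrative, not taken from the Xen source.
def preemption_timeslice(avg_interrupt_interval_ms, expected_cpu_share):
    """runtime = (average interval between interrupts) * (%age of cpu)."""
    return avg_interrupt_interval_ms * expected_cpu_share

# The example above: interrupts every 6ms at a 50% cpu share -> 3ms slices.
print(preemption_timeslice(6, 0.5))  # prints 3.0
```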

The first thing to try would be to figure out how to find the average
recent interval between interrupts.  An exact but perhaps inefficient
way you could do this is keep a circular list of the last N interrupts
with a timestamp of when they happened (say, the last 8), with a
pointer to the oldest one.  Then set avg_interval = (timestamp now -
timestamp of oldest interrupt) / (N - 1), since that window spans N - 1
intervals.  Another way would be to have a
"decaying average" function, where new_avg = last_interval * p +
old_avg * (1-p).
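Both bookkeeping schemes fit in a few lines.  This is a hypothetical helper in Python rather than scheduler C code, and every name in it is mine:

```python
from collections import deque

class InterruptRateTracker:
    """Sketch of the two averaging schemes described above (illustrative
    helper, not Xen code): a window of the last N interrupt timestamps,
    and a decaying (exponentially weighted) average of intervals."""

    def __init__(self, n=8, p=0.25):
        self.timestamps = deque(maxlen=n)  # circular list of last N interrupts
        self.p = p                         # weight given to the newest interval
        self.ewma = None                   # decaying average of intervals
        self.last = None                   # timestamp of the previous interrupt

    def record(self, now):
        """Call on each interrupt with its timestamp."""
        self.timestamps.append(now)
        if self.last is not None:
            interval = now - self.last
            # decaying average: new_avg = last_interval * p + old_avg * (1-p)
            self.ewma = interval if self.ewma is None else (
                interval * self.p + self.ewma * (1 - self.p))
        self.last = now

    def window_avg(self):
        """Exact average over the window: (newest - oldest) timestamp
        divided by the number of intervals the window spans."""
        if len(self.timestamps) < 2:
            return None
        return ((self.timestamps[-1] - self.timestamps[0])
                / (len(self.timestamps) - 1))
```

With interrupts arriving at a steady 6ms, both estimates converge on 6ms; the two schemes only diverge when the interval jitters, where `p` controls how fast the decaying average forgets old behavior.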

The harder thing would be to figure out what percentage of CPU the VM
is likely to receive.  That may be a bit tricky, and will depend a lot
on which algorithm you're using.

An easier thing we might try is not setting the rate per vcpu, but per
pcpu.  That is, when we assign a vcpu to a pcpu, we add its interrupt
interval average to the pcpu interval interrupt average, and set the
timeslice for that cpu accordingly.  That would be fine when workloads
are similar, but could cause problems if the workloads are very
different.
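One way to read "add its interrupt interval average to the pcpu average" is that interrupt *rates* (1/interval) add when vcpus share a pcpu.  A sketch under that interpretation, with an admittedly naive equal-share assumption and names of my own invention:

```python
# Illustrative sketch of the per-pcpu variant: the pcpu timeslice is driven
# by the combined interrupt rate of the vcpus assigned to it.  The rate
# combination and the equal CPU share are my assumptions, not a spec.
def pcpu_timeslice(vcpu_intervals_ms):
    """Combine per-vcpu interrupt intervals into one pcpu timeslice.

    Rates add when vcpus share a pcpu, so sum 1/interval and invert,
    then apply the formula: timeslice = combined_interval * cpu_share."""
    if not vcpu_intervals_ms:
        return None
    combined_rate = sum(1.0 / i for i in vcpu_intervals_ms)  # interrupts/ms
    combined_interval = 1.0 / combined_rate
    share = 1.0 / len(vcpu_intervals_ms)  # naive equal-share assumption
    return combined_interval * share
```

Under this reading, two vcpus each interrupting every 6ms give the pcpu a combined 3ms interval and, at a 50% share each, a 1.5ms timeslice; the mixed-workload problem shows up as one fast-interrupting vcpu shrinking the slice for everyone on that pcpu.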

Thoughts?
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
