To: "NISHIGUCHI Naoki" <nisiguti@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC][PATCH] scheduler: credit scheduler for client virtualization
From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
Date: Thu, 4 Dec 2008 12:21:14 +0000
Cc: Ian.Pratt@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, disheng.su@xxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Thu, 04 Dec 2008 04:21:41 -0800
In-reply-to: <49378C16.1040106@xxxxxxxxxxxxxx>
References: <49364960.2060101@xxxxxxxxxxxxxx> <C55BFEE2.1FCA7%keir.fraser@xxxxxxxxxxxxx> <de76405a0812030446m38290b2ex9d624a0f7d788cfc@xxxxxxxxxxxxxx> <49378C16.1040106@xxxxxxxxxxxxxx>
On Thu, Dec 4, 2008 at 7:51 AM, NISHIGUCHI Naoki
<nisiguti@xxxxxxxxxxxxxx> wrote:
>> The more accurate credit scheduling and vcpu credit "balancing" seem
>> like good ideas.  For the other changes, it's probably worth measuring
>> on a battery of tests to see what kinds of effects we get, especially
>> on network throughput.
>
> I didn't think about the battery and the performance.

I'm sorry, I used an uncommon definition of the word "battery"; I
should have been more careful. :-)

In this context, "a battery of tests" means "a combination of several
different kinds of tests."  I meant some disk-intensive tests, some
network-intensive tests, some cpu-intensive tests, and some
combination of all three.  I can run some of these, and you can make
sure that the "client" tests still work well.  It would probably be
helpful to have other people volunteer to do some testing as well,
just to make sure we have our bases covered.

> I set the next-timer to 2ms for any vcpu having "boost" credits, since every
> vcpu having "boost" credits needs to be run equally at short intervals. If
> there are vcpus having "boost" credits and the next-timer of a vcpu is set
> to 10ms, the other vcpus will have to wait for 10ms.

> At present, I am thinking that if the other vcpus don't have "boost" credits
> then we may set the next-timer to 30ms.

I see -- the current setup is good if there's only one "boosted" VM
(per cpu) at a time; but if there are two "boosted" VMs, they're back
to taking turns at 30 ms.  Your 2ms patch allows several
latency-sensitive VMs to share the "low latency" boost.  That makes
sense.  I agree with your suggestion: we can set the timer to 2ms only
if the next waiting vcpu on the queue is also BOOST.
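Just so we're sure we mean the same thing, here is a rough standalone
sketch of the decision I have in mind (the names are made up for
illustration; this isn't lifted from your patch or from sched_credit.c):

/* Sketch only: choose the next-timer length (in ms) for the vcpu we
 * are about to run, based on the priority of the vcpu behind it on
 * the runqueue. */
enum pri { PRI_UNDER, PRI_OVER, PRI_BOOST };

static int next_timer_ms(enum pri cur_pri, const enum pri *runq, int runq_len)
{
    /* Shorten the slice only when the vcpu we would hand over to is
     * itself BOOST, i.e. also latency-sensitive. */
    if (cur_pri == PRI_BOOST && runq_len > 0 && runq[0] == PRI_BOOST)
        return 2;    /* several boosted vcpus share short slices */
    return 30;       /* otherwise keep the full 30ms timeslice */
}

So a lone boosted VM keeps its full slice, and we only drop to 2ms when
a second boosted vcpu is actually queued behind it.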

> I tested the video latency measurement with the "boost" time set to 10ms.
> Unfortunately, it did not work well. As I mentioned above, the vcpu
> occasionally had to wait for 10ms.

OK, good to know.

> In my patch, the "boost" time is tuneable. How about making the default
> "boost" time 30ms, and setting a shorter "boost" time only when necessary?
> Is that acceptable?

I suspect that latency-sensitive workloads such as networking, especially
network servers that do very little computation, may also benefit from
short boost times.

> In order to lengthen the "boost" time as much as possible, I will think
> about how to compute the length of the next-timer for a vcpu that has
> "boost" credits.

If it makes things simpler, we could just stick with 10ms timeslices
when there are no waiting vcpus with BOOST priority, and 2ms when there
is a waiting BOOST vcpu.  I don't think there's a particular need to give
a VM only (say) 8 ms instead of 10, if there are no latency-sensitive
VMs waiting.

> I'll try to revise the patch.

I suggest:
* Modify the credit scheduler directly, rather than having an extra scheduler
* Break down your changes into patches that make individual changes,
i.e. (from your first post):
 + A patch to subtract credit consumed accurately
 + A patch to preserve the value of cpu credit when the vcpu is over the upper bound
 + A patch to shorten cpu time per one credit
 + A patch to balance credits of each vcpu of a domain (see the sketch below)
 + A patch to introduce BOOST credit (both Xen and tool components)
 + A patch to shorten allocated time in BOOST priority if the next
vcpu on the runqueue is also at BOOST

Then we can evaluate each change individually.
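For the "balance credits of each vcpu of a domain" item, for instance, I
picture something roughly like this (purely illustrative -- your patch
may well do it differently):

/* Sketch only: pool the credit of a domain's vcpus and share it out
 * evenly, so that one starved vcpu does not hold the whole domain back. */
static void balance_domain_credit(int *vcpu_credit, int nr_vcpus)
{
    int i, total = 0;

    if (nr_vcpus <= 0)
        return;

    for (i = 0; i < nr_vcpus; i++)
        total += vcpu_credit[i];

    for (i = 0; i < nr_vcpus; i++)
        vcpu_credit[i] = total / nr_vcpus;  /* remainder dropped for brevity */
}

Even a simple version like this is fine for a first posting; the point is
that each patch stays small enough to test on its own.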

Thanks for your work!

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel