To: "NISHIGUCHI Naoki" <nisiguti@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC][PATCH] scheduler: credit scheduler for client virtualization
From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
Date: Thu, 4 Dec 2008 12:37:14 +0000
Cc: Ian.Pratt@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, disheng.su@xxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
In-reply-to: <de76405a0812040421i15f9e87dy3bf80c6a590505e0@xxxxxxxxxxxxxx>
References: <49364960.2060101@xxxxxxxxxxxxxx> <C55BFEE2.1FCA7%keir.fraser@xxxxxxxxxxxxx> <de76405a0812030446m38290b2ex9d624a0f7d788cfc@xxxxxxxxxxxxxx> <49378C16.1040106@xxxxxxxxxxxxxx> <de76405a0812040421i15f9e87dy3bf80c6a590505e0@xxxxxxxxxxxxxx>
On Thu, Dec 4, 2008 at 12:21 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> I see -- the current setup is good if there's only one "boosted" VM
> (per cpu) at a time; but if there are two "boosted" VMs, they're back
> to taking turns at 30 ms.  Your 2ms patch allows several
> latency-sensitive VMs to share the "low latency" boost.  That makes
> sense.  I agree with your suggestion: we can set the timer to 2ms only
> if the next waiting vcpu on the queue is also BOOST.
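
Concretely, I'd imagine the decision going in csched_schedule() in
xen/common/sched_credit.c.  Here's a rough, self-contained sketch of
just the policy, with toy types and names made up for illustration
(not the real scheduler structures):

#include <stddef.h>
#include <stdio.h>

enum pri { PRI_BOOST, PRI_UNDER, PRI_OVER };    /* BOOST is highest */

#define TSLICE_MS_DEFAULT 30    /* normal credit timeslice */
#define TSLICE_MS_BOOST    2    /* shortened slice from the 2ms patch */

struct vcpu { enum pri pri; };

/* Choose the slice for 'next', peeking at the vcpu queued behind it. */
static int pick_tslice(const struct vcpu *next, const struct vcpu *after)
{
    /* Shorten the slice only when a second BOOST vcpu is waiting;
     * a lone BOOST vcpu keeps the normal 30ms slice. */
    if ( next->pri == PRI_BOOST && after != NULL && after->pri == PRI_BOOST )
        return TSLICE_MS_BOOST;
    return TSLICE_MS_DEFAULT;
}

int main(void)
{
    struct vcpu a = { PRI_BOOST }, b = { PRI_BOOST }, c = { PRI_UNDER };

    printf("BOOST behind BOOST: %dms\n", pick_tslice(&a, &b));   /*  2 */
    printf("UNDER behind BOOST: %dms\n", pick_tslice(&a, &c));   /* 30 */
    printf("BOOST alone:        %dms\n", pick_tslice(&a, NULL)); /* 30 */
    return 0;
}

That way a lone latency-sensitive vcpu still gets the full 30ms, and
we only pay for the extra context switches when two BOOST vcpus are
actually competing for the same cpu.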

There was a paper from Rice (Ongaro, Cox, and Rixner) at VEE earlier
this year about scheduling and I/O performance:
 http://www.cs.rice.edu/CS/Architecture/docs/ongaro-vee08.pdf

One of the things they noted was that if a driver domain is accepting
network packets on behalf of multiple VMs, we sometimes get the
following pattern:
* The driver domain wakes up and starts processing packets.  Because
it's in "over" (it has run through its credits), it doesn't get
boosted on wake-up (see the sketch below).
* It passes a packet to VM 1, waking it up.  VM 1 wakes in "boost",
preempting the (now lower-priority) driver domain.
* Other packets (possibly even more for VM 1) sit in the driver
domain's queue, waiting for the driver domain to get cpu time again.
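
If I'm remembering the wake-up path correctly, the asymmetry comes
from csched_vcpu_wake() promoting only vcpus that are still in
"under".  A toy paraphrase of that rule (illustrative only, not the
actual sched_credit.c code):

#include <stdio.h>

enum pri { PRI_BOOST, PRI_UNDER, PRI_OVER };

struct vcpu { enum pri pri; };

/* A vcpu that wakes while "over" keeps its low priority, so a driver
 * domain that has burned its credits never gets BOOST on wake-up. */
static void on_vcpu_wake(struct vcpu *v)
{
    if ( v->pri == PRI_UNDER )
        v->pri = PRI_BOOST;
}

int main(void)
{
    struct vcpu driver = { PRI_OVER }, guest = { PRI_UNDER };

    on_vcpu_wake(&driver);    /* stays PRI_OVER: no boost */
    on_vcpu_wake(&guest);     /* promoted to PRI_BOOST    */
    printf("driver pri=%d, guest pri=%d\n", driver.pri, guest.pri);
    return 0;
}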

In their tests, with 3 networking guests and 3 cpu-intensive guests,
this problem caused a 40% degradation in performance.  While we're
thinking about the scheduler, it might be worth seeing whether we can
solve this as well.

 -George
