WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "Su, Disheng" <disheng.su@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] RE: [RFC][PATCH 0/4] Modification of credit scheduler rev2
From: NISHIGUCHI Naoki <nisiguti@xxxxxxxxxxxxxx>
Date: Thu, 15 Jan 2009 13:42:52 +0900
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "Ian.Pratt@xxxxxxxxxxxxx" <Ian.Pratt@xxxxxxxxxxxxx>, "aviv@xxxxxxxxxxxx" <aviv@xxxxxxxxxxxx>, "keir.fraser@xxxxxxxxxxxxx" <keir.fraser@xxxxxxxxxxxxx>, "sakaia@xxxxxxxxxxxxxx" <sakaia@xxxxxxxxxxxxxx>
Delivery-date: Wed, 14 Jan 2009 20:43:52 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <0A882F4D99BBF6449D58E61AAFD7EDD603BB4AB4@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4949BC2C.4060302@xxxxxxxxxxxxxx> <BB1F052FCDB1EA468BD99786C8B1ED2C01D260D043@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <496E99B9.3010906@xxxxxxxxxxxxxx> <0A882F4D99BBF6449D58E61AAFD7EDD603BB4AB4@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (Windows/20081209)
Hi, Kevin

Tian, Kevin wrote:
> From: NISHIGUCHI Naoki
> Sent: Thursday, January 15, 2009 10:05 AM
>>> 4. issues left:
>>> a. Abrupt glitches are still generated when the QEMU emulated mouse
>>> is used and the mouse is moved quickly in guest A. With a USB
>>> mouse/keyboard passed through to guest A, there are no glitches.
>>
>> I also noticed that. Though I don't know the precise cause, I found
>> that dom0 and guest A consume a large amount of CPU time (hundreds of
>> milliseconds) in that situation. In this case, the priority of dom0
>> and guest A falls rapidly, and guest B runs until the priority of
>> dom0 and guest A becomes BOOST again. In the worst case, that takes
>> about 120 ms.
>
> I remember that Disheng once told me that BOOST only happens when a
> vcpu is woken up and its current priority is UNDER. In your case
> guest A should be in OVER after running for hundreds of ms, and it
> then has to wait long enough to become UNDER and then BOOST. If this
> is the case, your enhancement of the BOOST level seems to solve only
> part of the latency issue. Either assigning a static priority or
> adding more BOOST sources (e.g. events, interrupts) seems a more
> complete solution.

In my case, although the vcpu should be switched to another vcpu at the end of its time slice, the cpu running that vcpu doesn't reschedule for hundreds of milliseconds. I don't know why this happens. In the credit scheduler, the credit consumed by a vcpu must be subtracted, so I think it is correct that dom0 and guest A go to OVER, because my approach boosts a vcpu only within the range allowed by its weight.

I think assigning a static priority is one solution. However, it affects credit accounting, because we don't know how long a domain with a static (probably highest) priority will run.

As for adding more BOOST sources, could you explain that in more detail?

>>> b. vcpu migration. As said before, without the vcpu pinned,
>>> glitches are obvious.
>>
>> I think that this issue could be solved by adding a condition for
>> migrating the vcpu, e.g. if the vcpu has boost credit, don't migrate
>> it.
>
> Isn't that overkill? What if you already have 3 BOOST vcpus in the
> runqueue of the current cpu, while the other cpus are all running
> OVER vcpus? Boost itself is not the only determinative factor for
> migration; what you really care about is the relative priority
> system-wide.

Yes, you are right.
I'll take the runqueue of each cpu, and related conditions, into account.

Thanks for your advice.

Regards,
Naoki


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel