WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

RE: [Xen-devel] RE: The caculation of the credit in credit_scheduler

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] RE: The caculation of the credit in credit_scheduler
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Wed, 10 Nov 2010 18:53:41 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Delivery-date: Wed, 10 Nov 2010 02:55:55 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4CDA35C5.7060806@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <789F9655DD1B8F43B48D77C5D30659732FD0A5C9@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CD95A22.2090902@xxxxxxxxxxxxx> <789F9655DD1B8F43B48D77C5D30659732FD7DF70@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CDA31A8.4050308@xxxxxxxxxxxxxx> <789F9655DD1B8F43B48D77C5D30659732FD7E0EC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CDA35C5.7060806@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcuAnQ9dO+ql1EhTTxSI/wdWgMCOuQAKGwKQ
Thread-topic: [Xen-devel] RE: The caculation of the credit in credit_scheduler
Yes, this works. Thanks very much!

--jyh

>-----Original Message-----
>From: Juergen Gross [mailto:juergen.gross@xxxxxxxxxxxxxx]
>Sent: Wednesday, November 10, 2010 2:04 PM
>To: Jiang, Yunhong
>Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>Subject: Re: [Xen-devel] RE: The caculation of the credit in credit_scheduler
>
>On 11/10/10 06:55, Jiang, Yunhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Juergen Gross
>>> Sent: Wednesday, November 10, 2010 1:46 PM
>>> To: Jiang, Yunhong
>>> Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, 
>>> Xiantao
>>> Subject: Re: [Xen-devel] RE: The caculation of the credit in 
>>> credit_scheduler
>>>
>>> On 11/10/10 03:39, Jiang, Yunhong wrote:
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
>>>>> Sent: Tuesday, November 09, 2010 10:27 PM
>>>>> To: Jiang, Yunhong
>>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>>>>> Subject: Re: The caculation of the credit in credit_scheduler
>>>>>
>>>>> On 05/11/10 07:06, Jiang, Yunhong wrote:
>>>>>> The reason is how the credit is calculated. Although the 3 HVM domains
>>>>>> are pinned to 2 PCPUs and share those 2 CPUs, they will each get 2*300
>>>>>> credits at credit accounting time. That means the I/O-intensive HVM
>>>>>> domain will never be under credit, so it will preempt the CPU-intensive
>>>>>> one whenever it is boosted (i.e. after an I/O access to QEMU); it is set
>>>>>> to TS_UNDER only at tick time, and then boosted again.
>>>>>
>>>>> I suspect that the real reason you're having trouble is that pinning and
>>>>> the credit mechanism don't work very well together.  Instead of pinning,
>>>>> have you tried using the cpupools interface to make a 2-cpu pool to put
>>>>> the VMs into?  That should allow the credit to be divided appropriately.
>>>>
>>>> I had a quick look at the code, and it seems the cpu pool would not help
>>>> in this situation. The cpu pool only governs which CPUs a domain can be
>>>> scheduled on, not the credit calculation.
>>>
>>> With cpupools you avoid the pinning. This will result in better credit
>>> calculations.
>>
>> My system is busy with testing, so I can't run the experiment now, but I'm
>> not sure the cpupool will help the credit calculation.
>>
>> From the code in csched_acct() in "common/sched_credit.c", credit_fair is
>> calculated as follows, and credit_fair's initial value is calculated by
>> summing all pcpus' credits, without considering the cpu pool.
>>
>>          credit_fair = ( ( credit_total
>>                            * sdom->weight
>>                            * sdom->active_vcpu_count )
>>                          + (weight_total - 1)
>>                        ) / weight_total;
>>
>> Or did I miss anything?
>
>The scheduler sees only the pcpus and domains in the pool, as it is
>cpupool-specific.
>BTW: the credit scheduler's problem with cpu pinning was the main reason for
>introducing cpupools.
>
>
>Juergen
>
>--
>Juergen Gross                 Principal Developer Operating Systems
>TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
>Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
>Domagkstr. 28                           Internet: ts.fujitsu.com
>D-80807 Muenchen         Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel