xen-devel

Re: [Xen-devel] [Q] about Credit Scheduler Weight

To: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [Q] about Credit Scheduler Weight
From: Emmanuel Ackaouy <ack@xxxxxxxxxxxxx>
Date: Wed, 11 Oct 2006 14:48:52 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 11 Oct 2006 06:49:53 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200610100959.k9A9xMJP014756@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
References: <200610100959.k9A9xMJP014756@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i
Hi.

In your first example, you assume there is 60ms of credit
to be assigned every 30ms. This means there are 2 physical
CPUs to schedule VCPUs onto.

Now, your domU1 has double the weight of domU2, but domU1
only has 1 VCPU, so it can only consume one physical CPU
(i.e. 30ms every 30ms of wall-clock time). DomU2 simply
consumes whatever resources are left available to it: 1
physical CPU. That is the intent of credit_xtra: to
fairly re-assign resources that aren't consumable by
certain domains.
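
For illustration, the accounting pass behaves roughly like the toy
model below. It is a sketch with made-up names (not the actual
csched_acct() code), but it reproduces the numbers from your case 1:

    #include <stdio.h>

    #define NDOMS    2
    #define CAP_MS   30    /* what one pcpu can deliver per 30ms period */
    #define TOTAL_MS 60    /* 2 pcpus x 30ms accounting period          */

    /* Toy model of the accounting pass (illustrative only): credit is
     * handed out per domain by weight and split across its vcpus; any
     * credit a vcpu cannot burn (more than one pcpu's worth) is the
     * surplus ("credit_xtra"), re-assigned by weight to the domains
     * that can still use it. */
    int main(void)
    {
        int weight[NDOMS] = { 256, 128 };  /* domU1, domU2 */
        int nvcpus[NDOMS] = { 1, 2 };
        int vcredit[NDOMS];                /* credit per vcpu of each domain */
        int d, wsum = 0, surplus = 0, open_wsum = 0;

        for (d = 0; d < NDOMS; d++)
            wsum += weight[d];

        /* Pass 1: weight-proportional share, split across the vcpus. */
        for (d = 0; d < NDOMS; d++)
            vcredit[d] = TOTAL_MS * weight[d] / wsum / nvcpus[d]; /* 40, 10 */

        /* Pass 2: cap each vcpu at CAP_MS and collect the excess. */
        for (d = 0; d < NDOMS; d++) {
            if (vcredit[d] > CAP_MS) {
                surplus += (vcredit[d] - CAP_MS) * nvcpus[d];  /* 10ms */
                vcredit[d] = CAP_MS;
            } else {
                open_wsum += weight[d];
            }
        }

        /* Pass 3: re-assign the surplus, again by weight, to the
         * domains whose vcpus were not capped. */
        for (d = 0; d < NDOMS; d++)
            if (vcredit[d] < CAP_MS)
                vcredit[d] += surplus * weight[d] / open_wsum / nvcpus[d];

        for (d = 0; d < NDOMS; d++)
            printf("domU%d: %dms per vcpu\n", d + 1, vcredit[d]); /* 30, 15 */
        return 0;
    }

Once DomU2's vcpus have burned their 15ms of credit, they keep running
at CSCHED_PRI_TS_OVER priority on the otherwise idle pcpu, which is why
the measured consumption ends up 30ms:30ms rather than 2:1.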

I don't think anything is wrong here.

However, there may be cases where pinning VCPUs does cause
some calculation problems, because a domain is then restricted
in a way independent of its number of VCPUs. I'll check
that out.

Emmanuel.

On Tue, Oct 10, 2006 at 06:58:53PM +0900, Atsushi SAKAI wrote:
> Hi, All
> 
> I have two questions and a "request for modification" about the credit
> scheduler's weight policy.
> I hope the credit scheduler's weight "correctly" reflects the CPU time
> consumed in all cases.
> I would appreciate it if you could give me an outlook or plan for the
> credit scheduler's weight policy.
> 
> N.B.
> In the following examples, I assume that CPU-intensive jobs are running
> on DomU1 and DomU2.
> 
> 
> case 1) In vcpu view (a vcpu's credit exceeds 30msec)
> 
> If a vcpu's credit is over 30msec,
> the credit above 30msec is redistributed to the other vcpus based on
> weight (credit_xtra).
> (For example, assume 50msec of credit is assigned to a target vcpu.
> After the filtering, the target vcpu keeps 30msec and the remaining
> 20msec goes to other vcpus.)
> As a result, the target vcpu cannot consume CPU time in proportion to
> its weight.
> 
> example for case 1)
> Weight=2:1 But CPUs=1:1
> 
> #pcpus=2 (pcpu:Physical CPU)
> DomU1 Weight 256 (#vcpus=1)
> DomU2 Weight 128 (#vcpus=2)
> 
> credit =60ms
> 
> (distribute credit based on weight)
> DomU1vcpu0 = 40msec
> DomU2vcpu0 = 10msec DomU2vcpu1=10msec
> (after 30ms filtering(credit_xtra))
> DomU1vcpu0 = 30msec
> DomU2vcpu0 = 15msec DomU2vcpu1=15msec
> 
> CPU consumption(=>1:1)
> DomU1 = 30msec
> DomU2 = 30msec
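
(Spelling out the arithmetic in this example: the domain weights sum to
256 + 128 = 384, so DomU1 gets 60ms x 256/384 = 40ms and DomU2 gets
60ms x 128/384 = 20ms, i.e. 10ms per vcpu. The 10ms above DomU1vcpu0's
30ms cap is then handed to DomU2's vcpus, 5ms each.)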
> 
> 
> case 2) In pcpu view w/ affinity (some pcpu's credit exceeds 30msec)
> 
> When we use affinity (vcpu-pin) and the credits of the vcpus pinned to
> a pcpu sum to more than 30msec, this is also problematic.
> (For example, pcpu0=DomU1vcpu0=DomU2vcpu0 and
> DomU1vcpu0+DomU2vcpu0 > 30msec.)
> In this case, all vcpus pinned to that pcpu run with equal weight at
> CSCHED_PRI_TS_UNDER priority, because the dispatcher schedules them
> round-robin. That means they consume the CPU equally (not weight
> based!).
> 
> Avoiding this problem is more difficult than case 1, since vcpu-pin
> supports vcpu grouping
> (which means a vcpu can select more than one pcpu, i.e. the mapping is
> NOT 1-to-1).
> If each vcpu is pinned to just one pcpu (ex. vcpu0=pcpu0, 1-to-1), it
> is easy to write a check routine for the pcpu credit.
> But with vcpu grouping as in the current xm vcpu-pin, such a check
> routine is difficult to write, I think.
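
As an aside, the 1-to-1 case really is simple: summing the pinned
vcpus' credit per pcpu exposes the oversubscription directly. A minimal
sketch (hypothetical names, not Xen code), using the case 2 numbers:

    #include <stdio.h>

    #define NPCPUS  2
    #define NVCPUS  3
    #define CAP_MS  30   /* credit one pcpu can deliver per period */

    /* Sketch of the easy 1-to-1 case described above (illustrative
     * only): with each vcpu pinned to exactly one pcpu, summing the
     * pinned vcpus' credit per pcpu immediately shows which pcpus are
     * oversubscribed. */
    int main(void)
    {
        /* case 2 numbers after the 30ms filtering */
        int credit[NVCPUS] = { 30, 15, 15 };  /* domU1v0, domU2v0, domU2v1 */
        int pin[NVCPUS]    = { 0, 0, 1 };     /* vcpu -> pcpu (1-to-1 only) */
        int sum[NPCPUS]    = { 0, 0 };
        int i;

        for (i = 0; i < NVCPUS; i++)
            sum[pin[i]] += credit[i];

        for (i = 0; i < NPCPUS; i++)
            if (sum[i] > CAP_MS)    /* pcpu0: 45ms > 30ms */
                printf("pcpu%d oversubscribed by %dms\n",
                       i, sum[i] - CAP_MS);
        return 0;
    }

With vcpu grouping, a vcpu's credit would first have to be apportioned
across every pcpu in its affinity mask before any per-pcpu sum means
anything, which is indeed where such a check gets hard.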
> 
> Is there any plan to solve this problem?
> 
> 
> example for case 2)
> Weight=2:1 But CPUs=1:3
> 
> #pcpus=2
> DomU1 Weight 256 (#vcpus=1) vcpu0=pcpu0
> DomU2 Weight 128 (#vcpus=2) vcpu0=pcpu0 vcpu1=pcpu1
> 
> credit =60ms
> (distribute credit based on weight)
> DomU1vcpu0 = 40msec
> DomU2vcpu0 = 10msec DomU2vcpu1=10msec
> (after 30ms filtering(credit_xtra))
> DomU1vcpu0 = 30msec
> DomU2vcpu0 = 15msec DomU2vcpu1=15msec
> (after round robin dispatch)
> DomU1 = 15msec (15msec cannot be used by DomU1vcpu0 because pcpu0 is
> shared w/ DomU2vcpu0)
> DomU2 = 45msec (15msec@pcpu1 is consumed by DomU2vcpu1 at
> CSCHED_PRI_TS_OVER priority)
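
(To spell out the consumption: on pcpu0 the round-robin split gives
DomU1vcpu0 and DomU2vcpu0 15ms each per 30ms period; on pcpu1,
DomU2vcpu1 runs the full 30ms, 15ms against its credit and 15ms at
OVER priority. So DomU1:DomU2 = 15:45 = 1:3 despite the 2:1 weights.)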
> 
> 
> Thanks,
> Atsushi SAKAI
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
