Re: [Xen-devel] [Q] about Credit Scheduler Weight
Hi.
In your first example, you assume there is 60ms of credit
to be assigned every 30ms. This means there are 2 physical
CPUs to schedule VCPUs onto.
Now, your domU1 has double the weight of domU2, but domU1
only has 1 VCPU, so it can consume at most one physical CPU
(i.e. 30ms every 30ms of wall-clock time). DomU2 simply
consumes whatever resources are left available to it: 1
physical CPU. That is the intent of credit_xtra: to
fairly re-assign resources that aren't consumable by
certain domains.
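
For concreteness, here is a minimal userspace sketch of that
accounting (hypothetical code, not the actual csched_acct() in
sched_credit.c); under those simplifying assumptions it reproduces
the 30/15/15 msec split from your first example:

#include <stdio.h>

#define PERIOD_MS 30   /* one vcpu can burn at most 30ms of credit per period */

struct dom {
    const char *name;
    int weight;
    int nvcpus;
    int credit;        /* resulting per-vcpu credit for this period */
};

/* Hand out 'total' ms of credit by domain weight, then cap each
 * vcpu at PERIOD_MS and give the surplus (credit_xtra) to the
 * uncapped domains, again in proportion to weight. */
static void account(struct dom *d, int n, int total)
{
    int i, wsum = 0, surplus = 0, uncapped_wsum = 0;

    for (i = 0; i < n; i++)
        wsum += d[i].weight;

    for (i = 0; i < n; i++) {
        d[i].credit = total * d[i].weight / wsum / d[i].nvcpus;
        if (d[i].credit > PERIOD_MS) {
            surplus += (d[i].credit - PERIOD_MS) * d[i].nvcpus;
            d[i].credit = PERIOD_MS;
        } else {
            uncapped_wsum += d[i].weight;
        }
    }

    for (i = 0; i < n; i++)
        if (d[i].credit < PERIOD_MS && uncapped_wsum > 0)
            d[i].credit += surplus * d[i].weight / uncapped_wsum / d[i].nvcpus;
}

int main(void)
{
    struct dom d[] = {
        { "DomU1", 256, 1, 0 },
        { "DomU2", 128, 2, 0 },
    };

    account(d, 2, 60);   /* 2 pcpus x 30ms of credit per period */
    printf("%s: %dms/vcpu, %s: %dms/vcpu\n",
           d[0].name, d[0].credit, d[1].name, d[1].credit);
    return 0;
}

Run it and you get DomU1: 30ms/vcpu, DomU2: 15ms/vcpu, i.e. equal
total CPU consumption (30ms each) despite the 2:1 weights.
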
I don't think anything is wrong here.
However, there may be cases where pinning VCPUs does cause
some calculation problems, because a domain is then restricted
in a way independent of its number of VCPUs. I'll check
that out.
Emmanuel.
On Tue, Oct 10, 2006 at 06:58:53PM +0900, Atsushi SAKAI wrote:
> Hi, All
>
> I have two questions and a "request for modification" about the credit
> scheduler weight policy.
> I hope the credit scheduler weight "correctly" reflects the CPU time
> consumed by each domain in all cases.
> I would appreciate it if you could give me an outlook or plan for the
> credit scheduler weight policy.
>
> N.B.
> In the following examples, I assume that CPU-intensive jobs are running
> on DomU1 and DomU2.
>
>
> case 1) In vcpu view (vcpu credit is over 30msec)
>
> If a vcpu's credit is over 30msec,
> the amount over 30msec is redistributed to the other vcpus based on
> weight (credit_xtra).
> (For example, assume 50msec of credit is assigned to a target vcpu.
> After the filtering, the target vcpu keeps 30msec and the remaining
> 20msec goes to the other vcpus.)
> As a result, the target vcpu cannot consume CPU time in proportion to
> its weight.
>
> example for case 1)
> Weight = 2:1, but CPU consumption = 1:1
>
> #pcpus=2 (pcpu: physical CPU)
> DomU1 Weight 256 (#vcpus=1)
> DomU2 Weight 128 (#vcpus=2)
>
> credit = 60msec (2 pcpus x 30msec)
>
> (distribute credit based on weight)
> DomU1vcpu0 = 40msec
> DomU2vcpu0 = 10msec DomU2vcpu1=10msec
> (after the 30msec filtering (credit_xtra))
> DomU1vcpu0 = 30msec
> DomU2vcpu0 = 15msec DomU2vcpu1=15msec
>
> CPU consumption (=> 1:1)
> DomU1 = 30msec
> DomU2 = 30msec
>
>
> case 2) In pcpu view w/ affinity (some pcpu's credit sum is over 30msec)
>
> When we use affinity (vcpu-pin) and the sum of the credits of the vcpus
> pinned to a pcpu is over 30msec,
> this is also problematic.
> (For example, pcpu0 hosts DomU1vcpu0 and DomU2vcpu0, and
> DomU1vcpu0 + DomU2vcpu0 > 30msec.)
> In this case, all vcpus pinned to that pcpu run with equal weight under
> CSCHED_PRI_TS_UNDER priority, because the dispatcher schedules the
> vcpus round-robin.
> That means they consume CPU equally (not weight-based!).
>
> Avoiding this problem is more difficult than in case 1, since vcpu-pin
> supports vcpu grouping
> (which means a vcpu can be pinned to more than one pcpu, i.e. the
> mapping is NOT 1-to-1).
> If each vcpu is pinned to exactly one pcpu (e.g. vcpu0=pcpu0, 1-to-1),
> it is easy to write a check routine for pcpu credit; a sketch follows
> below.
> But with vcpu grouping, as the current xm vcpu-pin allows, such a check
> routine is difficult to write, I think.
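>
> As an illustration, a minimal sketch of such a check routine for the
> 1-to-1 case (hypothetical userspace code, not from the scheduler; the
> names are mine):
>
> #include <stdio.h>
>
> #define PERIOD_MS 30
> #define NR_PCPUS  2
>
> struct vcpu { const char *name; int pcpu; int credit; };
>
> /* Sum the credit of the vcpus pinned to each pcpu and warn when a
>  * pcpu is over-committed, i.e. the pinned credit cannot all be
>  * consumed there.  With vcpu grouping (a pin mask covering several
>  * pcpus) no single pcpu owns a vcpu's credit, so this simple sum
>  * no longer works. */
> static void check_pcpu_credit(const struct vcpu *v, int n)
> {
>     int sum[NR_PCPUS] = { 0 };
>     int i;
>
>     for (i = 0; i < n; i++)
>         sum[v[i].pcpu] += v[i].credit;
>
>     for (i = 0; i < NR_PCPUS; i++)
>         if (sum[i] > PERIOD_MS)
>             printf("pcpu%d over-committed: %dmsec pinned > %dmsec\n",
>                    i, sum[i], PERIOD_MS);
> }
>
> int main(void)
> {
>     /* credits after the 30msec filtering, as in the example below */
>     const struct vcpu v[] = {
>         { "DomU1vcpu0", 0, 30 },
>         { "DomU2vcpu0", 0, 15 },
>         { "DomU2vcpu1", 1, 15 },
>     };
>
>     check_pcpu_credit(v, 3);   /* prints: pcpu0 over-committed: 45msec... */
>     return 0;
> }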
>
> Is there any plan to solve this problem?
>
>
> example for case 2)
> Weight = 2:1, but CPU consumption = 1:3
>
> #pcpus=2
> DomU1 Weight 256 (#vcpus=1) vcpu0=pcpu0
> DomU2 Weight 128 (#vcpus=2) vcpu0=pcpu0 vcpu1=pcpu1
>
> credit = 60msec (2 pcpus x 30msec)
> (distribute credit based on weight)
> DomU1vcpu0 = 40msec
> DomU2vcpu0 = 10msec DomU2vcpu1=10msec
> (after the 30msec filtering (credit_xtra))
> DomU1vcpu0 = 30msec
> DomU2vcpu0 = 15msec DomU2vcpu1=15msec
> (after round-robin dispatch)
> DomU1 = 15msec (the other 15msec cannot be used by DomU1vcpu0 because
> pcpu0 is shared with DomU2vcpu0)
> DomU2 = 45msec (the extra 15msec on pcpu1 is consumed by DomU2vcpu1
> under CSCHED_PRI_TS_OVER priority)
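>
> (To make the round-robin effect explicit, here is a toy model of one
> 30msec period; hypothetical code, not the scheduler's actual dispatch
> loop:)
>
> #include <stdio.h>
>
> #define PERIOD_MS 30
>
> /* Every runnable vcpu pinned to a pcpu gets an equal round-robin
>  * share of that pcpu, independent of its weight; a vcpu alone on a
>  * pcpu keeps running past its credit at CSCHED_PRI_TS_OVER. */
> static void share_pcpu(const char *pcpu, const char *vcpus[], int n)
> {
>     int i;
>
>     for (i = 0; i < n; i++)
>         printf("%s: %s runs %dmsec\n", pcpu, vcpus[i], PERIOD_MS / n);
> }
>
> int main(void)
> {
>     const char *p0[] = { "DomU1vcpu0", "DomU2vcpu0" };
>     const char *p1[] = { "DomU2vcpu1" };
>
>     share_pcpu("pcpu0", p0, 2);   /* 15msec each, weights ignored */
>     share_pcpu("pcpu1", p1, 1);   /* 30msec: 15 UNDER + 15 OVER   */
>     return 0;
> }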
>
>
> Thanks,
> Atsushi SAKAI