
[Xen-devel] [Q] about Credit Scheduler Weight



Hi, All

I have two questions, and a "request for modification", about the credit 
scheduler weight policy.
I hope the credit scheduler weight "correctly" reflects CPU time consumption 
in all cases.
I would appreciate it if you could give me an outlook or plan for the credit 
scheduler weight policy.

N.B.
In the following examples, I assume that CPU-intensive jobs are running on 
DomU1 and DomU2.


case 1) In vcpu view (vcpu credit over 30msec)

If a vcpu's credit exceeds 30msec,
the excess over 30msec is redistributed to the other vcpus based on 
weight (credit_xtra).
(For example, assume 50msec of credit is assigned to a target vcpu.
After the filtering, the target vcpu keeps 30msec and the remaining 20msec 
goes to the other vcpus.)
This means the target vcpu cannot consume CPU time in proportion to its weight.

example for case 1)
Weight=2:1 But CPUs=1:1

#pcpus=2 (pcpu:Physical CPU)
DomU1 Weight 256 (#vcpus=1)
DomU2 Weight 128 (#vcpus=2)

credit = 60msec

(distribute credit based on weight)
DomU1vcpu0 = 40msec
DomU2vcpu0 = 10msec DomU2vcpu1=10msec
(after the 30msec cap (credit_xtra))
DomU1vcpu0 = 30msec
DomU2vcpu0 = 15msec DomU2vcpu1=15msec

CPU consumption (=> 1:1, not the weighted 2:1)
DomU1 = 30msec
DomU2 = 30msec
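
For clarity, here is a minimal standalone sketch of this accounting step in 
plain C (not the actual csched_acct() code in sched_credit.c; the structures 
and names are mine). It reproduces the numbers above: distribute by weight, 
cap each vcpu at 30msec, and hand the excess to the uncapped vcpus by weight.

#include <stdio.h>

#define CREDIT_TOTAL 60  /* msec distributed per accounting period         */
#define CREDIT_CAP   30  /* msec: one full timeslice per vcpu (the filter) */

struct dom {
    const char *name;
    int weight;
    int nr_vcpus;
    int credit;   /* msec per vcpu after distribution */
    int capped;
};

int main(void)
{
    struct dom doms[] = {
        { "DomU1", 256, 1, 0, 0 },
        { "DomU2", 128, 2, 0, 0 },
    };
    int i, weight_sum = 0, xtra = 0, uncapped_weight = 0;

    for (i = 0; i < 2; i++)
        weight_sum += doms[i].weight;

    /* Step 1: distribute the total credit by domain weight, then per vcpu. */
    for (i = 0; i < 2; i++)
        doms[i].credit = CREDIT_TOTAL * doms[i].weight / weight_sum
                         / doms[i].nr_vcpus;

    /* Step 2: cap each vcpu at 30msec and collect the excess (credit_xtra). */
    for (i = 0; i < 2; i++) {
        if (doms[i].credit > CREDIT_CAP) {
            xtra += (doms[i].credit - CREDIT_CAP) * doms[i].nr_vcpus;
            doms[i].credit = CREDIT_CAP;
            doms[i].capped = 1;
        } else {
            uncapped_weight += doms[i].weight * doms[i].nr_vcpus;
        }
    }

    /* Step 3: redistribute the excess to the uncapped vcpus, again by weight. */
    for (i = 0; i < 2; i++)
        if (!doms[i].capped)
            doms[i].credit += xtra * doms[i].weight / uncapped_weight;

    for (i = 0; i < 2; i++)   /* prints DomU1 = 30msec, DomU2 = 15msec */
        printf("%s = %dmsec per vcpu\n", doms[i].name, doms[i].credit);
    return 0;
}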


case 2) In pcpu view w/ affinity (some pcpu's credit over 30msec)

In case we are using affinity (vcpu-pin) and the credit sum of the vcpus 
pinned to a pcpu exceeds 30msec,
this is also problematic.
(For example, pcpu0 = DomU1vcpu0 + DomU2vcpu0 and 
DomU1vcpu0 + DomU2vcpu0 > 30msec.)
In this case, all vcpus pinned to that pcpu run with an equal share under 
CSCHED_PRI_TS_UNDER priority,
because the dispatcher picks the vcpus on a pcpu in round robin.
That means they consume CPU equally (not weight based!).

Avoiding this problem is more difficult than case 1, since vcpu-pin supports 
vcpu grouping
(a vcpu can be allowed on more than one pcpu, i.e. the mapping is NOT 1-to-1).
If each vcpu is pinned to exactly one pcpu (e.g. vcpu0=pcpu0, 1-to-1), a check 
routine for the pcpu credit is easy to write; a rough sketch follows.
But with vcpu grouping as in the current xm vcpu-pin, such a check routine is 
difficult to write, I think.
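
For reference, a minimal standalone sketch of the 1-to-1 check (plain C; the 
structures and names are mine, not real Xen interfaces): sum the credit of the 
vcpus pinned to each pcpu and flag any pcpu holding more than one 30msec 
timeslice.

#include <stdio.h>

#define NR_PCPUS   2
#define NR_VCPUS   3
#define CREDIT_CAP 30   /* msec: one full timeslice per accounting period */

struct vcpu_info {
    int pinned_pcpu;    /* strict 1-to-1 pinning assumed here */
    int credit;         /* msec, after weight-based distribution */
};

int main(void)
{
    /* The case 2 layout below: DomU1vcpu0 and DomU2vcpu0 share pcpu0. */
    struct vcpu_info vcpus[NR_VCPUS] = {
        { 0, 30 },  /* DomU1vcpu0 -> pcpu0 */
        { 0, 15 },  /* DomU2vcpu0 -> pcpu0 */
        { 1, 15 },  /* DomU2vcpu1 -> pcpu1 */
    };
    int sum[NR_PCPUS] = { 0 };
    int i;

    /* Sum the pinned credit per pcpu: trivial when the mapping is 1-to-1. */
    for (i = 0; i < NR_VCPUS; i++)
        sum[vcpus[i].pinned_pcpu] += vcpus[i].credit;

    /* Any excess over one timeslice can never be consumed on that pcpu. */
    for (i = 0; i < NR_PCPUS; i++)
        if (sum[i] > CREDIT_CAP)
            printf("pcpu%d: %dmsec pinned, %dmsec can never be consumed\n",
                   i, sum[i], sum[i] - CREDIT_CAP);
    return 0;   /* prints: pcpu0: 45msec pinned, 15msec can never be consumed */
}

With vcpu grouping, a vcpu's credit cannot simply be attributed to one pcpu, 
which is why this check no longer works as written.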

Is there any plan to solve this problem?


example for case 2)
Weight=2:1 But CPUs=1:3

#pcpus=2
DomU1 Weight 256 (#vcpus=1) vcpu0=pcpu0
DomU2 Weight 128 (#vcpus=2) vcpu0=pcpu0 vcpu1=pcpu1

credit = 60msec
(distribute credit based on weight)
DomU1vcpu0 = 40msec
DomU2vcpu0 = 10msec DomU2vcpu1=10msec
(after the 30msec cap (credit_xtra))
DomU1vcpu0 = 30msec
DomU2vcpu0 = 15msec DomU2vcpu1=15msec
(after round robin dispatch)
DomU1 = 15msec (the other 15msec cannot be used by DomU1vcpu0 because pcpu0 is 
shared with DomU2vcpu0)
DomU2 = 45msec (15msec on pcpu1 is consumed by DomU2vcpu1 under 
CSCHED_PRI_TS_OVER priority)
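
The consumption above follows from simple arithmetic, since round robin gives 
every runnable vcpu on a pcpu an equal share of that pcpu regardless of its 
remaining credit. A standalone sketch (hypothetical, not the Xen dispatcher):

#include <stdio.h>

#define WINDOW 30   /* msec of wall clock per pcpu per accounting period */

int main(void)
{
    /* CPU-bound vcpus, so every pinned vcpu is always runnable. */
    int vcpus_on_pcpu0 = 2;  /* DomU1vcpu0, DomU2vcpu0 */
    int vcpus_on_pcpu1 = 1;  /* DomU2vcpu1 */

    /* Round robin: each runnable vcpu gets an equal slice of its pcpu. */
    int domu1 = WINDOW / vcpus_on_pcpu0;                            /* 15 */
    int domu2 = WINDOW / vcpus_on_pcpu0 + WINDOW / vcpus_on_pcpu1;  /* 45 */

    printf("DomU1 = %dmsec, DomU2 = %dmsec (weight 2:1, CPU 1:3)\n",
           domu1, domu2);
    return 0;
}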


Thanks,
Atsushi SAKAI






 

