Re: [Xen-devel] [PATCH v3 30/47] xen/sched: add support for multiple vcpus per sched unit where missing
On 14.09.2019 10:52, Juergen Gross wrote:
> @@ -266,15 +267,16 @@ static inline void vcpu_runstate_change(
> static inline void sched_unit_runstate_change(struct sched_unit *unit,
> bool running, s_time_t new_entry_time)
> {
> - struct vcpu *v = unit->vcpu_list;
> + struct vcpu *v;
>
> - if ( running )
> - vcpu_runstate_change(v, v->new_state, new_entry_time);
> - else
> - vcpu_runstate_change(v,
> - ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
> - (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
> - new_entry_time);
> + for_each_sched_unit_vcpu ( unit, v )
> + if ( running )
> + vcpu_runstate_change(v, v->new_state, new_entry_time);
> + else
> + vcpu_runstate_change(v,
> + ((v->pause_flags & VPF_blocked) ? RUNSTATE_blocked :
> + (vcpu_runnable(v) ? RUNSTATE_runnable : RUNSTATE_offline)),
> + new_entry_time);
> }
As mentioned on v2 already, I'm having some difficulty seeing why a
function like this one (and some of the sched-if.h changes here)
couldn't be introduced with this loop right from the start.
Seeing this change, I'm also puzzled why ->new_state is consulted only
when "running" is true. Is there anything that would prevent setting
that field uniformly, and simply consuming it here in all cases?
> @@ -1031,10 +1033,9 @@ int cpu_disable_scheduler(unsigned int cpu)
> if ( cpumask_empty(&online_affinity) &&
> cpumask_test_cpu(cpu, unit->cpu_hard_affinity) )
> {
> - /* TODO: multiple vcpus per unit. */
> - if ( unit->vcpu_list->affinity_broken )
> + if ( sched_check_affinity_broken(unit) )
> {
> - /* The vcpu is temporarily pinned, can't move it. */
> + /* The unit is temporarily pinned, can't move it. */
> unit_schedule_unlock_irqrestore(lock, flags, unit);
Along these lines, wouldn't this change (and further related ones)
belong in the patch introducing sched_check_affinity_broken()?
> @@ -1851,7 +1852,7 @@ void sched_context_switched(struct vcpu *vprev, struct vcpu *vnext)
> while ( atomic_read(&next->rendezvous_out_cnt) )
> cpu_relax();
> }
> - else if ( vprev != vnext )
> + else if ( vprev != vnext && sched_granularity == 1 )
> context_saved(vprev);
> }
Would you mind helping me understand why this call is needed only
with a granularity of 1?
> --- a/xen/include/xen/sched-if.h
> +++ b/xen/include/xen/sched-if.h
> @@ -68,12 +68,32 @@ static inline bool is_idle_unit(const struct sched_unit *unit)
>
> static inline bool is_unit_online(const struct sched_unit *unit)
> {
> - return is_vcpu_online(unit->vcpu_list);
> + struct vcpu *v;
const?
> + for_each_sched_unit_vcpu ( unit, v )
> + if ( is_vcpu_online(v) )
> + return true;
> +
> + return false;
> +}
> +
> +static inline unsigned int unit_running(const struct sched_unit *unit)
> +{
> + return unit->runstate_cnt[RUNSTATE_running];
> }
Is there really going to be a user needing the return value to be a
count, not just a boolean?
> static inline bool unit_runnable(const struct sched_unit *unit)
> {
> - return vcpu_runnable(unit->vcpu_list);
> + struct vcpu *v;
const?
> + if ( is_idle_unit(unit) )
> + return true;
> +
> + for_each_sched_unit_vcpu ( unit, v )
> + if ( vcpu_runnable(v) )
> + return true;
Isn't the loop going to yield true anyway for idle units? If so, is
there a particular reason for the special-casing of idle units up
front here?
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel