Re: [Xen-devel] [PATCH v4 20/46] xen: let vcpu_create() select processor
On 27.09.2019 09:00, Juergen Gross wrote:
> Today there are two distinct scenarios for vcpu_create(): either for
> creation of idle-domain vcpus (vcpuid == processor) or for creation of
> "normal" domain vcpus (including dom0), where the caller selects the
> initial processor in a round-robin fashion among the allowed processors
> ("allowed" being based on cpupool and affinities).
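For reference, the caller-side selection being replaced follows this rough
pattern (a sketch only; "allowed" stands for whatever restricted mask the
respective caller uses, e.g. cpu_online_map or dom0_cpus):

    unsigned int i, cpu = 0;

    for ( i = 1; i < d->max_vcpus; i++ )
    {
        /* Advance to the next allowed pcpu, wrapping around. */
        cpu = cpumask_cycle(cpu, allowed);
        /* Old signature: the caller hands the chosen pcpu to vcpu_create(). */
        vcpu_create(d, i, cpu);
    }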
>
> Instead of passing the initial processor to vcpu_create() and on to
> sched_init_vcpu(), let sched_init_vcpu() do the processor selection.
> To support dom0 vcpu creation, use the node_affinity of the domain as
> a base for selecting the processors. User domains will initially have
> all nodes set, so this is no change in behavior compared to today. In
> theory this is not guaranteed, as vcpus are created only when
> XEN_DOMCTL_max_vcpus is called, but this call is going to be removed
> in the future and the toolstack doesn't call XEN_DOMCTL_setnodeaffinity
> before calling XEN_DOMCTL_max_vcpus.
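A rough sketch of how such an in-scheduler selection could look (illustrative
only; the helper name and the exact details are assumptions, not necessarily
what the patch implements):

    /* Sketch: pick an initial pcpu for a new vcpu inside sched_init_vcpu(). */
    static unsigned int sched_pick_initial_cpu(const struct vcpu *v)
    {
        const struct domain *d = v->domain;
        cpumask_t mask;
        nodeid_t node;

        /* Start from the cpus covered by the domain's node_affinity ... */
        cpumask_clear(&mask);
        for_each_node_mask ( node, d->node_affinity )
            cpumask_or(&mask, &mask, &node_to_cpumask(node));

        /* ... restricted to the cpus of the domain's cpupool. */
        cpumask_and(&mask, &mask, cpupool_domain_cpumask(d));
        if ( cpumask_empty(&mask) )
            cpumask_copy(&mask, cpupool_domain_cpumask(d));

        /* Spread vcpus over the resulting mask by vcpu id. */
        return cpumask_cycle(v->vcpu_id % cpumask_weight(&mask), &mask);
    }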
>
> To be able to use const struct domain * make cpupool_domain_cpumask()
> take a const domain pointer, too.
>
> A further simplification is possible by having a single function for
> creating the dom0 vcpus with vcpu_id > 0 and doing the required pinning
> for all vcpus after that. This allows making sched_set_affinity()
> private to schedule.c and switching it to sched_units easily. Note that
> this functionality is x86 only.
>
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> Acked-by: Julien Grall <julien.grall@xxxxxxx>
x86 parts:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with a couple of nits though (I'll see about taking care of them
while committing):
> @@ -1940,7 +1940,7 @@ static void __init find_gnttab_region(struct domain *d,
>
> static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
> {
> - int i, cpu;
> + int i;
This would better have become unsigned int, or else ...
> @@ -2003,12 +2003,11 @@ static int __init construct_domain(struct domain *d, struct kernel_info *kinfo)
> }
> #endif
>
> - for ( i = 1, cpu = 0; i < d->max_vcpus; i++ )
> + for ( i = 1; i < d->max_vcpus; i++ )
> {
> - cpu = cpumask_cycle(cpu, &cpu_online_map);
> - if ( vcpu_create(d, i, cpu) == NULL )
> + if ( vcpu_create(d, i) == NULL )
> {
> - printk("Failed to allocate dom0 vcpu %d on pcpu %d\n", i, cpu);
> + printk("Failed to allocate d0v%u\n", i);
... you should use %d here.
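Put differently, the two self-consistent options would be (sketch):

    /* Either keep the plain int loop counter and print it with %d ... */
    int i;
    ...
    printk("Failed to allocate d0v%d\n", i);

    /* ... or make the counter unsigned int and keep the %u format. */
    unsigned int i;
    ...
    printk("Failed to allocate d0v%u\n", i);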
> --- a/xen/arch/x86/dom0_build.c
> +++ b/xen/arch/x86/dom0_build.c
> @@ -165,7 +165,7 @@ custom_param("dom0_max_vcpus", parse_dom0_max_vcpus);
> static __initdata unsigned int dom0_nr_pxms;
> static __initdata unsigned int dom0_pxms[MAX_NUMNODES] =
> { [0 ... MAX_NUMNODES - 1] = ~0 };
> -static __initdata bool dom0_affinity_relaxed;
> +__initdata bool dom0_affinity_relaxed;
The canonical ordering is type then attributes. It's of course not
overly nice to make this and ...
> @@ -196,32 +196,7 @@ static int __init parse_dom0_nodes(const char *s)
> }
> custom_param("dom0_nodes", parse_dom0_nodes);
>
> -static cpumask_t __initdata dom0_cpus;
> -
> -struct vcpu *__init dom0_setup_vcpu(struct domain *d,
> - unsigned int vcpu_id,
> - unsigned int prev_cpu)
> -{
> - unsigned int cpu = cpumask_cycle(prev_cpu, &dom0_cpus);
> - struct vcpu *v = vcpu_create(d, vcpu_id, cpu);
> -
> - if ( v )
> - {
> - if ( pv_shim )
> - {
> - sched_set_affinity(v, cpumask_of(vcpu_id), cpumask_of(vcpu_id));
> - }
> - else
> - {
> - if ( !opt_dom0_vcpus_pin && !dom0_affinity_relaxed )
> - sched_set_affinity(v, &dom0_cpus, NULL);
> - sched_set_affinity(v, NULL, &dom0_cpus);
> - }
> - }
> -
> - return v;
> -}
> -
> +cpumask_t __initdata dom0_cpus;
... this piece of init data non-static, but so be it.
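I.e. the canonically ordered declarations would look like this (sketch; where
exactly the extern declarations end up living is an assumption):

    /* dom0_build.c: type first, then the section attribute. */
    bool __initdata dom0_affinity_relaxed;
    cpumask_t __initdata dom0_cpus;

    /* Matching non-static declarations in a suitable header: */
    extern bool dom0_affinity_relaxed;
    extern cpumask_t dom0_cpus;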
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -38,6 +38,10 @@
> #include <xsm/xsm.h>
> #include <xen/err.h>
>
> +#ifdef CONFIG_X86
> +#include <asm/guest.h>
> +#endif
CONFIG_XEN_GUEST would seem more arch-neutral here.
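I.e. something along these lines (sketch, assuming such a Kconfig option is
the appropriate guard for asm/guest.h):

    #ifdef CONFIG_XEN_GUEST
    #include <asm/guest.h>
    #endif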
> @@ -2145,6 +2188,35 @@ void wait(void)
> schedule();
> }
>
> +#ifdef CONFIG_X86
> +void __init sched_setup_dom0_vcpus(struct domain *d)
> +{
> + unsigned int i;
> + struct sched_unit *unit;
> +
> + for ( i = 1; i < d->max_vcpus; i++ )
> + vcpu_create(d, i);
> +
> + for_each_sched_unit ( d, unit )
> + {
> + unsigned int id = unit->unit_id;
> +
> + if ( pv_shim )
> + {
> + sched_set_affinity(unit, cpumask_of(id), cpumask_of(id));
> + }
Unnecessary pair of braces, which is in particular inconsistent with ...
> + else
> + {
> + if ( !opt_dom0_vcpus_pin && !dom0_affinity_relaxed )
> + sched_set_affinity(unit, &dom0_cpus, NULL);
... this.
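With the braces dropped for consistency, the block would read, e.g. (the
trailing hard-affinity call is assumed to mirror the removed
dom0_setup_vcpu()):

        if ( pv_shim )
            sched_set_affinity(unit, cpumask_of(id), cpumask_of(id));
        else
        {
            if ( !opt_dom0_vcpus_pin && !dom0_affinity_relaxed )
                sched_set_affinity(unit, &dom0_cpus, NULL);
            /* Hard affinity, as in the old dom0_setup_vcpu(). */
            sched_set_affinity(unit, NULL, &dom0_cpus);
        }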
Jan