RE: [Xen-devel] VT-d scalability issue
> When I assign a pass-through NIC to a Linux VM and increase the
> number of VMs, the iperf throughput of each VM drops greatly. Say we
> start 8 VMs on a machine with 8 physical cpus and start 8 iperf
> clients, one connecting to each VM; the final per-VM result is only
> 60% of the single-VM throughput.
>
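(For concreteness, a typical client invocation for this kind of
measurement, assuming iperf 2 and an illustrative guest address, would
be something along the lines of:

    # one client per VM, run long enough to reach steady state
    iperf -c 192.168.0.101 -t 60

with seven more clients targeting the other VMs.)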
> Further investigation shows that vcpu migration causes a "cold"
> cache for the pass-through domain.
Just so I understand the experiment, does each VM have a pass-through
NIC, or just one?
> The following code in vmx_do_resume tries to invalidate the original
> processor's cache on migration, if the domain has a pass-through
> device and there is no support for WBINVD vmexits:
>     if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
>     {
>         int cpu = v->arch.hvm_vmx.active_cpu;
>         if ( cpu != -1 )
>             on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi,
>                              NULL, 1, 1);
>     }
>
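For context, wbinvd_ipi is just an IPI handler that flushes the caches
of whichever CPU it lands on, and cpu_has_wbinvd_exiting tests whether
the hardware can intercept the guest's own WBINVD. A minimal sketch of
the handler, as I read the tree:

    static void wbinvd_ipi(void *info)
    {
        /* Write back and invalidate this CPU's caches. */
        wbinvd();
    }

Without WBINVD exiting, a flush issued by the guest only reaches the
CPU it is currently running on, so after a migration the previous CPU
can still hold dirty lines for the domain's DMA-visible memory; hence
the IPI to the old CPU.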
> So we want to pin vcpus to free processors for domains with
> pass-through devices during the creation process, just like what we
> do for NUMA systems.
What pinning functionality would we need beyond what's already there?
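For reference, a minimal sketch of what is already there, assuming the
xm toolstack of this era (domain name and cpu numbers are only
illustrative):

    # pin vcpu 0 of the guest to physical cpu 3 at run time
    xm vcpu-pin hvm-guest-1 0 3

    # or pin all vcpus from creation via the domain config file
    cpus = "3"

If that covers the mechanism, the proposal seems to reduce to choosing
a free processor automatically at creation time for pass-through
domains.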
Thanks,
Ian
> What do you think of it? Or do you have other ideas?
>
> Thanks,
>
>
> --
> best rgds,
> edwin
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel