This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] VT-d scalability issue

To: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>, "Keir" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] VT-d scalability issue
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx>
Date: Tue, 9 Sep 2008 10:28:59 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 09 Sep 2008 02:30:54 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20080909090427.GA6704@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080909090427.GA6704@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckSW8Apdq/gdtAuT/S3W+Gac4h3sgAAiBlA
Thread-topic: [Xen-devel] VT-d scalability issue
> When I assign a pass-through NIC to a Linux VM and increase the number
> of VMs, the iperf throughput for each VM drops greatly. For example,
> with 8 VMs running on a machine with 8 physical CPUs and 8 iperf
> clients connecting to each of them, the final result is only 60% of
> the single-VM throughput.
> Further investigation shows that vcpu migration causes a "cold" cache
> for the pass-through domain.

Just so I understand the experiment, does each VM have a pass-through
NIC, or just one?

> The following code in vmx_do_resume tries to invalidate the original
> processor's cache on migration if this domain has a pass-through
> device and the CPU has no support for wbinvd vmexit:
>
> if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
> {
>     int cpu = v->arch.hvm_vmx.active_cpu;
>     if ( cpu != -1 )
>         on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
> }
> So we want to pin vcpus to free processors, for domains with
> pass-through devices, during domain creation, just like what we do for
> NUMA systems.

What pinning functionality would we need beyond what's already there?
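
(For context, vcpu pinning is already available through the standard
tools; a minimal sketch, assuming the xm toolstack and a hypothetical
guest named "hvm1":)

    # In the guest config file, restrict the domain's vcpus to pCPU 2
    # before the domain is created:
    #   cpus = "2"
    #
    # Or pin at runtime:
    xm vcpu-pin hvm1 0 2    # pin vcpu 0 of hvm1 to physical CPU 2
    xm vcpu-list hvm1       # verify the resulting affinity
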


> What do you think of it? Or have other ideas?
> Thanks,
> --
> best rgds,
> edwin
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
