
[Xen-devel] VT-d scalability issue

To: Keir <Keir.Fraser@xxxxxxxxxxxx>
Subject: [Xen-devel] VT-d scalability issue
From: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Date: Tue, 9 Sep 2008 17:04:27 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
User-agent: Mutt/1.5.16 (2007-06-09)
Keir,

I have found a VT-d scalability issue and want some feedback.

When I assign a pass-through NIC to a Linux VM and increase the number of VMs, the
iperf throughput for each VM drops greatly. For example, with 8 VMs running on a
machine with 8 physical cpus and 8 iperf clients connecting to each of them, the
per-VM throughput is only 60% of the single-VM result.

Further investigation shows that vcpu migration causes a "cold" cache for the
pass-through domain.

The following code in vmx_do_resume tries to invalidate the original processor's
cache on migration if the domain has a pass-through device and the CPU has no
support for wbinvd vmexit:
if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
{
    int cpu = v->arch.hvm_vmx.active_cpu;
    /* Flush the cache of the processor this vcpu last ran on. */
    if ( cpu != -1 )
        on_selected_cpus(cpumask_of_cpu(cpu), wbinvd_ipi, NULL, 1, 1);
}
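
wbinvd_ipi itself is just a thin wrapper that flushes the whole cache of the cpu
it runs on; a minimal sketch, assuming the usual definition in vmcs.c:

static void wbinvd_ipi(void *info)
{
    /* Write back and invalidate the entire cache of the local processor. */
    wbinvd();
}

So every migration of such a vcpu costs a full cache flush on the old processor,
on top of the guest then warming up a cold cache on the new one.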

So we want to pin vcpus to free processors for domains with pass-through devices
during the creation process, just like what we do for NUMA systems.
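
A rough sketch of what I mean, just for illustration (pick_free_cpu() is a
placeholder, not an existing interface; a real patch would share logic with the
NUMA placement code):

/* Sketch only: pin each vcpu of a pass-through domain onto its own free
 * processor at creation time.  pick_free_cpu() is hypothetical and stands
 * for whatever placement policy we agree on. */
static void pin_passthrough_vcpus(struct domain *d)
{
    struct vcpu *v;

    if ( !has_arch_pdevs(d) )
        return;

    for_each_vcpu ( d, v )
    {
        int cpu = pick_free_cpu();
        if ( cpu >= 0 )
        {
            cpumask_t affinity = cpumask_of_cpu(cpu);
            vcpu_set_affinity(v, &affinity);
        }
    }
}

With each vcpu pinned, the scheduler never migrates it, so the wbinvd path above
is never taken after the initial placement.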

What do you think of it? Or do you have other ideas?

Thanks,


-- 
best rgds,
edwin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel