WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
To: "Yunhong Jiang" <yunhong.jiang@xxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] When flush tlb , we need consider the cpu_online_map
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Mon, 29 Mar 2010 13:55:09 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Mon, 29 Mar 2010 05:54:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <789F9655DD1B8F43B48D77C5D30659731D5B7D10@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <789F9655DD1B8F43B48D77C5D30659731D5B7D10@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 29.03.10 14:00 >>>
>When flush tlb mask, we need consider the cpu_online_map. The same happens to 
>ept flush also.

While the idea is certainly correct, doing this more efficiently seems
quite desirable to me, especially when NR_CPUS is large:

>--- a/xen/arch/x86/hvm/vmx/vmx.c       Sat Mar 27 16:01:35 2010 +0000
>+++ b/xen/arch/x86/hvm/vmx/vmx.c       Mon Mar 29 17:49:51 2010 +0800
>@@ -1235,6 +1235,9 @@ void ept_sync_domain(struct domain *d)
>      * unnecessary extra flushes, to avoid allocating a cpumask_t on the 
> stack.
>      */
>     d->arch.hvm_domain.vmx.ept_synced = d->domain_dirty_cpumask;
>+    cpus_and(d->arch.hvm_domain.vmx.ept_synced,
>+             d->arch.hvm_domain.vmx.ept_synced,
>+             cpu_online_map);

The added code can be combined with the pre-existing line:

    cpus_and(d->arch.hvm_domain.vmx.ept_synced,
             d->domain_dirty_cpumask, cpu_online_map);

>     on_selected_cpus(&d->arch.hvm_domain.vmx.ept_synced,
>                      __ept_sync_domain, d, 1);
> }
>--- a/xen/arch/x86/smp.c       Sat Mar 27 16:01:35 2010 +0000
>+++ b/xen/arch/x86/smp.c       Mon Mar 29 17:47:25 2010 +0800
>@@ -229,6 +229,7 @@ void flush_area_mask(const cpumask_t *ma
>     {
>         spin_lock(&flush_lock);
>         cpus_andnot(flush_cpumask, *mask, *cpumask_of(smp_processor_id()));
>+        cpus_and(flush_cpumask, cpu_online_map, flush_cpumask);

Here it is cheaper to do the full-mask AND first and then clear the
single extra bit, rather than performing two full-width mask operations:

        cpus_and(flush_cpumask, *mask, cpu_online_map);
        cpu_clear(smp_processor_id(), flush_cpumask);

>         flush_va      = va;
>         flush_flags   = flags;
>         send_IPI_mask(&flush_cpumask, INVALIDATE_TLB_VECTOR);

Jan




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel