
To: "Isaku Yamahata" <yamahata@xxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] [PATCH] NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ is notregistered.
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Tue, 30 Jan 2007 09:46:04 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 29 Jan 2007 17:45:30 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20070129102905.GB25482%yamahata@xxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdDkFGKgn20AmB3Sve5AyvrtPtWugAfycng
Thread-topic: [Xen-ia64-devel] [PATCH] NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ is notregistered.
Isaku Yamahata wrote on 2007-01-29 18:29:
> 
> How about the following example?
> For simplicity, we consider only local_flush_tlb_all().
> (A similar argument applies to vcpu_vhpt_flush().)
> 
> Suppose domM has two vcpus, vcpu0 and vcpu1,
>       and domN has one vcpu, vcpu2.
> 
> - case 1
>   vcpu0 and vcpu1 are running on the same pcpu.
>   vcpu0 runs.
>   context switch <<<< local_flush_tlb_all() is necessary here
>   vcpu1 runs.
> 
> - case 2
>   vcpu0, vcpu1 and vcpu2 are running on the same pcpu
>   vcpu0 runs
>   context switch
>   vcpu2 runs
>   vcpu2 issues local_tlb_flush().
>   context switch <<< local_flush_tlb_all() can be skipped.
I can understand this. Yes, this local_flush_tlb_all() can be skipped,
but that is because vcpu2 issues local_tlb_flush().
My question is: why do we need new_tlbflush_clock_period?
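
To check my understanding of the two cases, here is a simplified sketch of
the skip decision as I read it. The names (tlb_clock, last_flush, last_insert,
tlb_insert, flush_for_context_switch) and the C harness are made up for
illustration only; this is not the actual Xen code.

#include <stdint.h>
#include <stdio.h>

#define NR_PCPUS 2

/* Every full flush and every tlb/vhpt insert bumps a clock; each pcpu
 * remembers when it last flushed and when an entry was last inserted. */
static uint32_t tlb_clock;
static uint32_t last_flush[NR_PCPUS];
static uint32_t last_insert[NR_PCPUS];

static void tlb_insert(int pcpu)          { last_insert[pcpu] = ++tlb_clock; }
static void local_flush_tlb_all(int pcpu) { last_flush[pcpu]  = ++tlb_clock; }

/* At context switch, flush only if something was inserted after the
 * last full flush on this pcpu. */
static void flush_for_context_switch(int pcpu)
{
    if (last_insert[pcpu] < last_flush[pcpu]) {
        printf("pcpu%d: skip flush (case 2)\n", pcpu);
        return;
    }
    printf("pcpu%d: local_flush_tlb_all() needed (case 1)\n", pcpu);
    local_flush_tlb_all(pcpu);
}

int main(void)
{
    /* case 1: vcpu0 inserts entries, then we switch to vcpu1 */
    tlb_insert(0);
    flush_for_context_switch(0);

    /* case 2: vcpu2 issues a full flush before the next switch */
    tlb_insert(0);
    local_flush_tlb_all(0);
    flush_for_context_switch(0);
    return 0;
}

With that bookkeeping case 1 flushes and case 2 skips, which matches your
example; it is the clock period part that I am still unclear about.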


>   vcpu1 runs
> 
> You can confirm its effect with the perf counters
> tlbflush_clock_cswitch_skip, flush_vtlb_for_context_switch and
> tlbflush_clock_cswitch_purge.
> Please note that local_flush_tlb_all() (or vcpu_vhpt_flush()) is
> called every time grant table unmapping occurs without the tlb insert tracking
Currently, grant table unmapping does not purge anything,
because in flush_tlb_mask(current->domain->domain_dirty_cpumask),
domain_dirty_cpumask is always 0.
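
A simplified sketch of what I mean (made-up names, not the real
flush_tlb_mask() implementation): only pcpus whose bit is set in the mask
get purged, so an all-zero dirty mask purges nothing at all.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t cpumask_t;   /* one bit per pcpu (simplified stand-in) */

/* Stand-in for the purge that an IPI to that pcpu would trigger. */
static void flush_one_pcpu(int cpu)
{
    printf("purging TLB on pcpu %d\n", cpu);
}

/* Flush only the pcpus whose bit is set; an empty mask is a no-op. */
static void flush_tlb_mask(cpumask_t mask)
{
    for (int cpu = 0; mask != 0; cpu++, mask >>= 1)
        if (mask & 1)
            flush_one_pcpu(cpu);
}

int main(void)
{
    flush_tlb_mask(0);     /* domain_dirty_cpumask == 0: nothing purged */
    flush_tlb_mask(0x5);   /* pcpus 0 and 2 would be purged */
    return 0;
}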

Thanks,
Anthony


> optimization. But since they aren't called so often with the tlb insert
> tracking optimization, the tlb flush clock optimization becomes less
> effective than before.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel