
To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH] NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ is not registered.
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Tue, 30 Jan 2007 14:17:08 +0900
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 29 Jan 2007 21:16:40 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE26F7BB4@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070130033550.GE25482%yamahata@xxxxxxxxxxxxx> <51CFAB8CB6883745AE7B93B3E084EBE26F7BB4@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Tue, Jan 30, 2007 at 12:16:53PM +0800, Xu, Anthony wrote:
> Isaku Yamahata wrote on 2007-01-30 11:36:
> > On Tue, Jan 30, 2007 at 09:46:04AM +0800, Xu, Anthony wrote:
> >> Isaku Yamahata wrote on 2007-01-29 18:29:
> >>> 
> >>> How about the following example?
> >>> For simplicity, we consider only local_flush_tlb_all().
> >>> (A similar argument applies to vcpu_vhpt_flush().)
> >>> 
> >>> Suppose domM has two vcpus, vcpu0 and vcpu1,
> >>>   and domN has one vcpu, vcpu2.
> >>> 
> >>> - case 1
> >>>   vcpu0 and vcpu1 are running on the same pcpu.
> >>>   vcpu0 runs.
> >>>   context switch   <<<< local_flush_tlb_all() is necessary here
> >>>   vcpu1 runs.
> >>> 
> >>> - case 2
> >>>   vcpu0, vcpu1 and vcpu2 are running on the same pcpu.
> >>>   vcpu0 runs.
> >>>   context switch
> >>>   vcpu2 runs.
> >>>   vcpu2 issues local_flush_tlb_all().
> >>>   context switch   <<< local_flush_tlb_all() can be skipped.
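
(To make the skip decision above concrete, a minimal sketch in C.  All
the *_sketch names and variables are hypothetical illustrations, not
the actual Xen/ia64 code; overflow and SMP atomicity are ignored here.)

    /* Global flush clock: advanced by every local TLB flush. */
    static unsigned long tlbflush_clock;
    /* Clock value recorded at this pcpu's most recent flush. */
    static unsigned long last_flush_time;

    static unsigned long tlbflush_clock_inc_and_return_sketch(void)
    {
        return ++tlbflush_clock;            /* overflow ignored here */
    }

    static void local_flush_tlb_all_sketch(void)
    {
        last_flush_time = tlbflush_clock_inc_and_return_sketch();
        /* ... the actual TLB/VHPT purge would go here ... */
    }

    struct vcpu_sketch {
        unsigned long last_run_time;        /* clock when it last ran */
    };

    /* Context switch: 'sibling' is the vcpu whose leftover entries we
     * worry about (vcpu0 above), 'next' is the vcpu switched in. */
    static void switch_in_sketch(struct vcpu_sketch *sibling,
                                 struct vcpu_sketch *next)
    {
        if (sibling->last_run_time >= last_flush_time)
            local_flush_tlb_all_sketch();   /* case 1: necessary */
        /* else: a flush already happened after 'sibling' last ran
         * (case 2, vcpu2's flush), so this flush can be skipped */

        next->last_run_time = tlbflush_clock;
    }
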
> >> I can understand this.  Yes, this local_flush_tlb_all() can be
> >> skipped, but only because vcpu2 issues local_flush_tlb_all().
> >> My question is: why do we need new_tlbflush_clock_period?
> > 
> > Because the counter is finite.
> > If we could ignore counter overflow, we could simply check which
> > counter value is bigger.
> > But when overflow occurs (i.e. the counter becomes 0 after an
> > increment), things get complicated.  That is the reason for
> > new_tlbflush_clock_period (a sketch follows below).
> > 
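
(A sketch of how the wrap can be handled, refining the hypothetical
tlbflush_clock_inc_and_return_sketch() above.  Only the softirq name
NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ comes from the patch subject; the
rest is illustrative, not the real implementation.)

    static void start_new_tlbflush_clock_period_sketch(void)
    {
        /* Hypothetical: make every pcpu do local_flush_tlb_all() --
         * e.g. by raising NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ on each --
         * so that no stale translation survives into the new period. */
    }

    static unsigned long tlbflush_clock_inc_and_return_sketch(void)
    {
        unsigned long t = ++tlbflush_clock;

        if (t == 0) {
            /* The finite counter just wrapped: stamps from the old
             * period can no longer be compared with new ones, so
             * start a new period and restamp from 1. */
            start_new_tlbflush_clock_period_sketch();
            t = ++tlbflush_clock;
        }
        return t;
    }
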
> > Probably another approach to handling overflow is a signed
> > comparison like the Linux jiffies time_after().
> > But we can't assume the distance between two counter values is
> > small enough for that to be safe.
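
For reference, the core of Linux's time_after() in
include/linux/jiffies.h is the signed comparison below (the real macro
also typechecks its arguments):

    /* True iff a is after b -- correct only while a and b are less
     * than half the counter range apart (2^31 on a 32-bit clock). */
    #define time_after_sketch(a, b)  ((long)((b) - (a)) < 0)

A vcpu that has not run for a long time can hold a stamp arbitrarily
far behind the clock, which breaks that half-range assumption.
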
> > 
> > 
> Understand now.
> One more question
> 
> Why do local_vhpt_flush and vcpu_vhpt_flush need to call
> tlbflush_clock_inc_and_return?
> 
> In per-CPU VHPT mode,
> tlbflush_clock_inc_and_return only needs to be called in local_flush_tlb_all.
> 
> Am I right?

Yes.
The tlb flush clock existed before the per-vcpu VHPT mode patch was
checked in, so it has to co-exist with the non-per-vcpu VHPT mode.

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel