xen-ia64-devel

Re: [Xen-ia64-devel] PATCH: cleanup of tlbflush

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] PATCH: cleanup of tlbflush
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Thu, 11 May 2006 01:01:16 +0900
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx, Tristan Gingold <Tristan.Gingold@xxxxxxxx>
Delivery-date: Wed, 10 May 2006 09:01:29 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <571ACEFD467F7749BC50E0A98C17CDD8094E7BF9@pdsmsx403>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <571ACEFD467F7749BC50E0A98C17CDD8094E7BF9@pdsmsx403>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Wed, May 10, 2006 at 07:38:12PM +0800, Tian, Kevin wrote:
> >From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
> >Sent: 10 May 2006 18:47
> >>
> >> I see your concern about flush efficiency. However, we still need to set
> >> the necessary mask bits for correctness, right?
> >Not yet, because pages are not transferred.
> 
> It's not specific to page flipping. Simple page sharing also has the same 
> problem.
> 
> >
> >> It would be difficult to track the
> >> exact processors which have a footprint on different ungranted
> >> pages.
> >> Tracking that list may instead pull down performance in other places.
> >> Then setting domain_dirty_cpumask to the cpus the domain is currently
> >> running on can be a simple/safe way at the current stage, though
> >> performance may be affected.
> >Unfortunately, performance is so badly affected that using SMP-g is
> >useless!
> 
> If correctness becomes an issue, e.g. a shared va has a footprint on 
> several vcpus, you have to flush the tlb on multiple processors or else 
> SMP-g is broken.
> 
> After more thinking, I think there's no need for flush_tlb_mask to flush 
> both the whole tlb and the whole vhpt. flush_tlb_mask should just do what the name 
> stands for: flushing all related TLBs indicated in 
> domain_dirty_cpumask. Instead, the affected software structures can 
> always be flushed in destroy_grant_host_mapping().
> 
> For xen/x86, destroy_grant_host_mapping clears the affected pte entry in 
> the writable page table, or the pte entry in the shadow page table, based on 
> host_addr.
> 
> For xen/ia64, the vhpt table can be flushed by host_addr too, in 
> destroy_grant_host_mapping. For each page requested for unmap, only the 
> affected vhpt entry will be flushed, and there's no need for a full purge.
> 
> The key point is to pass in the gva (host_addr) which was 
> previously mapped to the granted frame. It's the guest's responsibility to record 
> those mapped addresses and then pass them in at the unmap request. For 
> example, xen/x86 uses a pre-allocated virtual address range while xen/ia64 
> uses an identity-mapped one. It's the current para-driver style, and we can
> trust the domain since the guest needs to be cooperative, or else the domain itself 
> gets messed up instead of xen.
> 
> Isaku, how about your thought on it?
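
Concretely, the per-address flush described above might look something like the
rough sketch below. clear_grant_pte() and vhpt_flush_address() are placeholder
names for whatever pte-teardown and per-range purge primitives the tree actually
provides, so treat this as an illustration of the idea, not the real interface:

/* Sketch only: tear down the mapping of one granted page and purge
 * just the TLB/VHPT entries covering host_addr, instead of doing a
 * global vhpt/tlb purge on every unmap. */
int destroy_grant_host_mapping(unsigned long host_addr,
                               unsigned long frame, unsigned int flags)
{
    /* Clear the pte that maps 'frame' at the guest virtual address the
     * guest recorded at map time and passed back in the unmap request. */
    if (!clear_grant_pte(host_addr, frame, flags))      /* placeholder */
        return GNTST_general_error;

    /* Purge only the entry for this one page, keyed by host_addr.
     * vhpt_flush_address() stands in for a per-range purge primitive. */
    vhpt_flush_address(host_addr, PAGE_SIZE);

    return GNTST_okay;
}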

I don't think that tracking the virtual address causes much performance loss,
at least for vbd.
The reason is that an underlying block device doesn't need to
read its data, so unmapping such a granted page doesn't require
any flush. (I'm just guessing; the md driver or lvm may read its
contents to calculate a checksum, though.)
We can enhance the grant table to allow no-read/no-write (dma-only) mappings.
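
A possible shape for such an extension, sketched from the guest side:
GNTMAP_no_access is a hypothetical new flag and its bit value is made up;
the struct, GNTMAP_device_map and GNTTABOP_map_grant_ref are the existing
public grant-table interface, though header paths and the hypercall wrapper
differ between trees.

#include <xen/interface/grant_table.h>  /* gnttab_map_grant_ref, GNTMAP_*, GNTST_* */

#define GNTMAP_no_access  (1 << 5)   /* hypothetical: no CPU read/write access at all */

/* Map a granted page for device DMA only.  Because the backend never gets
 * a CPU-accessible mapping of the page, unmapping it later would need no
 * TLB/VHPT flush on the backend's processors. */
static int map_for_dma_only(domid_t granting_dom, grant_ref_t ref,
                            uint64_t *dev_bus_addr)
{
    struct gnttab_map_grant_ref op = {
        .host_addr = 0,                                 /* no CPU mapping wanted */
        .flags     = GNTMAP_device_map | GNTMAP_no_access,
        .ref       = ref,
        .dom       = granting_dom,
    };

    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
        return -1;                   /* the hypercall itself failed */
    if (op.status != GNTST_okay)
        return op.status;            /* grant-table error code */

    *dev_bus_addr = op.dev_bus_addr; /* bus address usable for DMA */
    return 0;
}

In practice this is close in spirit to requesting GNTMAP_device_map without
GNTMAP_host_map; the extra flag would only make the no-CPU-access intent explicit.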

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel