
To: Wei Wang2 <wei.wang2@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH 3/4] amd iommu: Large io page support - enablement
From: Keir Fraser <keir@xxxxxxx>
Date: Fri, 03 Dec 2010 10:28:17 -0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 03 Dec 2010 10:32:05 -0800
In-reply-to: <201012031745.29469.wei.wang2@xxxxxxx>
On 03/12/2010 08:45, "Wei Wang2" <wei.wang2@xxxxxxx> wrote:

> On Friday 03 December 2010 17:24:53 Keir Fraser wrote:
>> Well, let's see. The change to p2m_set_entry() now allows (superpage) calls
>> to the iommu mapping functions even if !need_iommu(). That seems a semantic
>> change. 
> That is because we have iommu_populate_page_table(), which delays io page
> table construction until device assignment. But this function can only
> update the io page table with 4k entries. I didn't find a better way to
> track page orders after page allocation (Q: could we extend struct
> page_info to cache page orders?). So my thought is to update the IO page
> table earlier, and therefore enabling super io pages will also disable
> lazy io page table construction.
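
To spell out the limitation you are describing, as I understand it (a rough
sketch only, not the real iommu_populate_page_table(); every type and helper
name below is a made-up stand-in):

/* Sketch only -- not the real Xen code.  All types and helpers here
 * (dom_ctx, first_assigned_page, next_assigned_page, gfn_of, mfn_of,
 * iommu_map_4k_entry) are hypothetical stand-ins. */
struct dom_ctx;                /* the domain a device is being assigned to */
struct page_info;              /* note: records no allocation order        */

extern struct page_info *first_assigned_page(struct dom_ctx *d);
extern struct page_info *next_assigned_page(struct dom_ctx *d,
                                            struct page_info *pg);
extern unsigned long gfn_of(struct dom_ctx *d, struct page_info *pg);
extern unsigned long mfn_of(struct page_info *pg);
extern int iommu_map_4k_entry(struct dom_ctx *d, unsigned long gfn,
                              unsigned long mfn);

/* Walk the domain's pages at device-assignment time.  Because struct
 * page_info carries no order information, each frame has to be mapped
 * individually at 4k granularity; any 2MB/1GB grouping that existed at
 * allocation time has been lost by this point. */
static int populate_io_pagetable(struct dom_ctx *d)
{
    struct page_info *pg;

    for ( pg = first_assigned_page(d); pg; pg = next_assigned_page(d, pg) )
        if ( iommu_map_4k_entry(d, gfn_of(d, pg), mfn_of(pg)) )
            return -1;

    return 0;
}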

How about hiding the superpage mapping stuff entirely within the existing
iommu_[un]map_page() hooks? If you have 9 spare bits per iommu pde (seems
very likely), you could cache in the page-directory entry how many entries
one level down currently are suitable for coalescing into a superpage
mapping. When a new iommu pte/pde is written, if it is a candidate for
coalescing, increment the parent pde's count. If the count ==
2^superpage_order, then coalesce. You can maintain such counts in every pde
up the hierarchy, for 2MB, 1GB, ... superpages.
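
Roughly, the bookkeeping I have in mind could look like the sketch below
(sketch only, not actual Xen code; the spare-bit positions and helper names
are invented):

#include <stdbool.h>
#include <stdint.h>

#define PTES_PER_TABLE   512u                  /* 2^9 entries per level */
#define PDE_COUNT_SHIFT  52                    /* assumed spare bits    */
#define PDE_COUNT_MASK   (0x1ffULL << PDE_COUNT_SHIFT)

/* Hypothetical helper: rewrite the PDE as a single superpage mapping and
 * free the now-redundant lower-level table. */
extern void coalesce_into_superpage(uint64_t *pde);

static unsigned int pde_get_count(uint64_t pde)
{
    return (pde & PDE_COUNT_MASK) >> PDE_COUNT_SHIFT;
}

static uint64_t pde_set_count(uint64_t pde, unsigned int count)
{
    return (pde & ~PDE_COUNT_MASK) | ((uint64_t)count << PDE_COUNT_SHIFT);
}

/*
 * Called after a new pte/pde is written one level below *parent_pde.
 * 'coalescable' means the new entry is contiguous with its neighbours and
 * carries identical permissions.  Nine spare bits are enough because the
 * value 512 is never stored: the final increment triggers coalescing
 * instead.  The same counter can be kept at every level of the hierarchy,
 * giving 2MB, 1GB, ... superpages.
 */
static void note_new_leaf_entry(uint64_t *parent_pde, bool coalescable)
{
    unsigned int count;

    if ( !coalescable )
        return;

    count = pde_get_count(*parent_pde) + 1;
    if ( count == PTES_PER_TABLE )
    {
        /* All 512 lower-level entries now qualify: replace the PDE with
         * a superpage mapping and free the lower-level table. */
        coalesce_into_superpage(parent_pde);
        return;
    }

    *parent_pde = pde_set_count(*parent_pde, count);
}

Unmapping would do the reverse: decrement the count, first splitting a
superpage back into a lower-level table if one had already been formed.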

Personally I think we could do similar for ordinary host p2m maintenance as
well, if the bits are available. With 64-bit entries, we probably have
sufficient bits (we only need 9 spare bits). What we have now for host p2m
maintenance I can't say I love very much, and I don't think we need follow
that as a model for how we introduce superpage mappings to iommu pagetables.

Anyway, this would make your patch only touch AMD code. Similar could be
done on the Intel side later, and for bonus points at that point perhaps
this coalescing/uncoalescing logic could be pulled out to some degree into
shared code.

 -- Keir

> Also, without the need_iommu() check, both passthru and non-passthru guests
> will get io page table allocations. Since super paging will greatly reduce
> io page table size, we might not waste too much memory here...


