
Re: [Xen-devel] [RFC PATCH 0/3] AMD IOMMU: Share p2m table with iommu



Hi, 

At 13:08 +0100 on 23 May (1306156129), Wei Wang2 wrote:
> > Unfortunately this change seems to be necessary for AMD IOMMU to share
> > pagetables with the p2m.  I'd rather we didn't have it, because it means
> > empty ptes look like RAM mappings of frame 0. :(
> >
> > Wei, is there any way we can reorganise the AMD IOMMU pagetables so we
> > can store the p2m type somewhere that's not required to be zero?  If
> > not, I'm inclined to revert the p2m-sharing for AMD IOMMUs, since at the
> > very least we'd like to be able to handle types other than ram_rw
> > (e.g. ram_ro).
> 
> Theoretically, we just need to keep bits 52-58 all zero for a valid DMA 
> translation entry. Probably we could define ram_rw as 11000000000b, 
> which is the valid r/w permission for the IOMMU and leaves bits 52-58 zero? 

Ugh; no, that would break EPT as well and restrict us to only one
accessible type.  It looks like there are no bits available in both
normal pagetables and IOMMU pagetables.  How inconvenient.
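
To make the clash concrete, here's a minimal standalone sketch (not Xen
code; the type values and the exact field placement are assumptions for
illustration): if the p2m type lives in the software-available PTE bits
52-58, and a valid AMD IOMMU entry needs those bits to be zero, then only
the all-zero type encoding can ever be mapped:

/* gcc -std=c99 -o p2m_bits p2m_bits.c */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define P2M_TYPE_SHIFT 52
#define P2M_TYPE_MASK  (0x7FULL << P2M_TYPE_SHIFT)   /* bits 52-58 */

/* Hypothetical encodings mirroring Xen's p2m types. */
enum p2m_type { p2m_ram_rw = 0, p2m_invalid = 1, p2m_ram_ro = 2 };

static bool iommu_pte_valid(uint64_t pte)
{
    /* The constraint from the thread: bits 52-58 must all be zero
     * in a valid DMA translation entry. */
    return (pte & P2M_TYPE_MASK) == 0;
}

int main(void)
{
    for (int t = p2m_ram_rw; t <= p2m_ram_ro; t++) {
        uint64_t pte = ((uint64_t)t << P2M_TYPE_SHIFT) | 0x1000;
        printf("type %d: IOMMU-valid=%d\n", t, iommu_pte_valid(pte));
    }
    /* Only type 0 passes, which is also why a zeroed PTE becomes
     * indistinguishable from a ram_rw mapping of frame 0. */
    return 0;
}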

So our only options are to harden the rest of the p2m code against
blank entries looking like RAM, or to avoid sharing pagetables between
the p2m and the AMD IOMMU. :(  I guess that depends on how much of a
PITA it'll be to track down the rest of the places where the EPT code
trips over itself.  Maybe we should replace the clear_page() used when
allocating p2m pages with a loop that explicitly marks every entry
p2m_invalid.  It's not a terribly hot path, after all.
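
Roughly, that would look like the following (a sketch only, reusing the
hypothetical bits-52-58 type field from above; the real allocation path
in Xen is more involved than this):

#include <stdint.h>

#define P2M_TYPE_SHIFT 52
#define PTES_PER_PAGE  512   /* 4K page / 8-byte entries */

enum p2m_type { p2m_ram_rw = 0, p2m_invalid = 1 };

static void p2m_init_fresh_page(uint64_t *table)
{
    /* Rather than clear_page(), stamp every slot with an explicit
     * p2m_invalid type; the present bit stays clear, so hardware
     * still sees a not-present entry, but the p2m code can no longer
     * mistake it for a ram_rw mapping of frame 0. */
    for (unsigned int i = 0; i < PTES_PER_PAGE; i++)
        table[i] = (uint64_t)p2m_invalid << P2M_TYPE_SHIFT;
}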

But even if we do that, don't you want read-only and grant-mapped memory
to work with the IOMMU?

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
