[Xen-devel] Re: [PATCH] VT-d: IOTLB flush fixups

[Xiaowei Yang]
>> On map: only flush when the old PTE was valid, or when invalid PTEs may be cached.
>> On unmap: always flush the old entry, but skip the flush for unaffected IOMMUs.
>> 
>> Signed-off-by: Espen Skoglund <espen.skoglund@xxxxxxxxxxxxx>
>> 
>> --
>>  iommu.c |   17 +++++++++++------
>>  1 file changed, 11 insertions(+), 6 deletions(-)
>> --
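
(For reference, the map-side rule above boils down to roughly the
following.  This is only a minimal standalone sketch; pte_present()
and iotlb_flush_page() are made-up stand-ins, not the real iommu.c
helpers.)

#include <stdbool.h>
#include <stdint.h>

struct dma_pte { uint64_t val; };

/* Bit 0 of a VT-d PTE is the read permission; a present entry has it set. */
static bool pte_present(struct dma_pte pte)
{
    return pte.val & 1;
}

/* Stand-in for the per-IOMMU page-selective IOTLB invalidation. */
static void iotlb_flush_page(uint16_t did, uint64_t gfn)
{
    (void)did; (void)gfn;
}

/*
 * Map path: the IOTLB only needs flushing if the old PTE was present, or
 * if the IOMMU may cache non-present entries (caching mode).  On unmap the
 * old entry is always flushed, but only on IOMMUs that actually hold the
 * domain's mappings.
 */
static void flush_after_map(struct dma_pte old, uint16_t did, uint64_t gfn,
                            bool caches_non_present)
{
    if ( pte_present(old) || caches_non_present )
        iotlb_flush_page(did, gfn);
}
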
>
> It seems my last mail to xen-devel was lost and I have no local
> copy, so I have to write it again...
>
> Espen,
> Thanks for the patch! I also noticed that the context/IOTLB flushes
> need a cleanup.  As the flushes for present and non-present entries
> are different, your change to iommu_intel_map_page is not quite
> correct, so I made up another patch.  iommu_flush is also removed,
> as the VT-d table is no longer shared with the p2m.
>
> Signed-off-by: Xiaowei Yang <xiaowei.yang@xxxxxxxxx>

Oh, right.  When flushing a non-present cached entry, domid 0 must be
used.  Here's a modification of your patch:

 - Made the non-present flush testing a bit simpler (see the sketch
   below).
 - Removed dma_addr_level_page_maddr().  Use a modified
   addr_to_dma_page_maddr() instead.
 - Upon mapping a new context entry: flush the old entry using domid 0
   and always flush the IOTLB.
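
(To make the non-present handling concrete, the flush choice ends up
looking roughly like the sketch below.  Again this is only an
illustration with made-up helper names, not the code in the attached
patch.)

#include <stdbool.h>
#include <stdint.h>

struct dma_pte { uint64_t val; };

static bool pte_present(struct dma_pte pte)
{
    return pte.val & 1;
}

/* Stand-in for the real IOTLB invalidation. */
static void iotlb_flush_page(uint16_t did, uint64_t gfn)
{
    (void)did; (void)gfn;
}

/*
 * A present old entry is tagged with the owning domain's id and is
 * flushed with that id.  A non-present entry can only have been cached
 * when the IOMMU caches non-present entries, and such entries must be
 * flushed using domid 0, as noted above.
 */
static void flush_old_entry(struct dma_pte old, uint16_t did, uint64_t gfn,
                            bool caches_non_present)
{
    if ( pte_present(old) )
        iotlb_flush_page(did, gfn);
    else if ( caches_non_present )
        iotlb_flush_page(0, gfn);
}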

Signed-off-by: Espen Skoglund <espen.skoglund@xxxxxxxxxxxxx>

--
 arch/x86/mm/hap/p2m-ept.c       |    6 -
 drivers/passthrough/vtd/iommu.c |  150 ++++++++++------------------------------
 include/xen/iommu.h             |    1 
 3 files changed, 38 insertions(+), 119 deletions(-)


Attachment: xen-vtd-flush.patch
Description: Binary data
