[Xen-devel] IOMMU, vtd and iotlb flush rework (v3)
In one of my previous emails I detailed a bug I was seeing when passing
through an Intel GPU to a guest that has more than 4G of RAM.
Allen suggested that I go for Plan B, but after a discussion with Tim we
agreed that Plan B was far too disruptive in terms of code change.
This patch series implements Plan A.
http://xen.1045712.n5.nabble.com/VTD-Intel-iommu-IOTLB-flush-really-slow-td4952866.html
Changes between v2 and v3:
- Check for the presence of the iotlb_flush_all callback before calling it
(a sketch of this guard follows the changelog below).
Changes between v1 and v2:
- Move .size in struct xen_add_to_physmap into the padding between .domid
and .space.
- Store iommu_dont_flush as a per-cpu flag (the batching pattern it enables
is sketched after the patch list below).
- Change the code in hvmloader to relocate memory in batches of 64K; .size
is now 16 bits.
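
For anyone skimming the series, the v3 change boils down to a presence
check on an optional hook. Here is a minimal sketch of that guard; the
struct and function names are invented for illustration and are not the
real Xen iommu_ops definitions.

#include <stddef.h>

/* Minimal sketch, not the real Xen iommu_ops: the struct and function
 * names below are stand-ins made up for this example.  It only shows
 * the v3 fixup: don't call the optional iotlb_flush_all hook unless
 * the IOMMU driver actually provides one. */
struct sketch_iommu_ops {
    void (*iotlb_flush_all)(void *domain);   /* optional, may be NULL */
};

static void sketch_flush_all(const struct sketch_iommu_ops *ops, void *d)
{
    if ( ops && ops->iotlb_flush_all )       /* presence check added in v3 */
        ops->iotlb_flush_all(d);
}

int main(void)
{
    struct sketch_iommu_ops no_hook = { .iotlb_flush_all = NULL };

    sketch_flush_all(&no_hook, NULL);        /* safely does nothing */
    return 0;
}
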
Jean Guyader (6):
vtd: Refactor iotlb flush code
iommu: Introduce iommu_flush and iommu_flush_all.
add_to_physmap: Move the code for XENMEM_add_to_physmap.
mm: New XENMEM, XENMEM_add_to_physmap_gmfn_range
hvmloader: Change memory relocation loop when overlap with PCI hole.
Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary
iotlb flush
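
The last patch above is the one that buys the speedup: batched physmap
updates set a per-cpu flag so the per-page IOTLB flushes are skipped,
and a single flush is issued once the whole batch is done. The sketch
below shows that pattern in isolation; the flag, the map helper and the
flush function are stand-ins invented for this example and do not match
the real Xen code paths (which use Xen's per-cpu infrastructure and the
IOMMU flush interfaces).

#include <stdbool.h>
#include <stdio.h>

static bool iommu_dont_flush_iotlb;          /* stand-in for a per-cpu flag */

static void map_one_gfn(unsigned long gfn)
{
    /* A real implementation would update the IOMMU mappings here and,
     * unless the flag is set, flush the IOTLB for this single page. */
    if ( !iommu_dont_flush_iotlb )
        printf("per-page IOTLB flush for gfn %#lx\n", gfn);
}

static void flush_all(void)
{
    printf("one IOTLB flush covering the whole batch\n");
}

static void remap_range(unsigned long first_gfn, unsigned int count)
{
    unsigned int i;

    iommu_dont_flush_iotlb = true;           /* suppress per-page flushes */
    for ( i = 0; i < count; i++ )
        map_one_gfn(first_gfn + i);
    iommu_dont_flush_iotlb = false;
    flush_all();                             /* pay once, not 'count' times */
}

int main(void)
{
    /* hvmloader would hand over batches of up to 64K entries,
     * since .size in the hypercall argument is now 16 bits. */
    remap_range(0x100000UL, 16);
    return 0;
}
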
 tools/firmware/hvmloader/pci.c      |   20 +++-
 xen/arch/x86/mm.c                   |  203 +++++++++++++++++++++--------------
 xen/drivers/passthrough/iommu.c     |   25 +++++
 xen/drivers/passthrough/vtd/iommu.c |  100 ++++++++++--------
 xen/include/public/memory.h         |    4 +
 xen/include/xen/iommu.h             |    7 ++
 6 files changed, 230 insertions(+), 129 deletions(-)