On Wed, Sep 23, 2009 at 02:56:16PM -0700, Jeremy Fitzhardinge wrote:
> On 09/23/09 14:24, Konrad Rzeszutek Wilk wrote:
> > The weird part is that the function you copied-n-pasted
> > (gart_iommu_hole_init)
> > only detects and allocates a buffer. It does not set the dma_ops at all.
> > Setting of the dma_ops is done via the gart_iommu_init() call which is done
> > much later. But with Xen-SWIOTLB already initialized, the gart_iommu_init()
> > quits right away.
> >
>
> Perhaps the fix is to make gart_iommu_hole_init() quit if iommu_detected
> too? Though it is called after xen_swiotlb_init()...
That is a good idea too. That would avoid that ugly #ifdef.
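Roughly something like this, I guess (completely untested sketch - I still
need to check whether iommu_detected is guaranteed to be set by the time
gart_iommu_hole_init() runs on a Xen dom0 boot):

	/* arch/x86/kernel/aperture_64.c */
	void __init gart_iommu_hole_init(void)
	{
		/*
		 * Somebody else (e.g. Xen-SWIOTLB) has already claimed
		 * dma_ops, so there is no point in detecting the aperture
		 * and allocating 64MB for a GART we will never initialize.
		 */
		if (iommu_detected)
			return;

		/* ... existing detection/allocation code ... */
	}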
>
> > So the kernel sets the dma_ops to the Xen SWIOTLB, and it
> > allocates an extra 64MB chunk of memory for the GART, which is not
> > used, and ... somehow all of the ioremap_nocache functions stop working
> > correctly. Maybe the ioremap_nocache does use some of that memory that
> > the gart_iommu_hole_init allocated?
> >
> Can't see how it would affect it. ioremap allocates a new virtual space
> for the mapping and then just plugs in the pfns for the pages you want
> to map. They end up getting _PAGE_IOMAP set in the pte flags, which
> causes the xen/mmu.c backend to use the address as-is (ie, as an mfn),
> so the mapping will be constructed properly. Well, that's the theory;
> but I'd expect we'd be seeing a lot more havoc if ioremap is either
> mapping the wrong pages or using the wrong caching.
There was a lot of havoc - all of the PCI BARs were useless. Is the MFN
(from the pfn_to_mfn on this address) supposed to have a specific value?
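Just to check that I follow the theory: the decision that path makes boils
down to something like this, right? (The helper name below is mine, not the
actual xen/mmu.c code.)

	/*
	 * ioremap()ed mappings carry _PAGE_IOMAP, so the frame in the pte
	 * is taken to already be a machine frame and passed through as-is;
	 * only normal RAM goes through the pseudo-physical translation.
	 */
	static unsigned long frame_for_pte(unsigned long pfn, pgprot_t prot)
	{
		if (pgprot_val(prot) & _PAGE_IOMAP)
			return pfn;		/* already an mfn */

		return pfn_to_mfn(pfn);		/* normal RAM */
	}

So for a PCI BAR the pte should just end up with the BAR's machine address,
untouched.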
>
> > With this patch, the GART is forcefully disabled, and the kernel boots fine
> > (with 6GB, 8GB, etc).
> >
>
> OK, I'll put it in for now. Will we have issues with other forms of iommu?
There are three other types: AMD IOMMU (a real IOMMU), Intel's IOMMU,
and IBM's Calgary IOMMU.
For all of those, setting no_iommu=1 should do the trick, but in reality
I need to double-check that:
diff --git a/arch/x86/xen/pci-swiotlb.c b/arch/x86/xen/pci-swiotlb.c
index 00f2260..390f698 100644
--- a/arch/x86/xen/pci-swiotlb.c
+++ b/arch/x86/xen/pci-swiotlb.c
@@ -989,6 +989,8 @@ void __init xen_swiotlb_init(void)
 	xen_swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
 	dma_ops = &xen_swiotlb_dma_ops;
 	iommu_detected = 1;
+	no_iommu = 1;		/* Forces the other IOMMUs (if they are
+				   detected) to quit, rather than initialize. */
 #ifdef CONFIG_GART_IOMMU
 	gart_iommu_aperture_disabled = 1;
 #endif
<sigh> I think I need to rethink this swiotlb-Xen part. This is starting
to look like a hack.
>
> Another thought, could we actually use the gart iommu instead of swiotlb
> if it is available? I think it leads to exactly the same set of issues
> as extending normal swiotlb for Xen's use (ie, inserting pfn->mfn
> conversion into the correct places, and perhaps allocating the memory
> properly). Worth thinking about; it may shine light on better ways to
> fix up swiotlb.
Yes! That was my next step - to see if it is possible to use it and, if so,
extend it for that purpose (and without any ghastly #ifdefs).
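The kind of change I have in mind is roughly this (purely illustrative - the
name is mine, and I have not looked yet at where exactly the GART code
computes the addresses it programs into its PTEs):

	/*
	 * Wherever the IOMMU code turns a physical address into the bus
	 * address it hands to the hardware, Xen needs the *machine* address
	 * rather than the pseudo-physical one.  pfn_to_mfn() is an identity
	 * mapping on bare metal, so this collapses to a no-op there.
	 */
	static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
	{
		unsigned long pfn = paddr >> PAGE_SHIFT;
		unsigned long offset = paddr & (PAGE_SIZE - 1);

		return ((dma_addr_t)pfn_to_mfn(pfn) << PAGE_SHIFT) | offset;
	}

Plus making sure the memory the IOMMU maps through is machine-contiguous
where it has to be, which is the same problem we already have with the
swiotlb bounce buffers.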
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel