Re: [Xen-devel] x86 swiotlb questions
On 30/12/06 5:32 pm, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>> * Why can't we turn dma_[un]map_page into dma_[un]map_single, as x86_64
>> does? This would avoid needing to expand the swiotlb API.
>
> Because we allow highmem pages in the I/O path, hence page_address() cannot
> be used. As you may have concluded from my sending of a second rev of the
> patches, I had a bug in exactly that path, so I know it is being exercised.
> Of course, all this exists for x86-32/PAE *only*, so it may be valid to
> raise the question of whether it's worth it. But OTOH, with support for
> (only) 32-bit PAE PV guests on x86-64, we are in the process of widening
> the use case here.
Ah of course, the generic swiotlb has not (yet) been used by an architecture
with highmem requiring use of kmap(). I forgot about that.
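To make the constraint concrete, here is roughly the reduction x86_64 gets
away with -- an illustrative sketch using the standard DMA API signatures,
with a helper name of my own invention, not a quote of any real
implementation:

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    /* x86_64-style shortcut (hypothetical helper): implement the page
     * mapping by collapsing onto dma_map_single() via the page's
     * kernel virtual address. */
    static inline dma_addr_t map_page_via_single(struct device *dev,
            struct page *page, unsigned long offset, size_t size,
            enum dma_data_direction dir)
    {
        /* page_address() is only meaningful for pages with a permanent
         * kernel mapping; on x86-32/PAE a highmem page may have none
         * (it would need a transient kmap()), so this shortcut breaks
         * in exactly the path Jan describes. */
        return dma_map_single(dev, page_address(page) + offset, size, dir);
    }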
Unfortunately highmem does rather complicate things -- I guess it's up to
the lib/swiotlb maintainers whether they want to keep that complexity
isolated in a Xen-i386-specific swiotlb.c or attempt a merge.
Here's a thought: if the highmem DMA requests come *only* from the blkdev
subsystem, then perhaps we could use its highmem bounce buffer (I think that
still exists?). We turn that off on Xen right now, but we could re-enable
it, leading to a slightly odd 'double bounce buffer': the first taking us
from high pseudophysical memory to low pseudophysical memory, and the second
taking us from high machine memory to low machine memory. If we can ensure
only lowmem requests get to the swiotlb then a lot of the Xen diffs go away.
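As a rough sketch of that arrangement (the setup function below is
hypothetical; blk_queue_bounce_limit() and BLK_BOUNCE_HIGH are the block
layer's existing knobs for this):

    #include <linux/blkdev.h>

    /* Hypothetical per-queue setup: re-enable blkdev's highmem
     * bouncing so only lowmem buffers ever reach the swiotlb. */
    static void xen_restrict_queue_to_lowmem(struct request_queue *q)
    {
        /* First bounce: the block layer copies highmem pages into
         * lowmem (pseudophysical) buffers before the driver or the
         * DMA-mapping layer ever sees them. */
        blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);

        /* The second bounce happens later in swiotlb, from high
         * machine memory to low machine memory, and now only has
         * to cope with lowmem virtual addresses. */
    }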
I'm not sure, though, whether we might get highmem DMA requests from
anything other than block devices, nor whether all block drivers would
actually pass through the blkdev bounce-buffer code.
Thanks,
Keir