RE: [Xen-devel] How to allocate contiguous RAM in pv guests
Hi,
One idea is to scan the BIOS e820 table and split a large block to reserve those
64MB.
Then you can mark this block as E820_GART or something similar and add your own
allocator next to the Xen-heap allocator.
To map it contiguously, you can use the fixmap from the I/O remapping area.
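Roughly along these lines, just as a sketch; the struct layout, the E820_GART
value and the helper are made up for illustration, and the real Xen e820 code
differs in detail:

/*
 * Sketch only: walk an e820-style memory map, find a RAM entry that is
 * large enough, and split a 64MB block off its end, marked with a
 * private type so the normal allocator leaves it alone.  The struct
 * layout and the E820_RAM/E820_GART values are illustrative, not the
 * real Xen definitions.
 */
#include <stdint.h>

#define E820_RAM   1
#define E820_GART  0xCAFE          /* illustrative private type value */
#define GART_SIZE  (64ULL << 20)   /* 64MB */

struct e820_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Returns the physical base of the reserved block, or 0 on failure. */
static uint64_t reserve_gart(struct e820_entry *map, unsigned int *nr,
                             unsigned int max_entries)
{
    unsigned int i;

    for (i = 0; i < *nr; i++) {
        if (map[i].type != E820_RAM || map[i].size < GART_SIZE)
            continue;
        if (*nr >= max_entries)
            return 0;                     /* no room for a new entry */

        /* Shrink the RAM entry by 64MB... */
        map[i].size -= GART_SIZE;

        /* ...and describe the carved-off block in a new entry. */
        map[*nr].addr = map[i].addr + map[i].size;
        map[*nr].size = GART_SIZE;
        map[*nr].type = E820_GART;
        (*nr)++;

        return map[*nr - 1].addr;
    }
    return 0;                             /* nothing suitable found */
}

A real GART aperture also needs its base aligned to the aperture size, which
the sketch ignores.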
Is this what you need?
Thanks,
Guy.
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Uli
> Sent: Monday, January 29, 2007 4:55 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-devel] How to allocate contiguous RAM in pv guests
>
> Hi!
>
> I'm working on a patch to get the GART as an IOMMU working in
> linux/dom0.
> However, the problem I describe below applies equally to the
> software IOMMU.
>
> If the BIOS doesn't set up the aperture, it has to be allocated
> from memory. Therefore, one needs a contiguous memory region,
> currently 64MB.
> The software IOMMU always needs a contiguous memory region
> (same size).
>
> In order to do this, the hypercall XENMEM_exchange is given a
> bunch of mfns and returns a (host physically) contiguous
> memory region.
> Unfortunately, the implementation allocates the contiguous
> memory from the heap first and then returns the discontiguous
> mfns to it. Therefore, a 64MB chunk (in this case) has to be
> available in the xen heap for the call to succeed.
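
For reference, the guest side of that exchange looks roughly like the sketch
below; this goes by the public xen/include/public/memory.h interface as I
recall it, and the error handling and P2M/page-table fixups a real caller has
to do afterwards are left out:

/*
 * Sketch of a XENMEM_exchange call: hand in 2^order single-page
 * extents and ask for one host-contiguous extent of the given order.
 * Header paths and helper names follow the usual XenLinux tree and
 * may differ elsewhere.
 */
#include <xen/interface/memory.h>   /* struct xen_memory_exchange */
#include <asm/hypercall.h>          /* HYPERVISOR_memory_op() */

static int exchange_for_contiguous(xen_pfn_t *in_mfns, xen_pfn_t *out_mfn,
                                   unsigned int order)
{
    struct xen_memory_exchange exchange = {
        .in = {
            .nr_extents   = 1UL << order,
            .extent_order = 0,            /* give up single pages */
            .domid        = DOMID_SELF,
        },
        .out = {
            .nr_extents   = 1,
            .extent_order = order,        /* want one contiguous block */
            .address_bits = 0,            /* no addressability limit */
            .domid        = DOMID_SELF,
        },
    };
    int rc;

    set_xen_guest_handle(exchange.in.extent_start, in_mfns);
    set_xen_guest_handle(exchange.out.extent_start, out_mfn);

    rc = HYPERVISOR_memory_op(XENMEM_exchange, &exchange);

    /* A partial exchange (rc == 0 but fewer extents exchanged) is
     * still a failure for our purposes. */
    if (rc == 0 && exchange.nr_exchanged != exchange.in.nr_extents)
        return -1;
    return rc;
}

For the 64MB case that would be an order-14 extent, which is exactly where the
heap fragmentation you describe bites.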
> I have observed that on most machines exactly one such chunk
> is available. However, I've also had a machine where this is
> not the case.
>
> It seems to me that using the xen heap is not the right thing to do.
> The only other option I can think of is scanning dom0's
> memory for a (host physical) chunk of memory that
> a) belongs entirely to it and
> b) is free
>
> Once such a chunk is found, one would have to map it
> contiguously into virtual memory. Actually, the latter is
> only necessary for the software IOMMU. The GART aperture
> doesn't have to be in virtual memory since it is only
> accessed from devices.
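
The scan you describe could look roughly like this; pfn_to_mfn() is the usual
XenLinux P2M lookup, everything else is made up for illustration, and checking
that the pages are actually free (e.g. by taking them from the page allocator)
is the hard part and not shown:

/*
 * Sketch only: walk dom0's pseudo-physical frames and look for a run
 * whose machine frames are consecutive, i.e. a region that is already
 * host-physically contiguous and owned by the domain.  Assumes the
 * XenLinux pfn_to_mfn() helper and PAGE_SHIFT from the usual kernel
 * headers; whether those pages are free still has to be verified.
 */
#define CHUNK_PAGES ((64UL << 20) >> PAGE_SHIFT)   /* 64MB worth of pages */

static unsigned long find_machine_contiguous(unsigned long max_pfn)
{
    unsigned long pfn, start = 0, run = 0;

    for (pfn = 0; pfn < max_pfn; pfn++) {
        if (run && pfn_to_mfn(pfn) == pfn_to_mfn(pfn - 1) + 1) {
            if (++run == CHUNK_PAGES)
                return start;              /* first pfn of the chunk */
        } else {
            start = pfn;                   /* start a new candidate run */
            run = 1;
        }
    }
    return (unsigned long)-1;              /* nothing found */
}

For the software-IOMMU case the chunk would then still have to be mapped
contiguously into virtual memory (e.g. via a fixmap range or vmap()); for the
GART, as you say, only the machine address matters.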
>
> Thanks for any suggestions,
>
> Uli
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel