xen-devel

Re: [Xen-devel] non-contiguous allocations

>>> On 06.05.11 at 12:25, Olaf Hering <olaf@xxxxxxxxx> wrote:
> On Tue, Apr 26, Jan Beulich wrote:
> 
>> >>> On 18.04.11 at 20:45, Olaf Hering <olaf@xxxxxxxxx> wrote:
>> > On Fri, Apr 01, George Dunlap wrote:
>> > 
>> >> On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
>> >> > Using the u16 means each cpu could in theory use up to 256MB as a trace
>> >> > buffer. However, such a large allocation will currently fail on x86 due
>> >> > to the MAX_ORDER limit.
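
(For reference, assuming the u16 counts 4 KiB pages: 2^16 pages x 4 KiB/page
= 256 MiB, i.e. an order-16 contiguous allocation per cpu.)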
>> >> 
>> >> FWIW, I don't believe that there's any reason the allocations have to be
>> >> contiguous any more.  I kept them contiguous to minimize the changes to
>> >> the moving parts near a release.  But the new system has been pretty
>> >> well tested now, so I think looking at non-contiguous allocations may be
>> >> worthwhile.
>> > 
>> > How do I allocate a few mfns and give them a virtual address?
>> > I don't find a malloc-like interface to allocate individual pages.
> 
>> Otherwise I think the only option is to introduce indirection (using
>> the 1:1 mapping, and setting up an array of pointers). That may
>> however be a little difficult if (and I think that's the case) data
>> chunks aren't always of the same size (as then you need to deal
>> with the roll-over into the next page).
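
As an illustration of the array-of-pointers approach described above, a
minimal sketch (not the actual patch): allocate the per-cpu buffer as
individual pages, keep an array of pointers into the 1:1 mapping, and split
copies that would roll over into the next page. It relies on Xen's
alloc_xenheap_pages()/free_xenheap_pages() and xmalloc_array(); the function
names themselves are invented for illustration.

#include <xen/types.h>
#include <xen/lib.h>
#include <xen/string.h>
#include <xen/mm.h>
#include <xen/xmalloc.h>

/* Allocate nr_pages individual pages; no contiguity is required. */
static void **alloc_trace_pages(unsigned int nr_pages)
{
    void **pages = xmalloc_array(void *, nr_pages);
    unsigned int i;

    if ( pages == NULL )
        return NULL;

    for ( i = 0; i < nr_pages; i++ )
    {
        pages[i] = alloc_xenheap_pages(0, 0); /* one order-0 page at a time */
        if ( pages[i] == NULL )
        {
            while ( i-- )
                free_xenheap_pages(pages[i], 0);
            xfree(pages);
            return NULL;
        }
    }

    return pages;
}

/* Copy len bytes to byte offset off in the buffer, splitting the copy
 * whenever a chunk would cross into the next page. */
static void trace_copy(void **pages, unsigned long off,
                       const void *src, unsigned long len)
{
    while ( len != 0 )
    {
        unsigned long pg = off >> PAGE_SHIFT;
        unsigned long pgoff = off & (PAGE_SIZE - 1);
        unsigned long chunk = PAGE_SIZE - pgoff;

        if ( chunk > len )
            chunk = len;

        memcpy((u8 *)pages[pg] + pgoff, src, chunk);
        src = (const u8 *)src + chunk;
        off += chunk;
        len -= chunk;
    }
}

Callers would then index pages[] rather than relying on the buffer being
virtually contiguous.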
> 
> I'm almost done with the per-page handling in __insert_record().
> I just need to figure out a usable virtual address for a given mfn.
> Is u8 *p = mfn_to_virt(mfn) the same as page_to_virt(mfn_to_page(mfn))?

Yes.

Jan
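
For reference, both lookups go through the hypervisor's 1:1 (direct) mapping
and hence yield the same virtual address for any frame covered by it; the
helper name below is invented for illustration:

#include <xen/types.h>
#include <xen/lib.h>
#include <xen/mm.h>

/* mfn_to_virt(mfn) and page_to_virt(mfn_to_page(mfn)) resolve to the
 * same direct-mapping address for the given machine frame. */
static void check_mfn_mapping(unsigned long mfn)
{
    u8 *p = mfn_to_virt(mfn);

    ASSERT(p == (u8 *)page_to_virt(mfn_to_page(mfn)));
}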





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
