>-----Original Message-----
>From: Ian Pratt [mailto:m+Ian.Pratt@xxxxxxxxxxxx]
>Sent: Thursday, May 19, 2005 3:44 PM
Thanks for the nice explanation. As background, let's not confine the
discussion to x86 only, since other architectures like x86-64/ia64 will
have more memory, which may span the traditional I/O holes (MMIO ranges,
etc.).
>
>
>> So how does that sparse style get implemented? Could you say
>> more, or show a link to the relevant place in the source tree? :)
>
>On x86, for fully virtualized guests the pfn->mfn table is virtually
>mapped and hence you can have holes in the 'physical' memory and
>arbitrary page granularity mappings to machine memory. See
>phys_to_machine_mapping().
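If I read that right, the trick is that the table itself lives in
virtual address space, so a hole in 'physical' memory is just an
unpopulated stretch of the table. A loose userspace analogy (made-up
sizes and names; mmap stands in here for the HV's virtual mapping):

    /* Loose analogy only, not Xen code: a sparse pfn->mfn table.
     * MAP_NORESERVE gives a large virtually-mapped array whose
     * untouched parts (the holes) cost nothing. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t max_pfn = 1UL << 20;   /* 4GB of guest 'physical', 4K pages */
        unsigned long *p2m = mmap(NULL, max_pfn * sizeof(unsigned long),
                                  PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
                                  -1, 0);
        if (p2m == MAP_FAILED)
            return 1;

        /* Populate only pfns that have RAM behind them; the I/O hole
         * at pfns [0xa0, 0x100) is simply never written. */
        p2m[0x0]   = 0x81000;   /* made-up machine frames */
        p2m[0x120] = 0x4c2;

        printf("p2m[0x120] = 0x%lx\n", p2m[0x120]);
        return 0;
    }
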
I can see that the 1:1 mapping table is mapped by one pgd entry on
current x86. But, as I described at the tail of this mail, why isn't
that information about holes used by the control panel (CP) and device
model (DM)? Why do CP and DM use xc_get_pfn_list rather than
phys_to_machine_mapping()? Ideally, the implication of xc_get_pfn_list
is only to get all the machine frames allocated to a domain, not the
guest pfn -> machine pfn mapping; so it is not the right anchor for
dom0 to manipulate domN's memory...
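
To make the index-vs-pfn mismatch concrete, here's a tiny standalone
sketch (not Xen code; the numbers are made up for illustration, though
pfns [0xa0, 0x100) do match the legacy x86 I/O hole):

    /* Why indexing page_array by guest pfn goes wrong once the
     * 'physical' map contains a hole: xc_get_pfn_list() fills
     * page_array[] by walking domain->page_list, so page_array[i] is
     * "the i-th machine frame given to the domain", not "the machine
     * frame backing guest pfn i". The two only coincide while guest
     * pfns are dense from 0. */
    #include <stdio.h>

    int main(void)
    {
        /* Assumed hole: pfns [0xa0, 0x100) have no RAM behind them. */
        unsigned long hole_start = 0xa0, hole_end = 0x100;
        unsigned long guest_pfn = 0x120;  /* some pfn above the hole */

        /* What the CP/DM code effectively computes today: */
        unsigned long dense_idx = guest_pfn;  /* parray[pa >> PAGE_SHIFT] */

        /* Index that actually matches page_list order, since the
         * allocator never handed out frames for the hole: */
        unsigned long list_idx = guest_pfn - (hole_end - hole_start);

        printf("guest pfn 0x%lx: dense idx 0x%lx vs page_list idx 0x%lx\n",
               guest_pfn, dense_idx, list_idx);
        return 0;
    }

Above the hole the two indices differ by exactly the hole size, which
is the skew I'm worried about.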
>
>For paravirtualized guests we provide a model whereby 'physical' memory
>starts at 0 and is contiguous, but maps to arbitrary machine pages.
>Since for paravirtualized guests you can hack the kernel, I don't see
>any need to support anything else. [Note that I/O addresses do not have
>pages in this map, whereas they do in the fully virtualized case.]
Sorry, I need some time to understand the trick here. Do you mean the
'physical' memory will always be contiguous for any memory size, like
4G, 16G, nG...? Does that mean there's another way to arrange the MMIO
addresses, the PIB address, etc. dynamically based on memory size? Or
will all the I/O be dummy operations... But dom0 still has to access
physical memory... Sorry, I'm getting confused here, and I'd appreciate
your input. :)
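
For reference, here's how I currently picture the paravirt model you
describe, as a minimal standalone sketch (p2m_table/max_pfn and the
frame numbers are illustrative, not the real Xen symbols):

    #include <stdio.h>

    #define INVALID_MFN (~0UL)

    /* 'Physical' frame numbers are dense from 0 up to max_pfn, and a
     * per-domain table maps each one to an arbitrary machine frame.
     * I/O addresses simply have no entry, matching your note above. */
    static unsigned long *p2m_table;
    static unsigned long  max_pfn;

    static unsigned long pfn_to_mfn(unsigned long pfn)
    {
        return (pfn < max_pfn) ? p2m_table[pfn] : INVALID_MFN;
    }

    int main(void)
    {
        /* A fake 4-page domain: contiguous 'physical' pages backed
         * by scattered machine frames (made-up values). */
        unsigned long fake_mfns[] = { 0x81000, 0x4c2, 0x1337, 0x9000 };
        max_pfn   = 4;
        p2m_table = fake_mfns;

        for (unsigned long pfn = 0; pfn < max_pfn; pfn++)
            printf("pfn 0x%lx -> mfn 0x%lx\n", pfn, pfn_to_mfn(pfn));

        printf("pfn 0x%lx -> mfn 0x%lx (no entry: not RAM, e.g. MMIO)\n",
               (unsigned long)0x100, pfn_to_mfn(0x100));
        return 0;
    }

Is that roughly the shape, with MMIO etc. handled outside this table?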
Thanks,
Kevin
>
>Ian
>
>> Take the following sequence in xc_linux_build.c as an example:
>> 1. Setup_guest() calls xc_get_pfn_list(xc_handle, dom,
>> page_array, nr_pages), where page_array is acquired by
>> walking domain->page_list in the HV. So page_array is actually
>> the mapping [index in page_list -> machine pfn], not [guest
>> pfn -> machine pfn].
>>
>> 2. loadelfimage() will utilize that page_array to load the kernel
>> of domU, like:
>>
>>     pa = (phdr->p_paddr + done) - dsi->v_start;
>>     va = xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_WRITE,
>>                               parray[pa >> PAGE_SHIFT]);
>>
>> Here parray[pa>>PAGE_SHIFT] is used, which tempts one to treat the
>> index of page_array as a guest pfn; however, per the explanation in
>> point 1, it is not.
>>
>> Yes, it should work in the above example, since the kernel is
>> usually loaded at a low address, far below the I/O hole, and in
>> that lower range "index in page_list" == "guest pfn" does hold.
>> However, this is not a correct model in general. In particular the
>> device model, which needs to map all the machine pages of domU,
>> follows the same wrong model of xc_get_pfn_list + xc_map_foreign.
>>
>> Maybe the sparse memory map is already managed inside the HV as you
>> said, but we also need to propagate the same sparseness information
>> down to CP and DM, especially for GB-scale memory. That's why we're
>> considering adding a new hypercall.
>>
>> Correct me if I've misunderstood something here. :)
>>
>> Thanks,
>> Kevin
>>
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel