WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
RE: [Xen-devel] [patch] more correct pfn_valid()

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "Scott Parish" <srparish@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [patch] more correct pfn_valid()
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 19 May 2005 08:43:38 +0100
Delivery-date: Thu, 19 May 2005 07:43:12 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVb9aZ7Rlj1GNpbTJqBIcUW7A2ChAAANsGgAAkVbyAACQmCUAAAqYiQAADuuCA=
Thread-topic: [Xen-devel] [patch] more correct pfn_valid()
> So how does that sparse style get implemented? Could you say 
> more or show a link to the place in source tree? :)

On x86, for fully virtualized guests the pfn->mfn table is virtually
mapped and hence you can have holes in the 'physical' memory and
arbitrary page granularity mappings to machine memory. See
phys_to_machine_mapping().
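[Editorial note: a minimal sketch of the kind of sparse, hole-tolerant pfn->mfn table described above. The names, two-level layout, and INVALID_MFN sentinel are illustrative assumptions, not Xen's actual phys_to_machine_mapping() implementation.]

```c
#include <stdlib.h>

#define INVALID_MFN (~0UL)
#define L2_ENTRIES  512

/* Two-level table: a NULL leaf means a hole in 'physical' memory,
 * so sparse guest-physical layouts cost no backing storage. */
typedef struct {
    unsigned long *l2[L2_ENTRIES];
} p2m_table;

static unsigned long pfn_to_mfn(const p2m_table *t, unsigned long pfn)
{
    unsigned long *leaf = t->l2[pfn / L2_ENTRIES];
    return leaf ? leaf[pfn % L2_ENTRIES] : INVALID_MFN; /* hole */
}

static void p2m_set(p2m_table *t, unsigned long pfn, unsigned long mfn)
{
    unsigned long **leafp = &t->l2[pfn / L2_ENTRIES];
    if (*leafp == NULL) {
        /* populate a leaf on demand; unset entries stay invalid */
        *leafp = malloc(L2_ENTRIES * sizeof(unsigned long));
        for (int i = 0; i < L2_ENTRIES; i++)
            (*leafp)[i] = INVALID_MFN;
    }
    (*leafp)[pfn % L2_ENTRIES] = mfn;
}
```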

For paravirtualized guests we provide a model whereby 'physical' memory
starts at 0 and is contiguous, but maps to arbitrary machine pages.
Since for paravirtualized guests you can hack the kernel, I don't see
any need to support anything else. [Note that I/O addresses do not have
pages in this map, whereas they do in the fully virtualized case.]
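[Editorial note: the paravirtualized model above reduces to a flat array. This is a hypothetical illustration, with made-up frame numbers, not Xen code: the index is the guest pfn, the values are arbitrary, scattered machine frames.]

```c
#define PV_NR_PAGES 4

/* index == guest pfn; values are arbitrary machine frames,
 * which need not be contiguous or ordered */
static const unsigned long pv_p2m[PV_NR_PAGES] = { 9001, 42, 777, 12345 };

/* translation is a plain array lookup: no holes, every pfn in
 * [0, PV_NR_PAGES) is backed by some machine page */
static unsigned long pv_pfn_to_mfn(unsigned long pfn)
{
    return pv_p2m[pfn];
}
```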

Ian
 
> Take the following sequence in xc_linux_build.c as an example:
> 1. setup_guest() calls xc_get_pfn_list(xc_handle, dom,
> page_array, nr_pages), where page_array is acquired by
> walking domain->page_list in the HV. So page_array is actually
> the mapping [index in page_list -> machine pfn], not [guest
> pfn -> machine pfn].
> 
> 2. loadelfimage() will use that page_array to load the kernel of domU,
> like:
>     pa = (phdr->p_paddr + done) - dsi->v_start;
>     va = xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_WRITE,
>                               parray[pa >> PAGE_SHIFT]);
> Here parray[pa>>PAGE_SHIFT] is used, which tempts one to treat the
> index of page_array as a guest pfn; however, from the explanation
> in point 1, it is not.
> 
> Yes, it should work in the above example, since the kernel is
> usually loaded at a low address, far from the I/O hole, and in
> that lower range "index in page_list" == "guest pfn" does hold.
> However, this is not the correct model in general. In particular
> the device model, which needs to map all of domU's machine
> pages, also follows the same wrong model of
> xc_get_pfn_list + xc_map_foreign.
> 
> Maybe the sparse memory maps are already managed inside the
> HV as you said, but we also need to propagate the same sparse
> info down to the CP and DM, especially for GB memory. That's
> why we're considering adding a new hypercall.
> 
> Correct me if I misunderstand something there. :)
> 
> Thanks,
> Kevin
> 
> 
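[Editorial note: the index-versus-pfn mismatch described in the quoted xc_linux_build.c example can be sketched as below. The hole position and sizes are invented for illustration; the point is only that a dense page_list index and a sparse guest pfn space agree below an I/O hole and diverge above it.]

```c
#define HOLE_START 4
#define HOLE_END   6   /* pfns 4 and 5 form an I/O hole: no RAM pages */

/* Map a guest pfn to its index in the dense page_array returned by
 * xc_get_pfn_list (which enumerates only RAM pages), or -1 when the
 * pfn lies inside the hole and has no backing page at all. */
static int pfn_to_index(int pfn)
{
    if (pfn < HOLE_START)
        return pfn;                          /* below the hole: equal */
    if (pfn < HOLE_END)
        return -1;                           /* in the hole: no page  */
    return pfn - (HOLE_END - HOLE_START);    /* above: shifted down   */
}
```

Loading a kernel at low addresses only exercises the first case, which is why parray[pa>>PAGE_SHIFT] happens to work there; a device model touching pfns above the hole would use the wrong entries.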

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel