WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ia64-devel

RE: [Xen-ia64-devel] Virtual mem map

To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] Virtual mem map
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Mon, 9 Jan 2006 13:45:09 +0800
Delivery-date: Mon, 09 Jan 2006 05:51:48 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcYSuEy0iT8r7QKmRNKFGZLQVV01VQCIa6yQ
Thread-topic: [Xen-ia64-devel] Virtual mem map
>From: Tristan Gingold
>Sent: 6 January 2006 20:58
>Hi,
>
>I am currently thinking about virtual mem map.
>In Linux, the virtual mem map is (surprise) virtually mapped.

Surprising, but efficient and necessary. ;-)

>In Xen, we can use the same approach or we can manually cut the mem map into
>several pieces using a mechanism similar to paging.

I'm not quite catching your meaning here. In the case of a physical memmap, an 
identity-mapped va is used, so no special tracking structure (meaning a 
pgtable in Linux) is required. When the physical memmap is converted to a 
virtual memmap, you have to provide a virtually contiguous area mapped onto an 
array of physically discontiguous pages, and thus you need extra PTEs in the 
pgtable.
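To make the contrast concrete, here is a minimal sketch of the identity-mapped case (names mimic Linux/Xen, but the sizes and layout are invented for illustration, not taken from the actual ia64 code):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 14                   /* 16 KB pages, common on ia64 */

struct page { uint64_t flags; };        /* stand-in for the real struct page */

/* With an identity-mapped, physically contiguous memmap, the whole
 * array is reachable through the identity va region, so page <-> pfn
 * conversion is pure pointer arithmetic and needs no PTEs of its own. */
static struct page pages[1024];
static struct page *frame_table = pages;

static inline unsigned long page_to_pfn(struct page *pg)
{
    return (unsigned long)(pg - frame_table);
}

static inline struct page *pfn_to_page(unsigned long pfn)
{
    return frame_table + pfn;
}
```

The moment the memmap array itself is no longer physically contiguous, this arithmetic still works on the virtual side, but each page of the array must be backed by a PTE somewhere.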

So in both cases you mention, an extra structure to track the mapping is necessary. 
Maybe the difference is that you propose to use another, simpler structure (like 
a simple array/list) instead of a multi-level page table, and then modify all 
the memmap-related macros (like pfn_valid, page_to_pfn, etc.) so they know 
about the holes within the memmap?
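Something like the following, perhaps (a hypothetical sketch of the "simple array" idea; the piece table, its names, and the pfn ranges are all invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct page { uint64_t flags; };

/* The memmap is cut into pieces, one per physical memory chunk,
 * and an ordered table records which pfn ranges actually exist. */
struct memmap_piece {
    unsigned long start_pfn, end_pfn;   /* covers [start_pfn, end_pfn) */
    struct page *map;                   /* memmap slice for this range */
};

#define NR_PIECES 2
static struct page piece0[256], piece1[256];
static struct memmap_piece pieces[NR_PIECES] = {
    { 0,    256,  piece0 },             /* low memory chunk        */
    { 4096, 4352, piece1 },             /* chunk after a large hole */
};

/* pfn_valid must now know about the holes... */
static bool pfn_valid(unsigned long pfn)
{
    for (int i = 0; i < NR_PIECES; i++)
        if (pfn >= pieces[i].start_pfn && pfn < pieces[i].end_pfn)
            return true;
    return false;
}

/* ...and pfn_to_page must index into the right piece. */
static struct page *pfn_to_page(unsigned long pfn)
{
    for (int i = 0; i < NR_PIECES; i++)
        if (pfn >= pieces[i].start_pfn && pfn < pieces[i].end_pfn)
            return pieces[i].map + (pfn - pieces[i].start_pfn);
    return NULL;
}
```

The linear scan here is only for clarity; a real implementation would presumably use a binary search or a per-region lookup to keep these hot macros cheap.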

>
>I don't really like the first way: it uses the TLB, which may cause more
>trouble, and currently Xen hardly uses the translation cache for itself.
>
>So, I think I will use the second approach.

So you really need to elaborate your 2nd approach, spelling out the exact 
differences.

Actually more questions come with this issue:

Do we need to add generic non-identity mapping into Xen? If yes, then there's 
nothing special about virtual memmap: it would simply be covered by that 
mechanism. If no, we have already seen limitations from lacking such a feature, 
which prevents managing/utilizing machine page frames efficiently.

If yes, we may need to add a multi-level page table to Xen and walk it in the 
page fault handler. Then do we need a VHPT table for Xen itself, for 
performance? Currently all VHPT tables serve only guest execution...
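For what it's worth, a Xen-internal non-identity mapping would look roughly like this (a hypothetical two-level walk with invented names and widths; the real ia64 code would involve the region register and VHPT machinery, not plain C):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 14
#define PT_BITS    9                    /* 512 entries per level */
#define PT_ENTRIES (1UL << PT_BITS)

typedef uint64_t pte_t;                 /* e.g. pa | valid bit */

static pte_t *pgd[PT_ENTRIES];          /* top level: pointers to leaf tables */

/* Translate a Xen-internal va; returns 0 for an unmapped hole,
 * where the real fault handler would have to decide what to do. */
static pte_t xen_walk(uint64_t va)
{
    uint64_t vpn  = va >> PAGE_SHIFT;
    uint64_t top  = (vpn >> PT_BITS) & (PT_ENTRIES - 1);
    uint64_t leaf = vpn & (PT_ENTRIES - 1);

    if (pgd[top] == NULL)
        return 0;
    return pgd[top][leaf];
}

/* Install a mapping, as a setup path (or the fault handler) would.
 * Only one leaf table is provided here, to keep the sketch short. */
static pte_t leaf_table[PT_ENTRIES];

static void xen_map(uint64_t va, pte_t pte)
{
    uint64_t vpn = va >> PAGE_SHIFT;

    pgd[(vpn >> PT_BITS) & (PT_ENTRIES - 1)] = leaf_table;
    leaf_table[vpn & (PT_ENTRIES - 1)] = pte;
}
```

The cost question you raise is exactly the walk in xen_walk: without a VHPT in front of it, every Xen-side TLB miss on a non-identity va pays for the software walk.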

While we're working around one specific issue, I hope the solution can be made 
generic enough to cover similar future requirements.

Thanks,
Kevin


>
>Am I missing an important point?
>Am I making the wrong choice?
>Please, comment.
>
>Tristan.
>
>
>_______________________________________________
>Xen-ia64-devel mailing list
>Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>http://lists.xensource.com/xen-ia64-devel

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
