xen-ia64-devel

Re: [Xen-ia64-devel] Virtual mem map

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-ia64-devel] Virtual mem map
From: Tristan Gingold <Tristan.Gingold@xxxxxxxx>
Date: Mon, 9 Jan 2006 11:43:21 +0100
Delivery-date: Mon, 09 Jan 2006 10:25:32 +0000
In-reply-to: <571ACEFD467F7749BC50E0A98C17CDD802C06C15@pdsmsx403>
References: <571ACEFD467F7749BC50E0A98C17CDD802C06C15@pdsmsx403>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.5
On Monday 09 January 2006 06:45, Tian, Kevin wrote:
> From: Tristan Gingold
>
> >Sent: 6 January 2006 20:58
> >Hi,
> >
> >I am currently thinking about virtual mem map.
> >In Linux, the virtual mem map is (surprise) virtually mapped.
>
> Surprising, but efficient and necessary. ;-)
>
> >In Xen, we can use the same approach or we can manually cut the mem map
> > into several pieces using a mechanism similar to paging.
[...]
> So in both cases you mentioned, an extra structure to track the mapping is
> necessary. Maybe the difference is that you propose to use another, simpler
> structure (like a simple array/list) instead of a multi-level page table, and
> then to modify all memmap-related macros (like pfn_valid, page_to_pfn, etc.) to
> make them aware of the holes within the memmap?
Yes, here are more details on my original proposal:
The structure is a two-level lookup:
* The first level is an access to a table of offset/length entries.
* The offset is an offset into the page frame table; the length is used only to
  check validity.

I think this structure is simple enough to be fast.
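
To make this concrete, here is a minimal sketch of such a lookup in C.  All
names (memmap_chunk, chunk_pfn_to_page, ...) and the stand-in page_info type
are only for illustration, not existing code, and each entry uses two plain
words instead of the packed 32-bit entry discussed below:

#include <stddef.h>

#define PAGE_SHIFT       14                    /* 16KB pages */
#define GRANULE_SHIFT    30                    /* 1GB granule per first-level entry */
#define NR_CHUNKS        (1UL << 12)           /* one entry per 1GB -> 2**42 bytes covered */
#define FRAMES_PER_CHUNK (1UL << (GRANULE_SHIFT - PAGE_SHIFT))

struct page_info { unsigned long flags; };     /* stand-in for the real frame table entry */
extern struct page_info *frame_table;          /* compacted frame table, holes skipped */

struct memmap_chunk {
    unsigned long offset;  /* index of this chunk's first frame in frame_table */
    unsigned long len;     /* number of valid frames in this chunk (0 = hole) */
};

static struct memmap_chunk memmap_chunks[NR_CHUNKS];

/* pfn -> page_info, or NULL if the pfn falls in a memory hole.  This assumes
 * the valid frames of a chunk are packed from the start of the chunk, so the
 * length alone is enough to check validity. */
static inline struct page_info *chunk_pfn_to_page(unsigned long pfn)
{
    unsigned long chunk = pfn / FRAMES_PER_CHUNK;
    unsigned long idx   = pfn % FRAMES_PER_CHUNK;

    if (chunk >= NR_CHUNKS || idx >= memmap_chunks[chunk].len)
        return NULL;
    return frame_table + memmap_chunks[chunk].offset + idx;
}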

For memory usage:
* Each entry of the first array describes 1GB of memory.  An entry is 32 bits.
  A 16KB first array can therefore describe 2**12 * 2**30 = 2**42 bytes of memory.
  (Dan's machine's physical memory is below 2**40.)
* I think a 1GB granule is good enough, unless you have a machine with very
  small DIMMs.  In that case, we can use 512MB or 256MB instead of 1GB.
* 1GB is 2**16 to 2**18 pages, depending on the page size.  Thus, the offset may
  be 18 bits and the length 14 bits (to be multiplied by 4).
In conclusion, the memory footprint is *very* small, maybe even too small?

The memmap-related macros must be rewritten.
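
For instance (still only a sketch, with hypothetical names), pfn_valid and the
pfn/page conversions could sit on top of the chunk table sketched above:

/* Validity and pfn -> page simply reuse the chunk lookup. */
#define pfn_valid(pfn)     (chunk_pfn_to_page(pfn) != NULL)
#define pfn_to_page(pfn)   (chunk_pfn_to_page(pfn))

/* page -> pfn needs the reverse mapping: find the chunk whose
 * [offset, offset + len) range contains the page's index in frame_table. */
static inline unsigned long chunk_page_to_pfn(struct page_info *page)
{
    unsigned long fidx = page - frame_table;
    unsigned long chunk;

    for (chunk = 0; chunk < NR_CHUNKS; chunk++) {
        struct memmap_chunk *c = &memmap_chunks[chunk];
        if (fidx >= c->offset && fidx - c->offset < c->len)
            return chunk * FRAMES_PER_CHUNK + (fidx - c->offset);
    }
    return ~0UL;   /* cannot happen for a page obtained from frame_table */
}

#define page_to_pfn(page)  (chunk_page_to_pfn(page))

The linear scan over at most 4096 chunks in page_to_pfn is only for
illustration; a real implementation would probably want something faster
there, for example a back-pointer stored per chunk of frame_table.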

Tristan.


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
