This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] x86-64's contig_initmem_init

To: "Jan Beulich" <JBeulich@xxxxxxxxxx>, "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] x86-64's contig_initmem_init
From: "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>
Date: Tue, 30 Aug 2005 09:15:42 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 30 Aug 2005 16:13:53 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcWtdJq5+Tc/2jd6RWySAJMkalJlhgABdDBQ
Thread-topic: [Xen-devel] x86-64's contig_initmem_init
Jan Beulich wrote:
>> The tail part of the initial mapping has no special handling on i386
>> nor on x86_64. It just gets freed up when we free from 0 up to
>> max_pfn, and it never gets reserved (the reserved region precisely
>> covers kernel text/data and initial page tables).
> For i386 I'm not certain, but for x86-64 I doubt that:
> init_memory_mapping, which runs before contig_initmem_init,
> re-initializes start_pfn (which is what in turn gets used to set up
> the bootmem reservation) from the result of scanning the initial page
> tables. These, as I understand it, extend to the 4-Mb-rounded end of
> the initial mapping (which, if the unused tail turns out to be less
> than 512k, even gets extended by an extra 4M).

Okay, I wrote the code originally. It is extended to visit all the pgd,
pud, pmd, and pte pages when establishing the 1:1 direct mapping of
guest physical memory. Unlike native x86-64 Linux, the current x86-64
xenlinux cannot use 2MB pages, so we need to allocate a lot of (extra)
L1 page-table pages if the guest memory is large. I set those page
table pages RO in contig_initmem_init.

I don't think it's 4-Mb-rounded, but I'll take a look at the code.

BTW, when did you start seeing the problem?

>> Actually, that could be another bug on x86/64 -- I may need to
>> truncate the initial mapping, or we may be ending up with spurious
>> extra writable mappings to some pages... I'll take a look and see if
>> this is the case.
> If the above wasn't true (or was fixed), then I'd assume such a bug
> would surface (and again I'm not sure why i386 wouldn't surface it,
> as I can't see where these mappings get torn down).
> Jan

Intel Open Source Technology Center

Xen-devel mailing list