On Fri, 2009-04-24 at 15:16 +0800, Keir Fraser wrote:
> On 24/04/2009 08:04, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
> > Also, after suggesting to use gb-pages when possible here I realized that
> > it's probably a latent bug to map more space than was allocated - if the
> > non-allocated-but-mapped pages happen to later get allocated to a domain,
> > that domain may change the cacheability attributes of any of these pages,
> > resulting in aliasing issues. I'll put together a patch for this, but
> > it'll be a couple of days until I'll be able to do so.
>
> I think we should shatter the superpage on demand. This would also be
> required for superpage mappings of Xen itself: when we free initmem that
> memory can now be allocated to a domain (now xenheap and domheap are merged
> on x86/64).
>
> An alternative might be to mark such partially-freed superpages as
> Xenheap-only, and allocate them preferentially for Xenheap callers (i.e.,
> alloc those pages first, then from the general heap).
>
Here is the patch I mentioned above; it fixes dom0 booting on my box:
---
The unused per-cpu area is reclaimed as xenheap, but since xenheap and
domheap are shared on x86_64, dom0 may be given these pages and perform
DMA on them. This patch excludes the unused area from xen_in_range(), so
that an IOMMU 1:1 mapping for it can be established.
Signed-off-by: Qing He <qing.he@xxxxxxxxx>
---
diff -r 8b152638adaa xen/arch/x86/setup.c
--- a/xen/arch/x86/setup.c Thu Apr 23 16:22:48 2009 +0100
+++ b/xen/arch/x86/setup.c Fri Apr 24 15:24:18 2009 +0800
@@ -98,6 +98,7 @@ cpumask_t cpu_present_map;
unsigned long xen_phys_start;
unsigned long allocator_bitmap_end;
+unsigned long per_cpu_used_end;
#ifdef CONFIG_X86_32
/* Limits of Xen heap, used to initialise the allocator. */
@@ -223,6 +224,8 @@ static void __init percpu_init_areas(voi
(first_unused << PERCPU_SHIFT),
(NR_CPUS - first_unused) << PERCPU_SHIFT);
#endif
+
+ per_cpu_used_end = __pa(__per_cpu_start) + (first_unused << PERCPU_SHIFT);
}
static void __init init_idle_domain(void)
@@ -1124,9 +1127,9 @@ int xen_in_range(paddr_t start, paddr_t
/* initialize first time */
if ( !xen_regions[0].s )
{
- extern char __init_begin[], __per_cpu_start[], __per_cpu_end[],
- __bss_start[];
+ extern char __init_begin[], __per_cpu_start[], __bss_start[];
extern unsigned long allocator_bitmap_end;
+ extern unsigned long per_cpu_used_end;
/* S3 resume code (and other real mode trampoline code) */
xen_regions[0].s = bootsym_phys(trampoline_start);
@@ -1136,7 +1139,7 @@ int xen_in_range(paddr_t start, paddr_t
xen_regions[1].e = __pa(&__init_begin);
/* per-cpu data */
xen_regions[2].s = __pa(&__per_cpu_start);
- xen_regions[2].e = __pa(&__per_cpu_end);
+ xen_regions[2].e = per_cpu_used_end;
/* bss + boot allocator bitmap */
xen_regions[3].s = __pa(&__bss_start);
xen_regions[3].e = allocator_bitmap_end;
diff -r 8b152638adaa xen/arch/x86/tboot.c
--- a/xen/arch/x86/tboot.c Thu Apr 23 16:22:48 2009 +0100
+++ b/xen/arch/x86/tboot.c Fri Apr 24 15:24:18 2009 +0800
@@ -48,6 +48,7 @@ static uint64_t sinit_base, sinit_size;
extern char __init_begin[], __per_cpu_start[], __per_cpu_end[], __bss_start[];
extern unsigned long allocator_bitmap_end;
+extern unsigned long per_cpu_used_end;
#define SHA1_SIZE 20
typedef uint8_t sha1_hash_t[SHA1_SIZE];
@@ -310,7 +311,7 @@ void tboot_shutdown(uint32_t shutdown_ty
__pa(&_stext);
/* per-cpu data */
g_tboot_shared->mac_regions[2].start =
(uint64_t)__pa(&__per_cpu_start);
- g_tboot_shared->mac_regions[2].size = __pa(&__per_cpu_end) -
+ g_tboot_shared->mac_regions[2].size = per_cpu_used_end -
__pa(&__per_cpu_start);
/* bss */
g_tboot_shared->mac_regions[3].start = (uint64_t)__pa(&__bss_start);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel