RE: [Xen-devel] [timer/ticks related] dom0 hang during boot on large 1TB
On Tue, 2010-01-05 at 15:46 +0000, Dan Magenheimer wrote:
> > What is clear though is that you also depend on the memory
> > distribution across the (physical) address space: Contiguous
> > (apart from the below-4G hole) memory will likely pose few
> > problems, but sparse memory crossing the 44-bit boundary can't
> > work in any case (since MFNs are represented as 32-bit
> > quantities in 32-bit Dom0).
>
> Urk. Yes, I had forgotten about the sparse problem.
>
> > I can't say there are known problems, but I'm convinced not
> > everything can work properly above the boundary of 168G.
> > Nevertheless it is quite possible that most or all of the normal
> > (non-error-handling) code paths work well. Page table walks,
> > e.g. during exceptions or kexec, would be problem candidates.
> > And while my knowledge of the tools is rather limited, libxc
> > also has - iirc - several hard-coded assumptions that might not
> > hold.
>
> What is special about 168GB? Or is that a typo? (And if it
> is supposed to be 128GB, what is special about 128GB?)
It's the size of the m2p table you can fit into the hypervisor hole of
a PAE guest running on a 64-bit hypervisor; since the hypervisor no
longer needs to reside in that hole, it is bigger than with a PAE guest
on a PAE hypervisor. The size of the hypervisor hole is settable at
runtime for many guests, but I'm not sure that is plumbed through in
the tools, so who knows how well it works. Increasing the size of the
hypervisor hole eats into kernel low memory though, so you would be
trading off maximum per-guest RAM against maximum host RAM to some
degree.
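
To make the arithmetic concrete, here is a rough back-of-the-envelope
sketch in C. The 4KB page size and the 4-byte m2p entry follow from the
32-bit MFN representation discussed above; the 168MB hole size is my
inference from the 168G figure, and the exact constants in Xen may
differ:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t page_size = 4096;          /* 4KB pages */
    const uint64_t mfn_entry = 4;             /* one 32-bit MFN per m2p slot */
    const uint64_t hole      = 168ULL << 20;  /* assumed 168MB hypervisor hole */

    /* Each 4-byte m2p entry covers one 4KB page, so the hole maps: */
    uint64_t max_ram = hole / mfn_entry * page_size;
    printf("m2p in a 168MB hole covers %llu GB of RAM\n",
           (unsigned long long)(max_ram >> 30));    /* prints 168 */

    /* Independently, a 32-bit MFN can name at most 2^32 frames: */
    uint64_t mfn_limit = (1ULL << 32) * page_size;  /* 2^44 bytes */
    printf("32-bit MFNs top out at %llu TB (the 44-bit boundary)\n",
           (unsigned long long)(mfn_limit >> 40));  /* prints 16 */

    return 0;
}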
Ian.
>
> Thanks,
> Dan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel