To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: RE: [Xen-devel] [timer/ticks related] dom0 hang during boot on large 1TB system
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Tue, 5 Jan 2010 15:54:53 +0000
Cc: "kurt.hackel@xxxxxxxxxx" <kurt.hackel@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
In-reply-to: <08f4283a-8b41-4a02-b03a-f7aab4251ea2@default>
Organization: Citrix Systems, Inc.
References: <08f4283a-8b41-4a02-b03a-f7aab4251ea2@default>
On Tue, 2010-01-05 at 15:46 +0000, Dan Magenheimer wrote:
> > What is clear, though, is that you also depend on the memory
> > distribution across the (physical) address space: contiguous memory
> > (apart from the below-4G hole) will likely present few problems, but
> > sparse memory crossing the 44-bit boundary can't work in any case
> > (since MFNs are represented as 32-bit quantities in 32-bit Dom0).
> 
> Urk.  Yes, I had forgotten about the sparse problem.
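
Right - with MFNs held as 32-bit quantities, a 32-bit dom0 can only name
2^32 frames of 4k each, i.e. 2^44 bytes = 16T of machine address space,
so frames above that boundary simply can't be referred to. A quick
illustrative check (standalone C, not code from any tree):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t max_mfns = 1ULL << 32;  /* MFNs are 32-bit in a 32-bit dom0 */
        uint64_t frame = 1ULL << 12;     /* 4k frames */
        uint64_t limit = max_mfns * frame;

        /* prints 17592186044416, i.e. 2^44 = 16T */
        printf("%llu\n", (unsigned long long)limit);
        return 0;
    }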
> 
> > I can't say there are known problems, but I'm convinced not everything
> > can work properly above the boundary of 168G. Nevertheless, it is quite
> > possible that most or all of the normal (non-error-handling) code paths
> > work well. Page-table walks, e.g. during exceptions or kexec, would be
> > problem candidates. And while my knowledge of the tools is rather
> > limited, libxc also has - iirc - several hard-coded assumptions that
> > might not hold.
> 
> What is special about 168GB?  Or is that a typo?  (And if it
> is supposed to be 128GB, what is special about 128GB?)

It's the amount of memory whose m2p table you can fit into the
hypervisor hole of a PAE guest running on a 64-bit hypervisor; since the
hypervisor itself no longer needs to reside in that hole, it is bigger
than with a PAE guest on a PAE hypervisor.
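
Back of the envelope, assuming the usual PAE layout with the hole
starting at 0xf5800000 (illustrative arithmetic only):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t hole = (1ULL << 32) - 0xf5800000ULL; /* hole below 4G */
        uint64_t entries = hole / sizeof(uint32_t);   /* 4-byte m2p entries */
        uint64_t covered = entries << 12;             /* one entry per 4k frame */

        /* prints "hole 168M -> m2p covers 168G" */
        printf("hole %lluM -> m2p covers %lluG\n",
               (unsigned long long)(hole >> 20),
               (unsigned long long)(covered >> 30));
        return 0;
    }

Each 4 bytes of hole describes one 4k frame, a factor of 1024, so a
168M hole covers exactly 168G of machine memory.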

The size of the hypervisor hole is settable at runtime for many guests,
but I'm not sure that is plumbed through in the tools, so who knows how
well it works. Increasing the size of the hypervisor hole eats into
kernel low memory, though, so you would be trading off maximum per-guest
RAM against maximum host RAM to some degree.
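
For the 1T box in the subject line, for instance (rough numbers, same
layout assumption as above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t host_ram = 1ULL << 40;         /* 1T of host memory */
        uint64_t hole_needed = host_ram >> 10;  /* 4 bytes of m2p per 4k frame */
        uint64_t hole_now = (1ULL << 32) - 0xf5800000ULL;

        /* prints "need 1024M of hole, have 168M: 856M more lowmem gone" */
        printf("need %lluM of hole, have %lluM: %lluM more lowmem gone\n",
               (unsigned long long)(hole_needed >> 20),
               (unsigned long long)(hole_now >> 20),
               (unsigned long long)((hole_needed - hole_now) >> 20));
        return 0;
    }

Growing the hole to 1024M would consume essentially the whole of the
kernel's 1G of virtual space, so a 32-bit dom0 can't hold an m2p
covering the full 1T in any case - that's the tradeoff taken to its
limit.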

Ian.
> 
> Thanks,
> Dan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel