xen-devel

Re: [Xen-devel] Re: [PATCH 1/2] xen/mmu: Add workaround "x86-64, mm: Put early page table high"

On Wed, May 04, 2011 at 08:59:03PM +0200, Daniel Kiper wrote:
> On Tue, May 03, 2011 at 09:51:41PM +0200, Daniel Kiper wrote:
> > On Tue, May 03, 2011 at 11:12:06AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Tue, May 03, 2011 at 02:55:27AM +0200, Daniel Kiper wrote:
> > > > On Mon, May 02, 2011 at 01:22:21PM -0400, Konrad Rzeszutek Wilk wrote:
> 
> [...]
> 
> > > > I think that (Stefano please confirm or not) this patch was prepared
> > > > as workaround for similar issues. However, I do not like this patch

It was actually to fix SandyBridge boxes. Their last E820 reserved
region was around fed40000 and then the RAM region started at
100000000, which meant that we misinterpreted the gap (starting at
mfn fed40) as the start of RAM.
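
To make the failure mode concrete, here is a stand-alone toy sketch
(not the actual xen/setup.c logic; the struct and the heuristic are
simplified for illustration) of how treating "first address past the
last reserved E820 entry" as the start of RAM goes wrong on such a
layout:

#include <stdint.h>
#include <stdio.h>

/* Toy E820 entry, reduced from arch/x86/include/asm/e820.h. */
struct e820_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;   /* 1 = RAM, 2 = reserved */
};

/* SandyBridge-like layout: last reserved region near fed40000,
 * first high-RAM region at 100000000 (4G). */
static const struct e820_entry map[] = {
    { 0xfed40000ULL,  0x1000ULL,     2 },  /* reserved */
    { 0x100000000ULL, 0x40000000ULL, 1 },  /* RAM above 4G */
};

int main(void)
{
    /* Buggy heuristic: assume RAM begins right after the last
     * reserved entry.  That lands inside the PCI hole at
     * ~fed41000 instead of at 100000000. */
    uint64_t guessed = map[0].addr + map[0].size;
    uint64_t actual  = map[1].addr;

    printf("guessed RAM start %#llx, actual %#llx\n",
           (unsigned long long)guessed,
           (unsigned long long)actual);
    return 0;
}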

> > > > because on systems with a small amount of memory it leaves a huge
> > > > (to some extent) hole between max_low_pfn and 4G. Additionally, it
> > > > affects memory hotplug a bit because it allocates memory starting
> > > > from the current max_mfn. It also breaks memory hotplug on i386
> > > > (maybe also other things, however, I could not confirm that). If it
> > > > stays for some reason it should be amended in the following way:
> > > >
> > > > #ifdef CONFIG_X86_32
> > > >         /* i386: start the extra region right after existing RAM */
> > > >         xen_extra_mem_start = mem_end;
> > > > #else
> > > >         /* 64-bit: push the extra region to 4G or above */
> > > >         xen_extra_mem_start = max((1ULL << 32), mem_end);
> > > > #endif
> > > >
> > > > Regarding the comment for this patch, it should be mentioned that
> > > > without this patch e820_end_of_low_ram_pfn() is not broken; it is
> > > > simply not called.

Hmm. What is max_pfn set to?
Can you send the full dmesg of your guest?
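
For reference, e820_end_of_low_ram_pfn() (arch/x86/kernel/e820.c)
reports the last page frame of E820 RAM below 4G. Roughly, and with a
toy entry struct redefined so this compiles on its own (the kernel
version walks the real e820 map and has more bookkeeping), it boils
down to:

#include <stdint.h>

#define PAGE_SHIFT 12
#define E820_RAM   1

struct e820_entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/* Highest pfn of RAM below 4G -- approximately what
 * e820_end_of_low_ram_pfn() computes. */
static uint64_t end_of_low_ram_pfn(const struct e820_entry *map, int n)
{
    const uint64_t limit = 1ULL << (32 - PAGE_SHIFT); /* pfn of 4G */
    uint64_t last = 0;

    for (int i = 0; i < n; i++) {
        uint64_t end;

        if (map[i].type != E820_RAM)
            continue;
        end = (map[i].addr + map[i].size) >> PAGE_SHIFT;
        if (end > limit)
            end = limit;   /* clip anything at or above 4G */
        if (end > last)
            last = end;
    }
    return last;
}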

> > > >
> > > > Last but not least. I found that memory sizes below and including
> > > > exactly 1 GiB, and exactly 2 GiB, 3 GiB (maybe higher, i.e. 4 GiB,
> > > > 5 GiB, ...; I was not able to test them because I do not have
> > > > sufficient memory) are magic. It means that if memory is set to
> > > > those sizes everything works fine (without
> > > > 4b239f458c229de044d6905c2b0f9fe16ed9e01e and
> > > > 24bdb0b62cc82120924762ae6bc85afc8c3f2b26 applied). It means that
> > > > domU should be tested with sizes which are neither a power of two
> > > > nor a multiple of that.
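
A quick way to pick non-"magic" test sizes along these lines (purely
an illustrative helper, not anything from the kernel tree; "magic"
here means a power of two or a whole-GiB multiple, per the
observation above):

#include <stdint.h>
#include <stdio.h>

#define GIB (1ULL << 30)

static int is_magic_size(uint64_t bytes)
{
    /* Power of two: exactly one bit set. */
    int pow2 = bytes && !(bytes & (bytes - 1));
    /* Whole multiple of 1 GiB: 1G, 2G, 3G, ... */
    int gib  = bytes && !(bytes % GIB);
    return pow2 || gib;
}

int main(void)
{
    const uint64_t mib = 1ULL << 20;
    const uint64_t sizes[] = { 1024 * mib, 1500 * mib, 3072 * mib };

    for (int i = 0; i < 3; i++)
        printf("%4llu MiB: %s\n",
               (unsigned long long)(sizes[i] / mib),
               is_magic_size(sizes[i]) ? "magic, may hide the bug"
                                       : "good test candidate");
    return 0;
}
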
> > >
> > > Hmm, I thought I did test 1500M.
> >
> > It does not work on my machine (24bdb0b62cc82120924762ae6bc85afc8c3f2b26
> > removed and 4b239f458c229de044d6905c2b0f9fe16ed9e01e applied).
> 
> It does not work on my machine (x86_64) with Linux kernel 2.6.39-rc6
> without git commit 24bdb0b62cc82120924762ae6bc85afc8c3f2b26 (xen: do not
> create the extra e820 region at an addr lower than 4G). As I said
> earlier, the bug introduced by git commit
> 4b239f458c229de044d6905c2b0f9fe16ed9e01e (x86-64, mm: Put early page
> table high) is probably hidden (repaired/worked around?) by git commit
> 24bdb0b62cc82120924762ae6bc85afc8c3f2b26 (xen: do not create the extra
> e820 region at an addr lower than 4G).

There are a couple of things in flight to fix "x86-64, mm: Put
early page table high" and also .. "cleanup highmem" (something) - which
has been plaguing us since 2.6.32 (and was the one you hit a long time ago).

Anyhow, the setting of xen_extra_mem_start to 4GB or higher should
be reworked. Not sure yet how.
> 
> Konrad, Stefano, could you confirm that? If it is true,
> how could I help you in removing this bug?
> 
> Daniel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
