WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-devel] RE: [Question] Why code differs in construct_dom0?

To: "Shan, Haitao" <haitao.shan@xxxxxxxxx>, 'Keir Fraser' <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] RE: [Question] Why code differs in construct_dom0?
From: "Shan, Haitao" <haitao.shan@xxxxxxxxx>
Date: Thu, 20 Nov 2008 20:52:54 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: "'xen-devel@xxxxxxxxxxxxxxxxxxx'" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 20 Nov 2008 04:53:57 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <61563CE63B4F854986A895DA7AD3C17701F7E629@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <61563CE63B4F854986A895DA7AD3C17701F7E61F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C54AE36D.1F6F1%keir.fraser@xxxxxxxxxxxxx> <61563CE63B4F854986A895DA7AD3C17701F7E629@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclK7215rpYCsocDQfKwAUF7KsGCIAAAXKusAABBHNAAAOL0DAAAJiRQAATXEJA=
Thread-topic: [Question] Why code differs in construct_dom0?
I think I may not have described the problem clearly. The system has 4G of memory. 
According to the E820 table, there was nearly 3.5G of usable RAM below 4G and about 
0.5G above 4G. Most of this RAM was allocated to dom0, leaving Xen only what it 
keeps for itself, such as the xenheap and Xen's reservations.
We were using an onboard graphics card. When X started, agpgart allocated memory 
from the kernel, then asked Xen to exchange these pages for contiguous pages 
below 4G. Each time the agpgart module did this, some pages in the kernel (which 
were actually above 4G in physical memory) were replaced with contiguous pages 
below 4G. This kind of demand was rather high, about 256M on our platform. 
Finally, Xen's reservation (128M) was not enough to fulfill the requirement.
Why was the reservation exhausted? Because the kernel kept asking Xen for memory 
below 4G while only returning memory above 4G to Xen.
Then why does agpgart's allocation always end up coming from above 4G? According 
to the code I pasted in my first mail, when a pfn in dom0 is small, its mfn is 
large: the smaller the pfn, the larger the corresponding mfn. Agpgart allocates 
memory with GFP_DMA32 set, so the pfns allocated are likely to be small, and the 
mfns are therefore likely to be quite large (above 4G).

Either increasing the reservation (to, say, 384M) or changing the initial p2m 
mapping in dom0 solves the problem, and our tests verified this judgment.
We do not know which solution is better, which is why we are seeking your kind 
help.
I am not sure whether I have explained this clearly enough so far. Do you have 
any questions on the problem itself, Keir?

Shan Haitao

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Shan, Haitao
Sent: 20 November 2008 18:00
To: 'Keir Fraser'
Cc: 'xen-devel@xxxxxxxxxxxxxxxxxxx'
Subject: [Xen-devel] RE: [Question] Why code differs in construct_dom0?

Keir Fraser wrote:
> On 20/11/08 09:41, "Shan, Haitao" <haitao.shan@xxxxxxxxx> wrote:
> 
>> So you mean in the release build we make the mapping discontiguous
>> to detect possible bugs, while in the debug build it is not
>> discontiguous? 
> 
> It's the other way round.
> 
>> And another question, about a problem we encountered recently: a
>> system with more than 4G of memory installed crashes when the X
>> server shuts down. The reason is: 1> dom0 allocates memory for AGP
>> by calling agp_allocate_memory with GFP_DMA32 set. This implies the
>> pfns come from memory lower than 4G, while the mfns are likely to be
>> from memory above 4G. 2> dom0 then calls map_pages_to_agp; since the
>> kernel handles a 32-bit GART table, dom0 uses a hypercall to change
>> its memory mappings (xen_create_contiguous_region). Xen will pick
>> proper memory below 4G and free the pages returned by the guest
>> (likely to be from memory above 4G). 3> As the process goes on, more
>> and more memory below 4G is returned to dom0 while memory above 4G
>> is left in Xen. Finally, Xen's reservation of memory below 4G for
>> DMA is exhausted. This creates severe problems for us.
>> 
>> What are your comments on this? Both increasing the reservation in
>> Xen and using contiguous mappings are helpful in this case. Which
>> one do you prefer? 
> 
> I'd need more info on the problem. I will point out that 64-bit Xen
> only allocates memory below 4G when asked, or when there is no memory
> available above 4G. Actually 32-bit Xen is the same, except the first
> chunk of dom0 memory allocated has to be below 1GB (because of
> limitations of Xen's domain_build.c). So I'm not sure what more Xen
> can do? 
In our problem, most of the memory is allocated to dom0, since dom0_mem=xxx is 
not specified in grub. Dom0 actually has nearly 4G of memory, so in this case 
Xen only has a little memory below 4G, which comes from the reservation pool.
> 
>  -- Keir
> 
> 
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
