Re: [Xen-devel][VTD] 1:1 mapping for dom0 exhausts xenheap on x86/32 wit
 
On 28/9/07 08:28, "Han, Weidong" <weidong.han@xxxxxxxxx> wrote:
> Keir Fraser wrote:
>> alloc_domheap_page() instead of alloc_xenheap_page(), and use
>> map_domain_page() to get temporary mappings when you need them. This
>> costs nothing on x86/64, where all memory is permanently mapped.
> 
> I already tried using alloc_domheap_page() instead of
> alloc_xenheap_page(). It works on x86/64, but it does not work on
> x86/32.
Use map_domain_page(), or live with only x86/64 support. You can't burn
x86/32's limited xenheap space on IOMMU page tables.
 -- Keir
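
For readers unfamiliar with the interfaces mentioned above, a minimal
sketch of the domheap + map_domain_page() pattern might look like the
code below. This is an illustration only, not code from this thread:
the helper name alloc_iommu_pgtable_page() is hypothetical, and the
alloc_domheap_page() / map_domain_page() / unmap_domain_page() /
page_to_mfn() signatures shown are those of the Xen 3.x era and may
differ in other versions.

/*
 * Illustrative sketch (hypothetical helper, not from the thread):
 * allocate one IOMMU page-table page from the domheap and zero it
 * through a temporary mapping, instead of using alloc_xenheap_page().
 */
#include <xen/lib.h>
#include <xen/mm.h>
#include <xen/domain_page.h>

static u64 alloc_iommu_pgtable_page(void)
{
    struct page_info *pg;
    void *va;

    /* Domheap page: does not consume x86/32's small xenheap. */
    pg = alloc_domheap_page(NULL);
    if ( pg == NULL )
        return 0;

    /*
     * Temporary mapping; effectively free on x86/64, where all
     * memory is permanently mapped.
     */
    va = map_domain_page(page_to_mfn(pg));
    memset(va, 0, PAGE_SIZE);
    unmap_domain_page(va);

    /* IOMMU entries store the machine address of the new table. */
    return (u64)page_to_mfn(pg) << PAGE_SHIFT;
}

The point of the pattern is that on x86/32 map_domain_page() recycles a
small window of virtual address space, so the page tables themselves can
live in the domheap rather than in the limited xenheap.
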
>> Or it is *very* reasonable to only support VT-d on the x86/64
>> hypervisor. That's the configuration we care about by far the most,
>> since 32-bit guests run fine on a 64-bit hypervisor, and of course
>> all VT-d systems will be 64-bit capable.
>> 
>>  -- Keir
>> 
>> On 28/9/07 06:26, "Han, Weidong" <weidong.han@xxxxxxxxx> wrote:
>> 
>>> The xenheap is only 9MB on x86/32 Xen, which is not enough to set up
>>> the 1:1 page tables for dom0, so dom0 cannot boot successfully.
>>> Setting up the 1:1 page table in the domheap might still be a
>>> problem, since the idea is to use the same 1:1 page table for both
>>> dom0 and PV domains. Currently I see two options: 1) go back to the
>>> original method, i.e. set up the page table dynamically for dom0;
>>> 2) increase the xenheap size on x86/32. What do you think? Thanks.
>>> 
>>> Weidong
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
 