WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-ia64-devel

RE: [Xen-ia64-devel] [PATCH] Fix some IPF Xen VT-d bugs

To: 'Isaku Yamahata' <yamahata@xxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] [PATCH] Fix some IPF Xen VT-d bugs
From: "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>
Date: Mon, 5 Jan 2009 17:39:52 +0800
Accept-language: zh-CN, en-US
Cc: "'xen-ia64-devel@xxxxxxxxxxxxxxxxxxx'" <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 05 Jan 2009 01:39:58 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090105055151.GF32353%yamahata@xxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <EADF0A36011179459010BDF5142A45750458C323@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20081224045307.GD16150%yamahata@xxxxxxxxxxxxx> <EADF0A36011179459010BDF5142A45750458C840@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20081224060440.GE16150%yamahata@xxxxxxxxxxxxx> <EADF0A36011179459010BDF5142A4575045D47B3@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090105055151.GF32353%yamahata@xxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aclu+bkurNpWM3heT8qGxBf1Pwbf6AAHSFqA
Thread-topic: [Xen-ia64-devel] [PATCH] Fix some IPF Xen VT-d bugs
Isaku Yamahata wrote:
> In the x86 case, p2m_lock/unlock() avoids the race, but ia64 doesn't
> have such a lock.
> At this moment, only HVM domains would be supported.
OK, I understand: we can't support PV IOMMU before resolving the lockless-p2m
issue.
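(For reference, the x86 path serializes roughly like this -- a minimal
sketch only; exact signatures differ across trees, and the names besides
p2m_lock/unlock() are illustrative:)

    /* Sketch: how x86 avoids the race that ia64 currently can't.
     * Holding the p2m lock keeps the gfn->mfn translation stable
     * while the VT-d entry is written. */
    p2m_lock(d);                        /* block concurrent p2m updates */
    mfn = gfn_to_mfn(d, gfn);           /* gfn->mfn is now stable */
    if ( mfn_valid(mfn) )
        iommu_map_page(d, gfn, mfn);    /* VT-d entry matches the p2m */
    p2m_unlock(d);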

> The issue is the dom0 case. I suppose it can be supported by mapping
> all the pages except xen pages at boot time and then not doing iommu
> mapping/unmapping, because those pages are already mapped to dom0
> by intel_iommu_domain_init().
I think that is actually what we do.
For the special PV guest Dom0, I think there is no issue here, because
dom0->need_iommu is in fact always 0: when Dom0 boots and Xen assigns all
the devices to it, Xen doesn't invoke assign_device(), and invoking
iommu_domain_init()/intel_iommu_domain_init() doesn't cause need_iommu(dom0)
to become 1.
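(Roughly, with illustrative names -- the point is simply which path would
flip the flag:)

    /* Sketch: why need_iommu(dom0) stays 0 on this path.  Dom0's
     * devices are never run through assign_device(), which is the
     * path that would set d->need_iommu = 1; iommu_domain_init()
     * -> intel_iommu_domain_init() maps dom0's pages but leaves
     * the flag untouched. */
    iommu_domain_init(dom0);      /* maps dom0 pages; need_iommu == 0 */
    /* no assign_device(dom0, ...) at boot => need_iommu(dom0) stays 0 */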

>>> intel_iommu_domain_init() and dom0 memory size
>>>   calc_dom0_size() in xen/arch/ia64/domain.c calculates the default
>>>   dom0 memory size. You should take the memory for the iommu page
>>>   table into account, because it wouldn't be negligible. Probably
>>>   iommu_pages = (max phys addr) / PTRS_PER_PTE_4K + (some spare),
>>>   where PTRS_PER_PTE_4K = (1 << (PAGE_SHIFT_4K - 3)).
>> Now, in intel_iommu_domain_init(), with respect to iommu mapping,
>> Xen maps all the pages for Dom0 except for the pages used by Xen
>> itself.
>> Do you mean Xen should only map the pages actually owned by Dom0?
>> -- for instance, that Xen should not map the iommu page tables? --
>> Since drivers in Dom0 normally don't touch the iommu page tables at
>> all, it looks like the current code is OK?
> 
> No. I meant that calc_dom0_size() should be updated.
> It calculates the maximum memory size which can safely be given to
> dom0. Without dom0_mem_size, the Xen VMM tries to give dom0 the
> maximum memory size, which is a common use case.
> 
> On the other hand, it isn't uncommon for an ia64 machine to have
> several hundred gigabytes of memory, so the memory for the VT-d page
> table could reach tens or hundreds of megabytes, which is not
> negligible compared to the xen heap size. The VT-d table size should
> be taken into account in calc_dom0_size().

I'll look into this.
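(For a rough sense of the numbers -- a minimal sketch using the constants
from the quoted formula; vtd_table_bytes() is just an illustrative helper:)

    #define PAGE_SHIFT_4K   12
    #define PTRS_PER_PTE_4K (1UL << (PAGE_SHIFT_4K - 3)) /* 512 PTEs/page */

    /* One 8-byte leaf PTE per 4K frame => max_phys / 512 bytes of PTEs. */
    static unsigned long vtd_table_bytes(unsigned long max_phys_addr)
    {
        return max_phys_addr / PTRS_PER_PTE_4K;
    }

E.g. a box with 256GB of physical address space needs roughly
256GB / 512 = 512MB of leaf page table, which indeed is not negligible
next to the xen heap.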

> 
>>> intel_iommu_domain_init() and sparse memory.
>>>   To be honest, I'm not sure how much it matters in practice.
>>>   On ia64, memory might be located sparsely, so the iommu page table
>>>   should also be sparse, instead of covering [0, max_page] - (xen
>>>   pages). You want to use efi_memmap_walk() instead of a for loop.
>> Thanks for pointing this out!
>> So my understanding is: in the current intel_iommu_domain_init(),
>> since Xen judges by 'max_page', some pages at high addresses
>> (possibly in the middle or at the end) are actually not mapped,
>> while they should have been mapped?
> 
> On an ia64 machine there might be a big hole, so mapping the whole
> range [0, max_page] could cause a lack of memory. Of course, it
> depends on what kind of ia64 box you use.
> Probably we can skip this issue and address it later if it arises.

I'll look into this.
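(A minimal sketch of what the efi_memmap_walk() variant might look like --
page_used_by_xen() is a hypothetical placeholder for the real ownership
check, and the callback signature follows the ia64 convention:)

    /* Sketch: build dom0's VT-d table only over real RAM ranges.
     * efi_memmap_walk() invokes the callback once per usable range,
     * so the holes in a sparse ia64 memory map are never mapped. */
    static int __init dom0_iommu_map_range(u64 start, u64 end, void *arg)
    {
        struct domain *d = arg;
        unsigned long pfn;

        for ( pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++ )
            if ( !page_used_by_xen(pfn) )     /* hypothetical helper */
                iommu_map_page(d, pfn, pfn);  /* dom0: gfn == mfn */
        return 0;
    }

    /* ... then, in intel_iommu_domain_init(), instead of the
     * [0, max_page) loop: */
    efi_memmap_walk(dom0_iommu_map_range, d);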

Thanks!

-- Dexuan


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel