Isaku Yamahata wrote:
> On Tue, Mar 03, 2009 at 05:32:42PM +0800, Zhang, Xiantao wrote:
>>
>> Isaku Yamahata wrote:
>>> Could you elaborate on the concrete issue which you're seeing?
> I guess the issue occurs when a passed-through pci device
> is unplugged. But in that case, the region was occupied by
> the device, so qemu hasn't seen io on that area anyway.
>>
>> When assigning a device to an hvm domain, all mmio regions of the pci
>> device should be mapped in the p2m table. A corner case, however, is
>> that accesses to some pages in a region (for example, the vector table
>> in an msi-x BAR) may need to be routed to qemu and emulated there. In
>> that case, we have to remove the mapping for those pages from the p2m,
>> let the hypervisor intercept accesses to them, and forward the io
>> requests to qemu. But zap_domain_page_one can't initialize the mmio
>> p2m entries for an hvm domain. Clear? :-)
>
> You mean pt_iomem_map() which calls remove_msix_mapping() in
> pass-through.c of qemu-dm? Is there any case other than msi-x?
> I couldn't find any other useful case because the current xen/ia64
> doesn't support msi/msi-x.
So far we have only found the msi-x case. We may add msi-x support later, so
this fix is also required.
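For reference, the qemu-dm path in question looks roughly like the sketch
below. This is only an illustration of the shape of the call, not the actual
pass-through.c code; the helper name and the gfn/mfn variables are made up
for the example.

/* Illustration only -- not the real pass-through.c code.  The point is that
 * remove_msix_mapping() is assumed to boil down to a DPCI_REMOVE_MAPPING
 * domctl, which is what eventually reaches the ia64 deassign path. */
#include <xenctrl.h>  /* xc_domain_memory_mapping(); DPCI_REMOVE_MAPPING
                       * comes from xen's public domctl.h */

static int unmap_msix_table(int xc_handle, uint32_t domid,
                            unsigned long table_gfn, unsigned long table_mfn,
                            unsigned long nr_pages)
{
    /* Pull the vector-table pages out of the direct mapping so that guest
     * accesses trap into the hypervisor and are forwarded to qemu-dm. */
    return xc_domain_memory_mapping(xc_handle, domid, table_gfn, table_mfn,
                                    nr_pages, DPCI_REMOVE_MAPPING);
}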
>
>>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be aware
>>> of io_ranges[]?
>>
>> For the low mmio range (3G-3.5G), we can use the fixed mfn
>> GPFN_LOW_MMIO combined with ASSIGN_io to indicate that the p2m
>> entries are mmio ranges. You may refer to io_ranges, which also
>> uses the fixed GPFN_LOW_MMIO to initialize the p2m entries for the
>> low mmio range.
>
> Hmm, there are two cases which call
> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING).
> - Just to remove the entry. zap_domain_page_one() is wanted.
Why remove the entries? For an hvm domain, I think the entries should always
exist during the lifetime of the guest.
You may refer to vmx_build_io_physmap_table; these entries are created at
domain initialization time.
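To make this concrete, the init-time marking looks roughly like the sketch
below. This is reconstructed from memory of the vmx init code, so the exact
io_ranges[] walk and constants may differ from the tree; only the low-MMIO
entry is shown, and the helper name is made up for the example.

/* Sketch only: how the low MMIO window (3G-3.5G) is marked when the HVM
 * domain is built.  The real vmx_build_io_physmap_table() loops over all
 * of io_ranges[]. */
static void build_low_mmio_physmap(struct domain *d)
{
    unsigned long addr;

    for (addr = MMIO_START; addr < MMIO_START + MMIO_SIZE; addr += PAGE_SIZE)
        /* Every page points at the fixed GPFN_LOW_MMIO frame and carries
         * ASSIGN_io, which is how the hypervisor later recognizes an access
         * as emulated MMIO and forwards the request to qemu-dm. */
        (void)__assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
                                   ASSIGN_writable | ASSIGN_io);
}

The patch below simply restores this same value on the deassign path for hvm
domains.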
> the one in pt_iomem_map(), and remove_msix_mapping() except when called
> by pt_iomem_map()
>
> - mmio on the area should be routed to qemu-dm.
> GPFN_LOW_MMIO and ASSIGN_io are wanted.
>
> remove_msix_mapping() which is called by pt_iomem_map().
>
> Is there a way to distinguish them?
We don't need to distinguish them; in both cases we should keep these entries
consistent with the values initialized by vmx_build_io_physmap_table.
> thanks,
>
>> Xiantao
>>
>>>
>>> On Tue, Mar 03, 2009 at 03:14:02PM +0800, Zhang, Xiantao wrote:
>>>> PATCH: Fix the logic when deassigning mmio ranges for a vti-domain.
>>>>
>>>> When de-assigning an mmio range, the p2m entry should be restored to
>>>> its original value; otherwise, the mmio range's type may fail to be
>>>> determined.
>>>>
>>>> Signed-off-by: Xiantao Zhang <xiantao.zhang@xxxxxxxxx>
>>>>
>>>> diff -r 67f2e14613ef xen/arch/ia64/xen/mm.c
>>>> --- a/xen/arch/ia64/xen/mm.c Tue Feb 10 13:47:02 2009 +0800
>>>> +++ b/xen/arch/ia64/xen/mm.c Tue Mar 03 15:04:54 2009 +0800
>>>> @@ -1508,8 +1508,14 @@ deassign_domain_mmio_page(struct domain
>>>>          return -EINVAL;
>>>>      }
>>>>
>>>> -    for (; addr < end; addr += PAGE_SIZE )
>>>> -        zap_domain_page_one(d, addr, 0, INVALID_MFN);
>>>> +    for (; addr < end; addr += PAGE_SIZE ) {
>>>> +        if (is_hvm_domain(d))
>>>> +            __assign_domain_page(d, addr, GPFN_LOW_MMIO << PAGE_SHIFT,
>>>> +                                 ASSIGN_writable | ASSIGN_io);
>>>> +        else
>>>> +            zap_domain_page_one(d, addr, 0, INVALID_MFN);
>>>> +    }
>>>> +
>>>>      return 0;
>>>>  }
>>>>
>>>
>>
>>
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel