
To: "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH]Fix the logic when deassign the mmio ranges for vti-domain.
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Fri, 6 Mar 2009 12:14:26 +0900
Cc: "xen-ia64-devel@xxxxxxxxxxxxxxxxxxx" <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 05 Mar 2009 19:14:31 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <706158FABBBA044BAD4FE898A02E4BC2243A4B40@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <706158FABBBA044BAD4FE898A02E4BC22435AF54@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090303091502.GC31543%yamahata@xxxxxxxxxxxxx> <706158FABBBA044BAD4FE898A02E4BC22435B088@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090303112748.GA1438%yamahata@xxxxxxxxxxxxx> <706158FABBBA044BAD4FE898A02E4BC2243A433A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090304121820.GE1438%yamahata@xxxxxxxxxxxxx> <706158FABBBA044BAD4FE898A02E4BC2243A46CC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090305053200.GA12594%yamahata@xxxxxxxxxxxxx> <706158FABBBA044BAD4FE898A02E4BC2243A4B40@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6i
On Fri, Mar 06, 2009 at 10:11:42AM +0800, Zhang, Xiantao wrote:
> 
> Isaku Yamahata wrote:
> > On Thu, Mar 05, 2009 at 11:55:10AM +0800, Zhang, Xiantao wrote:
> >> Isaku Yamahata wrote:
> >>> On Wed, Mar 04, 2009 at 05:26:41PM +0800, Zhang, Xiantao wrote:
> >>>> So far, we have only found the msi-x case. Maybe we will add msi-x
> >>>> support later, so this fix is also required.
> >>> 
> >>> Okay, makes sense.
> >>> 
> >>>>>>> And why GPFN_LOW_MMIO independently of addr? Shouldn't it be
> >>>>>>> aware of io_ranges[]?
> >>>>>> 
> >>>>>> For the low mmio ranges (3G-3.5G), we can use the fixed mfn
> >>>>>> GPFN_LOW_MMIO combined with ASSIGN_io to indicate whether the p2m
> >>>>>> entries are mmio ranges.  You may refer to io_ranges; it also
> >>>>>> uses the fixed GPFN_LOW_MMIO to initialize p2m entries for the
> >>>>>> low mmio range.
> >>>>> 
> >>>>> Hmm, there are two cases that call
> >>>>> xc_domain_memory_mapping(DPCI_REMOVE_MAPPING):
> >>>>> - Just to remove the entry. zap_domain_page_one() is wanted.
> >>>> 
> >>>> Why remove the entries?  For an hvm domain, I think the entries
> >>>> should always exist during the life of the guest.
> >>>> You may refer to the call vmx_build_io_physmap_table; these
> >>>> entries are created at the initialization time of the domain.
> >>>> 
> >>>>>   the one in pt_iomem_map(), and remove_msix_mapping() except
> >>>>>   when called by pt_iomem_map()
> >>>> 
> >>>>> 
> >>>>> - mmio on the area should be routed to qemu-dm
> >>>>>   GPFN_LOW_MMIO and ASSIGN_io are wanted.
> >>>>> 
> >>>>>   remove_msix_mapping() which is called by pt_iomem_map().
> >>>>> 
> >>>>> Is there a way to distinguish them?
> >>>> 
> >>>> We don't need to distinguish them; instead, in both cases we
> >>>> should keep these entries consistent with the values that are
> >>>> initialized by vmx_build_io_physmap_table.
> >>> 
> >>> In the current x86 implementation, mmio which isn't handled by the
> >>> xen VMM is passed to qemu-dm. On the other hand, the current
> >>> xen/ia64 checks _PAGE_IO:
> >>> if _PAGE_IO is set, the access is passed to qemu-dm; otherwise
> >>> panic_domain() is called. So the behaviour should be changed such
> >>> that a load/store on a non-present p2m entry is passed to qemu-dm,
> >>> like x86.
> >> 
> >> That may change a lot of logic.
> > 
> > I think only vmx_hpw_miss() (and possibly a few related functions)
> > needs modification.
> > 
> > 
> >>  For hvm/ia64, we have the assumption that all p2m entries for mmio
> >> ranges should exist, and we use _PAGE_IO to identify their type.
> > 
> > But the msi-x emulation code in qemu-dm doesn't assume it.
> > So you created the patch, didn't you?
> > If you want to keep the assumption, the code to change would be
> > the msi-x emulation in qemu-dm, not the vmm.
> 
> I don't think we need to touch the code in qemu-dm; could you give some hints?

I wanted to argue that the _PAGE_IO assumption should be changed.
So let's modify vmx_hpw_miss() so that a load/store on an area with a zero
p2m entry is passed to qemu-dm.
What needs to change is the check "pte & _PAGE_IO", which becomes
"(pte & _PAGE_IO) || (pte == INVALID_MFN)" or something like that.
And the argument to emulate_io_inst(), pt_pfn(__pte(pte)), has to be
adjusted for the case pte == INVALID_MFN. Maybe lookup_domain_mpa() needs
to be revised.
NOTE: I haven't checked the details, so the above description may need
      minor corrections.
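
To make the intent concrete, here is a tiny self-contained sketch of the
predicate change. The constant values below (including the INVALID_MFN
stand-in, taken as 0 to represent the zero p2m entry discussed above) are
placeholders for this example, not the real Xen/ia64 definitions, and the
surrounding vmx_hpw_miss() logic is omitted:

/* Sketch of the proposed check, with stand-in definitions so it
 * compiles standalone; not the actual Xen/ia64 sources. */
#include <stdint.h>
#include <stdio.h>

#define _PAGE_IO     (1UL << 60)   /* placeholder bit position */
#define INVALID_MFN  0UL           /* stand-in for a zero/absent p2m entry */

/* Current behaviour: only entries explicitly marked _PAGE_IO are
 * forwarded to qemu-dm; anything else ends in panic_domain(). */
static int old_check(uint64_t pte)
{
    return (pte & _PAGE_IO) != 0;
}

/* Proposed behaviour: a missing entry is also routed to qemu-dm,
 * matching what x86 does. */
static int new_check(uint64_t pte)
{
    return (pte & _PAGE_IO) || (pte == INVALID_MFN);
}

int main(void)
{
    uint64_t zapped = INVALID_MFN;  /* entry removed by deassign */
    printf("old: %d  new: %d\n", old_check(zapped), new_check(zapped));
    return 0;
}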


> In addition, maybe we need to do flush_tlb_all after removing some mapping
> for an mmio range?

Oh yes, you're right.
In the pv domain case, zap_domain_page_one() eventually calls
domain_page_flush_put(), which flushes the tlb.

Having reviewed the code, I found that the current ia64 implementation of
XEN_DOMCTL_memory_mapping is broken for hvm domains,
i.e. deassign_domain_mmio_page() is broken for fully virtualized domains.
In the hvm domain case, there is an assumption that the p2m table of an
hvm domain isn't changed while the domain is running; otherwise there will
be a race. (Possibly it could be relaxed to append-only.)
zap_domain_page_one() hasn't been used for hvm domains so far, so there
are no users of it there. The balloon driver isn't supported for ia64 hvm
domains for the same reason.

So for an hvm domain, we have to do
 - domain_pause()
 - zap_domain_page_one()
   This function does
   - zero-clear the p2m entry
   - flush the tlb (fortunately the current implementation happens to issue
                    domain_flush_vtlb_all() for hvm domains)
 - domain_unpause()
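
As a self-contained illustration of that ordering (the Xen calls are
replaced with printing stubs so the fragment compiles on its own; the real
signatures in the tree may differ):

/* Ordering sketch for the hvm deassign path; the Xen functions are
 * stubbed out, so this is an illustration, not a tested patch. */
#include <stdio.h>

struct domain { int domain_id; };

static void domain_pause(struct domain *d)
{
    /* Stop every vcpu so nothing walks the p2m while we change it. */
    printf("pause domain %d\n", d->domain_id);
}

static void domain_unpause(struct domain *d)
{
    printf("unpause domain %d\n", d->domain_id);
}

static void zap_domain_page_one(struct domain *d, unsigned long mpaddr)
{
    /* Zero-clear the p2m entry, then flush so no stale translation
     * survives (the real code ends up in domain_flush_vtlb_all()). */
    printf("zero p2m entry at 0x%lx and flush vtlb\n", mpaddr);
}

static void deassign_mmio_page_hvm(struct domain *d, unsigned long mpaddr)
{
    domain_pause(d);
    zap_domain_page_one(d, mpaddr);
    domain_unpause(d);
}

int main(void)
{
    struct domain d = { .domain_id = 1 };
    deassign_mmio_page_hvm(&d, 0xc0000000UL);  /* an address in low mmio */
    return 0;
}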

I suppose XEN_DOMCTL_memory_mapping isn't a performance-critical path,
so pausing/unpausing the domain is acceptable.

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel