On Tue, Dec 07, Jan Beulich wrote:
> >>> On 06.12.10 at 21:59, Olaf Hering <olaf@xxxxxxxxx> wrote:
> > --- xen-unstable.hg-4.1.22459.orig/xen/arch/x86/mm/guest_walk.c
> > +++ xen-unstable.hg-4.1.22459/xen/arch/x86/mm/guest_walk.c
> > @@ -93,11 +93,12 @@ static inline void *map_domain_gfn(struc
> > uint32_t *rc)
> > {
> > /* Translate the gfn, unsharing if shared */
> > +retry:
> > *mfn = gfn_to_mfn_unshare(p2m, gfn_x(gfn), p2mt, 0);
> > if ( p2m_is_paging(*p2mt) )
> > {
> > - p2m_mem_paging_populate(p2m, gfn_x(gfn));
> > -
> > + if ( p2m_mem_paging_populate(p2m, gfn_x(gfn)) )
> > + goto retry;
> > *rc = _PAGE_PAGED;
> > return NULL;
> > }
>
> Is this retry loop (and similar ones later in the patch) guaranteed
> to be bounded in some way?
This needs to be fixed, yes.
For the plain __hvm_copy case, with nothing else modified, the
'return HVMCOPY_gfn_paged_out' could simply become a 'continue'. But
even then, something needs to break the loop.
> > /* gfn is already on its way back and vcpu is not paused */
> > - return;
> > + goto populate_out;
>
> Do you really need a goto here (i.e. are you foreseeing to get stuff
> added between the label and the return below)?
That's something for my debug patch; I have a trace_var at the end of
each function.
> > +/* retval 1 means the page is present on return */
> > +int p2m_mem_paging_populate(struct p2m_domain *p2m, unsigned long gfn);
>
> Isn't this a case where you absolutely need the return value checked?
> If so, you will want to add __must_check here.
Yes, that would be a good addition.
Maybe the wait_event/wake_up could be done unconditionally, regardless
of whether the p2m domain differs from the vcpu domain.
Olaf
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel