At 01:12 +0800 on 02 Sep (1314925970), Jiageng Yu wrote:
> 2011/8/31 Keir Fraser <keir.xen@xxxxxxxxx>:
> > On 29/08/2011 17:03, "Stefano Stabellini" <stefano.stabellini@xxxxxxxxxxxxx>
> > wrote:
> >
> >>> Oh, so it will. You'd need to arrange for that to be called from inside
> >>> the guest; or you could implement an add_to_physmap space for it; that
> >>> could be called from another domain.
> >>
> >> "From inside the guest" means hvmloader?
> >> The good thing about doing it in hvmloader is that we could use the
> >> traditional PV frontend/backend mechanism to share pages. On the other
> >> hand hvmloader doesn't know if we are using stubdoms at the moment and
> >> it would need to issue the grant table hypercall only in that case.
> >> Unless we decide to always grant the videoram to guests, but that
> >> would once again change the domain the videoram is accounted to
> >> (dom0/stubdom rather than the guest, which is a bad thing).
> >> Also I don't like the idea of making hvmloader stubdom aware.
> >
> > I don't see a problem with it, in principle. I see hvmloader as almost an
> > in-guest part of the toolstack. The fact that it only executes at guest boot
> > means it can be fairly closely tied to the toolstack version.
> >
> > -- Keir
> >
> >
> >
>
> Hi all,
>
> I'd like to report a new issue with vram mapping in the Linux-based
> stubdom. I use the following patch to map MFNs of the stubdom into the
> HVM guest.
>
> diff -r 0f36c2eec2e1 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c Thu Jul 28 15:40:54 2011 +0100
> +++ b/xen/arch/x86/mm.c Thu Sep 01 14:52:25 2011 +0100
> @@ -4663,6 +4665,14 @@
> page = mfn_to_page(mfn);
> break;
> }
> + case XENMAPSPACE_mfn:
> + {
> + if(!IS_PRIV_FOR(current->domain, d))
> + return -EINVAL;
> + mfn = xatp.idx;
> + page = mfn_to_page(mfn);
> + break;
> + }
I would really rather not have this interface; I don't see why we can't
use grant tables for this.
If you must do it this way, it should check that the MFN is valid and
that it's owned by the caller.
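Something along these lines is what I have in mind (an untested sketch
only, assuming the usual mfn_valid()/get_page() helpers; not a patch I'm
proposing):

    case XENMAPSPACE_mfn:
    {
        if ( !IS_PRIV_FOR(current->domain, d) )
            return -EINVAL;
        mfn = xatp.idx;
        /* Reject MFNs that don't correspond to real RAM pages. */
        if ( !mfn_valid(mfn) )
            return -EINVAL;
        page = mfn_to_page(mfn);
        /* Take a general reference, which also checks that the page
         * is owned by the calling domain. */
        if ( !get_page(page, current->domain) )
            return -EINVAL;
        break;
    }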
> default:
> break;
> }
> @@ -4693,13 +4708,17 @@
> }
>
> /* Unmap from old location, if any. */
> - gpfn = get_gpfn_from_mfn(mfn);
> - ASSERT( gpfn != SHARED_M2P_ENTRY );
> - if ( gpfn != INVALID_M2P_ENTRY )
> - guest_physmap_remove_page(d, gpfn, mfn, 0);
> + if(xatp.space!=XENMAPSPACE_mfn) {
> + gpfn = get_gpfn_from_mfn(mfn);
> + ASSERT( gpfn != SHARED_M2P_ENTRY );
> + if ( gpfn != INVALID_M2P_ENTRY )
> + guest_physmap_remove_page(d, gpfn, mfn, 0);
> + }
Why did you make this change?
>
> /* Map at new location. */
> rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
> diff -r 0f36c2eec2e1 xen/include/public/memory.h
> --- a/xen/include/public/memory.h Thu Jul 28 15:40:54 2011 +0100
> +++ b/xen/include/public/memory.h Thu Sep 01 14:52:25 2011 +0100
> @@ -212,6 +212,7 @@
> #define XENMAPSPACE_shared_info 0 /* shared info page */
> #define XENMAPSPACE_grant_table 1 /* grant table page */
> #define XENMAPSPACE_gmfn 2 /* GMFN */
> +#define XENMAPSPACE_mfn 3 /* MFN */
> unsigned int space;
>
> #define XENMAPIDX_grant_table_status 0x80000000
>
>
> I got an error at:
>
> arch_memory_op()
> -->case XENMEM_add_to_physmap:
> -->if ( page )
> -->put_page(page);
> -->free_domheap_page(page);
> -->BUG_ON((pg[i].u.inuse.type_info & PGT_count_mask) != 0);
>
> In my case, pg[i].u.inuse.type_info & PGT_count_mask is 1.
OK, so you've dropped the last untyped refcount on a page which still
has a type count. That means the reference counting has got messed up
somewhere.
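For reference, the pairing that keeps those two counts consistent looks
roughly like this (illustrative only, not the actual code in mm.c):

    /* A writable mapping holds both a general and a type reference;
     * the two must be dropped together. */
    if ( !get_page_and_type(page, d, PGT_writable_page) )
        return -EINVAL;
    /* ... while the mapping is in use ... */
    put_page_and_type(page);
    /* Dropping the last general reference while PGT_count_mask is
     * still non-zero is exactly what trips the BUG_ON you're seeing. */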
> Actually, in the Linux-based stubdom case, I need to keep these vram
> pages mapped in the stubdom's qemu. But it seems that granting pages
> implies having the pages unmapped in the process that grants them.
> Maybe grant tables cannot solve the vram mapping problem.
But this patch doesn't use the grant tables at all.
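Note that granting foreign access doesn't by itself require the granter
to unmap the page; only a grant *transfer* does that. From kernel
context the sharing side would look roughly like this (a sketch only;
qemu in a Linux stubdom would really go through the gntalloc/gntdev
devices rather than call this directly, and guest_domid/vram_mfn are
placeholders):

    #include <xen/grant_table.h>

    /* vram_mfn: a frame backing one page of videoram, which stays
     * mapped and in use by the granter throughout. */
    int ref = gnttab_grant_foreign_access(guest_domid, vram_mfn, 0);
    if ( ref < 0 )
        return ref;
    /* Hand 'ref' to the guest (e.g. via xenstore); the guest maps it
     * with GNTTABOP_map_grant_ref without the granter unmapping it. */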
Tim.
--
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel