Re: [Xen-devel] Re: Linux Stubdom Problem

To: Keir Fraser <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] Re: Linux Stubdom Problem
From: Jiageng Yu <yujiageng734@xxxxxxxxx>
Date: Fri, 2 Sep 2011 01:12:50 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxx>, Samuel Thibault <samuel.thibault@xxxxxxxxxxxx>
In-reply-to: <CA838CEF.1FFF2%keir.xen@xxxxxxxxx>
References: <alpine.DEB.2.00.1108291636010.12963@kaball-desktop> <CA838CEF.1FFF2%keir.xen@xxxxxxxxx>
2011/8/31 Keir Fraser <keir.xen@xxxxxxxxx>:
> On 29/08/2011 17:03, "Stefano Stabellini" <stefano.stabellini@xxxxxxxxxxxxx>
> wrote:
>
>>> Oh, so it will.  You'd need to arrange for that to be called from inside
>>> the guest; or you could implement an add_to_physmap space for it; that
>>> could be called from another domain.
>>
>> "From inside the guest" means hvmloader?
>> The good thing about doing it in hvmloader is that we could use the
>> traditional PV frontend/backend mechanism to share pages. On the other
>> hand hvmloader doesn't know if we are using stubdoms at the moment and
>> it would need to issue the grant table hypercall only in that case.
>> Unless we decide to always grant the videoram to guests but it would
>> change once again the domain to which the videoram is accounted for
>> (dom0/stubdom rather than the guest, that is a bad thing).
>> Also I don't like the idea of making hvmloader stubdom aware.
>
> I don't see a problem with it, in principle. I see hvmloader as almost an
> in-guest part of the toolstack. The fact that it only executes at guest boot
> means it can be fairly closely tied to the toolstack version.
>
>  -- Keir

Hi all,

    I am reporting a new issue with vram mapping in the Linux-based
stubdom. I use the following patch to map MFNs owned by the stubdom
into the HVM guest.

diff -r 0f36c2eec2e1 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c Thu Jul 28 15:40:54 2011 +0100
+++ b/xen/arch/x86/mm.c Thu Sep 01 14:52:25 2011 +0100
@@ -4663,6 +4665,14 @@
             page = mfn_to_page(mfn);
             break;
         }
+        case XENMAPSPACE_mfn:
+        {
+            if ( !IS_PRIV_FOR(current->domain, d) )
+                return -EINVAL;
+            mfn = xatp.idx;
+            page = mfn_to_page(mfn);
+            break;
+        }
         default:
             break;
         }
@@ -4693,13 +4708,17 @@
         }

         /* Unmap from old location, if any. */
-        gpfn = get_gpfn_from_mfn(mfn);
-        ASSERT( gpfn != SHARED_M2P_ENTRY );
-        if ( gpfn != INVALID_M2P_ENTRY )
-            guest_physmap_remove_page(d, gpfn, mfn, 0);
+        if ( xatp.space != XENMAPSPACE_mfn )
+        {
+            gpfn = get_gpfn_from_mfn(mfn);
+            ASSERT( gpfn != SHARED_M2P_ENTRY );
+            if ( gpfn != INVALID_M2P_ENTRY )
+                guest_physmap_remove_page(d, gpfn, mfn, 0);
+        }

         /* Map at new location. */
         rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
diff -r 0f36c2eec2e1 xen/include/public/memory.h
--- a/xen/include/public/memory.h       Thu Jul 28 15:40:54 2011 +0100
+++ b/xen/include/public/memory.h       Thu Sep 01 14:52:25 2011 +0100
@@ -212,6 +212,7 @@
 #define XENMAPSPACE_shared_info 0 /* shared info page */
 #define XENMAPSPACE_grant_table 1 /* grant table page */
 #define XENMAPSPACE_gmfn        2 /* GMFN */
+#define XENMAPSPACE_mfn         3 /* MFN */
     unsigned int space;

 #define XENMAPIDX_grant_table_status 0x80000000


I then hit a BUG_ON at:

arch_memory_op()
   -->case XENMEM_add_to_physmap:
         -->if ( page )
              -->put_page(page);
                    -->free_domheap_page(page);
                           -->BUG_ON((pg[i].u.inuse.type_info & PGT_count_mask) != 0);

In my case, pg[i].u.inuse.type_info & PGT_count_mask == 1.

In the Linux-based stubdom case, I need to keep these vram pages
mapped in the stubdom's qemu. But it seems that granting pages implies
having them unmapped in the domain that grants them, so perhaps the
grant table cannot solve the vram mapping problem.

Thanks,

Jiageng Yu.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
