
Re: [Xen-devel] How to share a page between dom0 and Hypervisor


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: veerasena reddy <veeruyours@xxxxxxxxx>
  • Date: Mon, 6 Jun 2011 19:48:34 +0530
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 06 Jun 2011 07:19:25 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Ian,

Thanks a lot for the detailed explanation.
You were correct: I had to use "my_rd_wr_page", not "&my_rd_wr_page". I also had to change my code to translate the PFN to an MFN (pfn_to_mfn()) in the dom0 kernel itself before passing it to the hypercall.
Now it works.
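
For anyone who finds this thread later, here is a minimal, untested sketch of what the corrected dom0-side setup could look like. The my_rd_wr hypercall and the _hypercall2() wrapper are taken from the code quoted below (they assume a corresponding __HYPERVISOR_my_rd_wr number is defined) and are not part of stock Xen/Linux; vmalloc_to_pfn() and pfn_to_mfn() are the usual kernel/pvops helpers.

==================== Corrected dom0 module (sketch) ================
#include <linux/vmalloc.h>
#include <linux/mm.h>               /* vmalloc_to_pfn() */
#include <asm/xen/page.h>           /* pfn_to_mfn() */
#include <asm/xen/hypercall.h>      /* _hypercall2() */

static void my_rd_wr_page_setup(void)
{
        char *my_rd_wr_page;
        unsigned long pfn, mfn, my_maddr;
        int err;

        my_rd_wr_page = __vmalloc(PAGE_SIZE,
                                  GFP_KERNEL | __GFP_HIGHMEM,
                                  __pgprot(__PAGE_KERNEL & ~_PAGE_NX));
        if (!my_rd_wr_page)
                return;

        /* Pass the page itself, not &my_rd_wr_page (the pointer variable). */
        pfn = vmalloc_to_pfn(my_rd_wr_page);

        /* Launder through the p2m: PV hypercalls expect a machine frame. */
        mfn = pfn_to_mfn(pfn);
        my_maddr = mfn << PAGE_SHIFT;

        memset(my_rd_wr_page, 0, PAGE_SIZE);
        err = _hypercall2(int, my_rd_wr, 0x1, (void *)&my_maddr);
        printk("%s: hypercall returned %d\n", __func__, err);
}
============================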

Thanks & Regards,
VSR.

On Mon, Jun 6, 2011 at 7:23 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Mon, 2011-06-06 at 14:28 +0100, veerasena reddy wrote:
> Hi,
>
> In one of experiments, I need to map a page allocated in dom0 to
> hypervisor and access/modify the page contents in hypervisor.
> I tried this by adding a new hypercall, and pass the GPA of the page
> to its handler in hypervisor which does the following:
>
> ==================== Hypercall handler  ================
> DO(my_rd_wr)(int cmd, XEN_GUEST_HANDLE(void) arg)
> {
>         unsigned long dom0_gpa;
>         unsigned long gmfn;
>         unsigned long mfn;
>         void *my_rd_wr_page;
>         struct domain *d = current->domain;
>
>         printk(XENLOG_G_DEBUG "%s:L%u: Entered\n", __FUNCTION__, __LINE__);
>         switch( cmd )
>         {
>                 case 0x1:
>                         if ( copy_from_guest(&dom0_gpa, arg, 1) )
>                                 return -EFAULT;
>                         printk(XENLOG_G_DEBUG "%s:L%u: GPA read 0x%lx\n",
>                                                 __FUNCTION__, __LINE__, dom0_gpa);
>
>                         gmfn = dom0_gpa >> 12;
>                         mfn = gmfn_to_mfn(d, gmfn);
>                         if ( !mfn_valid(mfn) ||
>                              !get_page_and_type(mfn_to_page(mfn), d, PGT_writable_page) )
>                         {
>                                 printk(XENLOG_G_WARNING
>                                         "Bad GMFN %lx (MFN %lx)\n", gmfn, mfn);
>                                 return 0;
>                         }
>
>                         my_rd_wr_page = map_domain_page(mfn);
>
>                         /* Do your initialization of the page here; just write '2' in all bytes */
>                         memset(my_rd_wr_page, 2, 1<<12);
>
>                         unmap_domain_page(my_rd_wr_page);
>                         put_page_and_type(mfn_to_page(mfn));
>                         break;
>
>                 default:
>                         printk(XENLOG_G_DEBUG "%s:L%u: unhandled\n",
>                                                 __FUNCTION__, __LINE__);
>                         break;
>         }
>
>         return 0;
> }
> ============================
>
> I have allocated a page from a sample dom0 kernel module (using vmalloc()) and passed its physical address to the hypercall.
>
> void my_rd_wr_page_setup(void)
> {
>     unsigned long my_gpa;
>     int err;
>     char *my_rd_wr_page = NULL;
>
>     my_rd_wr_page     = __vmalloc(
>                         1 * PAGE_SIZE,
>                         GFP_KERNEL | __GFP_HIGHMEM,
>                         __pgprot(__PAGE_KERNEL & ~_PAGE_NX));
>
>     my_gpa = vmalloc_to_pfn((char *)&my_rd_wr_page) << PAGE_SHIFT;

&my_rd_wr_page is the address of the variable (i.e. probably a pointer
into the current stack) and not the address of the page you are trying
to reference.

Secondly vmalloc_to_pfn will return you a guest physical address, while
hypercalls from PV guests always take an MFN.

On the hypercall side your call to gmfn_to_mfn is normally an identity
function for a PV guest, and map_domain_page takes a machine address.

So I think you need to launder the address through the p2m in the kernel
before passing it to the hypercall.
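
(Sketch, not verified: on the kernel side that laundering is what pfn_to_mfn() does. If I remember right, pvops kernels also export arbitrary_virt_to_machine(), declared in asm/xen/page.h, which walks the page tables and therefore also handles vmalloc'd addresses; treat the exact helper name and signature as something to double-check in your kernel tree.)

#include <asm/xen/page.h>

static unsigned long my_virt_to_maddr(void *vaddr)
{
        /* xmaddr_t.maddr is the machine address backing this mapping. */
        return arbitrary_virt_to_machine(vaddr).maddr;
}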

>     printk("%s: Before Hypercall; my_rd_wr_page=%p my_gpa=%lx\n", __FUNCTION__, my_rd_wr_page, my_gpa);
>     memset(my_rd_wr_page, 0, PAGE_SIZE);
>     err = _hypercall2(int, my_rd_wr, 0x1, (void *)&my_gpa);
>     printk("%s: Hypercall returned; errno-%d\n", __FUNCTION__, err);
> }
>
>
> When I loaded the module, the following error has been observed:
>
> =============== On dom0 ===========
> xen_features[0].writable_page_tables = 0
> xen_features[0].writable_descriptor_tables = 0
> xen_features[0].auto_translated_physmap = 0
> xen_features[0].supervisor_mode_kernel = 0
> xen_features[0].pae_pgdir_above_4gb = 1
> my_rd_wr_page_setup: Before Hypercall; my_rd_wr_page=ffffc90010f7e000 my_gpa=d7cf000
> my_rd_wr_page_setup: Hypercall returned; errno-0
> =====================================
> On hypervisor:
> (XEN) do_my_rd_wr:L178: GPA read 0xd7cf000
> (XEN) mm.c:2037:d0 Error pfn d7cf: rd=ffff8300773c0000, od=0000000000000000, caf=180000000000000, taf=0000000000000000
> (XEN) Bad GMFN d7cf (MFN d7cf)
> ====================================
>
> From the dom0 messages (where I read the features), it looks like writable_page_tables is not set. Is that causing the issue in my case?

Writable page tables is something else; it relates to how a _guest_ can
update its own page tables, not to how Xen creates mappings of things.
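
For completeness: the feature bits in the dom0 log above come from the XENVER_get_features interface, and the one relevant to the PFN/MFN distinction is auto_translated_physmap, not writable_page_tables. A rough sketch of branching on it using the stock xen_feature() helper (the function name below is made up for illustration):

#include <xen/features.h>               /* xen_feature() */
#include <xen/interface/features.h>     /* XENFEAT_auto_translated_physmap */
#include <asm/xen/page.h>               /* pfn_to_mfn() */

/* A classic PV dom0 (auto_translated_physmap == 0, as in the log above)
 * must translate PFN -> MFN itself before handing a frame to Xen. */
static unsigned long my_frame_for_xen(unsigned long pfn)
{
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return pfn;             /* hypervisor translates for us */
        return pfn_to_mfn(pfn);         /* classic PV: use the machine frame */
}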

>
> Could you please advise if I did something wrong here?
> A completely different approach that works would also be most welcome.
>
> Thanks a lot in advance
>
> Regards,
> VSR.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

