
[Xen-devel] Re: How to access a dom0 page from domU Guest in read-only mode.


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: veerasena reddy <veeruyours@xxxxxxxxx>
  • Date: Fri, 24 Jun 2011 17:27:44 +0530
  • Delivery-date: Fri, 24 Jun 2011 05:05:50 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Can anybody please shed some light on this?

Thanks & Regards,
VSR.

On Thu, Jun 23, 2011 at 3:04 AM, veerasena reddy <veeruyours@xxxxxxxxx> wrote:
Hi,

I am trying to access a dom0 page from a domU guest (in read-only mode) to push faster updates from an IO device to domU without using ring structures and event channels. Whenever there is a device update, the IO device writes to the dom0 page, and domU, polling on this page (in read-only mode), picks up the status update. I take the grant reference allocated in dom0 and pass it as a module parameter to the domU module.

I could successfully share a domU page with dom0, but when I tried sharing a dom0 page with domU, the grant table operations succeeded; however, when I tried to print the contents of the page, it crashed.

Could you please help me understand what I missed here?
I suspect I am missing some flags to gnttab_set_map_op()/HYPERVISOR_grant_table_op(). The same flags worked when I accessed a domU page from dom0. Please point me to the correct usage.
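For reference, the dom0 side could look roughly like the sketch below. This is only an illustration of how I understand the granting API: the helper name share_page_readonly, the page allocation, and the domid parameter are all assumptions, not code from my setup. The key point is the last argument of gnttab_grant_foreign_access(), where readonly = 1 should make the hypervisor refuse writable mappings of the frame by the remote domain:

```c
/* dom0 side (sketch): allocate a page and grant it read-only to a domU.
 * The helper name, the page allocation, and domu_id are illustrative. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <xen/grant_table.h>
#include <asm/xen/page.h>

static int share_page_readonly(domid_t domu_id, struct page **out)
{
        struct page *page;
        unsigned long mfn;
        int gref;

        page = alloc_page(GFP_KERNEL);
        if (!page)
                return -ENOMEM;

        mfn = virt_to_mfn(page_address(page));

        /* readonly = 1: domU may only create read-only mappings */
        gref = gnttab_grant_foreign_access(domu_id, mfn, 1);
        if (gref < 0) {
                __free_page(page);
                return gref;
        }

        *out = page;
        return gref;     /* pass this gref to the domU module as a parameter */
}
```

If the grant is created with readonly = 1 here, the domU mapping side would then need GNTMAP_readonly in its map flags, or the map should fail with a permission error.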

Thanks in Advance.

Regards,
VSR.

======= domU code ========
        struct vm_struct *v_start;
        struct gnttab_map_grant_ref ops;
        int i;

        printk("\nxen: domU: init_module with gref = %d", gref); /* gref (grant reference) is passed as a module parameter */

        /* Reserve a range of kernel address space and allocate page tables
         * to map that range; no actual mappings are created. */
        v_start = alloc_vm_area(PAGE_SIZE);
        if (v_start == NULL) {
                printk("\nxen: domU: could not allocate page");
                return -ENOMEM;
        }

        gnttab_set_map_op(&ops, (unsigned long)v_start->addr,
                          GNTMAP_host_map, gref, 0); /* flags, ref, domID */

        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &ops, 1)) {
                printk("\nxen: domU: HYPERVISOR map grant ref failed");
                free_vm_area(v_start);
                return -EFAULT;
        }

        if (ops.status) {
                printk("\nxen: domU: HYPERVISOR map grant ref failed status = %d",
                       ops.status);
        }

        printk("\nxen: domU: shared_page = %p, handle = %x, status = %x",
               v_start->addr, ops.handle, ops.status);

        printk("\nBytes in page ");
        for (i = 0; i < 10; i++)
                printk("%c", ((char *)(v_start->addr))[i]);   /* <-- when I enable this loop, it crashes */
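If the goal is a read-only mapping, I believe the map flags in the snippet above would also need GNTMAP_readonly, matching a grant created with readonly = 1 on the dom0 side. A sketch of just the changed call (same ops, v_start, and gref as above):

```c
/* Map the grant read-only into the reserved VA range (sketch).
 * GNTMAP_readonly must match a grant that permits read-only access,
 * otherwise the hypervisor rejects the mapping. */
gnttab_set_map_op(&ops, (unsigned long)v_start->addr,
                  GNTMAP_host_map | GNTMAP_readonly,
                  gref, 0 /* granting domain: dom0 */);
```

One more observation, offered tentatively: the dmesg below is from a PVHVM domU. If I read the grant interface correctly, in an auto-translated (HVM/PVHVM) guest the hypervisor interprets host_addr as a guest physical address rather than a kernel virtual address, so passing a bare alloc_vm_area() address leaves no PTE behind the mapped region. That would be consistent with the "PTE 0" line in the oops, and would mean the mapping has to go through real guest frames (as the Linux gnttab helpers that work on struct page do) rather than a reserved VA range.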
=========  crash message (on DomU) =========
root@PVHVM-domU:~/test_programs/page_share_interdomain# dmesg -c

[16259.922417] xen: domU: init_module with gref = 9   <-- grant ref passed as module param
[16259.928763] xen: domU: shared_page = ffffc90000590000, handle = 3, status = 0  <-- the status code shows SUCCESS
[16259.928871] Bytes in page                 <-- trying to print the bytes in the page; crash!
[16259.929126] BUG: unable to handle kernel paging request at ffffc90000590000
[16259.929130] IP: [<ffffffffa010b185>] init_module+0x155/0x17c [domu_share2]
[16259.929144] PGD 3fd25067 PUD 3fd26067 PMD 3e667067 PTE 0
[16259.929178] Oops: 0000 [#3] SMP
===========================

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
