
Re: [Xen-devel] Re: HVM hypercalls

  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: veerasena reddy <veeruyours@xxxxxxxxx>
  • Date: Mon, 23 May 2011 19:22:26 +0530
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 23 May 2011 06:53:07 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>


Thanks a lot for the quick reply.

I modified my code to pass the physical page address, and now I no longer see the error message from the Xen hypervisor.
Could you please confirm whether I am now writing the proper physical address?

We can write the page address to the hypervisor using wrmsr(), but who is supposed to set up hypercall_page, which is declared extern in hypercall.h, on HVM? When I try to invoke HYPERCALL_xxxx(), the build reports hypercall_page as undeclared. Do we need to enable CONFIG_XEN in the HVM guest kernel in order to invoke hypercalls to the hypervisor?

Could you please share some sample code, if you have any, to help me get a clear understanding of HVM hypercalls?

        char id[13];
        unsigned int msr1;
        unsigned int eax, ecx, edx;
        unsigned long my_hpage_virt;
        unsigned long hypercall_page;   /* physical address of the page */
        unsigned int my_hpage_lo, my_hpage_hi;

        /* CPUID leaf 0x40000000: hypervisor signature in EBX, ECX, EDX. */
        __asm__ __volatile__("cpuid"
                : "=a" (eax),
                  "=b" (*(int *)(&id[0])),
                  "=c" (*(int *)(&id[4])),
                  "=d" (*(int *)(&id[8]))
                : "a" (0x40000000));
        id[12] = '\0';
        printk("CPU ID read- %s\n", id);   /* "XenVMMXenVMM" on Xen */

        /* CPUID leaf 0x40000002: EBX holds the hypercall-page MSR index. */
        __asm__ __volatile__("cpuid"
                : "=a" (eax), "=b" (msr1), "=c" (ecx), "=d" (edx)
                : "a" (0x40000002));

        /* __get_free_page() returns a kernel virtual address; the MSR
         * wants the *physical* address, hence virt_to_phys(). */
        my_hpage_virt = __get_free_page(GFP_ATOMIC);
        hypercall_page = virt_to_phys((void *)my_hpage_virt);
        printk("my_hpage_phys get_free = %lx\n", my_hpage_virt);
        printk("hypercal_page = %p\n", (void *)hypercall_page);

        my_hpage_lo = hypercall_page & 0xffffffff;
        my_hpage_hi = hypercall_page >> 32;
        printk("my_hpage lo = %x hi = %x\n", my_hpage_lo, my_hpage_hi);
        /* Write the hypercall page's physical address to the MSR */
        wrmsr(msr1, my_hpage_lo, my_hpage_hi);

        return 0;

================= output on HVM ==========
[root@localhost src]# dmesg
my_hypercall_page @ ffffffffa0388000
my_hpage_phys get_free = ffff880005c0b000
hypercal_page = 0000000005c0b000
my_hpage lo = 5c0b000 hi = 0

Thanks & Regards,

On Mon, May 23, 2011 at 1:52 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Mon, 2011-05-23 at 08:48 +0100, veeruyours wrote:
> Hi,
> I recently started working on XEN, and I am looking for ways to invoke
> hypercalls from HVM.
> I followed your instructions and succeeded in reading MSR register.
> But when i attempt to write the physical address of a 4K page from my HVM
> guest (2.6.30 kernel), i observed the XEN hypervisor reporting it as bad
> GMFN as follows.
> [root@f13 ~]# xm dmesg -c
> (XEN) traps.c:664:d17 Bad GMFN ffff88001e925 (MFN ffffffffffffffff) to MSR
> 40000000

That supposed GMFN (ffff88001e925) looks an awful lot like a virtual
address and not a physical one to me, unless your guest really has >4TB
of RAM assigned...

> Could you please help me in understanding what went wrong in my
> implementation.
> I am running XEN 4.0.1 on AMD 64bit machine with svm support and the dom0
> kernel running
> The
> Thanks & Regards,
> VSR.
> --
> View this message in context: http://xen.1045712.n5.nabble.com/HVM-hypercalls-tp2541346p4418332.html
> Sent from the Xen - Dev mailing list archive at Nabble.com.
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel



