Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen
From: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
Date: Tue, 2 Feb 2010 22:28:48 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
Delivery-date: Tue, 02 Feb 2010 06:31:06 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1265118605.2965.23089.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Intel Opensource Technology Center
References: <201002021616.19189.sheng@xxxxxxxxxxxxxxx> <201002022128.43575.sheng@xxxxxxxxxxxxxxx> <1265118605.2965.23089.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.2 (Linux/2.6.31-17-generic; KDE/4.3.2; x86_64; ; )
On Tuesday 02 February 2010 21:50:05 Ian Campbell wrote:
> On Tue, 2010-02-02 at 13:28 +0000, Sheng Yang wrote:
> > On Tuesday 02 February 2010 21:19:32 Ian Campbell wrote:
> > > On Tue, 2010-02-02 at 12:54 +0000, Sheng Yang wrote:
> > > > On Tuesday 02 February 2010 19:22:21 Ian Campbell wrote:
> > > > > On Tue, 2010-02-02 at 08:16 +0000, Sheng Yang wrote:
> > > > > > +/* Reserve 128KB for grant table */
> > > > > > +#define GNTTAB_MEMBASE     0xfbfe0000
> > > > > > +#define GNTTAB_MEMSIZE     0x20000
> > > > >
> > > > > Why is this necessary? Isn't the grant table contained within one
> > > > > of the BARs on the virtual PCI device? What needs grant tables for
> > > > > prior to the kernel finding the PCI device which necessitates
> > > > > hardcoding these addresses in both guest and hypervisor?
> > > >
> > > > Thanks for the quick and detailed comments. :)
> > > >
> > > > And this one is intentional, because we don't want to depend on
> > > > QEmu. As you can see, we now have PV drivers, and QEmu is now just
> > > > one alternative way to provide the device model. We think that still
> > > > requiring QEmu is somewhat strange. So this reserved region is there
> > > > to drop the dependence on QEmu for providing the PV drivers (a PV
> > > > driver that depends on QEmu is still strange, right?).
> > >
> > > So with your patchset you can run an HVM guest with no qemu process at
> > > all? What about the other emulated devices which have no PV equivalent?
> > > How does the VM boot, does the BIOS have a PV INT 13 handler?
> >
> > No, not currently... Sorry for the confusion.
> >
> > We just think that having QEmu provide the availability of the PV
> > drivers seems inelegant, so we want to decouple them. All QEmu provides
> > is a PCI IRQ and an MMIO region for the grant table, and the event
> > channel is available without that PCI IRQ, so we think decoupling the
> > MMIO region from QEmu would be more elegant...
> 
> OK, but in that case I think we should have a mechanism for the guest to
> query the location of the grant table pages (hypercall, MSR etc) rather
> than hardcoding a magic address. It may well end up being hardcoded on
> the tools/hypervisor side for now but there is no reason to expose that
> to the guest.

Yes. I would try to use the QEmu-provided device for this - though I think it 
may still be a problem if I use the PCI probe mechanism to find the address, 
because the grant table may need to be initialized earlier than PCI probing 
happens.

Rather than probing it as a PCI device, I would prefer a query mechanism or a 
hardcoded address in this case.
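
Just to sketch what such a query could look like - a minimal, hypothetical 
example, since only the HVMOP_get_param hypercall itself exists today and 
HVM_PARAM_GNTTAB_BASE below is an index such an interface would have to 
define:

#include <xen/interface/xen.h>
#include <xen/interface/hvm/hvm_op.h>
#include <xen/interface/hvm/params.h>

/* Hypothetical parameter index - not in params.h today. */
#define HVM_PARAM_GNTTAB_BASE 18

/* Ask Xen where the grant table region lives, instead of
 * hardcoding GNTTAB_MEMBASE in the guest. */
static unsigned long gnttab_query_membase(void)
{
    struct xen_hvm_param xhv;

    xhv.domid = DOMID_SELF;
    xhv.index = HVM_PARAM_GNTTAB_BASE;
    if (HYPERVISOR_hvm_op(HVMOP_get_param, &xhv) < 0)
        return 0;    /* interface not available - fall back */

    return (unsigned long)xhv.value;
}
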
> 
> I wonder if we could turn things around and have the guest pick some
> pages and tell the hypervisor to put the grant table there, removing the
> need to reserve any of the physical address space up front. How do
> full-PV guests find their grant table, is that a mechanism which could
> be re-used here instead of reserving magic regions?

I will check whether we can follow the PV solution. (I remember I once looked 
into that, but have forgotten the reason for not doing so...)
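
For reference, a full-PV guest never sees a fixed address at all: it asks Xen 
for the machine frames backing the table and maps them itself. Roughly, as a 
sketch of what Linux's drivers/xen/grant-table.c does:

#include <xen/interface/grant_table.h>

#define NR_GRANT_FRAMES 4

static unsigned long frames[NR_GRANT_FRAMES];

static int gnttab_setup_pv(void)
{
    struct gnttab_setup_table setup;
    int rc;

    setup.dom = DOMID_SELF;
    setup.nr_frames = NR_GRANT_FRAMES;
    set_xen_guest_handle(setup.frame_list, frames);

    /* Xen fills frames[] with the machine frames backing the grant
     * table; no fixed guest-physical address is involved. */
    rc = HYPERVISOR_grant_table_op(GNTTABOP_setup_table, &setup, 1);
    if (rc < 0 || setup.status != GNTST_okay)
        return -1;

    /* The guest then maps frames[] wherever it likes in its own
     * address space (arch_gnttab_map_shared() in Linux). */
    return 0;
}

(And on the HVM side, XENMEM_add_to_physmap with XENMAPSPACE_grant_table 
already lets the guest nominate the gpfn for each grant table frame, which is 
essentially the "guest picks some pages" model you describe - I will check 
whether that works here.)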

-- 
regards
Yang, Sheng

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel