On Tuesday 02 February 2010 21:50:05 Ian Campbell wrote:
> On Tue, 2010-02-02 at 13:28 +0000, Sheng Yang wrote:
> > On Tuesday 02 February 2010 21:19:32 Ian Campbell wrote:
> > > On Tue, 2010-02-02 at 12:54 +0000, Sheng Yang wrote:
> > > > On Tuesday 02 February 2010 19:22:21 Ian Campbell wrote:
> > > > > On Tue, 2010-02-02 at 08:16 +0000, Sheng Yang wrote:
> > > > > > +/* Reserve 128KB for grant table */
> > > > > > +#define GNTTAB_MEMBASE 0xfbfe0000
> > > > > > +#define GNTTAB_MEMSIZE 0x20000
> > > > >
> > > > > Why is this necessary? Isn't the grant table contained within one
> > > > > of the BARs on the virtual PCI device? What needs grant tables
> > > > > prior to the kernel finding the PCI device, such that these
> > > > > addresses must be hardcoded in both guest and hypervisor?
> > > >
> > > > Thanks for the quick and detailed comments. :)
> > > >
> > > > And this one is intentional, because we don't want to depend on
> > > > QEmu. As you see, we now have PV drivers, and QEmu is just one way
> > > > to provide the device model now. We think that still requiring QEmu
> > > > is somewhat strange. So this reserved region is there to drop the
> > > > dependency on QEmu for providing the PV drivers (a PV driver that
> > > > depends on QEmu is still strange, right?).
> > >
> > > So with your patchset you can run an HVM guest with no qemu process at
> > > all? What about the other emulated devices which have no PV equivalent?
> > > How does the VM boot? Does the BIOS have a PV INT 13 handler?
> >
> > No, not currently... Sorry for the confusion.
> >
> > We just think that making the PV drivers' availability depend on QEmu
> > seems inelegant, so we want to decouple them. All QEmu provides here is
> > a PCI IRQ and an MMIO region for the grant table, and the event channel
> > is already available without that PCI IRQ, so we think decoupling the
> > MMIO region from QEmu would be more elegant...
>
> OK, but in that case I think we should have a mechanism for the guest to
> query the location of the grant table pages (hypercall, MSR etc) rather
> than hardcoding a magic address. It may well end up being hardcoded on
> the tools/hypervisor side for now but there is no reason to expose that
> to the guest.
Yes. I would try to use the QEmu-provided device for this - though I think
it may still be a problem if I use the PCI probe mechanism to find the
address, because grant table initialization may happen earlier than PCI
probing.
So rather than probing it as a PCI device, I would prefer a query (or the
hardcoded address) in this case.
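
Just to make concrete what I mean by a query, something like the sketch
below on the guest side is the idea. Note HVM_PARAM_GNTTAB_BASE is a
made-up parameter name here; the real parameter would have to be defined
on the hypervisor/tools side first, and GNTTAB_MEMBASE is only kept as a
fallback:

/*
 * Rough sketch only: HVM_PARAM_GNTTAB_BASE is a hypothetical parameter
 * used to illustrate querying the grant table location instead of
 * hardcoding it in the guest.
 */
#include <xen/interface/xen.h>
#include <xen/interface/hvm/hvm_op.h>
#include <xen/interface/hvm/params.h>
#include <asm/xen/hypercall.h>

static unsigned long gnttab_query_membase(void)
{
	struct xen_hvm_param p = {
		.domid = DOMID_SELF,
		.index = HVM_PARAM_GNTTAB_BASE,	/* hypothetical */
	};

	if (HYPERVISOR_hvm_op(HVMOP_get_param, &p))
		return GNTTAB_MEMBASE;	/* fall back to the hardcoded value */

	return p.value;
}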
>
> I wonder if we could turn things around and have the guest pick some
> pages and tell the hypervisor to put the grant table there, removing the
> need to reserve any of the physical address space up front. How do
> full-PV guests find their grant table, is that a mechanism which could
> be re-used here instead of reserving magic regions?
I will check whether we can follow the PV solution. (I remember I checked
this once before, but I have forgotten why we didn't do it that way...)
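
For reference (a sketch from memory, not part of this patchset), the PV
path asks Xen for the grant table frames via GNTTABOP_setup_table and then
maps them, rather than reserving a physical range up front; roughly:

/*
 * Sketch of the PV-style setup, for comparison only: ask the hypervisor
 * for the grant table frames rather than relying on a reserved physical
 * range.  The returned frames still need to be mapped by the guest
 * afterwards; error handling is minimal.
 */
#include <linux/errno.h>
#include <xen/interface/xen.h>
#include <xen/interface/grant_table.h>
#include <asm/xen/hypercall.h>

static int gnttab_setup_pv_style(unsigned long *frames, unsigned int nr_frames)
{
	struct gnttab_setup_table setup;
	int rc;

	setup.dom       = DOMID_SELF;
	setup.nr_frames = nr_frames;
	set_xen_guest_handle(setup.frame_list, frames);

	rc = HYPERVISOR_grant_table_op(GNTTABOP_setup_table, &setup, 1);
	if (rc < 0)
		return rc;

	return setup.status == GNTST_okay ? 0 : -EINVAL;
}

For HVM the returned frames would still have to be placed somewhere in the
guest physical address space, which may be exactly the part that stopped us
before - I need to check.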
--
regards
Yang, Sheng
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel