This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] PCI BAR register space written with garbage in HVM guest

To: Dan Gora <dan.gora@xxxxxxxxx>
Subject: Re: [Xen-devel] PCI BAR register space written with garbage in HVM guest.
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Tue, 16 Mar 2010 11:20:40 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 16 Mar 2010 08:47:16 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4779de451003151955v15863656i5f39a631a8c558ee@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4779de451003151809n6cec813dp32d77fee34b1bda2@xxxxxxxxxxxxxx> <20100316014851.GE7622@xxxxxxxxxxxxxxxxxxx> <4779de451003151955v15863656i5f39a631a8c558ee@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.19 (2009-01-05)
> <aside>
> What is pcifront/pciback's role for HVM guests exactly?  I understand

None, functionally. The purpose is to "bind" the PCI devices to pciback
(or pcistub) so that no other kernel module usurps them and starts
utilizing them.
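As a concrete illustration of that binding step (the device address and paths below are generic examples, not taken from this thread), a device is typically claimed for pciback either with the pciback.hide boot parameter or by rebinding it through sysfs after boot:

```shell
# Hide the device from dom0 at boot via the kernel command line
# (example BDF address):
#   pciback.hide=(0000:01:00.0)

# Or rebind an already-claimed device to pciback at runtime:
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:01:00.0 > /sys/bus/pci/drivers/pciback/bind
```

This is a configuration sketch for the classic Xen-patched kernels of this era; newer mainline kernels use the xen-pciback driver name instead.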

> that you "hide" the devices from the dom0 with pciback and it
> definitely loads and does *something* when the HVM guest comes up, but
> accesses from the domU don't appear to go through it at all (I
> understand that it works with qemu somehow, but it's not at all clear
> how that channel works either...)

With HVM, pciback is not used. You need a virtualization-aware IOMMU to
pass PCI devices through to the guest. Pciback/pcifront is for PV
guests, and for machines where you don't necessarily have this fancy
hardware.

> I've looked through qemu enough to kind of understand that it's
> responsible for abstracting the PCI configuration space for the domU,
> but I don't really understand how accesses get channeled through to it
> from the domU.  Does it use hypercalls somehow?  Can someone explain

The hypervisor gets "trapped" into when an outb is made (look for the
emulate_privileged_op function, and specifically the handling of out).

Then it somehow injects the fault into QEMU, which does the rest. I don't
remember the details of how it does that, though :-(

> how this whole flow is supposed to work for PCI configuration space
> accesses?
> </aside>

Xen-devel mailing list