On Wed, Jul 07, 2010 at 04:05:44PM +0200, Joanna Rutkowska wrote:
> On 07/07/10 15:30, Ian Pratt wrote:
> >> I think the fear was that there could be class- or device-specific
> >> config registers that we wouldn't know how to handle, and which
> >> could have unexpected effects if they are passed through naively.
> >> Concrete examples were never given, and this was all pre-vtd so as
> >> you say pass-through of a DMA-capable device was insecure anyway.
> >> I've always thought the permissive flag stuff was pretty useless,
> >> and I always suggest that people enable the permissive flag.
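For what it is worth, "enable the permissive flag" in practice is just a
sysfs write from dom0. A minimal C sketch of the same thing (the usual way
is a one-line echo from the shell; the sysfs node name and the BDF below
are from memory, so double-check them against your kernel):

#include <stdio.h>

#define PCIBACK_PERMISSIVE "/sys/bus/pci/drivers/pciback/permissive"
#define EXAMPLE_BDF        "0000:03:00.0"   /* hypothetical slot */

int main(void)
{
    FILE *f = fopen(PCIBACK_PERMISSIVE, "w");

    if (!f) {
        perror("fopen " PCIBACK_PERMISSIVE);
        return 1;
    }
    /* pciback should now pass config-space writes for this slot through */
    fputs(EXAMPLE_BDF "\n", f);
    fclose(f);
    return 0;
}

Reading the same node back should list the slots currently in permissive
mode, if I remember right.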
> >
> > There are some devices (typically integrated ones, e.g. igfx) that
> > use PCI config space in nasty ways, such as to describe additional
> > BARs, or to trigger SMIs. Allowing free access to these seems
> > dangerous.
> >
>
> So, you're saying that, if we have a device that allows us to set one
> of its PCI config registers (say, a BAR) to tell it where to MMIO-map some of
> the device's additional config range, and if we "asked it" to map it
> over, say, some physical addresses belonging to the hypervisor, then the
> MCH would allow for that? And the CPU would happily redirect access to
> those addresses over to the device memory? Why would it? That would
I would think the VT-d chipset would throw a fit.
> clearly be a CPU/chipset bug, as we normally would have to mark this
> memory range as MMIOed in the first place...
>
> And even if we wanted to instruct the device to map its memory over some
> already MMIOed memory in a hypervisor, shouldn't VT-d prevent the
> read/write transactions going to this device?
That is my feeling too.
>
> As for the SMI generation: that stinks indeed. But, does it offer any
> control over the generated #SMI, e.g. what we write into the 0xb2 port,
> or something like that? If it does, then surely it's an avenue for
> DomU->SMM escalation, which would mean full system compromise.
>
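To make the 0xb2 point concrete: on most Intel chipsets a write to the APM
control port (0xb2) raises a software SMI, and the byte written is what the
SMI handler dispatches on. Something along these lines (sketch only; it
needs raw I/O privileges, and the command byte here is a made-up
placeholder):

#include <stdio.h>
#include <sys/io.h>

#define APM_CNT 0xb2    /* APM control port on Intel chipsets */

int main(void)
{
    if (ioperm(APM_CNT, 1, 1)) {    /* ask the kernel for access to port 0xb2 */
        perror("ioperm");
        return 1;
    }
    outb(0x00, APM_CNT);            /* placeholder command byte; triggers a software SMI */
    return 0;
}

Whether the guest actually gets to control that command byte through the
device's config-space write is exactly the open question you raise.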
> I'm trying to figure out why so many drivers do not work well when run
> in a PV driver domain (specifically net drivers), but work fine when
> running in Dom0. Clearly this is not a pfn != mfn problem, as this
> inequality also applies to Dom0, while in Dom0 the same drivers work
> just fine. So it seems like it could only be caused by either of the
> following:
> 1) restricted access to device config space
You can track those easily. Turn on xen-pciback.verbose=1 and you should
see the reads/writes and whether any of them touch the restricted areas.
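Another quick sanity check, independent of the verbose logging: dump the
standard config header as the driver domain sees it and diff it against
dom0's view of the same device. Registers that pciback emulates or masks
should show up as differences. A rough sketch (the BDF is just an example):

#include <stdio.h>

/* hypothetical BDF -- substitute the device you are passing through */
#define CFG_PATH "/sys/bus/pci/devices/0000:03:00.0/config"

int main(void)
{
    unsigned char buf[64];   /* standard config header */
    size_t n, i;
    FILE *f = fopen(CFG_PATH, "rb");

    if (!f) {
        perror("fopen " CFG_PATH);
        return 1;
    }
    n = fread(buf, 1, sizeof(buf), f);
    fclose(f);

    for (i = 0; i < n; i++)
        printf("%02zx: %02x\n", i, buf[i]);
    return 0;
}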
> 2) interrupt routing problem
Well, that can easily be seen in /proc/interrupts. If the counts
are increasing, the interrupts are getting through.
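If you want to watch that without eyeballing the whole file, something like
the snippet below does the trick ("eth0" is just a placeholder for your
NIC's name or IRQ number as it appears in /proc/interrupts) -- same thing
"watch -d cat /proc/interrupts" gives you from a shell:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Print every /proc/interrupts line containing `needle'. */
static void dump_irq(const char *needle)
{
    char line[512];
    FILE *f = fopen("/proc/interrupts", "r");

    if (!f)
        return;
    while (fgets(line, sizeof(line), f))
        if (strstr(line, needle))
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    dump_irq("eth0");   /* placeholder: your device's name or IRQ number */
    sleep(5);
    dump_irq("eth0");   /* the counts should have gone up if interrupts arrive */
    return 0;
}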
Though if this is MSI/MSI-X, make sure you have the latest pv-ops
kernel. There were some bugs I introduced earlier on where turning on
MSI/MSI-X interrupts would trash the guest. That has been fixed
nowadays.
>
> Or maybe something else?
If you crank up the debug options, something should show up, especially
if you have the IOMMU turned on.
Are these wireless drivers?