WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

RE: [Xen-devel] [PATCH] Improve the current FLR logic

To: "Espen Skoglund" <espen.skoglund@xxxxxxxxxxxxx>, "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Improve the current FLR logic
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Fri, 18 Jul 2008 13:31:41 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Thu, 17 Jul 2008 22:32:06 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <18559.16413.571459.416403@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <FE7BBCFBB500984A9A7922EBC95F516E0169A036@xxxxxxxxxxxxxxxxxxxxxxxxxxxx><18555.37467.245643.297314@xxxxxxxxxxxxxxxxxx><FE7BBCFBB500984A9A7922EBC95F516E01710D50@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <18559.16413.571459.416403@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcjoC+IDIk73yPyMSNWzLkaUJX6eKQAelevQ
Thread-topic: [Xen-devel] [PATCH] Improve the current FLR logic
Will this method impact PV domU when iommu_pv is not specified?

In fact, I had a look at tools/python/xen/xend/server/pciif.py and
noticed it is much bigger than the other files (like tpmif.py or
vscsiif.py) in the same directory, and there is some PCI logic in
XendDomainInfo.py as well. So are there any rules for deciding whether
a function should be placed in the control panel or in the backend?

As for this ordering issue, I think it is OK for an HVM domain, since
domain_destroy should happen after the control panel does the cleanup
work (we may need to do that before qemu is destroyed). But I'm not
sure whether a PV domain will have such an opportunity.
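
The deassign/assign ordering that Espen proposes below can be sketched
roughly as follows (a minimal sketch only; the helper names are
hypothetical placeholders, not the real xend or hypervisor interfaces):

```python
calls = []  # records call order so the sketch is checkable

# Hypothetical placeholders for the real xend/hypervisor operations:
def revoke_device_resources(dev):       calls.append("revoke")
def flr(dev):                           calls.append("flr")
def reassign_to_dom0(dev):              calls.append("to_dom0")
def assign_to_guest(dev, guest):        calls.append("to_guest")
def grant_device_resources(dev, guest): calls.append("grant")

def deassign_device(dev):
    revoke_device_resources(dev)  # guest can no longer be a DMA target
    flr(dev)                      # reset flushes any in-flight transactions
    reassign_to_dom0(dev)         # safe: the device is now quiesced

def assign_device(dev, guest):
    flr(dev)                            # guest starts with a clean device
    assign_to_guest(dev, guest)         # point the IOMMU tables at the guest
    grant_device_resources(dev, guest)  # finally expose its resources
```

Doing the FLR before the reassign is what keeps stray DMA out of dom0
memory in the GMFN != MFN case discussed below.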

Thanks
Yunhong Jiang

Espen Skoglund <mailto:espen.skoglund@xxxxxxxxxxxxx> wrote:
> What I was thinking of with deassigning it completely was that pciback
> performs a deassign when it binds to a device, and an assign to dom0
> when it unbinds a device.  The latter part is not strictly necessary,
> though, since Linux will also unregister the device from Xen when
> pciback performs the unbind.  This implies a deassign and a subsequent
> register and assign to dom0 once the device is bound to another
> driver.
> 
> Doing a complete deassign instead of a reassign to dom0 would ensure
> that the whole FLR/deassign ordering problem is circumvented.
> 
>       eSk
> 
> 
> 
> -----Original Message-----
> From: "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>
> Subject: RE: [Xen-devel] [PATCH] Improve the current FLR logic
> Date: Thu, 17 Jul 2008 18:21:36 +0800
> 
> Yes, when a domain is destroyed, the place where I do FLR is too late.
> Thanks for pointing this out. The situation here looks a little
> tricky; I'm trying to find a better place.
> 
> When a device is not assigned to another domain, since Dom0 might
> dynamically rebind the device to another driver rather than pciback,
> "completely deassigning the device from the IOMMU" may not be a
> good idea?
> 
> For an old xend running on a new hypervisor, FLR is actually skipped.
> I'm not sure whether that is acceptable; I'd like to hear more comments.
> 
> Thanks,
> -- Dexuan
> 
> 
> -----Original Message-----
> From: Espen Skoglund [mailto:espen.skoglund@xxxxxxxxxxxxx]
> Sent: 15 July 2008 1:52
> To: Cui, Dexuan
> Cc: Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH] Improve the current FLR logic
> 
> Maybe I've got this wrong, but it looks like you're doing the FLR
> after the device has been deassigned (i.e., given back to dom0).
> Shouldn't you do the FLR before you actually deassign the device
> instead?  If the device is currently set up for doing DMA transactions
> to host memory, the pending transactions will end up in dom0's memory
> space.  This is fine if GMFN == MFN since the guest memory has still
> not been released, but for GMFN != MFN you'll end up corrupting
> arbitrary dom0 memory.
> 
> The right procedure for deassigning devices should be:
> 
>  - Revoke device resources.
>  - FLR.
>  - Reassign device to dom0.
> 
> Likewise, for assigning devices we should do:
> 
>  - FLR.
>  - Assign device to guest.
>  - Grant device resources to guest.
> 
> Another option is to completely deassign the devices from the IOMMU
> rather than reassigning them to dom0.  Devices bound to pciback need
> not be assigned to dom0's IOMMU tables anyway.
> 
> Further, your patch probably breaks things when running old dom0/xend
> on a new hypervisor since you'll end up not doing any FLR at all.  I
> recently experienced the same thing with some of my own PCI cleanup
> patches.
> 
>       eSk
> 
> 
> [Dexuan Cui]
>> Hi, all,
>> The attached patches try to improve the current FLR logic. The idea
>> is to remove the FLR logic from the hypervisor and add the improved
>> logic to the Control Panel.
> 
>> The current FLR logic in the hypervisor has some issues: 1) the
>> Dstate transition is not guaranteed to properly clear the device
>> state; 2) the current code for PCIe FLR is actually buggy:
>> PCI_EXP_DEVSTA_TRPND doesn't mean the completion of FLR; according
>> to the PCIe spec, after issuing FLR, we should wait at least 100ms.
> 
>> To make it easier to improve the FLR logic, and to keep the
>> hypervisor thin, I think we might as well move the logic to the
>> Control Panel for the time being. In the long run, the essential
>> logic may be implemented in the pciback driver of Dom0 instead.
> 
>> [...]
> 
>> I ran some tests on my hosts; the patches appear to work well.
> 
>> I'd like to ask for your comments, and test feedbacks. Thank you
>> very much!
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
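
On the 100ms point in Dexuan's description above: a PCIe FLR is issued
by setting the Initiate Function Level Reset bit (bit 15) of the Device
Control register in the PCI Express capability, and the spec then
requires waiting at least 100ms before touching the device again. A
minimal sketch that operates on a config-space snapshot (a real
implementation would read and write the device's actual config space
from dom0; the helper names here are illustrative, not the xend code):

```python
import time

PCI_CAP_ID_EXP = 0x10        # PCI Express capability ID
PCI_CAP_LIST_PTR = 0x34      # offset of the first capability pointer
PCI_EXP_DEVCTL = 8           # Device Control offset within the capability
PCI_EXP_DEVCTL_FLR = 0x8000  # Initiate Function Level Reset (bit 15)

def find_pcie_cap(cfg):
    """Walk the capability list of a 256-byte config-space snapshot."""
    pos = cfg[PCI_CAP_LIST_PTR]
    while pos:
        cap_id, nxt = cfg[pos], cfg[pos + 1]
        if cap_id == PCI_CAP_ID_EXP:
            return pos
        pos = nxt
    return None

def do_flr(cfg, delay=0.1):
    """Set the FLR bit in Device Control, then wait the mandatory 100ms."""
    cap = find_pcie_cap(cfg)
    if cap is None:
        raise RuntimeError("device has no PCIe capability")
    ctl_off = cap + PCI_EXP_DEVCTL
    devctl = cfg[ctl_off] | (cfg[ctl_off + 1] << 8)
    devctl |= PCI_EXP_DEVCTL_FLR
    cfg[ctl_off] = devctl & 0xFF       # little-endian low byte
    cfg[ctl_off + 1] = devctl >> 8     # little-endian high byte
    time.sleep(delay)  # PCIe spec: wait at least 100ms after issuing FLR
```

Note that, unlike polling PCI_EXP_DEVSTA_TRPND, the spec-mandated wait
is a fixed delay; TRPND only reflects pending transactions before the
reset, not FLR completion.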
