   
 
 


To: Yuji Shimada <shimada-yxb@xxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [Q] Device error handling discussion -- Was: Is qemu used when we use VTd?
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Thu, 16 Oct 2008 15:32:40 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Akio Takebe <takebe_akio@xxxxxxxxxxxxxx>, "Ke, Liping" <liping.ke@xxxxxxxxx>
Delivery-date: Thu, 16 Oct 2008 00:33:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20081014171332.89E2.SHIMADA-YXB@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080929134743.0EDA.SHIMADA-YXB@xxxxxxxxxxxxxxx> <E2263E4A5B2284449EEBD0AAB751098401ABE67456@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20081014171332.89E2.SHIMADA-YXB@xxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ackt14Xg35QJEY4bR4iqWcX9aGbykwBhJh7w
Thread-topic: [Xen-devel] [Q] Device error handling discussion -- Was: Is qemu used when we use VTd?
Yuji Shimada <mailto:shimada-yxb@xxxxxxxxxxxxxxx> wrote:
> On Mon, 6 Oct 2008 10:28:26 +0800
> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>
>> Yuji Shimada <mailto:shimada-yxb@xxxxxxxxxxxxxxx> wrote:
>>> On Fri, 26 Sep 2008 12:36:21 +0800
>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>>
>> I changed the subject to reflect what's discussed.
>>
>>> We have to solve many difficulties to keep guest domain running.
>>>
>>> How about the following idea as a first step?
>>
>> Yes, agree.
>>
>>>
>>>    Non-fatal error on I/O device:
>>>        - kill the domain that owns the error-source function.
>>>        - reset the function.
>>
>> From the following statement in PCI-E 2.0 section 6.6.2: "Note that Port
>> state machines associated with Link functionality including those
>> in the Physical and Data Link Layers are not reset by FLR", I'm not
>> sure FLR is the right method to handle the error situation. That's
>> the reason I asked how to handle multi-function devices.
>
> I think a non-fatal error is a transaction-level error and does not require
> resetting the lower layers. But I am not sure.

By default, a data link layer error is fatal, but the actual result depends on how
the driver sets it up.
We can trap guest accesses to the AER registers and make sure data link layer errors
are always reported as fatal. That is easy to implement; see the sketch below.
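
To make that concrete, here is a minimal sketch of such a filter (only an
illustration, not the actual pciback config-space code; the trap/emulation plumbing
around it is assumed). The offsets are the standard AER extended-capability
registers, and bit 4 is the Data Link Protocol Error bit:

#include <stdint.h>
#include <stdio.h>

#define AER_UNCOR_STATUS    0x04        /* Uncorrectable Error Status   */
#define AER_UNCOR_MASK      0x08        /* Uncorrectable Error Mask     */
#define AER_UNCOR_SEVERITY  0x0c        /* Uncorrectable Error Severity */
#define AER_DL_PROTO_ERR    (1u << 4)   /* Data Link Protocol Error     */

/* Called on a trapped guest write to the AER capability; 'off' is the
 * offset within the capability, 'val' the value the guest wants to
 * write.  Returns the value that should actually reach the hardware. */
static uint32_t filter_aer_write(uint32_t off, uint32_t val)
{
    switch (off) {
    case AER_UNCOR_SEVERITY:
        /* Severity bit set => reported as fatal.  Never let the guest
         * downgrade Data Link Protocol Errors to non-fatal. */
        return val | AER_DL_PROTO_ERR;
    case AER_UNCOR_MASK:
        /* Mask bit set => not reported at all.  Keep DL errors
         * unmasked so they are always signalled. */
        return val & ~AER_DL_PROTO_ERR;
    default:
        return val;
    }
}

int main(void)
{
    /* Guest tries to mark DL protocol errors non-fatal and mask them. */
    printf("severity -> 0x%08x\n",
           (unsigned)filter_aer_write(AER_UNCOR_SEVERITY, 0x0));
    printf("mask     -> 0x%08x\n",
           (unsigned)filter_aer_write(AER_UNCOR_MASK, 0xffffffff));
    return 0;
}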

>
>>>    Non-fatal error on PCI-PCI bridge.
>>>        - kill all domains that own functions under the PCI-PCI bridge.
>>>        - reset PCI-PCI bridge and secondary bus.
>>>
>>>    Fatal error:
>>>        - kill all domains that own functions under the same root port.
>>>        - reset the link (secondary bus reset on root port).
>>
>> Agree. Basically I think the action of "reset PCI-PCI bridge and
>> secondary bus" or "reset the link" is already done by the AER core.
>> What we need to define is PCI back's error handler. In the first step,
>> the error handler will trigger a domain reset; in the future, more
>> elegant actions can be defined/implemented. Any ideas?
>
> I basically agree with you.
>
> The current AER core does not reset the PCI-PCI bridge and secondary bus
> when a non-fatal error occurs on a PCI-PCI bridge. We need to implement
> resetting the PCI-PCI bridge and secondary bus.

I'd keep the AER core as it currently is unless there is some special reason not to.
For example, why should we kill all domains under the same root port and reset the
root port's secondary link? Currently it will do so only if the impacted device has
no AER service registered.
I'm also not sure we need to reset the link for a non-fatal error if the AER core
does not do that. Is there any special difference between the virtualization and
native situations?
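
For reference, the PCI back error handler mentioned above could roughly follow the
normal Linux pci_error_handlers shape. This is only a sketch of the "first step"
policy (kill the owning guest, let the AER core drive the reset);
pciback_kill_owning_domain() is a made-up placeholder, not a real function:

#include <linux/pci.h>

/* Hypothetical helper: whatever mechanism actually destroys the guest
 * that owns this device. */
extern void pciback_kill_owning_domain(struct pci_dev *dev);

static pci_ers_result_t pciback_error_detected(struct pci_dev *dev,
                                               pci_channel_state_t state)
{
    if (state == pci_channel_io_perm_failure)
        return PCI_ERS_RESULT_DISCONNECT;

    /* First-step policy: shoot the owning domain so it cannot keep
     * driving a device that is in an error state. */
    pciback_kill_owning_domain(dev);

    /* Ask the AER core to reset the link/slot as it would natively. */
    return PCI_ERS_RESULT_NEED_RESET;
}

static pci_ers_result_t pciback_slot_reset(struct pci_dev *dev)
{
    /* The device has been reset; it can be handed back to dom0 once
     * the guest teardown completes. */
    return PCI_ERS_RESULT_RECOVERED;
}

static struct pci_error_handlers pciback_error_handlers = {
    .error_detected = pciback_error_detected,
    .slot_reset     = pciback_slot_reset,
};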

>
>>>
>>> Note: we have to consider how to prevent the device from destroying other
>>> domains' memory.
>>
>> Why should we have to worry about the device destroying other domains'
>> memory? I think VT-d should guarantee this.
>
> The device is re-assigned to dom0 when the HVM domain is destroyed. If we
> destroy the domain before resetting the device, the I/O device can write to
> dom0's memory. On the other hand, we have to stop guest software
> before resetting the device to prevent it from accessing the device.

That should be the same as the normal VT-d situation. We need an FLR before we
re-assign the device to dom0 (if it does not currently work like this, it is a bug);
a rough sketch of that step is below.
Also, stopping guest software before resetting the device may be helpful, but maybe
not so important. Do you think the guest's second access will impact the host?
After all, even in a native environment this is not guaranteed unless the platform
supports it. (It is said PPC has such support.)
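
For the FLR step, something along these lines should be enough when the function
advertises FLR in its PCIe Device Capabilities (again only a sketch using the
standard capability registers, not the real Xen code; fallbacks for devices without
FLR are omitted):

#include <linux/pci.h>
#include <linux/pci_regs.h>
#include <linux/delay.h>
#include <linux/errno.h>

static int flr_before_reassign(struct pci_dev *dev)
{
    int pos = pci_find_capability(dev, PCI_CAP_ID_EXP);
    u32 devcap;
    u16 devctl;

    if (!pos)
        return -ENODEV;     /* not a PCI Express function */

    pci_read_config_dword(dev, pos + PCI_EXP_DEVCAP, &devcap);
    if (!(devcap & PCI_EXP_DEVCAP_FLR))
        return -ENOTTY;     /* no FLR support, need another reset method */

    pci_read_config_word(dev, pos + PCI_EXP_DEVCTL, &devctl);
    devctl |= PCI_EXP_DEVCTL_BCR_FLR;   /* Initiate Function Level Reset */
    pci_write_config_word(dev, pos + PCI_EXP_DEVCTL, devctl);

    /* PCIe spec: give the function at least 100 ms to complete the FLR
     * before issuing further configuration requests. */
    msleep(100);
    return 0;
}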

BTW, you stated "We have to solve many difficulties to keep guest domain
running"; can you give some details on those difficulties (it may be difficult for
HVM, but I'm not sure about the PV side)?

>
>
> By the way, do you have any plan to implement these functions?
> I can provide the idea, but I can't provide the code.

Yes, we will try to work on it. But we may not have enough environments to test
all types of errors. Also, although the AER code can be backported easily, some of
the required ACPI fixes are more challenging.

>
> Thanks,
> --
> Yuji Shimada

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel