RE: [Xen-devel] [PATCH 0/2] MCA support with page offlining

To: SUZUKI Kazuhiro <kaz@xxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH 0/2] MCA support with page offlining
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Tue, 16 Dec 2008 17:30:04 +0800
Accept-language: en-US
Cc:
Delivery-date: Tue, 16 Dec 2008 01:30:40 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20081215.153321.197440268.kaz@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20081215.153321.197440268.kaz@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclefxL4dgBlKshESX2H+yn8LtNf9wA2xN/w
Thread-topic: [Xen-devel] [PATCH 0/2] MCA support with page offlining
SUZUKI Kazuhiro <kaz@xxxxxxxxxxxxxx> wrote:
> Hi all,
> 
> I had posted about MCA support for Intel64 before. It only had a
> function to log the MCA error data received from the hypervisor.
> 
> http://lists.xensource.com/archives/html/xen-devel/2008-09/msg00876.html
> 
> I am attaching patches that support not only error logging but also a page
> offlining function. The page where an MCA occurs will be offlined and not
> reused. A new flag 'PGC_reserved' was added to the page's count_info to mark
> the impacted page.
> 
> I know that it would be better to implement page offlining for general
> use, but as a first step I implemented it specifically for MCA.

The MCA page-offline requirement may be a bit different from normal page 
offlining, so taking it as a first step is probably a good choice :)

As for your current page offlining, I'm not sure why a PGC_reserved page 
should not be freed. Also, with the following code, won't a reserved page at 
the head of the list make heap(node, zone, j) unallocatable from then on? 
Maybe we could create a special list to hold all those pages and remove them 
from the heap list?

+                if ( !list_empty(&heap(node, zone, j)) ) {
+                    pg = list_entry(heap(node, zone, j).next, struct page_info, list);
+                    if (!(pg->count_info & PGC_reserved))
+                        goto found;
+                    else
+                        printk(XENLOG_DEBUG "Page %p(%lx) is not to be allocated.\n",
+                               pg, page_to_maddr(pg));
+

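A minimal sketch of the special-list idea, assuming Xen's usual list helpers and a hypothetical park_offlined_page()/page_offlined_list (neither is in the patch; the names are only for illustration):

/* Hypothetical sketch: collect offlined pages on a dedicated list so they
 * never sit at the head of a heap free list and block allocation there. */
static LIST_HEAD(page_offlined_list);            /* assumed name, not in the patch */
static DEFINE_SPINLOCK(page_offlined_lock);

/* Called once an #MC identifies a bad page; caller holds the heap lock. */
static void park_offlined_page(struct page_info *pg)
{
    pg->count_info |= PGC_reserved;              /* mark as never-reuse */
    list_del(&pg->list);                         /* drop from heap(node, zone, j) */

    spin_lock(&page_offlined_lock);
    list_add_tail(&pg->list, &page_offlined_list);
    spin_unlock(&page_offlined_lock);
}

With something like this, the allocator's scan loop would not need the PGC_reserved check at all, because reserved pages would never appear on a free list.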

> 
> And I also implemented the MCA handler for Dom0, which supports shutting down
> the remote domain where an MCA occurred. If the MCA occurred on a DomU,
> Dom0 notifies that DomU. If the notification fails, Dom0 issues the
> SCHEDOP_remote_shutdown hypercall.
> 
> [1/2] xen part: mca-support-with-page-offlining-xen.patch

We are not sure we really need to pass all #MC information to dom0 first and 
let dom0 notify the domU. Xen knows about everything, so it should have enough 
knowledge to decide whether or not to inject a virtual #MC into the guest. Of 
course, this does not impact your patch.
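
For reference, the fallback path quoted above (dom0 crashing a domU that cannot be notified) would look roughly like the sketch below on the dom0 side; notify_domu_of_mce() is a made-up stand-in for whatever the patch uses, while SCHEDOP_remote_shutdown and struct sched_remote_shutdown are the existing Xen interfaces:

#include <xen/interface/sched.h>   /* SCHEDOP_remote_shutdown, SHUTDOWN_crash */

/* Sketch only: try to hand the #MC to the guest; if that fails, ask Xen to
 * shut the guest down on our behalf. */
static void handle_domu_mce(domid_t domid)
{
    struct sched_remote_shutdown r;

    if (notify_domu_of_mce(domid) == 0)   /* hypothetical notification helper */
        return;                           /* guest will handle the error itself */

    r.domain_id = domid;
    r.reason    = SHUTDOWN_crash;         /* treat an undeliverable #MC as fatal */
    HYPERVISOR_sched_op(SCHEDOP_remote_shutdown, &r);
}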

> [2/2] linux/x86_64 part: mca-support-with-page-offlining-linux.patch

As for how to inject a virtual #MC into a guest (including dom0), I think we 
need to consider the following points:

a) Benefit from reusing the guest's #MC handler. #MC handlers are well known 
to be difficult to test, and the native guest handler may have been tested 
more widely. Also, #MC handlers improve over time, so reusing the guest's MCA 
handler lets us share those improvements.
b) Maintaining a PV handler across different OS versions may not be so easy, 
especially as hardware improves and the kernel gains better support for error 
handling/containment.
c) The #MC handler may need some model-specific information to decide on an 
action, while a guest (other than dom0) only sees virtualized CPUID 
information.
d) The guest's MCA handler may require physical information about where the 
#MC happened, such as the number of the CPU on which it occurred.
e) For an HVM domain, a PV handler will be difficult (consider a Windows 
guest).

And we have several options for supporting virtual #MC injection into guests:

Option 1 is what is currently implemented: a PV #MC handler in the guest. This 
PV handler gets MCA information from the Xen hypervisor through a hypercall, 
including the MCA MSR values plus some additional information, such as which 
physical CPU the MCA happened on. Option 1 helps us with issue d), but we need 
to maintain a PV handler and cannot benefit from the native handler. It also 
does not resolve issue c) very well.

Option 2: Xen provides MCA MSR virtualization so that the guest's native #MC 
handler can run without changes. It benefits from the guest's #MC handler, but 
it will be difficult to get model-specific information, and the guest has no 
physical information.
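
To make option 2 concrete, a very rough sketch of what the MSR-read side of MCA virtualization could look like is below; the per-vCPU fields under v->arch.vmce are illustrative assumptions, not existing Xen code:

/* Sketch: intercept guest RDMSR on the MCA MSRs and return per-vCPU virtual
 * values instead of the physical registers. */
int vmce_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
{
    switch ( msr )
    {
    case MSR_IA32_MCG_CAP:
        *val = v->arch.vmce.mcg_cap;      /* banks/capabilities we expose */
        return 1;
    case MSR_IA32_MCG_STATUS:
        *val = v->arch.vmce.mcg_status;   /* set when a virtual #MC is injected */
        return 1;
    case MSR_IA32_MC0_STATUS:
        *val = v->arch.vmce.mc0_status;   /* virtual bank 0; other banks alike */
        return 1;
    default:
        return 0;                         /* not an MCA MSR; caller handles it */
    }
}

The write side would similarly accept the guest handler's clearing of the status registers without touching the physical MSRs.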

Option 3 uses a PV #MC handler in the guest as in option 1, but the interface 
between Xen and the guest is an abstract event, such as "offline the offending 
page" or "terminate the current execution context". This should be 
straightforward for Linux, but may be difficult for Windows and other OSes.

Currently we are considering option 2, providing MCA MSR virtualization to 
guests, and dom0 can also benefit from such support (if a guest has a 
different CPUID than the native one, we will either keep the guest running or 
kill it, based on the error code). Of course, the current mechanism of passing 
MCA information from Xen to dom0 will still be useful, but it will be used for 
logging purposes or for correctable errors. What do you think about this?
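
As a side note on the correctable-error case: the usual way to tell a corrected error from an uncorrected one is from the IA32_MCi_STATUS bits, roughly as in the sketch below (bit positions per the Intel SDM; the helper name is made up):

#define MCi_STATUS_VAL  (1ULL << 63)   /* register contains valid information */
#define MCi_STATUS_UC   (1ULL << 61)   /* uncorrected error                   */
#define MCi_STATUS_PCC  (1ULL << 57)   /* processor context corrupt           */

/* Sketch: corrected errors have neither UC nor PCC set, so they only need
 * to be logged, with no containment action. */
static int mci_status_is_correctable(uint64_t status)
{
    if ( !(status & MCi_STATUS_VAL) )
        return 0;                      /* nothing logged in this bank */
    return !(status & (MCi_STATUS_UC | MCi_STATUS_PCC));
}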

Thanks
Yunhong Jiang

> Signed-off-by: Kazuhiro Suzuki <kaz@xxxxxxxxxxxxxx>
> 
> Thanks,
> KAZ
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel