WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] [PATCH][SVM][1/2] fix SVM 64bit hv cores>0 reboot/hang issue

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH][SVM][1/2] fix SVM 64bit hv cores>0 reboot/hang issue
From: "Woller, Thomas" <thomas.woller@xxxxxxx>
Date: Wed, 3 May 2006 16:17:21 -0500
Delivery-date: Wed, 03 May 2006 14:17:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcZu9vcRJTuS0XtoTnCkTbm4C7xKHQ==
Thread-topic: [PATCH][SVM][1/2] fix SVM 64bit hv cores>0 reboot/hang issue
SVM patch for the 64-bit hypervisor that resets the ss, es, and ds host
selectors to NULL during a context switch to the SVM domain's vcpu.  A
more detailed description of the problem is below; any alternate
solutions and thoughts are appreciated.
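The effect of the patch can be sketched as a small user-space model
(the struct and function names below are illustrative, not Xen's; the
real patch reloads the ds, es, and ss segment registers themselves in
the SVM context-switch path):

```c
#include <assert.h>
#include <stdint.h>

/* User-space model of the fix (hypothetical names, not Xen's).  The
 * actual patch reloads the ds, es, and ss host selector registers with
 * NULL (0) in the SVM context-switch path; here the registers are
 * modelled as plain struct fields so the effect can be shown outside
 * ring 0. */
struct host_selectors {
    uint16_t ds, es, ss;
};

static void svm_ctxt_switch_to_model(struct host_selectors *host)
{
    /* NULL selectors are exempt from the VMEXIT consistency check,
     * so their GDT entries need not be mapped on the current core. */
    host->ds = 0;
    host->es = 0;
    host->ss = 0;
}
```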

This patch also initializes tlb_control to 1 for the initial
do_launch().
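A minimal sketch of the tlb_control initialization, assuming a
simplified stand-in for the VMCB structure (the real layout lives in
Xen's svm/vmcb.h; only the field of interest is modelled here):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the relevant VMCB field (assumption: the real
 * structure holds many more fields). */
struct vmcb_model {
    uint32_t tlb_control;   /* non-zero requests a TLB flush on VMRUN */
};

/* On the first launch the guest's ASID has no trustworthy TLB history,
 * so a flush is requested for the initial VMRUN. */
static void init_for_launch(struct vmcb_model *vmcb)
{
    vmcb->tlb_control = 1;
}
```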

Applies cleanly to 9906.
Please apply.

Signed-off-by: Tom Woller <thomas.woller@xxxxxxx>

NOTES:
This issue occurs when creating an unmodified HVM/SVM guest on
cores>0, with a 64-bit hypervisor and an SMP Dom0.  The system reboots
or hangs on the initial vmrun/launch.

The reboot has been traced to a microcode-induced processor shutdown.
The shutdown is triggered when the host selectors are restored during
the initial "vmexit".  The microcode performs a consistency check on
each restored host selector: each selector's GDT entry must be
accessible to the core performing the VMRUN/VMEXIT.  In this failing
case the host selectors being restored are DS/ES == 0x2B (from the
Dom0 vcpu) and reside in physical GDT pages/entries that are not
present in memory for core 1, which results in a processor shutdown.

Basically, each host selector restored during an SVM VMEXIT must be
accessible - i.e. the GDT pages must be mapped in for the core
performing the VMEXIT.  A NULL selector is not checked.

An alternate solution (besides setting ss, es, and ds to NULL) would
be to determine where to place code that ensures each GDT page
associated with the host selectors to be restored is valid and
physically present.  Are ss, es, and ds values of 0x2B needed in this
64-bit context, or is setting these selectors to NULL sufficient?
   
We placed test code into the svm_ctxt_switch_to() function that
touches each GDT page, but this code runs in interrupt context, so
page faults result in a CPU1 FATAL TRAP 14 (page fault) in interrupt
context.  We also tried running the same test code during the SVM
do_launch(), but that likewise results in a GP fault and a system
crash.

Cursory testing on VT boxes indicates that the 0x2B selectors are also
restored on VMX, but the VT microcode does not appear to validate
these values and therefore does not cause a processor shutdown.

These registers are officially ignored in 64-bit mode, and zeroing
them out appears to be a functional solution, but we are unsure about
the use of the __USER_DS 0x2B values in the ds and es selectors in
64-bit mode.

Appreciate any information/thoughts,
Tom

Attachment: svm_selinit0_9908.patch
Description: svm_selinit0_9908.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel