xen-devel
[Xen-devel] RFC: Nested VMX patch series 11: vmresume
To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Tim Deegan <Tim.Deegan@xxxxxxxxxx>, Keir Fraser <keir@xxxxxxx>
Subject: [Xen-devel] RFC: Nested VMX patch series 11: vmresume
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Wed, 1 Jun 2011 12:02:23 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "He, Qing" <qing.he@xxxxxxxxx>
Thx, Eddie
Signed-off-by: Qing He <qing.he@xxxxxxxxx>
Signed-off-by: Eddie Dong <eddie.dong@xxxxxxxxx>
diff -r 599f4aacabeb xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c Fri May 27 17:35:24 2011 +0800
+++ b/xen/arch/x86/hvm/vmx/vmx.c Fri May 27 17:46:40 2011 +0800
@@ -2142,6 +2142,11 @@
/* Now enable interrupts so it's safe to take locks. */
local_irq_enable();
+ /* XXX: This looks ugly, but we need a mechanism to ensure
+ * any pending vmresume has really happened
+ */
+ vcpu_nestedhvm(v).nv_vmswitch_in_progress = 0;
+
if ( unlikely(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) )
return vmx_failed_vmentry(exit_reason, regs);
@@ -2457,10 +2462,18 @@
update_guest_eip();
break;
+ case EXIT_REASON_VMLAUNCH:
+ if ( nvmx_handle_vmlaunch(regs) == X86EMUL_OKAY )
+ update_guest_eip();
+ break;
+
+ case EXIT_REASON_VMRESUME:
+ if ( nvmx_handle_vmresume(regs) == X86EMUL_OKAY )
+ update_guest_eip();
+ break;
+
case EXIT_REASON_MWAIT_INSTRUCTION:
case EXIT_REASON_MONITOR_INSTRUCTION:
- case EXIT_REASON_VMLAUNCH:
- case EXIT_REASON_VMRESUME:
case EXIT_REASON_GETSEC:
case EXIT_REASON_INVEPT:
case EXIT_REASON_INVVPID:
diff -r 599f4aacabeb xen/arch/x86/hvm/vmx/vvmx.c
--- a/xen/arch/x86/hvm/vmx/vvmx.c Fri May 27 17:35:24 2011 +0800
+++ b/xen/arch/x86/hvm/vmx/vvmx.c Fri May 27 17:46:40 2011 +0800
@@ -283,6 +283,13 @@
}
}
+static inline u32 __n2_exec_control(struct vcpu *v)
+{
+ struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+
+ return __get_vvmcs(nvcpu->nv_vvmcx, CPU_BASED_VM_EXEC_CONTROL);
+}
+
static int vmx_inst_check_privilege(struct cpu_user_regs *regs, int vmxop_check)
{
struct vcpu *v = current;
@@ -470,6 +477,34 @@
return X86EMUL_OKAY;
}
+int nvmx_handle_vmresume(struct cpu_user_regs *regs)
+{
+ struct vcpu *v = current;
+ struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
+ struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
+ int rc;
+
+ rc = vmx_inst_check_privilege(regs, 0);
+ if ( rc != X86EMUL_OKAY )
+ return rc;
+
+ /* Check that a vVMCS is loaded and, if the I/O bitmap is active, that both bitmap pages are mapped */
+ if ( (nvcpu->nv_vvmcxaddr != VMCX_EADDR) &&
+ ((nvmx->iobitmap[0] && nvmx->iobitmap[1]) ||
+ !(__n2_exec_control(v) & CPU_BASED_ACTIVATE_IO_BITMAP) ) )
+ nvcpu->nv_vmentry_pending = 1;
+ else
+ vmreturn(regs, VMFAIL_INVALID);
+
+ return X86EMUL_OKAY;
+}
+
+int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
+{
+ /* TODO: check for initial launch/resume */
+ return nvmx_handle_vmresume(regs);
+}
+
int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
{
struct vcpu *v = current;
diff -r 599f4aacabeb xen/include/asm-x86/hvm/vmx/vvmx.h
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h Fri May 27 17:35:24 2011 +0800
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h Fri May 27 17:46:40 2011 +0800
@@ -103,6 +103,8 @@
int nvmx_handle_vmread(struct cpu_user_regs *regs);
int nvmx_handle_vmwrite(struct cpu_user_regs *regs);
+int nvmx_handle_vmresume(struct cpu_user_regs *regs);
+int nvmx_handle_vmlaunch(struct cpu_user_regs *regs);
#endif /* __ASM_X86_HVM_VVMX_H__ */
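For reference, the acceptance check that nvmx_handle_vmresume() performs before marking a nested vmentry as pending can be modelled as a small predicate: the entry may proceed only when an L1 vVMCS has been loaded (via VMPTRLD), and, whenever CPU_BASED_ACTIVATE_IO_BITMAP is set in the shadow execution controls, both I/O bitmap pages have been mapped. A minimal standalone sketch follows; the struct and field names are simplified stand-ins for the real Xen nestedvcpu/nestedvmx types, not the actual hypervisor definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VMCX_EADDR                   (~0UL)       /* "no vVMCS loaded" sentinel */
#define CPU_BASED_ACTIVATE_IO_BITMAP 0x02000000u

/* Simplified stand-in for the relevant nestedvcpu/nestedvmx state. */
struct nvcpu_state {
    unsigned long vvmcxaddr;   /* guest-physical address of the loaded vVMCS */
    uint32_t exec_control;     /* shadow CPU_BASED_VM_EXEC_CONTROL value */
    void *iobitmap[2];         /* mapped I/O bitmap pages (A and B), or NULL */
};

/*
 * Mirrors the condition in nvmx_handle_vmresume(): returns true when the
 * nested vmentry may be marked pending, false when VMfail(Invalid) is due.
 */
static bool nested_vmentry_ok(const struct nvcpu_state *s)
{
    if ( s->vvmcxaddr == VMCX_EADDR )
        return false;                            /* no VMPTRLD has happened */
    if ( !(s->exec_control & CPU_BASED_ACTIVATE_IO_BITMAP) )
        return true;                             /* I/O bitmaps not in use */
    return s->iobitmap[0] && s->iobitmap[1];     /* both pages must be mapped */
}
```

Note how the success path only sets nv_vmentry_pending rather than entering the L2 guest directly; the actual vmentry is deferred, which is why the handler clears nv_vmswitch_in_progress on the exit path in vmx.c above.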
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel