RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
> +
> +/* The exitcode is in native SVM/VMX format. The forced exitcode
> + * is in generic format.
> + */
Introducing a third exitcode format is over-complicated, IMO.
> +enum nestedhvm_vmexits
> +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
> + uint64_t exitcode)
> +{
I doubt that this kind of wrapper is necessary.
In single-layer virtualization, SVM and VMX each have their own handler for every VM
exit. Control passes from SVM/VMX to common code only when a shared function is
invoked, because the two have quite a few differences: the savings from wrapping
that function are small, while we pay with additional complexity on both the SVM
and VMX sides, as well as in readability and performance. Furthermore, it may
limit the flexibility to implement something new on either side.
Back to nested virtualization: I am not fully convinced we need a common handler
for VM entry/exit, at least not for now. It is basically the same situation as the
single-layer case above. We would rather jump from SVM/VMX into common code only
when a specific common service is requested.
Would that be easier?
> + }
> +
> + /* host state has been restored */
> + }
> +
> + nestedsvm_vcpu_clgi(v);
This is SVM-specific; it would be better called from the SVM code itself.
> +
> + /* Prepare for running the guest. Do some final SVM/VMX
> + * specific tweaks if necessary to make it work.
> + */
> + rc = hvm_nestedhvm_vcpu_vmexit(v, regs, exitcode);
> + hvm->nh_hostflags.fields.forcevmexit = 0;
> + if (rc) {
> + hvm->nh_hostflags.fields.vmentry = 0;
> + return NESTEDHVM_VMEXIT_FATALERROR;
> + }
Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel