Re: [PATCH v3 2/2] x86/svm: Use the virtual NMI when available
On 15/02/2026 at 19:24, Abdelkareem Abdelsaamad wrote:
> With Virtual NMI (vNMI) support, a pending NMI is simply recorded in the VMCB
> and handed off to the hardware. There is no need to artificially track
> NMI-handling completion by intercepting the IRET instruction.
>
> Adjust svm_inject_nmi to inject NMIs via the hardware-accelerated vNMI
> feature when the AMD platform supports vNMI.
>
> Adjust svm_get_interrupt_shadow to also report NMIs as blocked while the
> hardware is servicing an in-progress NMI (vNMI blocking).
>
> Signed-off-by: Abdelkareem Abdelsaamad <abdelkareem.abdelsaamad@xxxxxxxxxx>
> ---
> xen/arch/x86/hvm/svm/intr.c | 9 +++++++++
> xen/arch/x86/hvm/svm/svm.c | 5 ++++-
> xen/arch/x86/hvm/svm/vmcb.c | 2 ++
> 3 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/hvm/svm/intr.c b/xen/arch/x86/hvm/svm/intr.c
> index 6453a46b85..3e8959f155 100644
> --- a/xen/arch/x86/hvm/svm/intr.c
> +++ b/xen/arch/x86/hvm/svm/intr.c
> @@ -33,6 +33,15 @@ static void svm_inject_nmi(struct vcpu *v)
> u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
> intinfo_t event;
>
> + if ( vmcb->_vintr.fields.vnmi_enable )
> + {
> + if ( !vmcb->_vintr.fields.vnmi_pending &&
> + !vmcb->_vintr.fields.vnmi_blocking )
> + vmcb->_vintr.fields.vnmi_pending = 1;
> +
> + return;
> + }
> +
I think you need to clear the VMCB clean bit for TPR (which covers the
VINTR state) so the hardware knows you modified the vnmi_pending bit.
In your case this is done through the vmcb_{get,set}_vintr accessors
(which will also let you simplify all the vmcb->_vintr dereferences).
You need to do something like:

    vintr_t intr = vmcb_get_vintr(vmcb);
    ...
    if ( intr.fields.vnmi_enable )
    {
        if ( !intr.fields.vnmi_pending && !intr.fields.vnmi_blocking )
        {
            intr.fields.vnmi_pending = 1;
            vmcb_set_vintr(vmcb, intr);
        }

        return;
    }
> event.raw = 0;
> event.v = true;
> event.type = X86_ET_NMI;
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 18ba837738..3dfdc18133 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -548,7 +548,9 @@ static unsigned int cf_check svm_get_interrupt_shadow(struct vcpu *v)
> if ( vmcb->int_stat.intr_shadow )
> intr_shadow |= HVM_INTR_SHADOW_MOV_SS | HVM_INTR_SHADOW_STI;
>
> - if ( vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET )
> + if ( vmcb->_vintr.fields.vnmi_enable
> + ? vmcb->_vintr.fields.vnmi_blocking
> + : (vmcb_get_general1_intercepts(vmcb) & GENERAL1_INTERCEPT_IRET) )
> intr_shadow |= HVM_INTR_SHADOW_NMI;
>
> return intr_shadow;
> @@ -2524,6 +2526,7 @@ const struct hvm_function_table * __init start_svm(void)
> P(cpu_has_tsc_ratio, "TSC Rate MSR");
> P(cpu_has_svm_sss, "NPT Supervisor Shadow Stack");
> P(cpu_has_svm_spec_ctrl, "MSR_SPEC_CTRL virtualisation");
> + P(cpu_has_svm_vnmi, "Virtual NMI");
> P(cpu_has_svm_bus_lock, "Bus Lock Filter");
> #undef P
>
> diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
> index e583ef8548..e90bbac332 100644
> --- a/xen/arch/x86/hvm/svm/vmcb.c
> +++ b/xen/arch/x86/hvm/svm/vmcb.c
> @@ -184,6 +184,8 @@ static int construct_vmcb(struct vcpu *v)
> if ( default_xen_spec_ctrl == SPEC_CTRL_STIBP )
> v->arch.msrs->spec_ctrl.raw = SPEC_CTRL_STIBP;
>
> + vmcb->_vintr.fields.vnmi_enable = cpu_has_svm_vnmi;
> +
> return 0;
> }
>
Teddy
--
Teddy Astie | Vates XCP-ng Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech