
RE: [PATCH] xen/vm_event: introduce vm_event_is_enabled()


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: "Penny, Zheng" <penny.zheng@xxxxxxx>
  • Date: Tue, 23 Sep 2025 08:19:51 +0000
  • Accept-language: en-US
  • Cc: "Huang, Ray" <Ray.Huang@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>, Petre Pircalabu <ppircalabu@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • Delivery-date: Tue, 23 Sep 2025 08:20:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] xen/vm_event: introduce vm_event_is_enabled()

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Friday, September 12, 2025 3:30 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>; Tamas K Lengyel
> <tamas@xxxxxxxxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>;
> Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>; Petre Pircalabu
> <ppircalabu@xxxxxxxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Oleksii Kurochko
> <oleksii.kurochko@xxxxxxxxx>
> Subject: Re: [PATCH] xen/vm_event: introduce vm_event_is_enabled()
>
> On 12.09.2025 06:52, Penny Zheng wrote:
> > @@ -2462,9 +2461,8 @@ int hvm_set_cr3(unsigned long value, bool noflush, bool may_defer)
> >      if ( may_defer && unlikely(currd->arch.monitor.write_ctrlreg_enabled &
> >                                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3)) )
> >      {
> > -        ASSERT(curr->arch.vm_event);
> > -
> > -        if ( hvm_monitor_crX(CR3, value, curr->arch.hvm.guest_cr[3]) )
> > +        if ( vm_event_is_enabled(curr) &&
> > +             hvm_monitor_crX(CR3, value, curr->arch.hvm.guest_cr[3]) )
> >          {
> >              /* The actual write will occur in hvm_do_resume(), if permitted. */
> >              curr->arch.vm_event->write_data.do_write.cr3 = 1;
> > @@ -2544,9 +2542,7 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
> >      if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
> >                                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4)) )
> >      {
> > -        ASSERT(v->arch.vm_event);
> > -
> > -        if ( hvm_monitor_crX(CR4, value, old_cr) )
> > +        if ( vm_event_is_enabled(v) && hvm_monitor_crX(CR4, value, old_cr) )
> >          {
> >              /* The actual write will occur in hvm_do_resume(), if permitted. */
> >              v->arch.vm_event->write_data.do_write.cr4 = 1;
> > @@ -3407,7 +3403,7 @@ static enum hvm_translation_result __hvm_copy(
> >              return HVMTRANS_bad_gfn_to_mfn;
> >          }
> >
> > -        if ( unlikely(v->arch.vm_event) &&
> > +        if ( unlikely(vm_event_is_enabled(v)) &&
> >               (flags & HVMCOPY_linear) &&
> >               v->arch.vm_event->send_event &&
> >               hvm_monitor_check_p2m(addr, gfn, pfec, npfec_kind_with_gla) )
> > @@ -3538,6 +3534,7 @@ int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len)
> >      struct vcpu *curr = current;
> >      unsigned int leaf = regs->eax, subleaf = regs->ecx;
> >      struct cpuid_leaf res;
> > +    int ret = 0;
> >
> >      if ( curr->arch.msrs->misc_features_enables.cpuid_faulting &&
> >           hvm_get_cpl(curr) > 0 )
> > @@ -3554,7 +3551,10 @@ int hvm_vmexit_cpuid(struct cpu_user_regs *regs, unsigned int inst_len)
> >      regs->rcx = res.c;
> >      regs->rdx = res.d;
> >
> > -    return hvm_monitor_cpuid(inst_len, leaf, subleaf);
> > +    if ( vm_event_is_enabled(curr) )
> > +        ret = hvm_monitor_cpuid(inst_len, leaf, subleaf);
> > +
> > +    return ret;
> >  }
> >
> >  void hvm_rdtsc_intercept(struct cpu_user_regs *regs)
> > @@ -3694,9 +3694,8 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
> >          if ( ret != X86EMUL_OKAY )
> >              return ret;
> >
> > -        ASSERT(v->arch.vm_event);
> > -
> > -        if ( hvm_monitor_msr(msr, msr_content, msr_old_content) )
> > +        if ( vm_event_is_enabled(v) &&
> > +             hvm_monitor_msr(msr, msr_content, msr_old_content) )
> >          {
> >              /* The actual write will occur in hvm_do_resume(), if permitted. */
> >              v->arch.vm_event->write_data.do_write.msr = 1;
> > @@ -3854,12 +3853,10 @@ int hvm_descriptor_access_intercept(uint64_t exit_info,
> >      struct vcpu *curr = current;
> >      struct domain *currd = curr->domain;
> >
> > -    if ( currd->arch.monitor.descriptor_access_enabled )
> > -    {
> > -        ASSERT(curr->arch.vm_event);
> > +    if ( currd->arch.monitor.descriptor_access_enabled &&
> > +         vm_event_is_enabled(curr) )
> >          hvm_monitor_descriptor_access(exit_info, vmx_exit_qualification,
> >                                        descriptor, is_write);
> > -    }
> >      else if ( !hvm_emulate_one_insn(is_sysdesc_access, "sysdesc access") )
> >          domain_crash(currd);
>
> Following "xen: consolidate CONFIG_VM_EVENT" this function is actually
> unreachable when VM_EVENT=n, so no change should be needed here. It's instead
> the unreachability which needs properly taking care of (to satisfy Misra
> requirements) there.
>

I'm a bit confused and may not be understanding you correctly here.
It is exactly because hvm_monitor_descriptor_access() becomes unreachable code
when VM_EVENT=n that we added the vm_event_is_enabled() check here, to avoid
having to write stubs. Or do you want me to extend the commit description to
say that the new check also lets the compiler elide the unreachable code?
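
For reference, the idea behind the helper this patch introduces is roughly the
following sketch (assuming it simply tests v->arch.vm_event under
CONFIG_VM_EVENT; details in the actual patch may differ):

/*
 * Sketch only: with VM_EVENT=n the helper is constant false, so the
 * monitor paths guarded by it become dead code the compiler can
 * discard, avoiding per-call-site stubs.
 */
static inline bool vm_event_is_enabled(struct vcpu *v)
{
#ifdef CONFIG_VM_EVENT
    return v->arch.vm_event != NULL;
#else
    return false;
#endif
}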

>
> Jan

 

