Re: [PATCH v3 01/22] x86/include/asm/intel-txt.h: constants and accessors for TXT registers and heap
On Mon, Jul 07, 2025 at 10:24:37AM +0200, Jan Beulich wrote:
> On 06.07.2025 17:57, Sergii Dmytruk wrote:
> > On Wed, Jul 02, 2025 at 04:29:18PM +0200, Jan Beulich wrote:
> >> Btw, a brief rev log would be nice here. I saw you have something in the
> >> cover letter, but having to look in two places isn't very helpful.
> >
> > I don't really know how to effectively maintain 23 logs at the same time
> > given that changing one patch has cascading effects on the rest. I'd
> > suggest using `git range-diff` instead, commands for which I can include
> > in cover letters for convenience.
>
> Well, no, doing this per patch is possible and relevant. For cascading
> effects their mentioning in a revlog can be pretty brief.

OK, will give it a try.

> >>> + (void)txt_read(TXTCR_ESTS);
> >>
> >> I don't think the cast is needed.
> >
> > It's not needed, but I think that explicitly discarding an unused return
> > value is a generally good practice even when there is a comment.
>
> In the context of Misra there has been discussion about doing so. But in our
> present code base you will find such as the exception, not the rule.

Will state in a comment that the result is discarded instead.

> >>> + txt_write(TXTCR_CMD_RESET, 1);
> >>> + unreachable();
> >>
> >> What guarantees the write to take immediate effect? That is, shouldn't
> >> there be e.g. an infinite loop here, just in case?
> >
> > I'll restore the infinite loop from v2. Tried adding `halt()` as Ross
> > suggests, but including <asm/system.h> doesn't work in the early code
> > (something about compat headers and missing expansion of things like
> > __DeFiNe__).
>
> Yeah, untangling that may be a little involved. Open-coding halt() is an
> option, as long as you clearly indicate it as such (for e.g. grep to still
> find that instance).
>
> Jan

Will do that.

Regards