
Re: [Xen-devel] [PATCH] HVM vcpu hotplug: Fix acpi method NTFY bug



On 29/01/2010 07:32, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:

> Suppose
> x = the scan loop count defined in method PRSC
> y = the scan loop count defined in method NTFY
> In the original code, there are two implicit preconditions for it to work right:
> 1) y = x
> 2) y = 2^n
> However, these preconditions are not always satisfied.
> For example, if x > y, it will produce an unexpected scan and block vcpu
> add/remove.

Well, that didn't help. Some code comments would be handy.

Looking at method PRSC, the algorithm seems pretty mad given that we auto-generate
the code. Apparently we *always* call NTFY for every value of 0<=Arg0<=127,
in order? And then you would propose to do a further linear If chain within
NTFY? Why not merge NTFY into PRSC: if you must have a linear scan anyway in
PRSC, just unroll the while loops, and inline the relevant bit of NTFY for
every unrolled loop invocation.
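
As a rough illustration of that suggestion (a sketch only, not the actual
mk_dsdt.c code; the CPxx/FLxx/PRxx object names are hypothetical placeholders,
not the identifiers hvmloader really generates), the generator could emit
something like:

#include <stdio.h>

/*
 * Sketch: emit an unrolled PRSC with the Notify inlined per vcpu,
 * instead of a While loop in PRSC calling a separate NTFY method.
 * Notify code 1 = device check, 3 = eject request.
 */
static void emit_unrolled_prsc(unsigned int max_cpus)
{
    unsigned int i;

    printf("Method(PRSC, 0)\n{\n");
    for ( i = 0; i < max_cpus; i++ )
    {
        /* Compare the vcpu's hotplug status bit with the cached flag;
         * on a change, update the cache and notify the matching
         * Processor object directly -- no call into NTFY needed. */
        printf("    If ( LNotEqual(CP%02X, FL%02X) )\n    {\n", i, i);
        printf("        Store(CP%02X, FL%02X)\n", i, i);
        printf("        If ( CP%02X ) { Notify(PR%02X, 1) }\n", i, i);
        printf("        Else         { Notify(PR%02X, 3) }\n", i);
        printf("    }\n");
    }
    printf("}\n");
}

int main(void)
{
    emit_unrolled_prsc(128);
    return 0;
}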

And please provide some code comments relating this all back to the ACPI
Spec, and explaining what the local variables represent, etc. It's totally
impossible to understand right now. Although at least loop unrolling will
get rid of several local variables...

 -- Keir

> BTW, decision_tree() is too complicated to understand. It's used to produce
> the intermediate dsdt.dsl code, and people have to spend a lot of time
> imagining what that temporary dsdt.dsl code looks like.
> This is not good for maintenance and debugging.
> For this reason, I'd like to change it back to a normal for() loop, which is
> easy to understand and maintain.
> (It's true that decision_tree() greatly reduces the number of scans: for
> example, with 128 vcpus the scan count drops from 64 to about 7 on average.
> However, that's not important, since this code is not on a hot path and is
> seldom used.)
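
For comparison, here is a minimal sketch (hypothetical names, not the actual
mk_dsdt.c code) of the two ways of emitting the dispatch on Arg0: the flat
for() loop produces a chain that costs about max_cpus/2 ASL comparisons per
notify (64 on average for 128 vcpus), while the decision tree costs about
log2(max_cpus) comparisons (7 for 128 vcpus).

#include <stdio.h>

/* Flat chain: one If per vcpu; a lookup hits max_cpus/2 tests on average. */
static void emit_linear(unsigned int max_cpus)
{
    unsigned int i;

    for ( i = 0; i < max_cpus; i++ )
        printf("If ( LEqual(Arg0, 0x%02x) ) { Notify(PR%02X, Arg1) }\n", i, i);
}

/* Binary decision tree over [lo, hi): nested If/Else halving the range,
 * so a lookup costs roughly log2(max_cpus) tests. */
static void emit_tree(unsigned int lo, unsigned int hi)
{
    unsigned int mid = (lo + hi) / 2;

    if ( hi - lo == 1 )
    {
        printf("Notify(PR%02X, Arg1)\n", lo);
        return;
    }
    printf("If ( LLess(Arg0, 0x%02x) )\n{\n", mid);
    emit_tree(lo, mid);
    printf("}\nElse\n{\n");
    emit_tree(mid, hi);
    printf("}\n");
}

int main(void)
{
    emit_linear(128);   /* what a plain for() loop generator would produce */
    emit_tree(0, 128);  /* what a decision_tree()-style generator produces */
    return 0;
}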



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

