csched_load_balance is used to check whether a higher-priority vcpu is waiting
on another physical processor's runnable queue; if there is one, that vcpu is
migrated to the current physical processor.
But in the following scenario, this vcpu migration is unnecessary:
1. idle_vcpu0 is running on lp0, and hvm_vcpu is in lp0's runnable queue;
this happens when hvm_vcpu has just been woken up.
2. idle_vcpu1 is running on lp1, and there is no vcpu in lp1's runnable
queue; idle_vcpu1 calls the scheduler to try to find a vcpu on another
physical processor to run on lp1. It finds hvm_vcpu, and hvm_vcpu is
migrated to lp1.
In fact, this migration is unnecessary, because hvm_vcpu is going to run
on lp0 immediately.
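The idea behind the patch can be sketched with a simplified model of the
steal check (the structs and the should_steal_vcpu() helper below are
hypothetical illustrations, not the actual Xen code): an idle pcpu looking
for work skips a candidate vcpu whose home pcpu is currently running its
idle vcpu, since that vcpu will be scheduled on its home pcpu immediately
anyway.

```c
#include <stdbool.h>
#include <stddef.h>

struct pcpu;

/* Hypothetical, simplified per-vcpu state. */
struct vcpu {
    bool is_idle;          /* true for the per-pcpu idle vcpu */
    struct pcpu *home;     /* pcpu whose runnable queue holds this vcpu */
};

/* Hypothetical, simplified per-pcpu state. */
struct pcpu {
    struct vcpu *curr;     /* vcpu currently running on this pcpu */
};

/*
 * Decide whether an idle pcpu should steal 'cand' from its home pcpu.
 * If the home pcpu is only running its idle vcpu, 'cand' will be picked
 * up there on the very next scheduling pass, so migrating it (and paying
 * for TLB flushes and cold caches) is wasted work.
 */
static bool should_steal_vcpu(const struct vcpu *cand)
{
    if (cand == NULL)
        return false;
    if (cand->home->curr->is_idle)
        return false;      /* home pcpu idle: vcpu runs there next */
    return true;           /* home pcpu busy: stealing actually helps */
}
```

In the scenario above, lp0 is running idle_vcpu0, so should_steal_vcpu()
returns false for hvm_vcpu and lp1 leaves it alone.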
As we know, vcpu migration incurs extra overhead such as TLB flushes.
This patch eliminates these unnecessary vcpu migrations.
After applying this patch, KB on a UP HVM domain gains more than 10%.
There are 4 physical processors on my box.
What do you think?
Xen-devel mailing list