
Re: [Xen-devel] unnecessary VCPU migration happens again



On Thu, Dec 07, 2006 at 11:37:54AM +0800, Xu, Anthony wrote:
> From this logic, migration happens frequently if the number of VCPUs
> is less than the number of logical CPUs.

This logic is designed to make better use of a partially idle
system by spreading work across idle sockets and cores before
co-scheduling multiple VCPUs on the same one. It won't come
into play if there are no idle execution units.
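
To make the placement preference concrete, here is a toy sketch in
plain C. This is not the credit scheduler's code; the topology
constants and function names are all made up for illustration:

#include <stdbool.h>
#include <stdio.h>

#define NR_SOCKETS       2
#define CORES_PER_SOCKET 2

/* busy[s][c] is true when socket s, core c is running a VCPU. */
static bool busy[NR_SOCKETS][CORES_PER_SOCKET];

static bool socket_idle(int s)
{
    for (int c = 0; c < CORES_PER_SOCKET; c++)
        if (busy[s][c])
            return false;
    return true;
}

/* Place a waking VCPU: prefer a core on a fully idle socket, then any
 * idle core. Returns s * CORES_PER_SOCKET + c, or -1 when every core
 * is busy, in which case the spreading logic stays out of the way. */
static int place_waking_vcpu(void)
{
    for (int s = 0; s < NR_SOCKETS; s++)
        if (socket_idle(s))
            return s * CORES_PER_SOCKET;

    for (int s = 0; s < NR_SOCKETS; s++)
        for (int c = 0; c < CORES_PER_SOCKET; c++)
            if (!busy[s][c])
                return s * CORES_PER_SOCKET + c;

    return -1;
}

int main(void)
{
    /* Example: dom0 and vti1 busy on socket 0, socket 1 fully idle,
     * so the waking VCPU lands on socket 1 (unit 2). */
    busy[0][0] = busy[0][1] = true;
    printf("waking VCPU goes to unit %d\n", place_waking_vcpu());
    return 0;
}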

Note that __csched_running_vcpu_is_stealable() will trigger a
migration only when the end result would be strictly better
than the current situation. Once the system is balanced, it
will not bounce VCPUs around.
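
I won't paste the whole function here, but the shape of the check is
roughly this. This is a simplified illustration, not the actual body
of __csched_running_vcpu_is_stealable(); the two load counters are my
own simplification:

#include <stdbool.h>
#include <stdio.h>

struct placement {
    int busy_cores_on_socket;  /* other busy cores sharing the socket */
    int busy_threads_on_core;  /* other busy threads sharing the core */
};

/* A running VCPU is worth stealing only if the destination beats the
 * current placement on at least one level of the topology and is
 * worse on none. A tie never justifies a move, which is why a
 * balanced system stops bouncing VCPUs around. */
static bool strict_improvement(const struct placement *cur,
                               const struct placement *dst)
{
    bool better = dst->busy_cores_on_socket < cur->busy_cores_on_socket ||
                  dst->busy_threads_on_core < cur->busy_threads_on_core;
    bool worse  = dst->busy_cores_on_socket > cur->busy_cores_on_socket ||
                  dst->busy_threads_on_core > cur->busy_threads_on_core;
    return better && !worse;
}

int main(void)
{
    struct placement cur = { 1, 0 };  /* shares its socket with dom0 */
    struct placement dst = { 0, 0 };  /* fully idle remote socket    */
    printf("stealable: %d\n", strict_improvement(&cur, &dst)); /* 1 */
    return 0;
}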

> What I want to highlight is:
> 
> When an HVM VCPU executes an IO operation,
> the HVM VCPU is blocked by the HV until the IO operation
> has been emulated by Qemu. Then the HV wakes the HVM VCPU up.
> 
> A PV VCPU, by contrast, is not blocked by the PV driver.
> 
> 
> I can give the scenario below.
> 
> There are two sockets, two core per socket.
> 
> Assume dom0 is running on socket1 core1,
> vti1 is running on socket1 core2,
> vti2 is running on socket2 core1,
> and socket2 core2 is idle.
> 
> If vti2 is blocked by an IO operation, then socket2 core1 goes idle.
> That means both cores in socket2 are idle,
> while dom0 and vti1 are running on the two cores of socket1.
> 
> The scheduler will then try to spread dom0 and vti1 across the two
> sockets, so a migration happens. This is not necessary.

Arguably, if 2 unrelated VCPUs are runnable on a dual-socket
host, it is useful to spread them across both sockets. This
gives each VCPU more achievable memory bandwidth.

What I think you may be arguing here is that the scheduler
is too aggressive in doing this, because the VCPU that blocked
on socket 2 will wake up again very shortly, negating the
host-wide benefit of the migration when it does while still
incurring its cost.

There is a tradeoff here. We could try being less aggressive
in spreading stuff over idle sockets. It would be nice to do
this with a greater understanding of the tradeoff though. Can
you share more information, such as benchmark perf results,
migration statistics, or scheduler traces?
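
For instance (purely illustrative, not a proposed patch; the
threshold is a made-up knob, not an existing scheduler parameter), we
could refuse to spread onto a socket until it has been idle for a
while, so a VCPU that blocks and wakes on a short qemu IO-emulation
round trip is left alone:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up knob: only treat a socket as a spreading target once it
 * has been idle longer than this, so short IO-emulation blocks don't
 * trigger migrations. */
#define SPREAD_IDLE_THRESHOLD_NS (1000ULL * 1000)  /* 1ms, arbitrary */

static bool worth_spreading_to(uint64_t socket_idle_ns)
{
    return socket_idle_ns > SPREAD_IDLE_THRESHOLD_NS;
}

int main(void)
{
    printf("idle 0.1ms -> %d\n", worth_spreading_to(100ULL * 1000));  /* 0 */
    printf("idle 5ms   -> %d\n", worth_spreading_to(5000ULL * 1000)); /* 1 */
    return 0;
}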

Emmanuel.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

