[Xen-devel] planned csched improvements?
After the original announcement of plans to do some work on csched there
hasn't been much activity, so I'd like to ask about some observations I
made with the current implementation, and whether the planned changes
would be expected to take care of them.
On a lightly loaded many-core, non-hyperthreaded system (e.g. a single
CPU-bound process in one VM, and only some background load elsewhere),
I see this CPU-bound vCPU permanently switching between sockets, as a
result of csched_cpu_pick() eagerly moving vCPU-s to "more idle"
sockets. It would seem useful to add some minimal latency consideration
here, so that a very brief interruption by another vCPU doesn't result
in an unnecessary migration.
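To illustrate the kind of thing I have in mind - this is only a sketch,
the wake_wait_start field doesn't exist in sched_credit.c and the
threshold value is picked arbitrarily:

    /* Hypothetical: record, in a new per-vCPU field, the time at which
     * the vCPU last became runnable, and keep it on its current pCPU
     * when the interruption turns out to have been very short. */
    #define CSCHED_MIGRATE_DELAY    MILLISECS(1)    /* arbitrary */

    static int _csched_cpu_pick_sketch(struct vcpu *vc, int idler)
    {
        s_time_t waited = NOW() - CSCHED_VCPU(vc)->wake_wait_start;

        /* A very brief preemption shouldn't pull us onto another socket. */
        if ( waited < CSCHED_MIGRATE_DELAY )
            return vc->processor;

        return idler;    /* otherwise defer to the existing pick logic */
    }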
As a consequence of that eager moving, in the vast majority of cases
the vCPU in question then, within a very short period of time, either
triggers a cascade of further vCPU migrations or begins a series of
ping-pongs between (usually two) pCPU-s - until things settle again for
a while. Again, adding some minimal latency here might help avoid that.
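Equally hypothetically (the last_migrate field doesn't exist either),
the migration itself could be rate limited, so that a single move can't
immediately trigger the next one:

    #define CSCHED_MIGRATE_WINDOW   MILLISECS(10)   /* again arbitrary */

    static int migration_allowed_sketch(struct csched_vcpu *svc)
    {
        s_time_t now = NOW();

        if ( now - svc->last_migrate < CSCHED_MIGRATE_WINDOW )
            return 0;    /* too soon after the previous move - stay put */

        svc->last_migrate = now;
        return 1;
    }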
Finally, in the completely inverse scenario of severely overcommitted
systems (more than two fully loaded vCPU-s per pCPU) I frequently
see Linux' softlockup watchdog kick in, now and then even resulting
in the VM hanging. I had always thought that starvation of a vCPU
for several seconds shouldn't be possible at that still moderate level
of overcommitment - am I wrong here?
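To put numbers on my expectation (assuming the default 30ms time slice
and a softlockup threshold in the region of 10s): with three fully
loaded vCPU-s per pCPU and roughly fair round-robin scheduling, a
runnable vCPU should wait no more than about 2 x 30ms = 60ms for its
next slice - two orders of magnitude below the watchdog threshold. If
that arithmetic is right, multi-second starvation would point at
something other than plain queuing delay.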
Jan