Re: [Xen-devel] Re: Xen scheduler
On Apr 24, 2007, at 16:42, Petersson, Mats wrote:
If you feel two VCPUs would do better co-scheduled on a
core or socket, you'd currently have to use cpumasks -- as
Mike suggested -- to manually restrict where they can run. I'd
be curious to know of real-world cases where doing this
increases performance significantly.
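(For completeness, the manual restriction presumably means the existing xm
interface - something like the following, if I remember the syntax right,
to confine both VCPUs of domain 1 to cpus 0 and 1, assuming those are the
two cores of one socket on the box in question:

    xm vcpu-pin 1 0 0-1
    xm vcpu-pin 1 1 0-1

or equivalently a cpus = "0-1" line in the domain's config file. The
domain ID and cpu numbers are of course specific to the machine.)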
If you have data-sharing between the apps on the same socket, a shared
L2 or L3 cache, and an application/data set that fits in that cache, I
could see that it would help. [And of course, the OS itself will have
some data and code shared between CPUs - so an application that spends
a lot of its time in the OS would also benefit from "socket sharing".]
For other applications, having more memory bandwidth is most likely
what matters.
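A crude way to see whether a given workload cares is to test it on the
host, outside Xen: run two threads over the same working set, pinned
either to two cores that share a cache or to two cores that don't, and
compare the timings. A minimal sketch - the cpu numbers and the
working-set size below are assumptions about a particular box, not
anything Xen-specific:

    /* Host-side test: two threads walk and update the same array,
     * pinned to the cpus given in main().  Change the cpu numbers to
     * compare "both cores on one socket" with "one core per socket".
     * The updates are racy on purpose - only the timing matters here.
     * Build with: gcc -O2 -pthread cachetest.c
     */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdlib.h>

    #define WORKING_SET (512 * 1024)   /* assumed to fit in a shared L2 */
    static volatile int shared[WORKING_SET / sizeof(int)];

    static void *worker(void *arg)
    {
        int cpu = *(int *)arg;
        cpu_set_t set;
        int pass;
        size_t i;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (pass = 0; pass < 5000; pass++)
            for (i = 0; i < sizeof(shared) / sizeof(shared[0]); i++)
                shared[i]++;            /* shared, written data */
        return NULL;
    }

    int main(void)
    {
        int cpus[2] = { 0, 1 };         /* assumed: two cores sharing a cache */
        pthread_t t[2];
        int i;

        for (i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &cpus[i]);
        for (i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }

Time it with /usr/bin/time under both pinnings; if the same-socket case
wins clearly, the workload is a candidate for the kind of cpumask
restriction discussed above.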
Of course, for ideal performance, you would also have to take into
account which CPU owns the memory being used, as the latency of
accessing memory that belongs to another CPU in a NUMA system can
affect performance quite noticeably.
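The host-level tools already make that point: libnuma (or numactl on
the command line) lets you compare local against remote memory for the
same workload. A minimal libnuma sketch, again only as an illustration
on the host - the node number and buffer size are assumptions:

    /* Run the calling thread on a given node and back a buffer with
     * memory from that same node, so accesses stay local.  Build with
     * -lnuma.  Changing "node" to a remote node shows the latency gap.
     */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;
        char *buf;
        size_t i;
        int node = 0;                   /* assumed: the node to stay on */

        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this box\n");
            return 1;
        }

        numa_run_on_node(node);         /* keep this thread on that node's CPUs */
        buf = numa_alloc_onnode(len, node);  /* memory physically on that node */
        if (buf == NULL)
            return 1;

        for (i = 0; i < len; i += 4096) /* fault the pages in while local */
            buf[i] = 0;

        numa_free(buf, len);
        return 0;
    }

Something like "numactl --cpubind=0 --membind=0 <benchmark>" does the
same from the shell, if I remember the options right.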
I understand in theory what would do better scheduled in either
of these ways. What I'm interested in learning about is actual
applications people use that exhibit the kind of L2/L3 cache
sharing that would make it significantly better to co-schedule
the VCPUs in question on whole sockets rather than across
them.