Thanks for the feedback; and sorry for sending it just before a
holiday weekend, which delayed this write-up of a response. (OTOH,
since I read the e-mails as they came in, it's given me more time to
think and let things coalesce.)
A couple of high bits: this first e-mail was meant to lay out design
goals and discuss the interface. If we can agree (for example) that we
want latency-sensitive workloads (such as network, audio, and video)
to perform well, and use latency-sensitive workloads as test cases
while developing, then we don't need to agree on a specific algorithm
up front.
OK, with that in mind, some specific responses:
* [Jeremy] Is that forward-looking enough? That hardware is currently
available; what's going to be commonplace in 2-3 years?
I think we need to distinguish between "works optimally" and "works
well". Obviously we want the design to be scalable, and we don't want
to have to do a major revision in a year because 16 logical cpus works
well but 32 tanks. And it may be a good idea to "lead" the target, so
that when we actually ship something it will be right on, rather than
6 months behind.
Still, in 2-3 years, will the vast majority of servers have 32 logical
cpus, or still only 16 or fewer?
Any thoughts on a reasonable target?
* [Kevin Tian] How is 80%/800% chosen here?
Heuristics. 80% is a general rule of thumb for optimal server
performance. Above 80%, you may get higher total throughput (or
maybe not), but it will be common for individual VMs to have to wait
for CPU resources, which can cause a significant performance impact.
(I should clarify, 80% means 80% of *all* resources, not 80% of one
cpu; i.e., if you have 4 cores, xenuse may report 360% of one cpu;
but 100% of all resources would be 400% of one cpu.)
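To make that normalization concrete, here's a quick back-of-the-envelope sketch (the helper name and numbers are mine, purely for illustration):

```python
# Convert "percent of one cpu" (the way per-domain tools report it)
# into a fraction of *all* resources on an N-core host.
def total_utilization(percent_of_one_cpu, num_cores):
    """Return utilization as a fraction of the whole machine."""
    return percent_of_one_cpu / (num_cores * 100.0)

# 4 cores reporting 360% of one cpu is 90% of all resources --
# already above the 80% rule-of-thumb target.
print(total_utilization(360, 4))  # 0.9
```

So on a 4-core box, the 80% target corresponds to a reported 320% of one cpu.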
800% was just a general boundary. I think it's sometimes as important
to say what you *aren't* doing as what you are doing. For example, if
someone comes in and says, "This new scheduler sucks if you have a
load average of 10 (i.e., 1000% utilization)", we can say, "Running
with a load average of 10 isn't what we're designing for. Patches
will be accepted if they don't adversely impact performance at 80%.
Otherwise feel free to write your own scheduler for that kind of
system." OTOH, if a hosting provider (for example) says, "Performance
really tanks around a load of 3", we should make an effort to address
it.
* [Kevin Tian] How about VM number in total you'd like to support?
Good question. I'll do some research for how many VMs a virtual
desktop system might want to support.
For servers, I think a reasonable design space would be from 1 VM
every 3 cores (for a few extremely high-load servers) to 8 VMs every
core (for highly aggregated servers). I suppose server farms may want
to go beyond even that.
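For concreteness, here's what the endpoints of that design space work out to on a few host sizes (my arithmetic only, not a commitment):

```python
# Endpoints of the proposed design space:
#   low:  1 VM per 3 cores (extremely high-load servers)
#   high: 8 VMs per core   (highly aggregated servers)
def vm_design_range(num_cores):
    low = num_cores // 3
    high = num_cores * 8
    return low, high

for cores in (4, 16, 32):
    print(cores, vm_design_range(cores))
# 4  -> (1, 32)
# 16 -> (5, 128)
# 32 -> (10, 256)
```

So a 16-logical-cpu box would need to handle anywhere from a handful of VMs up to a hundred-plus.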
Does anyone else have any thoughts on this subject -- either
suggestions for different numbers, or other use cases they want
considered?
Xen-devel mailing list