>From: George Dunlap
>Sent: April 14, 2009 20:38
>
>Hey all,
>
>Thanks for the feedback; and, sorry for sending it just before a
>holiday weekend so there was a delay in writing up a response. (OTOH,
>as I did read the e-mails as they came out, it's given me more time to
>think and coalesce.)
>
>A couple of high bits: This first e-mail was meant to lay out design
>goals and discuss interface. If we can agree (for example) that we
>want latency-sensitive workloads (such as network, audio, and video)
>to perform well, and use latency-sensitive workloads as test cases
>while developing, then we don't need to agree on a specific algorithm
>up-front.
That looks fine to me, but latency-sensitive workloads shouldn't be
the only concern. :-)
>
>* [Kevin Tian] How is 80%/800% chosen here?
>
>Heuristics. 80% is a general rule of thumb for optimal server
>performance. Above 80% and you may get a higher total throughput (or
>maybe not) but it will be common for individual VMs to have to wait
>for CPU resources, which may cause significant performance impact.
>
>(I should clarify, 80% means 80% of *all* resources, not 80% of one
>cpu; i.e., if you have 4 cores, xenuse may report 360% of one cpu;
>but 100% of all resources would be 400% of one cpu.)
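
For concreteness, a minimal sketch of that normalization (the helper
name and the example numbers are mine, purely illustrative):

    /* Normalize a "percent of one CPU" figure, as reported per-host
     * above, into a "percent of all resources" figure. */
    #include <stdio.h>

    static double host_utilization(double pct_of_one_cpu, int ncpus)
    {
        return pct_of_one_cpu / ncpus;
    }

    int main(void)
    {
        /* 360% of one cpu on a 4-core host is 90% of all resources. */
        double util = host_utilization(360.0, 4);
        printf("utilization of all resources: %.1f%%\n", util);
        printf("above the 80%% target: %s\n", util > 80.0 ? "yes" : "no");
        return 0;
    }
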
>
>800% was just a general boundary. I think it's sometimes as important
>to say what you *aren't* doing as what you are doing. For example, if
>someone comes in and says, "This new scheduler sucks if you have a
>load average of 10 (i.e., 1000% utilization)", we can say, "Running
>with a load average of 10 isn't what we're designing for. Patches
>will be accepted if they don't adversely impact performance at 80%.
>Otherwise feel free to write your own scheduler for that kind of
>system." OTOH, if a hosting provider (for example) says, "Performance
>really tanks around a load of 3", we should make an effort to
>accommodate that.
Got it. So one more interesting question is: how do you define
"functioning reasonably well" at 800% utilization? Any criteria?
Thanks,
Kevin