Hi,
Thank you for your comments and suggestions.
George Dunlap wrote:
I didn't think about the battery and the performance.
I'm sorry, I used an uncommon definition of the word "battery"; I
should have been more careful. :-)
In this context, "a battery of tests" means "a combination of several
different kinds of tests." I meant some disk-intensive tests, some
network-intensive tests, some cpu-intensive tests, and some
combination of all three. I can run some of these, and you can make
sure that the "client" tests still work well. It would probably be
helpful to have other people volunteer to do some testing as well,
just to make sure we have our bases covered.
Oh, I misread the word "battery". Now I understand what "a battery of tests"
means.
By the way, which tests do you concretely run? I'm not familiar with these tests.
I set the next-timer to 2ms for any vcpu that has "boost" credits, since every
vcpu with "boost" credits needs to be run equally at short intervals. If
there are vcpus with "boost" credits and the next-timer of a vcpu is set
to 10ms, the other vcpus will have to wait for 10ms.
At present, I am thinking that if the other vcpus don't have "boost" credits,
then we may set the next-timer to 30ms.
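To illustrate, the rule in the current patch is roughly the following. This is
only a sketch with made-up names (sketch_vcpu, next_timer), not the actual
sched_credit.c code:

#include <stdint.h>

typedef int64_t s_time_t;                     /* nanoseconds, like Xen's s_time_t */
#define MILLISECS(ms) ((s_time_t)(ms) * 1000000LL)

struct sketch_vcpu {
    int boost_credit;     /* > 0 while the vcpu still holds "boost" credits */
};

/* Length of the next-timer for the vcpu that is about to run. */
static s_time_t next_timer(const struct sketch_vcpu *cur)
{
    if (cur->boost_credit > 0)
        return MILLISECS(2);    /* run all boosted vcpus at short intervals */
    return MILLISECS(30);       /* no "boost" credits: the ordinary 30ms */
}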
I see -- the current setup is good if there's only one "boosted" VM
(per cpu) at a time; but if there are two "boosted" VMs, they're back
to taking turns at 30 ms. Your 2ms patch allows several
latency-sensitive VMs to share the "low latency" boost. That makes
sense. I agree with your suggestion: we can set the timer to 2ms only
if the next waiting vcpu on the queue is also BOOST.
OK.
We must also consider a sleeping vcpu, which will be added to the queue when
it wakes up. So we can set the timer to 2ms only if the next waiting vcpu on
the queue, or the newly woken vcpu, is also BOOST.
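In other words, something like this sketch (again made-up names, not the real
scheduler code): use 2ms only when the vcpu waiting next on the runqueue is
also BOOST, and let the wakeup path shorten the already-armed timer when a
sleeping BOOST vcpu is put back on the runqueue.

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s_time_t;
#define MILLISECS(ms) ((s_time_t)(ms) * 1000000LL)

struct sketch_vcpu {
    bool boost;                      /* vcpu currently has "boost" credits */
    struct sketch_vcpu *runq_next;   /* next waiting vcpu on this pcpu, or NULL */
};

/* 2ms only when the vcpu waiting next on the runqueue is also BOOST. */
static s_time_t next_timer(const struct sketch_vcpu *cur)
{
    const struct sketch_vcpu *next = cur->runq_next;

    if (cur->boost && next != NULL && next->boost)
        return MILLISECS(2);
    return MILLISECS(30);
}

/*
 * Wakeup side: a sleeping BOOST vcpu is invisible to the check above, so when
 * it wakes up and is put on the runqueue, the timer already armed for the
 * running vcpu has to be cut down to 2ms.
 */
static s_time_t timer_after_wakeup(const struct sketch_vcpu *cur,
                                   const struct sketch_vcpu *woken,
                                   s_time_t remaining)
{
    if (cur->boost && woken->boost && remaining > MILLISECS(2))
        return MILLISECS(2);
    return remaining;
}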
My thinking behind 2ms is that a vcpu with "boost" credits should be executed
again within 2ms. Therefore, the time slice of each vcpu changes according to
the number of existing vcpus; in other words, we may set the timer to 2ms or
less. But I think the number of such vcpus will not be very large. Is this
assumption wrong? And what do you think of a time slice of 2ms or less?
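To make the arithmetic concrete (just an illustration): if the goal is that
every BOOST vcpu runs again within about 2ms, the per-vcpu slice would be that
2ms period divided by the number of runnable BOOST vcpus on the pcpu.

#include <stdint.h>
#include <stdio.h>

#define TARGET_LATENCY_US 2000            /* "run again within about 2ms" */

static int64_t boost_slice_us(int nr_boost_vcpus)
{
    if (nr_boost_vcpus <= 1)
        return TARGET_LATENCY_US;                 /* alone: the full 2ms */
    return TARGET_LATENCY_US / nr_boost_vcpus;    /* shared: 2ms or less */
}

int main(void)
{
    /* e.g. 2 BOOST vcpus -> 1000us each, 4 -> 500us each */
    for (int n = 1; n <= 4; n++)
        printf("%d BOOST vcpu(s) -> %lld us slice\n",
               n, (long long)boost_slice_us(n));
    return 0;
}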
In my patch, the "boost" time is tunable. How about making the default "boost"
time 30ms and setting a different "boost" time only when necessary? Is that
acceptable?
I suspect that latency-sensitive workloads such as network, especially
network servers that do very little computation, may also benefit from
short boost times.
I think so, too.
In order to keep the "boost" time as long as possible, I will think about how
to compute the length of the next-timer for a vcpu that has "boost" credits.
If it makes things simpler, we could just stick with 10ms timeslices
when there are no waiting vcpus with BOOST priority, and 2ms if there
is BOOST priority. I don't think there's a particular need to give a
VM only (say) 8 ms instead of 10, if there are no latency-sensitive
VMs waiting.
I agree.
I'll try to revise the patch.
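Concretely, what I have in mind for the revised decision is something as simple
as this (sketch only, made-up names): exactly two values, with no intermediate
timeslices.

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s_time_t;
#define MILLISECS(ms) ((s_time_t)(ms) * 1000000LL)

/* Only two values: 2ms when a BOOST vcpu is waiting on the runqueue, else 10ms. */
static s_time_t timeslice(bool boost_vcpu_waiting)
{
    return boost_vcpu_waiting ? MILLISECS(2) : MILLISECS(10);
}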
I suggest:
* Modify the credit scheduler directly, rather than having an extra scheduler
* Break down your changes into patches that make individual changes,
i.e. (from your first post):
+ A patch to subtract credit consumed accurately
+ A patch to preserve the value of cpu credit when the vcpu is over the upper bound
+ A patch to shorten cpu time per one credit
+ A patch to balance credits of each vcpu of a domain
+ A patch to introduce BOOST credit (both Xen and tool components)
+ A patch to shorten allocated time in BOOST priority if the next
vcpu on the runqueue is also at BOOST
Then we can evaluate each change individually.
OK.
I'll split the current patch into individual changes and post each patch separately.
Best regards,
Naoki Nishiguchi