Keir,
I'll run the I/O and idle tests you suggested.
Since the tests will be run in the evenings, they won't be
completed until the end of the week.
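For the I/O-bound guests I have in mind something along the lines of the
repeated large disc reads to /dev/null that you described, roughly like the
sketch below (the device path and read size are just placeholders):

    # Repeatedly read a large block device and discard the data,
    # i.e. the equivalent of dd if=/dev/sda of=/dev/null.
    # /dev/sda and the 1 MB read size are placeholders.
    DEVICE = "/dev/sda"
    BLOCK = 1024 * 1024

    while True:
        dev = open(DEVICE, "rb")
        while dev.read(BLOCK):
            pass
        dev.close()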
Below is the table of tests, updated with the weekend SYNC-with-cpu-load test,
where the error was less than .01% for both Linux guests.
> I'm a bit worried about any unwanted side effects of the SYNC+run_timer
> approach -- e.g., whether timer wakeups will cause higher system-wide CPU
> contention.
I agree. If the SYNC model turns out to be desirable from an accuracy
standpoint, then the significance of the additional wakeups should be
characterized.
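One rough way to do that for a Linux guest would be to sample the guest's
timer interrupt counts over an interval, along these lines (just an
illustration, not part of the test harness; which /proc/interrupts line
carries the timer depends on the guest's clock setup):

    # Sample the local timer interrupt count from /proc/interrupts twice
    # and report the rate, as a first cut at quantifying extra wakeups.
    # Assumes the "LOC:" line; on some guests the interesting line is
    # IRQ 0 ("timer") instead.
    import time

    def timer_irqs():
        for line in open("/proc/interrupts"):
            if line.startswith("LOC:"):
                return sum(int(f) for f in line.split()[1:] if f.isdigit())
        return 0

    before = timer_irqs()
    time.sleep(60)
    after = timer_irqs()
    print("timer interrupts/sec: %.1f" % ((after - before) / 60.0))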
Regards,
Dave
Date   Duration       Protocol  Error (sles, rhat)     Error % (sles, rhat)  Load
11/07  23 hrs 40 min  ASYNC     -4.96 sec, +4.42 sec   -.006%, +.005%        cpu
11/09   3 hrs 19 min  ASYNC      -.13 sec, +1.44 sec   -.001%, +.012%        cpu
11/08   2 hrs 21 min  SYNC       -.80 sec,  -.34 sec   -.009%, -.004%        cpu
11/08   1 hr  25 min  SYNC       -.24 sec,  -.26 sec   -.005%, -.005%        cpu
11/12  65 hrs 40 min  SYNC        -18 sec,   -8 sec    -.008%, -.003%        cpu
11/08         28 min  MIXED      -.75 sec,  -.67 sec   -.045%, -.040%        cpu
11/08  15 hrs 39 min  MIXED       -19 sec, -17.4 sec   -.034%, -.031%        cpu
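For reference, the error percentage above is the measured drift divided by
the elapsed test time; as a quick check, for the sles guest in the 11/07
ASYNC run:

    # Error column = guest clock drift / elapsed wall-clock time.
    def drift_pct(drift_sec, hours, minutes):
        return 100.0 * drift_sec / (hours * 3600 + minutes * 60)

    # -4.96 sec over 23 hrs 40 min
    print("%.3f%%" % drift_pct(-4.96, 23, 40))   # -> -0.006%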
Keir Fraser wrote:
> On 9/11/07 19:22, "Dave Winchell" <dwinchell@xxxxxxxxxxxxxxx> wrote:
>
>> Since I had a high error (~.03%) for the ASYNC method a couple of days ago,
>> I ran another ASYNC test. I think there may have been something
>> wrong with the code I used a couple of days ago for ASYNC. It may have been
>> missing the immediate delivery of an interrupt after context switch in.
>>
>> My results indicate that either SYNC or ASYNC gives acceptable accuracy,
>> each running consistently around or under .01%. MIXED has a fairly high
>> error of greater than .03%, probably too close to the .05% ntp threshold
>> for comfort.
>>
>> I don't have an overnight run with SYNC. I plan to leave SYNC running
>> over the weekend. If you'd rather, I can leave MIXED running instead.
>> It may be too early to pick the protocol and I can run more overnight tests
>> next week.
> I'm a bit worried about any unwanted side effects of the SYNC+run_timer
> approach -- e.g., whether timer wakeups will cause higher system-wide CPU
> contention. I find it easier to think through the implications of ASYNC. I'm
> surprised that MIXED loses time, and is less accurate than ASYNC. Perhaps it
> delivers more timer interrupts than the other approaches, and each interrupt
> event causes a small accumulated error?
>
> Overall I would consider MIXED and ASYNC as favourites, and if the latter is
> actually more accurate then I can simply revert the changeset that
> implemented MIXED.
>
> Perhaps rather than running more of the same workloads you could try idle
> VCPUs and I/O-bound VCPUs (e.g., repeated large disc reads to /dev/null)? We
> don't have any data on workloads that aren't CPU bound, so that's really an
> obvious place to put any further effort imo.
>
> -- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel