Re: [Xen-devel] [PATCH] Fix hvm guest time to be more accurate

Eddie,

I implemented #2B and ran a three-hour test
with sles9-64 and rh4u4-64 guests. Each guest had 8 vcpus
and the box was Intel with 2 physical processors.
The guests were running large loads.
The clock source was the PIT. This is my usual test setup, except that I
just as often use AMD nodes with more processors.

The time error was 0.02%, good enough for ntpd.

The implementation keeps a constant guest TSC offset.
There is no pending_nr cancellation.
When the vpt.c timer expires, it only increments pending_nr
if its value is zero.
missed_ticks() is still calculated, but only to update the new
timeout value.
There is no adjustment to the TSC offset (set_guest_time())
at clock interrupt delivery time or at re-scheduling time.
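
For illustration, here is a minimal, self-contained sketch of the expiry
logic just described (struct and field names are simplified stand-ins
modeled on vpt.c's periodic_time; this is not the literal patch):

/* Minimal sketch of the #2B expiry logic; names are simplified
 * stand-ins modeled on vpt.c's periodic_time, not the literal patch. */
#include <stdint.h>
#include <stdio.h>

struct periodic_time {
    uint64_t period;              /* tick period, ns              */
    uint64_t scheduled;           /* next expiry, ns since boot   */
    unsigned int pending_intr_nr; /* ticks waiting to be injected */
};

/* Called when the platform timer fires at host time 'now' (ns). */
static void pt_timer_expired(struct periodic_time *pt, uint64_t now)
{
    uint64_t missed_ticks;

    /* #2B: flag at most one tick; never let interrupts pile up. */
    if (pt->pending_intr_nr == 0)
        pt->pending_intr_nr = 1;

    /* missed_ticks is still computed, but only to put the next
     * timeout on a period boundary -- it is never injected. */
    missed_ticks = (now - pt->scheduled) / pt->period;
    pt->scheduled += (missed_ticks + 1) * pt->period;

    /* The guest TSC offset stays constant: no set_guest_time()
     * here, at interrupt delivery, or at re-schedule time. */
}

int main(void)
{
    struct periodic_time pt = { .period = 1000000, .scheduled = 1000000 };

    pt_timer_expired(&pt, 4500000);  /* fires 3.5 periods late */
    printf("pending=%u next=%llu\n", pt.pending_intr_nr,
           (unsigned long long)pt.scheduled);
    return 0;
}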

So I like this method better than the pending_nr subtraction.
I'm going to work on this some more and, if all goes well,
propose a new code submission soon.
I'll put some kind of policy switch in too, which we can discuss
and modify, but it will be along the lines of what we discussed below.

Thanks for your input!

-Dave



Dave Winchell wrote:

Dong, Eddie wrote:

Dave Winchell wrote:
Hi Doug,

Thanks for these comments.

Dong, Eddie wrote:

The vpt timer code in effect accumulates missed ticks
when a guest is running but has interrupts disabled
or when the platform timer is starved. For guests



In this case, the VMM will pick up the lost ticks in pending_intr_nr.
The only issue is that if a guest is suspended or saved/restored
for a long time, such as several hours or days, we may see tons
of lost ticks, which are difficult to inject back (it costs minutes
or even longer). So we give up that amount of
pending_intr_nr.  In all of the above cases, the guest needs to re-sync
its timer with an external source such as network time. So it is
harmless.

A similar situation happens when somebody is debugging a guest.



The solution we provided removes the one-second limit on missed ticks.
Our testing showed that this limit is often exceeded under some loads,
such as many guests each running load. Setting missed ticks to 1
tick when 1000 is exceeded is a source of timing error. In the code,
where it is set to one, there is a "TBD: sync with guest" comment, but
no action.


That is possible, so we should make the 1000 limit bigger.
Making it around 10s should be OK?

Agreed.
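
To make the numbers concrete, here is an illustrative sketch of the
larger cap (the names are invented, not the actual vpt.c identifiers),
assuming the PIT is programmed at 1000 Hz as the one-second figure above
implies: the current limit of 1000 ticks is one second, so a ~10s limit
is about 10,000 ticks.

/* Illustrative sketch of the proposed larger missed-ticks cap. */
#include <stdint.h>
#include <stdio.h>

#define TICKS_PER_SEC    1000u   /* PIT programmed at HZ=1000      */
#define MAX_MISSED_SECS  10u     /* proposed ~10 s limit           */
#define MAX_MISSED       (MAX_MISSED_SECS * TICKS_PER_SEC)

static uint64_t clamp_missed_ticks(uint64_t missed)
{
    if (missed > MAX_MISSED)
        /* Guest was suspended, saved/restored or debugged for a long
         * time: give up the backlog (the "TBD: sync with guest" case)
         * and let the guest re-sync from NTP. */
        return 1;
    return missed;
}

int main(void)
{
    printf("%llu -> %llu\n", 1500ULL,
           (unsigned long long)clamp_missed_ticks(1500));   /* kept     */
    printf("%llu -> %llu\n", 20000ULL,
           (unsigned long long)clamp_missed_ticks(20000));  /* given up */
    return 0;
}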

In terms of re-syncing with network time, our goal was to have
timekeeping accurate enough that the guest could run ntpd.
To do that, the underlying timekeeping needs to be accurate to
roughly 0.05%. Our measurements show that with this patch the core
timekeeping is accurate to approximately 0.02%, even under loads where
many guests each run load.
Without this patch, timekeeping is off by more than 10% and ntpd
cannot sync it.
like 64-bit Linux, which calculates missed ticks on each
clock interrupt based on the current TSC and the TSC
of the last interrupt and then adds the missed ticks to jiffies,
there is redundant accounting.

This change subtracts off the hypervisor-calculated missed
ticks accumulated while the guest is running, for 64-bit guests using
the PIT. Missed ticks accumulated while vcpu 0 is descheduled are
unaffected.




I think this one is not the right direction.

The problem in time virtualization is that we don't know how the guest
will use it. The latest 64-bit Linux can pick up the missed ticks from
the TSC like you mentioned, but that is not true for other 64-bit
guests, even Linux such as 2.6.16, nor for Windows.



Ours is a specific solution.
Let me explain our logic.


Yes, it can fit some situations :-)
But I think we need a generic solution.

How to choose the time virtualization policy can be argued,
and we may use some experimental data. What you found
is definitely a good data point :-)

We configure all our Linux guests with clock=pit.


Just curious: why do you favor the PIT instead of HPET?
Does HPET bring more deviation?

We started with the PIT because it kept such good time for
32-bit Linux. Based on this, we thought that
the problems with the 64-bit PIT would be manageable.

One of these days we will characterize HPET.
Based on the RTC performing well, I would think that HPET would do well
too. If not, the reasons could be investigated.

The 32-bit Linux guests we run don't calculate missed ticks and so
don't need cancellation. All the 64-bit Linux guests that we run
calculate missed ticks and need cancellation.
I just checked 2.6.16 and it does calculate missed ticks in
arch/x86_64/kernel/time.c, main_timer_handler(), when using the PIT for
timekeeping.
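
For context, a simplified model of that accounting (a sketch of the
behavior described above, not the actual kernel code): the guest derives
elapsed ticks from the TSC delta since the last interrupt and credits
anything beyond one tick to jiffies as lost.

/* Simplified model of the 64-bit guest's lost-tick accounting. */
#include <stdint.h>

#define TSC_PER_TICK  3000000ULL   /* e.g. 3 GHz CPU with HZ=1000 */

static uint64_t last_tsc;          /* TSC read at the previous timer irq */
static uint64_t jiffies;

/* Called once per injected clock interrupt, with the current TSC. */
static void guest_timer_interrupt(uint64_t tsc)
{
    uint64_t elapsed_ticks = (tsc - last_tsc) / TSC_PER_TICK;

    jiffies++;                        /* one tick for this interrupt */

    if (elapsed_ticks > 1)            /* plus whatever looks "lost", */
        jiffies += elapsed_ticks - 1; /* judged purely from the TSC  */

    last_tsc = tsc;
}

Because those extra ticks are derived from the TSC, also injecting the
hypervisor's pending_intr_nr for the same interval counts the time
twice.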


But this is reported as lost ticks, which will printk something.
In theory, with the guest TSC synchronized with the guest periodic
timer, this issue can be removed, but somehow (maybe a bug
or virtualization overhead) we may still see them :-(

The missed-ticks cancellation code is activated in this patch when the
guest has configured the PIT for timekeeping and the guest has
four-level page tables (i.e. 64-bit).

The Windows guests we run use the RTC for timekeeping and don't need
or get cancellation.

So the simplifying assumption here is that a 64-bit guest using the PIT
is calculating missed ticks.

I would be in favor of a method where Xen is told directly whether to do
missed-ticks cancellation. Perhaps it is part of the guest
configuration information.
Besides the PV timer approach, which is not always available, we
basically have 3 HVM time virtualization approaches:

1: The current one:
    Freeze guest time when the guest is descheduled and
thus keep all guest time resources in sync with each other. This one
precisely solves the guest time cross-reference issue: the guest TSC
precisely represents guest time and thus can be cross-referenced
in the guest to pick up lost ticks if there are any. But the logic
is relatively complicated and it is easy to hit bugs :-(


2: Pin guest time to host time.
    This is the simplest approach: the guest TSC is always pinned to
the host TSC with a fixed offset, no matter whether the vCPU is
descheduled or not. In this case, the other guest periodic, IRQ-driven
time resources are not synced to the guest TSC.
    Based on this, we have 2 variations:
    A: Accumulate pending_intr_nr like the current #1 approach.
    B: Give up the accumulated pending_intr_nr. We only inject
one IRQ for a periodic, IRQ-driven guest time source such as the PIT.

    What you mentioned here is a special case of 2B.

    Since we don't know how the guest behaves, what we are
proposing recently is to implement all of the above and let the
administration tools choose which one to use based on knowledge of the
guest OS type.
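
For reference, a toy model of the TSC difference between #1 and #2
(made-up numbers, not Xen code): under #1 the offset is pulled back
while the vCPU is descheduled, so the guest TSC freezes; under #2 the
offset never changes, so the guest TSC tracks the host TSC.

/* Toy model of the guest TSC under policy #1 vs #2; not Xen code.
 * In both cases guest_tsc = host_tsc + offset. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t host_tsc = 0;
    int64_t offset1  = 0;   /* #1: pulled back while descheduled   */
    int64_t offset2  = 0;   /* #2: fixed once, never touched again */

    host_tsc += 1000;       /* vCPU runs for 1000 cycles           */
    host_tsc += 5000;       /* vCPU descheduled for 5000 cycles... */
    offset1  -= 5000;       /* ...which #1 hides from the guest    */
    host_tsc += 1000;       /* vCPU runs again                     */

    printf("policy #1 guest TSC: %lld\n", (long long)(host_tsc + offset1));
    printf("policy #2 guest TSC: %lld\n", (long long)(host_tsc + offset2));
    /* Prints 2000 for #1 (descheduled time frozen out) and 7000 for
     * #2 (guest TSC tracks host TSC); under #2 the periodic timer
     * sources are no longer implicitly synced to the guest TSC. */
    return 0;
}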

thanks, eddie



I agree with you on having various policies for timekeeping based on
the guest being run.
This patch specifically addresses the problem
of PIT users who calculate missed ticks. Note that in this solution,
descheduled missed ticks are not canceled; they are still needed,
as the TSC is continuous in the current method. We are only

If we rely on the guest to pick up the lost ticks, why not just do it
thoroughly?
I.e. even the descheduled missed ticks can rely on the guest to pick up.

I have considered this. I was worried that if the descheduled period
was too large, the guest would do something funny, like declare lost
to be 1 ;-)
However, the descheduled periods are probably no longer than the
interrupts-disabled periods, given some of the problems we have with
guests in spinlock_irq code. Also, since we have the Linux guest code,
and have been relying on being able to read it to set timekeeping
policy, we can see that they don't set lost to 1.

Actually, the more I think about this, the more I like the idea.
It would mean that we wouldn't have to deliver all those pent-up
interrupts to the guest. It solves some other problems as well.
We could probably use this policy for most guests and timekeeping
sources. 32-bit Linux with the PIT might be the exception.

That is what 2B proposed.
In some cases we saw issues in Windows (XP32) with 2B: the guest wall
clock becomes slow. Maybe XP64 behaves differently, like you saw, but we
need a Windows expert to double check.

The rough idea in my mind is:
    Policy #1 works best for 32-bit Linux (and old 64-bit Linux).
    Policy #2B works for the latest 64-bit Linux.
    Policy #2A works for Windows (32- & 64-bit).

I agree with this breakdown.
The next step is to do some experiments, I think.
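
To make the policy-switch idea concrete, one hypothetical shape for it
(the enum and function names are invented for illustration, not a
proposed interface) could map an administrator-supplied guest type to
one of the three policies above:

/* Hypothetical per-guest policy selection; names are illustrative only. */
enum hvm_time_policy {
    TIME_POLICY_FREEZE,       /* #1:  freeze guest time when descheduled */
    TIME_POLICY_PIN_ACCUM,    /* #2A: fixed TSC offset, accumulate ticks */
    TIME_POLICY_PIN_ONESHOT,  /* #2B: fixed TSC offset, at most one tick */
};

enum guest_kind { GUEST_LINUX_32, GUEST_LINUX_64, GUEST_WINDOWS };

static enum hvm_time_policy choose_time_policy(enum guest_kind kind)
{
    switch (kind) {
    case GUEST_LINUX_32:
        return TIME_POLICY_FREEZE;       /* also old 64-bit Linux       */
    case GUEST_LINUX_64:
        return TIME_POLICY_PIN_ONESHOT;  /* picks up ticks from its TSC */
    case GUEST_WINDOWS:
        return TIME_POLICY_PIN_ACCUM;
    }
    return TIME_POLICY_FREEZE;           /* conservative default        */
}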

canceling those pending_intr_nr that accumulate while the guest is
running. These are due
to inaccuracies in the Xen timer expirations caused by interrupt loads
or long dom0 interrupt-disable periods. They are also due to extended
periods where the guest has interrupts disabled. In these cases, since
the TSC has kept running, the guest will calculate missed ticks at
the time of the first clock interrupt injection, and then Xen will
deliver pending_intr_nr additional interrupts, resulting in jiffies
moving by 2*pending_intr_nr instead of the desired pending_intr_nr.
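
A simplified sketch of that cancellation (not the literal patch; the
split of the pending count is illustrative):

/* Simplified sketch of the cancellation: pending ticks are split by
 * where they accumulated, and the "while running" portion is dropped
 * for 64-bit PIT guests because the guest already derived those ticks
 * from its own TSC. */
#include <stdbool.h>
#include <stdint.h>

struct pending_ticks {
    uint32_t while_running;      /* missed while the guest was running  */
    uint32_t while_descheduled;  /* missed while vcpu 0 was descheduled */
};

static uint32_t ticks_to_inject(struct pending_ticks *p,
                                bool guest_is_64bit_pit)
{
    uint32_t n = p->while_running + p->while_descheduled;

    /* The 64-bit PIT guest bumps jiffies by while_running on its own,
     * from the TSC delta; injecting them again moves jiffies by 2x.  */
    if (guest_is_64bit_pit)
        n -= p->while_running;

    p->while_running = p->while_descheduled = 0;
    return n;
}
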
regards,
Dave


thx, eddie


