xen-devel

[Xen-devel] RE: TSC scaling and softtsc reprise, and PROPOSAL

To: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: TSC scaling and softtsc reprise, and PROPOSAL
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Thu, 23 Jul 2009 08:18:14 -0700 (PDT)
Cc: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, John Levon <levon@xxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 23 Jul 2009 08:19:01 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4FA716B1526C7C4DB0375C6DADBC4EA341740F5E68@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> From: Ian Pratt [mailto:Ian.Pratt@xxxxxxxxxxxxx]

> pre-VT it wasn't possible to trap RDTSC, so this can't help PV guests.

For PV guests, CR4.TSD would always be set, generating
a general protection fault for every rdtsc.  (Or perhaps
I am missing some x86 architectural subtlety?  This is
how it is done on ia64.)
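
To sketch what I mean (hypothetical names only -- cpu_user_regs,
copy_from_guest_va() and guest_tsc() are stand-ins here, not the
real Xen interfaces):

  /* With CR4.TSD set, rdtsc at CPL > 0 raises #GP(0); the #GP
   * handler can then emulate it for the PV guest.  Sketch, not
   * compilable stand-alone. */
  static int maybe_emulate_rdtsc(struct cpu_user_regs *regs)
  {
      uint8_t insn[2];

      /* rdtsc is the two-byte opcode 0F 31 */
      if (copy_from_guest_va(insn, regs->eip, 2) ||
          insn[0] != 0x0f || insn[1] != 0x31)
          return 0;            /* not rdtsc: inject #GP as usual */

      uint64_t tsc = guest_tsc(current);  /* scaled/offset value */
      regs->eax = (uint32_t)tsc;
      regs->edx = (uint32_t)(tsc >> 32);
      regs->eip += 2;          /* skip the emulated instruction */
      return 1;
  }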

> I'd be rather surprised if VMware trapped RDTSC. From what I 
> gather, ESX3 doesn't make a great deal of use of VT for 32b 
> guests, so at the very least it would be tricky to do 
> anything about user space use of rdtsc.

I had not heard that before, so I am very interested in
independent confirmation (or denial).  Given that it is
impossible (I think) to guarantee correct SMP behavior
without trapping rdtsc, and given VMware's attention to
correctness details, it would not surprise me if they do.

> I've informally heard that certain versions of the JVM and 
> Oracle Db have a habit of pounding rdtsc hard from user 
> space, but I don't know at what rates.

Indeed they do, and they use it to timestamp
events/transactions, so these are the very same apps that
need guaranteed SMP timestamp ordering.
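
The usage pattern is roughly this (a toy sketch of my own,
not code from either app):

  #include <stdint.h>
  #include <assert.h>

  static inline uint64_t rdtsc(void)
  {
      uint32_t lo, hi;
      __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  }

  struct txn { uint64_t stamp; };

  /* Stamp a transaction known to commit after 'prev'.  This
   * silently assumes all (v)cpus' TSCs agree: if the thread
   * migrated to a vcpu whose TSC lags, the "later" transaction
   * gets the earlier stamp and the assert fires. */
  void commit_after(const struct txn *prev, struct txn *next)
  {
      next->stamp = rdtsc();
      assert(next->stamp > prev->stamp);
  }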

I realize this is an ugly problem and am searching for
the best middle ground.  For example, if tsc emulation
can be made "fast enough", that's a good answer.
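
Measuring that should be straightforward; something like the
scratch loop below (mine), run natively and then under softtsc,
would put a number on the per-rdtsc cost:

  #include <stdio.h>
  #include <stdint.h>

  static inline uint64_t rdtsc(void)
  {
      uint32_t lo, hi;
      __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  }

  int main(void)
  {
      const int iters = 1000000;
      uint64_t start = rdtsc();
      for (int i = 0; i < iters; i++)
          (void)rdtsc();       /* volatile asm, not optimized out */
      uint64_t end = rdtsc();
      printf("avg cycles/rdtsc: %llu\n",
             (unsigned long long)((end - start) / iters));
      return 0;
  }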

> -----Original Message-----
> From: Ian Pratt [mailto:Ian.Pratt@xxxxxxxxxxxxx]
> Sent: Thursday, July 23, 2009 8:54 AM
> To: Dan Magenheimer; Zhang, Xiantao; Keir Fraser; Xen-Devel (E-mail)
> Cc: John Levon; Dong, Eddie; Ian Pratt
> Subject: RE: TSC scaling and softtsc reprise, and PROPOSAL
> 
> 
> 
> > Am I correct in reading that your patch is ONLY for
> > HVM guests?  If so, since some (maybe most) workloads
> > that rely on tsc for transaction timestamps will be
> > PV, your patch doesn't solve the whole problem.
> 
> pre-VT it wasn't possible to trap RDTSC, so this can't help PV guests.
> 
> > Can someone at Intel confirm or deny that VMware ESX
> > always traps rdtsc?  If so, it is probably not hard
> > to write an application that works on VMware ESX (on
> > certain hardware) but fails on Xen.
> 
> I'd be rather surprised if VMware trapped RDTSC. From what I 
> gather, ESX3 doesn't make a great deal of use of VT for 32b 
> guests, so at the very least it would be tricky to do 
> anything about user space use of rdtsc.
> 
> I've informally heard that certain versions of the JVM and 
> Oracle Db have a habit of pounding rdtsc hard from user 
> space, but I don't know at what rates.
> 
> Ian
> 
> 
> > 
> > Thanks,
> > Dan
> > 
> > > -----Original Message-----
> > > From: Zhang, Xiantao [mailto:xiantao.zhang@xxxxxxxxx]
> > > Sent: Tuesday, July 21, 2009 11:05 PM
> > > To: Keir Fraser; Dan Magenheimer; Xen-Devel (E-mail)
> > > Cc: John Levon; Ian Pratt; Dong, Eddie
> > > Subject: RE: TSC scaling and softtsc reprise, and PROPOSAL
> > >
> > >
> > > Keir Fraser wrote:
> > > > On 20/07/2009 21:02, "Dan Magenheimer"
> > > > <dan.magenheimer@xxxxxxxxxx> wrote:
> > > >
> > > >> I agree that if the performance is *really bad*, the default
> > > >> should not change.  But I think we are still flying on rumors
> > > >> of data collected years ago in a very different world, and
> > > >> the performance data should be re-collected to prove that
> > > >> it is still *really bad*.  If the degradation is a fraction
> > > >> of a percent even in worst case analysis, I think the default
> > > >> should be changed so that correctness prevails.
> > > >>
> > > >> Why now?  Because more and more real-world applications are
> > > >> built on top of multi-core platforms where TSC is reliable
> > > >> and (by far) the best timesource.  And I think(?) we all agree
> > > >> now that softtsc is the only way to guarantee correctness
> > > >> in a virtual environment.
> > > >
> > > > So how bad is the non-softtsc default mode anyway? Our default
> > > > timer_mode has guest TSCs track host TSC (plus a fixed per-vcpu
> > > > offset that defaults to having all vcpus of a domain
> > > > aligned to vcpu0 boot = zero tsc).
> > > >
> > > > Looking at the email thread you cited, all I see is someone
> > > > from Intel saying something about how their code to improve
> > > > TSC consistency across migration avoids RDTSC exiting where
> > > > possible (which I do not see -- if the TSC rates across the
> > > > hosts do not match closely then RDTSC exiting is enabled
> > > > forever for that domain), and, most bizarrely, that their
> > > > 'solution' may have a tsc drift >10^5 cycles. Where did this
> > > > huge number come from? What solution is being talked about,
> > > > and under what conditions might the claim hold? Who knows!
> > >
> > > We ran an experiment to measure the performance impact of
> > > softtsc using an OLTP workload, and we saw ~10% performance
> > > loss when the rdtsc rate exceeded 120,000/second.  We also
> > > ran some other tests, and the results show ~1% performance
> > > loss per 10,000 rdtsc calls per second.  So if the rdtsc
> > > rate is not that high (<10,000/second), the performance
> > > impact can be ignored.
> > >
> > > We also introduced some performance optimization solutions,
> > > but as we noted before, they may introduce some TSC drift
> > > (10^5~10^6 cycles) between virtual processors in SMP cases.
> > > One such solution is described below.  Suppose the guest is
> > > migrated from a machine with a low TSC frequency (low_freq)
> > > to one with a high TSC frequency (high_freq).  The low
> > > frequency is the guest's expected frequency (exp_freq), and
> > > we should let the guest believe it is still running on a
> > > machine with an exp_freq TSC, to avoid possible issues
> > > caused by a faster TSC in any optimization solution.
> > >
> > > 1. In this solution, we only guarantee that the guest's TSC
> > > increases monotonically and that its average frequency
> > > equals the guest's expected frequency (exp_freq) over a
> > > fixed time slot (e.g. ~1ms).
> > > 2. To keep it simple, let the guest run on the high_freq TSC
> > > (using the hardware TSC offset feature, with no performance
> > > loss) for 1ms, then enable rdtsc exiting and use the
> > > trap-and-emulate method (which does suffer performance loss)
> > > to run the guest on a *VERY* low frequency TSC (e.g. 0.2GHz)
> > > for a period whose length is calculated with the following
> > > formula, which guarantees average TSC frequency == exp_freq:
> > >           time = (high_freq - low_freq) / (low_freq - 0.2)
> > >
> > > 3. If the guest migrates from a 2.4GHz machine to a 3.0GHz
> > > machine, the guest suffers performance loss only during
> > > (3.0-2.4)/(2.4-0.2) == ~0.273ms out of the total
> > > 1ms+0.273ms; that is to say, most of the time the guest can
> > > leverage the hardware TSC offset feature and avoid
> > > performance loss.
> > >
> > > 4. Over the whole 1.273ms, the guest's TSC frequency is
> > > emulated to its expected value through combined hardware
> > > and software emulation, and the performance loss is very
> > > minor compared with the purely softtsc solution.
> > > 5. However, since each vcpu's TSC is emulated independently
> > > for an SMP guest, a drift may develop between vcpus, in the
> > > range of 10^5~10^6 cycles, and we don't know whether such
> > > drift can bring other side effects.  At least one
> > > side-effect case we can identify: an application running on
> > > one vcpu may see a backward TSC value after migrating to
> > > another vcpu.  Not sure this is a real problem, but it
> > > exists in theory.
> > >
> > > Attached is a draft patch implementing the solution, based
> > > on the old #Cset19591.
> > >
> > > Xiantao
>
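
FWIW, the interval arithmetic in steps 2-3 above checks out; a
scratch calculation of my own (not part of the patch):

  #include <stdio.h>

  int main(void)
  {
      /* frequencies in GHz, slot lengths in ms */
      double high = 3.0, exp_freq = 2.4, slow = 0.2;
      /* high*1 + slow*t = exp_freq*(1 + t)  =>  solve for t */
      double t = (high - exp_freq) / (exp_freq - slow);
      printf("slow slice: %.3f ms of every %.3f ms\n", t, 1.0 + t);
      return 0;   /* prints ~0.273 ms, matching step 3 */
  }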

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
