Re: [Xen-devel] Xen Benchmarking guidelines
Thanks for the fast reply.
> It should be possible (and in fact difficult not to) to get almost native
> scores on SPECINT benchmarks from within an HVM guest. There's no I/O or
> system activity at all -- it's just measuring raw CPU speed.
Yeah, this was my thought as well. My VMware numbers showed some degradation
relative to native, which might be attributable to time issues (and maybe to
the fact that it's Workstation, not ESX), but they were far more consistent
across runs.
> The most likely culprits are scheduling problems or time problems in the
> HVM guest.
> To discount scheduling issues, it's probably worth pinning your HVM VCPU to
> a single physical CPU (and setting the affinity of dom0 so that it *doesn't*
> run on that physical CPU) and seeing if that helps.
OK, I'll try this. I may also try disabling multi-core and seeing what a
single CPU does. I'll let you know how it turns out.
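Roughly what I have in mind for the pinning (the guest name 'hvm1' is just a
placeholder for whatever xm list reports, the CPU numbers depend on the box,
and I'll use explicit VCPU numbers if this xm version doesn't accept 'all'):

    # keep dom0 off the benchmark CPU: pin all of its VCPUs to physical CPU 0
    xm vcpu-pin Domain-0 all 0
    # pin the HVM guest's VCPU 0 to physical CPU 1
    # (repeat for additional VCPUs if I leave multi-core enabled)
    xm vcpu-pin hvm1 0 1
    # confirm the placement
    xm vcpu-list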
> For time issues, you can time your SPECINT runs with a stopwatch. Or perhaps
> you can come up with some more automatable means, but you should aim to take
> before/after timestamps from *outside* the HVM guest, since you're trying to
> ascertain whether the HVM timekeeping is screwed on your system.
I suspect there are some time issues here, but they are definitely not the
primary culprit. I did use a stopwatch for some of my tests, and the tests
with more "bad" runs took up to a couple of hours longer in real wall-clock
time. One problem is that externally measured times are harder to report as
results. I've seen recommendations to use ping timestamps to an external
machine and the like, but I'm mostly concerned with relative degradation
after I add some workload, so I'm hoping any timing error will affect all of
my tests similarly and matter less for the comparison. First, though, I need
to get at least one consistent run without my workload.
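If I do automate it, something like this run from dom0 should give external
wall-clock numbers without a stopwatch (the hostname 'hvm1' and the runspec
invocation are placeholders for my actual setup):

    #!/bin/sh
    # take before/after timestamps *outside* the HVM guest, as suggested above
    start=$(date +%s)
    # kick off the benchmark inside the guest; replace with the real invocation
    ssh hvm1 'cd /opt/spec && . ./shrc && runspec --config=test.cfg int'
    end=$(date +%s)
    echo "external wall-clock seconds: $((end - start))"

Comparing that against what the guest itself reports should also show how far
off the guest's clock is.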
Thanks again for your help,
nick
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel