[Apologies for resend: earlier email with html attachments was
rejected. Resending with txt attachments.]
>From: Zachary Amsden [mailto:zach@xxxxxxxxxx]
>Sent: Monday, March 13, 2006 9:58 AM
>At OLS 2005, we described the work that we have been doing at VMware
>with respect to a common interface for paravirtualization of Linux. We
>shared the general vision in Rik's virtualization BoF.
>This note is an update on our further work on the Virtual Machine
>Interface, VMI. The patches provided have been tested on 2.6.16-rc6.
>We are currently re-collecting performance information for the new -rc6
>kernel, but expect our numbers to match previous results, which showed
>no impact whatsoever on macro benchmarks, and nearly negligible impact
>on microbenchmarks.
Folks,
I'm a member of the performance team at VMware & I recently did a
round of testing measuring the performance of a set of benchmarks
on the following two Linux variants, both running natively:
1) 2.6.16-rc6 including VMI + 64MB hole
2) 2.6.16-rc6 not including VMI + no 64MB hole
The intent was to measure the overhead of VMI calls on native runs.
Data was collected on both p4 & opteron boxes. The workloads used
were dbench/1client, netperf/receive+send, UP+SMP kernel compile,
lmbench, & some VMware in-house kernel microbenchmarks. The CPU(s)
were pegged for all workloads except netperf, for which I include
CPU utilization measurements.
Attached please find a text file presenting the benchmark results,
expressed as the ratio of 1) to 2), along with the raw scores
given in brackets. System configurations & benchmark descriptions
are given at the end of the page; more details are available on
request. Also attached for reference is a text file giving the
width of the 95% confidence interval around the mean of the scores
reported for each benchmark, expressed as a percentage of the mean.
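
For anyone who wants to sanity-check the attached numbers against the
raw scores, a rough sketch of the kind of calculation involved is below.
The run counts, the use of a Student-t interval, & all names and numbers
in the sketch are illustrative assumptions only, not the scripts we
actually used.

# Hypothetical sketch: summarizing repeated runs of one benchmark.
# Everything here (run counts, t-interval, example numbers) is assumed
# for illustration; it is not the tooling behind the attached files.
from statistics import mean, stdev
from math import sqrt
from scipy.stats import t  # Student-t critical value for small samples

def ci_width_pct(scores, confidence=0.95):
    """Width of the confidence interval around the mean, as % of the mean."""
    n = len(scores)
    sem = stdev(scores) / sqrt(n)                 # standard error of the mean
    half = t.ppf(0.5 + confidence / 2, n - 1) * sem
    return 100.0 * (2 * half) / mean(scores)      # full width relative to mean

def vmi_to_native_ratio(vmi_scores, native_scores):
    """Ratio of mean score with VMI (variant 1) to mean score without (variant 2)."""
    return mean(vmi_scores) / mean(native_scores)

# Made-up throughput-style scores for five runs of one benchmark:
vmi    = [201.3, 199.8, 200.5, 202.1, 200.9]
native = [202.0, 201.1, 200.7, 201.8, 201.5]
print(f"ratio = {vmi_to_native_ratio(vmi, native):.3f}, "
      f"95% CI width = {ci_width_pct(native):.2f}% of mean")
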
The VMI-Native & Native scores for almost all workloads match
within the 95% confidence interval. On the P4, only 4 workloads,
all lmbench microbenchmarks (forkproc, shproc, mmap, pagefault),
were outside the interval & their overheads (2%, 1%, 2%, 1%,
respectively) were low. The Opteron microbenchmark data was a
little more ragged than the P4's in terms of variance, but it
appears that only a few lmbench microbenchmarks (forkproc,
execproc, shproc) were outside their confidence intervals, and
they show low overheads (4%, 3%, 2%, respectively); our in-house
segv & divzero microbenchmarks seemed to show measurable overheads
as well (8% & 9%, respectively).
-Regards, Anne Holler (anne@xxxxxxxxxx)
Attachments:
score.2.6.16-rc6.txt
confid.2.6.16-rc6.txt