On Wednesday 16 September 2009 21:31:04 Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 16, 2009 at 04:42:21PM +0800, Sheng Yang wrote:
> > Hi, Keir & Jeremy
> >
> > This patchset enables Xen Hybrid extension support.
> >
> > As we know, PV guests have a performance issue on x86_64: the guest
> > kernel and guest userspace reside in the same ring, so the TLB flushes
> > required when switching between guest userspace and guest kernel cause
> > overhead, and considerable extra syscall overhead is introduced as
> > well. The Hybrid Extension eliminates this overhead by putting the
> > guest kernel back in (non-root) ring 0, and so achieves better
> > performance than a PV guest.
>
> What was the overhead? Is there a step-by-step list of operations you did
> to figure out the performance numbers?
The overhead I mentioned is that, in an x86_64 PV guest, every syscall
goes to the hypervisor first, the hypervisor then forwards it to the
guest kernel, and finally the guest kernel returns to guest userspace.
Because the hypervisor is involved there is certainly overhead, and
every transition results in a TLB flush. In a 32-bit PV guest, the guest
uses int 0x82 to emulate the syscall, and the privilege level of that
gate can be specified, so the hypervisor doesn't need to be involved.
And sorry, I don't have a step-by-step list for the performance tuning.
All of the above is a known issue of x86_64 PV guests.
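
If you want to get a feel for the transition cost yourself, timing a
trivial syscall in a tight loop is enough to show it. A rough sketch
(purely illustrative, not something taken from our runs; needs -lrt on
older glibc for clock_gettime):

/* Illustrative sketch: estimate the per-syscall round-trip cost by
 * timing a trivial syscall. Run the same binary natively, in an x86_64
 * PV guest and in a hybrid guest, then compare the per-call numbers. */
#include <stdio.h>
#include <time.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	const long iters = 1000000;
	struct timespec t0, t1;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++)
		syscall(SYS_getppid);	/* forces a real kernel entry */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.1f ns per syscall\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / iters);
	return 0;
}

This is roughly what lmbench's null syscall latency test measures.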
>
> I am asking this b/c at some point I would like to compare the pv-ops vs
> native and I am not entirely sure what is the best way to do this.
Sorry, I don't have much advice on this. If you mean tuning, what I can
propose is just running some microbenchmarks (lmbench is a favorite of
mine), collecting (guest) hot functions with xenoprofile, and comparing
the results of native and pv-ops to figure out the gap...
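
For the comparison step, one simple approach is to diff the two flat
profiles by symbol and see which functions gained the most share under
pv-ops. A toy sketch of that (purely illustrative; it assumes the
xenoprofile/opreport output has already been massaged into plain
"percent symbol" lines, which is not the tools' native format):

/* Illustrative sketch: compare two flat profiles, each a list of
 * "percent symbol" lines (an assumed, pre-massaged format), and print
 * the symbols whose share of samples grew in the second profile. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sample { char sym[128]; double pct; };

static int load(const char *path, struct sample *s, int max)
{
	FILE *f = fopen(path, "r");
	int n = 0;

	if (!f) { perror(path); exit(1); }
	while (n < max && fscanf(f, "%lf %127s", &s[n].pct, s[n].sym) == 2)
		n++;
	fclose(f);
	return n;
}

static double lookup(struct sample *s, int n, const char *sym)
{
	int i;

	for (i = 0; i < n; i++)
		if (!strcmp(s[i].sym, sym))
			return s[i].pct;
	return 0.0;
}

int main(int argc, char **argv)
{
	static struct sample native[4096], pvops[4096];
	int nn, np, i;

	if (argc != 3) {
		fprintf(stderr, "usage: %s native.prof pvops.prof\n", argv[0]);
		return 1;
	}
	nn = load(argv[1], native, 4096);
	np = load(argv[2], pvops, 4096);

	/* Symbols whose share of samples grew under pv-ops vs. native. */
	for (i = 0; i < np; i++) {
		double delta = pvops[i].pct - lookup(native, nn, pvops[i].sym);

		if (delta > 0.5)	/* arbitrary 0.5% threshold */
			printf("%-40s +%.2f%%\n", pvops[i].sym, delta);
	}
	return 0;
}

Running it over the two (pre-massaged) profiles then points at the guest
functions worth a closer look.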
--
regards
Yang, Sheng
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel