xen-devel

Re: [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC][PATCH 0/10] Xen Hybrid extension support
From: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
Date: Thu, 17 Sep 2009 16:59:47 +0800
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Eddie Dong <eddie.dong@xxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>
Delivery-date: Thu, 17 Sep 2009 02:00:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20090916133104.GB14725@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Intel Opensource Technology Center
References: <1253090551-7969-1-git-send-email-sheng@xxxxxxxxxxxxxxx> <20090916133104.GB14725@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.11.2 (Linux/2.6.28-15-generic; KDE/4.2.2; x86_64; ; )
On Wednesday 16 September 2009 21:31:04 Konrad Rzeszutek Wilk wrote:
> On Wed, Sep 16, 2009 at 04:42:21PM +0800, Sheng Yang wrote:
> > Hi, Keir & Jeremy
> >
> > This patchset enables Xen Hybrid extension support.
> >
> > As we know, x86_64 PV guests have a performance problem: the guest
> > kernel and guest userspace reside in the same ring, so the TLB
> > flushes required when switching between guest userspace and guest
> > kernel cause overhead, and considerable extra syscall overhead is
> > introduced as well. The Hybrid Extension eliminates this overhead
> > by putting the guest kernel back in (non-root) ring 0, achieving
> > better performance than a PV guest.
>
> What was the overhead? Is there a step-by-step list of operations you did
> to figure out the performance numbers?

The overhead I mentioned is that, in an x86_64 PV guest, every syscall 
first goes to the hypervisor, the hypervisor then forwards it to the guest 
kernel, and finally the guest kernel returns to guest userspace. Because 
the hypervisor is involved, there is certainly overhead, and every 
transition results in a TLB flush. In a 32-bit PV guest, the guest uses 
int 0x82 to emulate syscall; the interrupt gate can specify the privilege 
level, so the hypervisor doesn't need to be involved.
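
(For illustration only, not part of the patchset or the original mail: a 
minimal null-syscall timing loop, in the spirit of lmbench's lat_syscall, 
that makes this per-syscall cost visible when the same binary is run 
natively, in a PV guest and in a hybrid guest.)

/* Illustrative sketch only: time a tight loop of a trivial syscall and
 * report ns per call.  Build with: gcc -O2 syscall_lat.c -o syscall_lat -lrt */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const long iters = 1000000;
    struct timespec t0, t1;
    double ns;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < iters; i++)
        syscall(SYS_getppid);          /* trivial syscall: ~pure entry/exit cost */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall\n", ns / iters);
    return 0;
}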

And sorry, I don't have a step-by-step list for the performance tuning. 
All of the above is a known issue of x86_64 PV guests.
>
> I am asking this b/c at some point I would like to compare the pv-ops vs
> native and I am not entirely sure what is the best way to do this.

Sorry, I don't have much advice on this. If you mean tuning, what I can 
propose is just running some microbenchmarks (lmbench is a favorite of 
mine), collecting (guest) hot functions with xenoprofile, and comparing 
the results of native and pv-ops to figure out the gap...
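
(Again for illustration only, not from the original thread: a rough sketch 
in the spirit of lmbench's lat_ctx, where two processes bounce a byte over 
a pair of pipes, so each round trip pays the syscall entry/exit and 
context-switch costs where a PV guest typically loses time against native.)

/* Illustrative sketch only: lat_ctx-style ping-pong between two processes
 * over a pair of pipes.  Build with: gcc -O2 pingpong.c -o pingpong -lrt */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    const long rounds = 100000;
    int p2c[2], c2p[2];                /* parent->child and child->parent pipes */
    struct timespec t0, t1;
    double ns;
    char b = 0;
    pid_t pid;
    long i;

    if (pipe(p2c) || pipe(c2p)) {
        perror("pipe");
        return 1;
    }

    pid = fork();
    if (pid == 0) {                    /* child: echo every byte straight back */
        for (i = 0; i < rounds; i++) {
            read(p2c[0], &b, 1);
            write(c2p[1], &b, 1);
        }
        _exit(0);
    }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < rounds; i++) {
        write(p2c[1], &b, 1);          /* wake the child ...              */
        read(c2p[0], &b, 1);           /* ... then block until it answers */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);

    ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per round trip\n", ns / rounds);
    return 0;
}

Running the same binary on native and pv-ops kernels, and collecting the 
hot functions with xenoprofile while it runs, should show where the time 
goes.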

-- 
regards
Yang, Sheng

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel