xen-devel

Re: [Xen-devel] Linux spin lock enhancement on xen

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] Linux spin lock enhancement on xen
From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Date: Tue, 17 Aug 2010 18:58:07 -0700
Cc: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 17 Aug 2010 19:00:45 -0700
In-reply-to: <4C6ACA28.7030104@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Oracle Corporation
References: <20100816183357.08623c4c@xxxxxxxxxxxxxxxxxxxx> <4C6ACA28.7030104@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, 17 Aug 2010 10:43:04 -0700
Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:

>  On 08/16/2010 06:33 PM, Mukesh Rathor wrote:
> > In my worst case test scenario, I get about 20-36% improvement when
> > the system is two to three times over provisioned. 
> >
> > Please provide any feedback. I would like to submit an official
> > patch for SCHEDOP_yield_to in Xen.
> 
> This approach only works for old-style spinlocks.  Ticketlocks also
> have the problem of making sure the next vcpu gets scheduled on
> unlock.

Well, unfortunately, it looks like old-style spinlocks are going to be
around for a very long time. I've heard there are customers still on EL3!
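
For reference, the ticketlock issue Jeremy raises comes from FIFO
handoff: unlock makes exactly one specific ticket holder runnable, so
yielding to the lock holder doesn't cover the unlock side. A minimal
illustrative sketch (not the kernel's actual implementation):

/* Minimal x86 ticket-lock sketch, for illustration only. */
typedef struct {
    volatile unsigned short next;   /* next ticket to hand out */
    volatile unsigned short owner;  /* ticket being served now */
} ticketlock_t;

#define cpu_relax() __asm__ __volatile__("pause" ::: "memory")

static void ticket_lock(ticketlock_t *l)
{
    /* atomically take the next ticket */
    unsigned short me = __sync_fetch_and_add(&l->next, 1);

    /* only the vcpu holding ticket 'me' may enter next */
    while (l->owner != me)
        cpu_relax();
}

static void ticket_unlock(ticketlock_t *l)
{
    /* FIFO handoff: one specific waiter becomes runnable, so the
     * hypervisor would need to run that vcpu, not just any waiter */
    l->owner++;
}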


> Have you looked at the pv spinlocks I have upstream in the pvops
> kernels, which use the (existing) poll hypercall to block the waiting
> vcpu until the lock is free?
>     J

>How does this compare with Jeremy's existing paravirtualised spinlocks
>in pv_ops? They required no hypervisor changes. Cc'ing Jeremy.
> -- Keir

Yeah, I looked at it today. What pv-ops does is force a yield via a
fake irq/event channel poll, after storing the lock pointer in a
per-CPU area. The unlocker then IPIs the waiting vcpus. The lock
holder may not be running, though, and there is no hint to the
hypervisor to run it. So you may have many waiters come and leave for
no reason.
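
Roughly, the slow path works like this (a simplified sketch of the
mechanism in arch/x86/xen/spinlock.c from the pvops kernels; details
and names may differ from the real code):

/* Per-vcpu record of which lock this vcpu is spinning on. */
static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);

static void spin_lock_slow(struct xen_spinlock *xl, int irq)
{
    /* advertise which lock we are waiting on */
    __get_cpu_var(lock_spinners) = xl;
    wmb();

    /* block in Xen (SCHEDOP_poll underneath) until the unlocker's
     * event arrives; Xen gets no hint about who holds the lock */
    xen_poll_irq(irq);

    __get_cpu_var(lock_spinners) = NULL;
}

static void spin_unlock_slow(struct xen_spinlock *xl)
{
    int cpu;

    /* IPI a vcpu that advertised interest in this lock */
    for_each_online_cpu(cpu) {
        if (per_cpu(lock_spinners, cpu) == xl) {
            xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
            break;
        }
    }
}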

To me this is more overhead than a guest needs. In my approach, the
hypervisor is hinted exactly which vcpu is the lock holder. Often many
vcpus are pinned to a set of physical cpus due to licensing and other
reasons, so this really helps a vcpu that is holding a spin lock, and
wants to do some possibly real-time work, get scheduled and move on.
Moreover, the number of vcpus per guest is going up pretty fast.
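
The interface might look something like this (hypothetical sketch; the
actual SCHEDOP_yield_to patch may define the argument structure, and
the way the guest records the lock owner, differently):

/* Hypothetical argument structure for the proposed hypercall. */
struct sched_yield_to {
    uint32_t vcpu_id;    /* vcpu believed to hold the lock */
};

/* Slow path, assuming the guest stores the owning vcpu in the lock
 * when it is taken (owner_vcpu and xen_spin_trylock are illustrative). */
static void spin_lock_yield(struct xen_spinlock *xl)
{
    struct sched_yield_to yt;

    while (!xen_spin_trylock(xl)) {
        yt.vcpu_id = xl->owner_vcpu;   /* hint: who holds it */
        /* donate our timeslice directly to the lock holder */
        HYPERVISOR_sched_op(SCHEDOP_yield_to, &yt);
    }
}

The point of the hint is that the scheduler learns exactly which vcpu
to run next, rather than only learning that the current vcpu cannot
make progress.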

Thanks,
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel