This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/



To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 2/3] kvm hypervisor : Add hypercalls to support pv-ticketlock
From: Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>
Date: Fri, 21 Jan 2011 19:32:08 +0530
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxx>, Nick Piggin <npiggin@xxxxxxxxx>, kvm@xxxxxxxxxxxxxxx, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>, Eric Dumazet <dada1@xxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, suzuki@xxxxxxxxxx, Avi Kivity <avi@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Américo Wang <xiyou.wangcong@xxxxxxxxx>, Linux Virtualization <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 21 Jan 2011 06:03:28 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D38774B.6070704@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <cover.1289940821.git.jeremy.fitzhardinge@xxxxxxxxxx> <20110119164432.GA30669@xxxxxxxxxxxxxxxxxx> <20110119171239.GB726@xxxxxxxxxxxxxxxxxx> <1295457672.28776.144.camel@laptop> <4D373340.60608@xxxxxxxx> <20110120115958.GB11177@xxxxxxxxxxxxxxxxxx> <4D38774B.6070704@xxxxxxxx>
Reply-to: vatsa@xxxxxxxxxxxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.20 (2009-06-14)
On Thu, Jan 20, 2011 at 09:56:27AM -0800, Jeremy Fitzhardinge wrote:
> > The key here is not to sleep when waiting for locks (as implemented
> > by current patch-series, which can put other VMs at an advantage by
> > giving them more time than they are entitled to)
> Why?  If a VCPU can't make progress because it's waiting for some
> resource, then why not schedule something else instead?

In the process, "something else" can get a larger share of CPU resources than
it is entitled to, and that's where I was a bit concerned. I guess one could
employ hard limits to cap "something else's" bandwidth where that is a real
concern (as in clouds).

> Presumably when
> the VCPU does become runnable, the scheduler will credit its previous
> blocked state and let it run in preference to something else.

Which may not be sufficient for it to gain back the bandwidth it lost while
blocked (speaking of the mainline scheduler, at least).

> > Is there a way we can dynamically expand the size of lock only upon 
> > contention
> > to include additional information like owning vcpu? Have the lock point to a
> > per-cpu area upon contention where additional details can be stored perhaps?
> As soon as you add a pointer to the lock, you're increasing its size. 

I didn't really mean to expand the size statically. Rather, have some bits of
the lock word store a pointer to a per-cpu area when there is contention
(somewhat similar to how bits of rt_mutex.owner are used). I haven't thought
through this in detail to see if that is possible, though.

- vatsa
