

Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Fri, 3 Jun 2005 16:48:41 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Bryan Rosenburg <rosnbrg@xxxxxxxxxx>, Michael Hohnbaum <hohnbaum@xxxxxxxxxx>, Orran Krieger <okrieg@xxxxxxxxxx>, Ryan Harper <ryanh@xxxxxxxxxx>
Delivery-date: Fri, 03 Jun 2005 21:48:04 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D282069@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D282069@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-06-03 16:18]:
> > > Have you any suggestions for metrics for comparing the schemes?
> > > lmbench is quite good for assessing the no contention case. Perhaps
> > > doing a kernel build on a guest with VCPUs > phys CPUs is a
> > > reasonable way of assessing the benefit.
> > 
> > We have been using a lock-intensive program, [1]pft,
> > as a benchmark.  I patched in lockmeter to measure the
> > 'lockiness' of various benchmarks, and even with 8 VCPUs
> > backed by a single CPU they don't generate a large number of
> > lock contentions.  pft is far more lock intensive.
> I'll take a look at pft. Does it use futexes, or is it just contending
> for spinlocks in the kernel?

It contends for spinlocks in the kernel.

> > However, one of our concerns with confer/directed yielding is
> > that the lock holder vcpu doesn't know that it was given a
> > time-slice and that it should voluntarily yield so that other
> > vcpus get a chance at the lock.
> > Without such a mechanism, one can imagine that the lock
> > holder would continue on and possibly grab the lock yet again
> > before being preempted, at which point another vcpu would yield
> > to it again, etc.  We could add something in the vcpu_info array
> > indicating that it was given a slice, and check it in
> > _raw_spin_unlock() and call do_yield().  These spinlock
> > changes certainly affect the speed of the spinlocks in Linux,
> > which is one of the reasons we wanted to avoid directed
> > yielding or any other mechanism that required spinlock accounting.
> Spinlock accounting that doesn't involve lock'ed operations might be OK.

Do you mean to do the accounting somewhere other than in the lock routines?
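
To be concrete about the unlock-side check I described above, it would
look something like the sketch below.  This is illustration only: the
yield_pending flag (and the way we reach this vcpu's entry in the
vcpu_info array) is hypothetical, and do_yield() is just the yield
wrapper from our patch.

/*
 * Sketch only.  "yield_pending" is a hypothetical flag that Xen would
 * set in this vcpu's entry of the vcpu_info array when another vcpu
 * confers its time-slice to us; the accessor below is illustrative.
 */
static inline void _raw_spin_unlock(spinlock_t *lock)
{
	/* stock i386 release */
	__asm__ __volatile__("movb $1,%0" : "=m" (lock->slock) : : "memory");

	/* If we ran on a donated slice while holding the lock, give the
	 * cpu back now so the waiters get a chance at the lock. */
	if (unlikely(HYPERVISOR_shared_info->vcpu_info[smp_processor_id()]
			.yield_pending)) {
		HYPERVISOR_shared_info->vcpu_info[smp_processor_id()]
			.yield_pending = 0;
		do_yield();
	}
}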

> > I don't know if you had a chance to see my status on the
> > [2]preemption notification from about a month ago.  I'm going
> > to bring that patch up to the current tree and re-run the tests
> > to see where things are again.  Please take a look at the original
> > results.
> Thanks, I did look at the graphs at the time. As I recall, the
> notification mechanism was beginning to look somewhat expensive under
> high context switch loads induced by IO. We'll have to see what the cost

Yes.  One of the tweaks we are looking at is changing the IO operation
from one that completes in kernel space (responding to an ICMP packet
happens within the kernel) to something more realistic for IO, which
would involve more time per operation, like sending a message over TCP
(an echo server or something like that).
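
By "echo server" I just mean a trivial userspace server in the guest,
along the lines of the sketch below (port number and buffer size are
arbitrary), so each request has to travel up into userspace and back
rather than being answered inside the kernel:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in addr;
	char buf[4096];
	ssize_t n;
	int srv, conn;

	srv = socket(AF_INET, SOCK_STREAM, 0);
	if (srv < 0) {
		perror("socket");
		exit(1);
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(7777);	/* arbitrary port */

	if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(srv, 16) < 0) {
		perror("bind/listen");
		exit(1);
	}

	for (;;) {
		conn = accept(srv, NULL, NULL);
		if (conn < 0)
			continue;
		/* echo back whatever the client sends until it closes */
		while ((n = read(conn, buf, sizeof(buf))) > 0) {
			if (write(conn, buf, n) != n)
				break;
		}
		close(conn);
	}
}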

> BTW: it would be really great if you could work up a patch to enable
> xm/xend to add/remove VCPUs from a domain.

OK.  I have an older patch that I'll bring up to date.  Here is a list
of things that I think we should do with add/remove:

1. Fix cpu_down() to tell Xen to remove the vcpu from its list of
runnable domains.  Currently a "down" vcpu only yields its timeslice
(see the sketch below).

2. Fix cpu_up() to have Xen make the target vcpu runnable again.

3. Add cpu_remove() which removes the cpu from Linux, and removes the
vcpu in Xen.

4. Add cpu_add() which boots another vcpu in Xen and then brings up
another cpu in Linux.

I expect cpu_up()/cpu_down() to be more light-weight than
cpu_add()/cpu_remove().

Does that sound reasonable?  Do we want all four, or can we live with
just 1 and 2?
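
To make 1 and 2 concrete, the shape I have in mind is roughly the
following.  The HYPERVISOR_vcpu_down()/HYPERVISOR_vcpu_up() wrappers
are hypothetical (Xen would need a new operation behind them), and the
function names are only meant to show where in the hotplug path the
calls would go:

/*
 * Sketch only.  HYPERVISOR_vcpu_down()/HYPERVISOR_vcpu_up() stand in
 * for a new Xen operation that removes a vcpu from (or returns it to)
 * the scheduler's runnable set; today a "down" vcpu just keeps
 * yielding its slice.
 */
static int xen_vcpu_down(unsigned int cpu)
{
	if (cpu == 0)
		return -EBUSY;		/* keep the boot vcpu */

	/* the existing cpu_down() work (migrating irqs, clearing the
	 * cpu from cpu_online_map, etc.) happens before this point */

	return HYPERVISOR_vcpu_down(cpu);	/* stop scheduling it */
}

static int xen_vcpu_up(unsigned int cpu)
{
	/* make the vcpu runnable again before cpu_up() reinitializes it */
	return HYPERVISOR_vcpu_up(cpu);
}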

Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
