xen-devel

RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: "Ryan Harper" <ryanh@xxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Fri, 3 Jun 2005 22:17:16 +0100
Cc: Bryan Rosenburg <rosnbrg@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Michael Hohnbaum <hohnbaum@xxxxxxxxxx>, Orran Krieger <okrieg@xxxxxxxxxx>
> > Have you any suggestions for metrics for comparing the schemes?
> > lmbench is quite good for assessing the no-contention case. Perhaps
> > doing a kernel build on a guest with VCPUs > physical CPUs is a
> > reasonable way of assessing the benefit.
> 
> We have currently been using a lock-intensive program, [1]pft,
> as a benchmark.  I patched in lockmeter to measure the
> 'lockiness' of various benchmarks, and even with 8 VCPUs
> backed by a single physical CPU they don't generate a large
> number of lock contentions.  pft is far more lock intensive.

I'll take a look at pft. Does it use futexes, or is it just contending
for spinlocks in the kernel?
 
> However, one of our concerns with confer/directed yielding is
> that the lock-holder vcpu doesn't know that it was given a
> time-slice and that it should voluntarily yield to give other
> vcpus a chance at the lock.
> Without such a mechanism, one can imagine that the lock
> holder would continue on and possibly grab the lock yet again
> before being preempted, at which point another vcpu would yield
> to it again, etc.  We could add something to the vcpu_info array
> indicating that it was given a slice, then check it in
> _raw_spin_unlock() and call do_yield().  These spinlock
> changes certainly affect the speed of the spinlocks in Linux,
> which is one of the reasons we wanted to avoid directed
> yielding or any other mechanism that required spinlock accounting.

Spinlock accounting that doesn't involve lock'ed operations might be OK.
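
For concreteness, a rough sketch of the kind of unlock-time check being
described (not the actual patch; the yield_pending field, the vcpu_info
layout and the do_yield() helper are illustrative names only) could look
like the following.  Only plain loads and stores are used, so nothing
lock-prefixed is added to the unlock fast path:

    typedef struct { volatile unsigned int slock; } spinlock_t;

    struct vcpu_info {
        /* set by the hypervisor when this VCPU was granted a
         * directed-yield slice (hypothetical field) */
        volatile unsigned char yield_pending;
        /* ... */
    };

    extern struct vcpu_info vcpu_info[];   /* shared with the hypervisor */
    extern unsigned int smp_processor_id(void);
    extern void do_yield(void);            /* wraps the yield hypercall */

    static inline void _raw_spin_unlock(spinlock_t *lock)
    {
        /* the unlock itself is already a plain store on i386 */
        lock->slock = 1;

        /* new: if we were handed a directed-yield slice, give the time
         * back now that the lock is free, so the spinner can take it */
        if (vcpu_info[smp_processor_id()].yield_pending) {
            vcpu_info[smp_processor_id()].yield_pending = 0;
            do_yield();
        }
    }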

> I don't know if you had a chance to see my status on the 
> [2]preemption notification from about a month ago.  I'm going 
> to bring that patch up to current and re-run the tests to see 
> where things are again.  Please take a look at the original results.

Thanks, I did look at the graphs at the time. As I recall, the
notification mechanism was beginning to look somewhat expensive under
the high context-switch loads induced by IO. We'll have to see what the
cost of using spinlock accounting is. If there are no locked operations
we might be OK.

BTW: it would be really great if you could work up a patch to enable
xm/xend to add/remove VCPUs from a domain.

Thanks,
Ian 

