Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Fri, 3 Jun 2005 17:52:18 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Bryan Rosenburg <rosnbrg@xxxxxxxxxx>, Michael Hohnbaum <hohnbaum@xxxxxxxxxx>, Orran Krieger <okrieg@xxxxxxxxxx>, Ryan Harper <ryanh@xxxxxxxxxx>
Delivery-date: Fri, 03 Jun 2005 22:51:27 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D28206B@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D28206B@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-06-03 17:41]:
>  
> > > I'll take a look at pft. Does it use futexes, or is it just
> > > contending for spinlocks in the kernel?
> > 
> > It contends for spinlocks in the kernel.
> 
> Sounds like this will be a good benchmark. Does it generate a
> performance figure as it runs? (e.g. iterations a second or such like).

Yes, here is some sample output:

#Gb Rep Thr CLine  User      System   Wall      flt/cpu/s fault/wsec
  0  5    8   1    2.30s      0.33s   1.05s     62296.578 104970.599

Gb  = gigabytes of memory under test (I used 128M)
Rep = number of internal repetitions of the test
Thr = number of test threads

I generally run this with one thread per VCPU, and 128M of memory.
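
For anyone not familiar with pft, the gist of the workload is roughly the
following (a minimal sketch for illustration only, not pft itself; the
thread count and region size just mirror the run above): several threads
first-touch disjoint slices of one shared anonymous mapping, so the
in-kernel page-fault path runs concurrently and the threads contend on
the mm's locks.

/* Minimal page-fault contention sketch (not pft): each thread touches
 * its own slice of one shared anonymous mapping, so every first touch
 * takes a page fault against the same mm. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS  8
#define REGION_SZ (128UL << 20)          /* 128M, as in the run above */

static char *region;
static long page_size;

static void *fault_slice(void *arg)
{
    long id = (long)arg;
    size_t slice = REGION_SZ / NTHREADS;
    char *start = region + id * slice;

    for (size_t off = 0; off < slice; off += page_size)
        start[off] = 1;                  /* one write per page */
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];

    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, REGION_SZ, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, fault_slice, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    printf("faulted %lu MB across %d threads\n",
           (unsigned long)(REGION_SZ >> 20), NTHREADS);
    munmap(region, REGION_SZ);
    return 0;
}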

> > > Thanks, I did look at the graphs at the time. As I recall, the
> > > notification mechanism was beginning to look somewhat expensive
> > > under high context switch loads induced by IO. We'll have to see
> > > what the cost
> > 
> > Yes.  One of the tweaks we are looking to do is change the IO
> > operation from kernel space (responding to an icmp packet happens
> > within the kernel) to something that is more IO-realistic, which
> > would involve more time per operation, like sending a message over
> > tcp (an echo server or something like that).
> 
> Running a parallel UDP ping-pong test might be good. 

OK.
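
Something along these lines for the client side, perhaps (a minimal
sketch only; the port, message size, and iteration count are arbitrary
placeholders, not an existing harness):

/* Minimal UDP ping-pong client sketch: send a small datagram to an
 * echo server, wait for it to come back, repeat, then report round
 * trips per second. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

#define ITERATIONS 100000
#define MSG_SIZE   64
#define ECHO_PORT  7777                  /* placeholder port */

int main(int argc, char **argv)
{
    const char *server = argc > 1 ? argv[1] : "127.0.0.1";
    char buf[MSG_SIZE] = "ping";
    struct sockaddr_in dst;
    struct timeval t0, t1;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    if (sock < 0) {
        perror("socket");
        return 1;
    }
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(ECHO_PORT);
    inet_pton(AF_INET, server, &dst.sin_addr);

    /* connect() fixes the peer so plain send()/recv() can be used. */
    connect(sock, (struct sockaddr *)&dst, sizeof(dst));

    gettimeofday(&t0, NULL);
    for (int i = 0; i < ITERATIONS; i++) {
        send(sock, buf, MSG_SIZE, 0);
        recv(sock, buf, MSG_SIZE, 0);    /* wait for the echo */
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d round trips in %.2fs (%.0f/sec)\n",
           ITERATIONS, secs, ITERATIONS / secs);
    close(sock);
    return 0;
}

Running one instance per VCPU against a simple UDP echo server in the
other domain should give the parallel load you describe.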

> > Here is a list of things that I think we should do with add/remove.
> > 
> > 1. Fix cpu_down() to tell Xen to remove the vcpu from its list of
> > runnable domains.  Currently a "down" vcpu only yields its
> > timeslice back.
> > 
> > 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
> > 
> > 3. Add cpu_remove(), which removes the cpu from Linux and removes
> > the vcpu in Xen.
> > 
> > 4. Add cpu_add(), which boots another vcpu and then brings it up as
> > another cpu in Linux.
> > 
> > I expect cpu_up/cpu_down to be more light-weight than
> > cpu_add/cpu_remove.
> > 
> > Does that sound reasonable?  Do we want all four, or can we live
> > with just 1 and 2?
> 
> It's been a while since I looked at Xen's boot_vcpu code (which could do
> with a bit of refactoring between common and arch anyhow), but I don't
> recall there being anything in there that looked particularly expensive.
> Having said that, it's only holding down a couple of KB of memory, so
> maybe we just need up/down/add.

Sounds good.
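
As a side note on items 1 and 2 above: cpu_up()/cpu_down() are already
reachable from userspace through the standard cpu hotplug sysfs
attribute (assuming CONFIG_HOTPLUG_CPU is enabled), so exercising them
can be as simple as the sketch below (cpu number is arbitrary; needs
root):

/* Minimal sketch of driving cpu_down()/cpu_up() in the guest via the
 * normal Linux cpu hotplug sysfs interface.  With item 1 in place the
 * offline path would also tell Xen to stop scheduling that vcpu
 * instead of just yielding its timeslice. */
#include <stdio.h>

static int set_cpu_online(int cpu, int online)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/online", cpu);
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%d\n", online);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Take vcpu 1 down (cpu_down in the kernel), then bring it back. */
    if (set_cpu_online(1, 0) == 0)
        puts("vcpu 1 offlined");
    if (set_cpu_online(1, 1) == 0)
        puts("vcpu 1 onlined");
    return 0;
}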


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel