RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: "Ryan Harper" <ryanh@xxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Fri, 3 Jun 2005 23:06:42 +0100
Cc: Bryan Rosenburg <rosnbrg@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Michael Hohnbaum <hohnbaum@xxxxxxxxxx>, Orran Krieger <okrieg@xxxxxxxxxx>
 
> > I'll take a look at pft. Does it use futexes, or is it just
> > contending for spinlocks in the kernel?
> 
> It contends for spinlocks in the kernel.

Sounds like this will be a good benchmark. Does it generate a
performance figure as it runs (e.g. iterations per second or suchlike)?
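
The acquire path being benchmarked is roughly spin-then-yield,
something like this (a sketch only; HYPERVISOR_yield_to_vcpu and the
holder field are stand-in names for the proposed hcall and lock
layout, not the patch as posted):

    /* Spin briefly, then donate the timeslice to the vcpu that
     * holds the lock, rather than burning cycles while the holder
     * is preempted by the hypervisor. */
    #define SPIN_THRESHOLD 1024

    extern void HYPERVISOR_yield_to_vcpu(int vcpu); /* hypothetical */

    struct yield_spinlock {
        volatile int locked;    /* 0 = free, 1 = held */
        int holder;             /* vcpu id of the current holder */
    };

    void yield_spin_lock(struct yield_spinlock *lock, int my_vcpu)
    {
        int spins = 0;

        while (__sync_lock_test_and_set(&lock->locked, 1)) {
            if (++spins >= SPIN_THRESHOLD) {
                /* Holder is likely preempted: give it our slice. */
                HYPERVISOR_yield_to_vcpu(lock->holder);
                spins = 0;
            }
        }
        lock->holder = my_vcpu;
    }

    void yield_spin_unlock(struct yield_spinlock *lock)
    {
        __sync_lock_release(&lock->locked);
    }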
  
> > Thanks, I did look at the graphs at the time. As I recall, the
> > notification mechanism was beginning to look somewhat expensive
> > under high context switch loads induced by IO. We'll have to see
> > what the cost
> 
> Yes.  One of the tweaks we are looking at is changing the IO
> operation from kernel space (responding to an ICMP packet happens
> within the kernel) to something more IO-realistic that would involve
> more time per operation, like sending a message over TCP (an echo
> server or something like that).

Running a parallel UDP ping-pong test might be good. 
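
Per client, something along these lines ought to do it (a sketch; the
echo server address, port and iteration count are placeholders, and
the server side is assumed to already echo datagrams back):

    /* Minimal UDP ping-pong client: send a datagram, block for the
     * echo, and report round trips per second at the end. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        char buf[64] = "ping";
        struct sockaddr_in peer;
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        long i, iters = 100000;
        time_t start, elapsed;

        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(7);                     /* placeholder */
        peer.sin_addr.s_addr = inet_addr("10.0.0.1"); /* placeholder */

        start = time(NULL);
        for (i = 0; i < iters; i++) {
            sendto(s, buf, sizeof(buf), 0,
                   (struct sockaddr *)&peer, sizeof(peer));
            recv(s, buf, sizeof(buf), 0);   /* block for the echo */
        }
        elapsed = time(NULL) - start;
        if (elapsed < 1)
            elapsed = 1;
        printf("%ld round trips/sec\n", iters / (long)elapsed);
        close(s);
        return 0;
    }

Running one such client per vcpu in the guest should give the
parallelism we're after.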
 
> > BTW: it would be really great if you could work up a patch to
> > enable xm/xend to add/remove VCPUs from a domain.
> 
> OK.  I have an older patch that I'll bring up-to-date.

Great, thanks.

> Here is a list of things that I think we should do with add/remove.
> 
> 1. Fix cpu_down() to tell Xen to remove the vcpu from its list of
> runnable domains.  Currently a "down" vcpu only yields its timeslice
> back.
> 
> 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
> 
> 3. Add cpu_remove(), which removes the cpu from Linux and removes
> the vcpu in Xen.
> 
> 4. Add cpu_add(), which boots another vcpu and then brings up
> another cpu in Linux.
> 
> I expect cpu_up()/cpu_down() to be more light-weight than
> cpu_add()/cpu_remove().
> 
> Does that sound reasonable?  Do we want all four, or can we live
> with just 1 and 2?

It's been a while since I looked at Xen's boot_vcpu code (which could do
with a bit of refactoring between common and arch anyhow), but I don't
recall there being anything in there that looked particularly expensive.
Having said that, it's only holding down a couple of KB of memory, so
maybe we just need up/down/add.
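
For the lightweight pair, I'd imagine something along these lines on
the Linux side (just a sketch; HYPERVISOR_vcpu_down/HYPERVISOR_vcpu_up
are stand-in names for whatever hcalls the patch ends up defining):

    /* Hypothetical hcall wrappers -- names for illustration only. */
    extern int HYPERVISOR_vcpu_down(unsigned int vcpu);
    extern int HYPERVISOR_vcpu_up(unsigned int vcpu);

    int xen_cpu_down(unsigned int cpu)
    {
        /* ... normal Linux hotplug teardown for this cpu ... */

        /* Take the vcpu off Xen's runnable list entirely, rather
         * than leaving it scheduled and merely yielding its
         * timeslice back (item 1 above). */
        return HYPERVISOR_vcpu_down(cpu);
    }

    int xen_cpu_up(unsigned int cpu)
    {
        /* The vcpu structure stays allocated in Xen while it is
         * down, so this only asks Xen to mark it runnable again
         * (item 2 above)... */
        int rc = HYPERVISOR_vcpu_up(cpu);

        if (rc)
            return rc;
        /* ... and then the normal Linux bring-up runs. */
        return 0;
    }

The point being that down/up never frees the vcpu in Xen, which is
why they should stay cheap relative to add/remove.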

Thanks,
Ian 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel