Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-06-03 17:41]:
>
> > > I'll take a look at pft. Does it use futexes, or is it just
> > > contending for spinlocks in the kernel?
> >
> > It contends for spinlocks in the kernel.
>
> Sounds like this will be a good benchmark. Does it generate a
> performance figure as it runs? (e.g. iterations per second or suchlike).
Yes, here is some sample output:
#Gb Rep Thr CLine User System Wall flt/cpu/s fault/wsec
0 5 8 1 2.30s 0.33s 1.05s 62296.578 104970.599
#Gb = gigabytes of memory (I used 128M)
Rep = repetitions of the test internally
Thr = number of test threads
I generally run this with one thread per VCPU, and 128M of memory.
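
For reference, the shape of the load pft generates can be approximated with
something like the sketch below (this is not the pft source, just an
illustration; the 128M size and 8 threads mirror the run above). Each thread
faults in its own slice of one shared anonymous mapping, so all the faults
contend on the same mm's locks inside the kernel:

/* Rough, illustrative stand-in for the kind of load pft generates:
 * N threads each fault in a slice of one private anonymous mapping,
 * so the page faults contend on the shared mm's locks in the kernel. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEM_SIZE   (128UL << 20)   /* 128M, as in the run above */
#define NTHREADS   8               /* one per VCPU */

static char *region;
static long pagesize;

static void *fault_slice(void *arg)
{
        long idx = (long)arg;
        unsigned long slice = MEM_SIZE / NTHREADS;
        char *p = region + idx * slice;

        /* First touch of each page takes a fault in the kernel. */
        for (unsigned long off = 0; off < slice; off += pagesize)
                p[off] = 1;
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        long i;

        pagesize = sysconf(_SC_PAGESIZE);
        region = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, fault_slice, (void *)i);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);

        printf("touched %lu pages with %d threads\n",
               MEM_SIZE / pagesize, NTHREADS);
        return 0;
}
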
> > > Thanks, I did look at the graphs at the time. As I recall, the
> > > notification mechanism was beginning to look somewhat expensive
> > > under high context switch loads induced by IO. We'll have to see
> > > what the cost
> >
> > Yes. One of the tweaks we are looking to do is to change the IO
> > operation from something handled entirely in kernel space (responding
> > to an ICMP packet happens within the kernel) to something more
> > IO-realistic that involves more time per operation, like sending a
> > message over TCP (an echo server or something like that).
>
> Running a parallel UDP ping-pong test might be good.
OK.
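
Something like the sketch below is roughly what I have in mind for the UDP
ping-pong (just an illustration; the port number and iteration count are
arbitrary, and you would run one client/server pair per VCPU in parallel).
A forked child echoes datagrams back while the parent bounces a small message,
so each iteration costs a couple of context switches plus a trip through the
network stack:

/* Minimal UDP ping-pong sketch over loopback. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT   9999      /* arbitrary */
#define ITERS  100000    /* arbitrary */

int main(void)
{
        struct sockaddr_in addr;
        char buf[64];
        int srv, cli, i;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        /* Bind the server socket before forking so no pings are lost. */
        srv = socket(AF_INET, SOCK_DGRAM, 0);
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));

        if (fork() == 0) {
                /* Child: echo every datagram back to its sender. */
                struct sockaddr_in peer;
                socklen_t plen = sizeof(peer);
                int n;

                for (i = 0; i < ITERS; i++) {
                        n = recvfrom(srv, buf, sizeof(buf), 0,
                                     (struct sockaddr *)&peer, &plen);
                        sendto(srv, buf, n, 0,
                               (struct sockaddr *)&peer, plen);
                }
                _exit(0);
        }

        /* Parent: ping-pong client. */
        cli = socket(AF_INET, SOCK_DGRAM, 0);
        for (i = 0; i < ITERS; i++) {
                sendto(cli, "ping", 4, 0,
                       (struct sockaddr *)&addr, sizeof(addr));
                recv(cli, buf, sizeof(buf), 0);
        }
        wait(NULL);
        printf("%d round trips done\n", ITERS);
        return 0;
}
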
> > Here
> > is a list of things that I think we should do with add/remove.
> >
> > 1. Fix cpu_down() to tell Xen to remove the vcpu from its
> > list of runnable domains. Currently a "down" vcpu only
> > yields its timeslice back.
> >
> > 2. Fix cpu_up() to have Xen make the target vcpu runnable again.
> >
> > 3. Add cpu_remove() which removes the cpu from Linux, and
> > removes the vcpu in Xen.
> >
> > 4. Add cpu_add() which boots another vcpu in Xen and then brings
> > it up as another cpu in Linux.
> >
> > I expect cpu_up/cpu_down to be more lightweight than
> > cpu_add/cpu_remove.
> >
> > Does that sound reasonable? Do we want all four, or can we
> > live with just 1 and 2?
>
> It's been a while since I looked at Xen's boot_vcpu code (which could do
> with a bit of refactoring between common and arch anyhow), but I don't
> recall there being anything in there that looked particularly expensive.
> Having said that, it's only holding down a couple of KB of memory, so
> maybe we just need up/down/add.
Sounds good.
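
To make items 1 and 2 concrete, the guest side would look roughly like the
sketch below. The hypercall names (HYPERVISOR_vcpu_down/HYPERVISOR_vcpu_up)
are placeholders for whatever the interface ends up being, not the actual Xen
API; the point is only where in the hotplug path the hypervisor gets told
about the change:

/* Sketch only: the hypercall wrappers below are hypothetical placeholders,
 * declared here just so the sketch is self-contained. */
extern int HYPERVISOR_vcpu_down(unsigned int vcpu);
extern int HYPERVISOR_vcpu_up(unsigned int vcpu);

/* Item 1: called from the cpu_down() path for the dying CPU.  Instead of
 * just yielding its timeslice, ask Xen to mark the vcpu not runnable. */
void xen_cpu_die(unsigned int cpu)
{
        while (HYPERVISOR_vcpu_down(cpu) != 0)
                ;       /* retry until the hypervisor accepts it */
}

/* Item 2: called from the cpu_up() path.  Make the vcpu runnable again
 * before the usual secondary-CPU bringup continues in Linux. */
int xen_cpu_wake(unsigned int cpu)
{
        return HYPERVISOR_vcpu_up(cpu);
}
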
--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253 T/L: 678-9253
ryanh@xxxxxxxxxx
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel