xen-devel

Re: [Xen-devel] planned csched improvements?

On Tue, 20 Oct 2009 10:37:19 +0100
George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:

> On Tue, Oct 20, 2009 at 1:01 AM, Mukesh Rathor
> <mukesh.rathor@xxxxxxxxxx> wrote:
> > Yeah, I've been thinking in the back of my mind about some sort of
> > multiple runqueues
> 
> There already are multiple runqueues; the overhead comes from the
> "steal work" method of moving vcpus between them, which works fine for
> a low number of cpus but doesn't scale well.
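
To make the scaling issue concrete, here is a minimal, self-contained
sketch of the "steal work" pattern -- per-CPU runqueues, with an idle
CPU scanning its peers for a migratable vcpu.  The names, data layout
and locking below are purely illustrative, not the actual csched code:

/* Illustrative only: per-CPU runqueues plus work stealing.  An idle
 * CPU walks every other runqueue looking for a vcpu it may pull over. */
#include <stdbool.h>
#include <stddef.h>
#include <pthread.h>

#define NR_CPUS 128

struct vcpu {
    struct vcpu *next;       /* simple singly linked runqueue */
    bool migratable;         /* e.g. affinity allows running elsewhere */
};

struct runqueue {
    pthread_spinlock_t lock;
    struct vcpu *head;
} runq[NR_CPUS];

void runq_init(void)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
        pthread_spin_init(&runq[cpu].lock, PTHREAD_PROCESS_PRIVATE);
}

/* Pop the first stealable vcpu from one peer runqueue, if any. */
static struct vcpu *steal_from(struct runqueue *rq)
{
    struct vcpu *v, **pp;

    pthread_spin_lock(&rq->lock);
    for (pp = &rq->head; (v = *pp) != NULL; pp = &v->next) {
        if (v->migratable) {
            *pp = v->next;   /* unlink from the peer's queue */
            v->next = NULL;
            break;
        }
    }
    pthread_spin_unlock(&rq->lock);
    return v;
}

/* The part that stops scaling: an O(NR_CPUS) scan, taking each peer's
 * runqueue lock in turn, every time a CPU goes idle. */
struct vcpu *steal_work(unsigned int my_cpu)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++) {
        if (cpu == my_cpu)
            continue;
        struct vcpu *v = steal_from(&runq[cpu]);
        if (v != NULL)
            return v;
    }
    return NULL;             /* nothing to steal; stay idle */
}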

Exactly, we'd have to study it to find the contention points and address
those. Hopefully I can take what you've got, tinker with it a bit, and
send the changes to see what you think.

> Hmm, I thought I had written up my plans for load-balancing in an
> e-mail to the list, but I can't seem to find them now.  Stand by
> for a description sometime. :-)

Actually, I think you posted it on the list and I saved it somewhere,
and I plan on reading it and figuring it out once I get closer to doing
the work.

> > Agree. I'm hoping to collect all that information over the next
> > couple/few months.  The last attempt, made a year ago, didn't yield
> > a whole lot of information because of problems with the interaction
> > between 32-bit tools and 64-bit guest apps.
> 
> I have some good tools for collecting and analyzing scheduling
> activity using xentrace and xenalyze.  When you get things set up,
> let me know and I'll post some information about using xentrace /
> xenalyze to characterize a workload's scheduling.

Great, thanks.

> > In a nutshell, there's tremendous smarts in the DB, and so I think
> > it prefers a simplified scheduler/OS that it can provide hints to
> > and interact a little with.  Ideally, it would like the ability for
> > a privileged thread to tell the OS/hyp, I want to yield the cpu to
> > thread #xyz.
> 
> If the thread is not scheduled on a vcpu by the OS, then when the DB
> says to yield to that thread, the OS can just switch to it on the
> running vcpu; no changes needed.
> 
> The only potential modification would be if the DB wants to yield to
> a thread which is scheduled on another vcpu, but that vcpu is not
> currently running.  Then the guest OS *may* want to be able to ask
> the HV to yield the currently running vcpu to the other vcpu.  That
> interface is worth thinking about.

Yup, precisely.
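
For the record, here is a minimal sketch of how that split could look
from the guest side.  Everything here is hypothetical -- the types,
the stub helpers and the directed-yield hypercall are made up for
illustration; today Xen only offers the plain, undirected
SCHEDOP_yield:

#include <stdbool.h>

struct vcpu   { unsigned int id; bool running; };
struct thread { struct vcpu *vcpu; };

/* Stubs standing in for the guest OS's own scheduler primitives. */
static struct vcpu *current_vcpu;
static bool thread_on_current_vcpu(struct thread *t)
{
    return t->vcpu == current_vcpu;
}
static void context_switch_to(struct thread *t)
{
    (void)t;                 /* ordinary in-guest context switch */
}
static void hypercall_yield_to(unsigned int vcpu_id)
{
    (void)vcpu_id;           /* would trap to the hypervisor */
}

/* The DB asks the OS: "give the CPU to this thread". */
void yield_to_thread(struct thread *target)
{
    if (thread_on_current_vcpu(target)) {
        /* Target lives on the running vcpu: a normal in-guest context
         * switch does the job, no hypervisor change needed. */
        context_switch_to(target);
    } else if (!target->vcpu->running) {
        /* Target's vcpu exists but is not scheduled: the only case
         * that would need the new interface, asking the HV to hand
         * this physical CPU to that vcpu. */
        hypercall_yield_to(target->vcpu->id);
    }
    /* Otherwise the target's vcpu is already running elsewhere and
     * there is nothing to yield. */
}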

> > Moreover, my focus is large systems: 32 to 128 logical processors,
> > with 1/2 to 1 TB of memory.  As such, I also want to address
> > confining VCPUs to a logical block of physical CPUs, taking into
> > consideration that licenses are per physical cpu core.
> 
> This sounds like it would benefit from the "CPU pools" patch submitted
> by Juergen Gross.
 
Yes, I saw that on the list also, and when I get closer to doing the
work I will take a closer look. Right now I am still trying to round up
the hardware, then will have to round up folks familiar with the
benchmarks to set them up. Then the easier part begins :)...
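
As a rough illustration of the confinement idea -- independent of how
the CPU pools patch actually implements it -- a domain's vcpus would
only ever be placed on a fixed, licensed block of physical CPUs, e.g.
tracked with a bitmap like the one below.  All names here are
hypothetical:

#include <stdint.h>
#include <stdio.h>

#define NR_PCPUS 128

struct domain_placement {
    uint64_t allowed[NR_PCPUS / 64];    /* bitmap of licensed pcpus */
};

/* Restrict a domain to pcpus [first, first + count). */
static void confine_domain(struct domain_placement *d,
                           unsigned int first, unsigned int count)
{
    for (unsigned int i = 0; i < NR_PCPUS / 64; i++)
        d->allowed[i] = 0;
    for (unsigned int cpu = first; cpu < first + count && cpu < NR_PCPUS; cpu++)
        d->allowed[cpu / 64] |= 1ULL << (cpu % 64);
}

/* The scheduler's placement decision would consult this before ever
 * putting one of the domain's vcpus on a pcpu. */
static int pcpu_allowed(const struct domain_placement *d, unsigned int cpu)
{
    return (d->allowed[cpu / 64] >> (cpu % 64)) & 1;
}

int main(void)
{
    struct domain_placement db;

    confine_domain(&db, 32, 8);         /* e.g. an 8-core license block */

    for (unsigned int cpu = 0; cpu < NR_PCPUS; cpu++)
        if (pcpu_allowed(&db, cpu))
            printf("pcpu %u may run this domain\n", cpu);
    return 0;
}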

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel