Re: [Xen-devel] XEN Proposal

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, 'Juergen Gross' <juergen.gross@xxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] XEN Proposal
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 10 Dec 2008 13:34:11 +0000
Cc:
Delivery-date: Wed, 10 Dec 2008 09:00:00 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <0A882F4D99BBF6449D58E61AAFD7EDD601E23C87@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclayL6UBSYXIBESRfWnuEJsMZtqCAAAUD4QAAB/HjA=
Thread-topic: [Xen-devel] XEN Proposal
User-agent: Microsoft-Entourage/12.14.0.081024
That was grouping domains to directly share scheduling credits, rather than
grouping to share physical resources.

 -- Keir

On 10/12/2008 13:21, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> I remember seeing some post before about domain group scheduler.
> Not sure about its progress now, and maybe you can check that
> thread to see anything useful?
> 
> Thanks,
> Kevin
> 
>> From: Juergen Gross
>> Sent: Wednesday, December 10, 2008 9:10 PM
>> 
>> Hi,
>> 
>> Currently the XEN credit scheduler has its pitfalls in supporting weights
>> of domains together with cpu pinning (see the threads
>> http://lists.xensource.com/archives/html/xen-devel/2007-02/msg00006.html
>> http://lists.xensource.com/archives/html/xen-devel/2006-10/msg00365.html
>> http://lists.xensource.com/archives/html/xen-devel/2007-07/msg00303.html
>> which include a rejected patch).
>> 
>> We are facing this problem, too. We tried the above patch, but it didn't
>> solve our problem completely, so we decided to start on a new solution.
>> 
>> Our basic requirement is to limit a set of domains to a set of physical
>> cpus while specifying the scheduling weight for each domain. The general
>> (and in my opinion best) solution would be the introduction of a "pool"
>> concept in XEN.
>> 
>> Each physical cpu is dedicated to exactly one pool. At XEN start this is
>> pool0. A domain is a member of a single pool (dom0 will always be a member
>> of pool0), and there may be several domains in one pool. Scheduling does
>> not cross pool boundaries, so the weight of a domain is related only to
>> the weights of the other domains in the same pool. This makes it possible
>> to have a separate scheduler for each pool.
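[The pool bookkeeping proposed above could be sketched roughly as follows. This is a minimal illustration, not existing Xen code: all names (struct cpu_pool, pool_assign_cpu, pool_can_run, ...) are invented, and real masks would use cpumask_t rather than a plain integer. The key invariant is that every cpu and every domain belongs to exactly one pool, and a scheduler only considers cpus and domains of its own pool.]

```c
#include <stdint.h>

#define NR_CPUS  16
#define NR_POOLS  4
#define NR_DOMS  64

/* Hypothetical per-pool state: the cpus owned by the pool and the
 * scheduler running inside it. */
struct cpu_pool {
    uint32_t cpu_mask;   /* bit i set => physical cpu i belongs to this pool */
    int      sched_id;   /* scheduler used inside this pool */
};

static struct cpu_pool pools[NR_POOLS];
static int cpu_to_pool[NR_CPUS];   /* each cpu is in exactly one pool */
static int dom_to_pool[NR_DOMS];   /* each domain is in exactly one pool */

/* Move a cpu into a pool, removing it from its previous one, so that
 * the "exactly one pool per cpu" invariant is preserved. */
static void pool_assign_cpu(int cpu, int pool)
{
    pools[cpu_to_pool[cpu]].cpu_mask &= ~(1u << cpu);
    pools[pool].cpu_mask |= 1u << cpu;
    cpu_to_pool[cpu] = pool;
}

/* A pool-aware scheduler may run a domain on a cpu only if both are
 * members of the same pool, i.e. scheduling never crosses pool borders. */
static int pool_can_run(int domid, int cpu)
{
    return dom_to_pool[domid] == cpu_to_pool[cpu];
}
```

[Under this shape, a domain's credit weight is naturally compared only against the other domains whose dom_to_pool entry matches, which is exactly the isolation the proposal asks for.]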
>> 
>> What changes would be needed?
>> - The hypervisor must be pool-aware. It needs information about the pool
>>   configuration (cpu mask, scheduler) and the pool membership of a domain.
>>   The scheduler must restrict itself to its own pool only.
>> - There must be an interface to set and query the pool configuration.
>> - At domain creation the domain must be added to a pool.
>> - libxc must be expanded to support the new interfaces.
>> - xend and the xm command must support pools, defaulting to pool0 if no
>>   pool is specified.
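[The set-and-query interface mentioned in the list above might take a shape like the following operation structure, loosely modelled on Xen's sysctl style. Every name here (POOL_OP_*, struct pool_op, do_pool_op) is invented for illustration and is not a proposed ABI; the table stands in for the hypervisor-side pool state.]

```c
#include <stdint.h>

/* Hypothetical commands for a pool control operation. */
#define POOL_OP_CREATE   1
#define POOL_OP_DESTROY  2
#define POOL_OP_INFO     3   /* query: fills in cpu_mask and sched_id */

struct pool_op {
    uint32_t cmd;        /* one of POOL_OP_* */
    uint32_t pool_id;
    uint64_t cpu_mask;   /* for CREATE/INFO: cpus owned by the pool */
    uint32_t sched_id;   /* scheduler to run inside the pool */
};

/* Toy in-memory "hypervisor" side, just enough to show set-and-query. */
static struct { uint64_t cpu_mask; uint32_t sched_id; int used; } table[8];

static int do_pool_op(struct pool_op *op)
{
    switch (op->cmd) {
    case POOL_OP_CREATE:
        table[op->pool_id].cpu_mask = op->cpu_mask;
        table[op->pool_id].sched_id = op->sched_id;
        table[op->pool_id].used = 1;
        return 0;
    case POOL_OP_INFO:
        if (!table[op->pool_id].used)
            return -1;           /* no such pool */
        op->cpu_mask = table[op->pool_id].cpu_mask;
        op->sched_id = table[op->pool_id].sched_id;
        return 0;
    case POOL_OP_DESTROY:
        table[op->pool_id].used = 0;
        return 0;
    }
    return -1;
}
```

[libxc would then only need thin wrappers marshalling such a structure, and xend/xm could build pool-create, pool-list and pool-destroy on top of them.]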
>> 
>> The xm commands could look like this:
>> xm pool-create pool1 ncpu=4              # create a pool with 4 cpus
>> xm pool-create pool2 cpu=1,3,5           # create a pool with 3 dedicated cpus
>> xm pool-list                             # show pools:
>>  pool      cpus          sched      domains
>>  pool0     0,2,4         credit     0
>>  pool1     6-9           credit     1,7
>>  pool2     1,3,5         credit     2,3
>> xm pool-modify pool1 ncpu=3              # set new number of cpus
>> xm pool-modify pool1 cpu=6,7,9           # modify cpu-pinning
>> xm pool-destroy pool1                    # destroy pool
>> xm create vm5 pool=pool1                 # start domain in pool1
>> 
>> There is much more potential in this approach:
>> - add memory to a pool? Could be interesting for NUMA
>> - recent discussions on xen-devel related to scheduling (credit scheduler
>>   for client virtualization) show some demand for further work regarding
>>   priority and/or grouping of domains
>> - this might be an interesting approach for migration of multiple related
>>   domains (pool migration)
>> - move (or migrate?) a domain to another pool
>> - ...
>> 
>> Any comments, suggestions, work already done, ...?
>> Otherwise we will be starting our effort soon.
>> 
>> Juergen
>> 
>> -- 
>> Juergen Gross                             Principal Developer
>> IP SW OS6                      Telephone: +49 (0) 89 636 47950
>> Fujitsu Siemens Computers         e-mail:
>> juergen.gross@xxxxxxxxxxxxxxxxxxx
>> Otto-Hahn-Ring 6                Internet: www.fujitsu-siemens.com
>> D-81739 Muenchen         Company details:
>> www.fujitsu-siemens.com/imprint.html
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>> 



