
To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Scheduler follow-up: Design target (was [RFC] Scheduler work, part 1)
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 17 Apr 2009 14:11:32 +0200
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
George Dunlap wrote:
> * [Jeremy] Is that forward-looking enough?  That hardware is currently
> available; what's going to be commonplace in 2-3 years?
> 
> I think we need to distinguish between "works optimally" and "works
> well".  Obviously we want the design to be scalable, and we don't want
> to have to do a major revision in a year because 16 logical cpus works
> well but 32 tanks.  And it may be a good idea to "lead" the target, so
> that when we actually ship something it will be right on, rather than
> 6 months behind.

This problem might be less critical if cpupools are supported: on really
large systems each scheduler instance would only have to handle the logical
cpus of its own pool.
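
To sketch the idea (a minimal sketch with hypothetical types and names, not
the actual cpupool patches):

/* Hypothetical sketch: each cpupool owns a private scheduler instance
 * and a subset of the machine's logical cpus, so no single scheduler
 * instance has to scale to the full cpu count. */
#include <stdint.h>

typedef uint64_t cpumask_t;            /* one bit per logical cpu */

struct scheduler;                      /* opaque per-pool scheduler */

struct cpupool {
    int               pool_id;
    cpumask_t         cpus;            /* logical cpus owned by the pool */
    struct scheduler *sched;           /* scheduler private to the pool */
};

/* A domain lives in exactly one pool; its vcpus only ever run on the
 * pool's cpus, so run-queue sizes and lock contention scale with the
 * pool, not with the whole machine. */
static inline int cpu_in_pool(const struct cpupool *p, int cpu)
{
    return (int)((p->cpus >> cpu) & 1);
}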

> 
> Still, in 2-3 years, will the vast majority of servers have 32 logical
> cpus, or still only 16 or less?

I think Nehalem-EX will have 16 logical cpus per socket (8 cores with 2
hyperthreads each). With 4 sockets that would add up to 64.

> * [Kevin Tian] How about VM number in total you'd like to support?
> 
> Good question.  I'll do some research for how many VMs a virtual
> desktop system might want to support.
> 
> For servers, I think a reasonable design space would be between 1 VM
> every 3 cores (for a few extremely high-load servers) to 8 VMs every
> core (for highly aggregated servers).  I suppose server farms may want
> more.
> 
> Does anyone else have any thoughts on this subject -- either
> suggestions for different numbers, or other use cases they want
> considered?
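
As an aside, to make the quoted design space concrete (my numbers, assuming
the 4-socket Nehalem-EX box mentioned above, i.e. 32 cores):

/* Back-of-envelope: VM counts implied by the quoted design space on a
 * 4-socket, 8-core box (32 cores). Illustration only. */
#include <stdio.h>

int main(void)
{
    int cores = 4 * 8;              /* sockets * cores per socket */
    int low   = cores / 3;          /* 1 VM every 3 cores -> ~10 VMs */
    int high  = cores * 8;          /* 8 VMs per core     -> 256 VMs */

    printf("design space: %d to %d VMs\n", low, high);
    return 0;
}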

For our BS2000 servers we would really appreciate support for cpupools :-)
Or, as an alternative, correct handling of weights in combination with
cpu pinning.
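
To spell out what goes wrong (a simplified illustration with made-up
numbers, not actual credit-scheduler code): a domain's weight may entitle it
to more cpu time than its pinning physically lets it consume, and the
surplus should be redistributed rather than silently lost.

/* Two domains with equal weight on a 4-cpu box; dom0 pinned to 1 cpu. */
#include <stdio.h>

struct dom {
    const char *name;
    double weight;       /* scheduling weight */
    double pinned_cpus;  /* cpus the domain may run on (its hard cap) */
};

int main(void)
{
    struct dom doms[] = {
        { "dom0", 256.0, 1.0 },   /* pinned to a single cpu */
        { "domU", 256.0, 4.0 },
    };
    double total_cpus = 4.0, total_weight = 512.0;

    for (int i = 0; i < 2; i++) {
        double fair = doms[i].weight / total_weight * total_cpus;
        double real = fair < doms[i].pinned_cpus ? fair : doms[i].pinned_cpus;
        printf("%s: fair share %.1f cpus, achievable %.1f cpus\n",
               doms[i].name, fair, real);
    }
    /* dom0's fair share is 2.0 cpus, but pinning caps it at 1.0; the
     * unusable 1.0 cpu of share should flow to domU instead of being
     * lost -- the "correct handling" asked for above. */
    return 0;
}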

Another question: do you plan to replace the current credit scheduler, or
will the new scheduler be offered as an alternative alongside credit and sedf?


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 636 47950
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Otto-Hahn-Ring 6                        Internet: ts.fujitsu.com
D-81739 Muenchen                 Company details: ts.fujitsu.com/imprint.html
