RE: [Xen-devel] Auto CPU rebalance ?

To: "John Levin" <xenjohn@xxxxxxxxx>, "Matsumoto" <n_matumoto@xxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Auto CPU rebalance ?
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 29 Dec 2005 22:05:35 -0000
Delivery-date: Thu, 29 Dec 2005 22:10:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcYMu1ojNG5g2iewTK6jX1aDJmXOtQABt15Q
Thread-topic: [Xen-devel] Auto CPU rebalance ?
> Also, is any scheduler going to include an algorithm to handle
> gang scheduling of multiple vcpus?

It remains to be seen whether gang scheduling is actually necessary. I'd
like to see how well some combination of "bad pre-emption"
avoidance/mitigation works when coupled with biasing the scheduler to
run vcpus of domains that already have vcpus running.
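
A minimal sketch of that biasing idea (all names and numbers here are
hypothetical, not Xen's actual scheduler interface): each runnable vcpu
gets a priority bonus per sibling vcpu of the same domain that is
already running on some physical CPU, so siblings tend to overlap in
time without the all-or-nothing cost of strict gang scheduling.

#include <stddef.h>
#include <stdio.h>

#define CORUN_BONUS 25          /* bonus per already-running sibling */

struct domain {
    int id;
    int vcpus_running;          /* vcpus of this domain on a pCPU now */
};

struct vcpu {
    struct domain *dom;
    int credit;                 /* base scheduling credit/priority */
    int runnable;
};

/* Effective priority = base credit + bonus for each running sibling. */
static int effective_credit(const struct vcpu *v)
{
    return v->credit + CORUN_BONUS * v->dom->vcpus_running;
}

/* Pick the runnable vcpu with the highest effective credit. */
static struct vcpu *pick_next(struct vcpu **runq, size_t n)
{
    struct vcpu *best = NULL;

    for (size_t i = 0; i < n; i++) {
        if (!runq[i]->runnable)
            continue;
        if (!best || effective_credit(runq[i]) > effective_credit(best))
            best = runq[i];
    }
    return best;                /* NULL if the runqueue is empty */
}

int main(void)
{
    struct domain d0 = { 0, 1 };        /* one sibling already running */
    struct domain d1 = { 1, 0 };        /* no siblings running */
    struct vcpu a = { &d0, 100, 1 };
    struct vcpu b = { &d1, 110, 1 };    /* higher base credit */
    struct vcpu *rq[] = { &a, &b };

    /* a wins: 100 + 25*1 = 125 beats 110 */
    printf("picked a vcpu of domain %d\n", pick_next(rq, 2)->dom->id);
    return 0;
}

The bonus is deliberately soft: a domain with running siblings is
favoured but never blocks an unrelated vcpu with a much higher base
credit, which is the key difference from strict gang scheduling.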

Strict gang scheduling is likely to lead to a lot of wasted cycles (if
a domain has fewer runnable vcpus than its gang size, physical CPUs
must sit idle just to keep the gang co-scheduled), and I expect we can
do better. The first step is to define a set of workloads we want to
optimise for, and collect some traces.

Ian  

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
