This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] New CPU scheduler w/ SMP load balancer

To: "Kamble, Nitin A" <nitin.a.kamble@xxxxxxxxx>
Subject: Re: [Xen-devel] New CPU scheduler w/ SMP load balancer
From: Emmanuel Ackaouy <ack@xxxxxxxxxxxxx>
Date: Wed, 31 May 2006 11:44:05 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 31 May 2006 03:44:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <E305A4AFB7947540BC487567B5449BA80AAA6918@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: "Kamble, Nitin A" <nitin.a.kamble@xxxxxxxxx>, Anthony Liguori <aliguori@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
References: <E305A4AFB7947540BC487567B5449BA80AAA6918@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Fri, May 26, 2006 at 01:57:47PM -0700, Kamble, Nitin A wrote:
>    I was looking into doing some load balancing (there was none earlier)
> to the domain/vcpu scheduling inside the Xen. And I am glad to see your
> patch is targeting exactly that.
>    I believe the credit scheduler also has sizeable impact on the HVM
> domain performance. Do you have any performance data for the HVM guests
> with your scheduler?

The new scheduler is aggressive about moving VCPUs around to
keep the entire SMP host system busy. The operation to move a
VCPU's context from one CPU to another needs to be reasonably
fast for this approach to pay off.

I didn't try this with an HVM guest. Can you help out with
this? I need to look at the code path that moves an HVM VCPU's
context between physical CPUs. We may need to optimize that
path.


Xen-devel mailing list
