
Re: [Xen-devel] [PATCH] Add migration_cost option to scheduler


  • To: "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Mon, 9 Mar 2009 12:55:40 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 09 Mar 2009 05:56:05 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hmm, I think this patch may not be exactly what we want.  It looks
like it checks how long a vcpu has been in its current state, not how
recently it has been running.  So if a vcpu sleeps for a long time on
a cpu that's running other workloads, then wakes up
(blocked->runnable), the cache is by no means "hot".  But since the
vcpu has only been in the "runnable" state for a few hundred cycles,
it won't be migrated, even though migrating it would cost little.

However, if the pcpu has been idle since the last time this vcpu ran
(i.e., if we're just catching the vcpu in the process of waking up),
the cache *is* still hot.  Hmm....
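
Roughly, I'd expect the check to look something like the sketch
below.  This is a sketch only: last_run_time and last_other_work are
made-up fields, not in the current tree; Xen would need to track a
per-vcpu timestamp of when the vcpu last actually ran, and a per-pcpu
timestamp of when anything else last ran there.

    /* Sketch only: last_run_time and last_other_work are hypothetical
     * fields. */
    static int vcpu_cache_hot(struct vcpu *v, s_time_t now,
                              s_time_t migration_cost)
    {
        /* Hot if the vcpu itself ran recently... */
        if ( now - v->last_run_time < migration_cost )
            return 1;

        /* ...or if nothing else has run on its pcpu since this vcpu
         * last ran, i.e. the pcpu has been idle and the cache is
         * untouched. */
        if ( per_cpu(last_other_work, v->processor) <= v->last_run_time )
            return 1;

        return 0;
    }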

 -George

2009/3/9 Yang, Xiaowei <xiaowei.yang@xxxxxxxxx>:
> The idea is borrowed from the Linux kernel: if a vCPU has just been
> scheduled out and put on the run-queue, it's likely still cache-hot on
> its current pCPU, and it may be scheduled back in within a short period
> of time; if it is instead migrated to another pCPU, it needs to re-warm
> the cache - that's what the migration cost means.
>
> The patch introduces a migration_cost option to suppress overly
> aggressive vCPU migration (we actually observe a very high migration
> frequency most of the time), while still allowing load balancing to
> work to a reasonable degree.
>
> The Linux kernel uses 0.5ms by default. Considering the cost may be
> higher than on native hardware (e.g. due to VMCS impact), we chose
> migration_cost=1ms for our tests, which were performed on a 4x 6-core
> Dunnington platform. In the 24-VM case, there is a stable ~2%
> performance gain for enterprise workloads like SPECjbb and sysbench.
> If the HVM guests run with stubdoms, the gain is larger: 4% for the
> same workloads.
>
> The best value may vary across platforms with different cache
> hierarchies and with different workloads. Due to resource limits, we
> haven't tested many combinations; we plan to try more in the future.
> You are welcome to evaluate it and give feedback on what is or is not
> suitable for you.
>
> Signed-off-by: Xiaowei Yang <xiaowei.yang@xxxxxxxxx>
>
>
> Thanks,
> Xiaowei
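
For reference, the Linux check the patch borrows from looks roughly
like the following (paraphrased from kernel/sched.c of that era, not
copied exactly):

    /* A task is considered cache-hot if it last began executing within
     * the last sysctl_sched_migration_cost nanoseconds (0.5ms by
     * default). */
    static int task_hot(struct task_struct *p, u64 now)
    {
        s64 delta = now - p->se.exec_start;

        return delta < (s64)sysctl_sched_migration_cost;
    }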

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

