This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] sedf testing: volunteers please

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] sedf testing: volunteers please
From: Stephan Diestelhorst <sd386@xxxxxxxxxxxx>
Date: Tue, 28 Jun 2005 09:55:40 +0100
Delivery-date: Tue, 28 Jun 2005 08:54:49 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0 (Windows/20041206)

  Your idea is about right. Things to notice are:
How much slice did you give to dom0?
If dom0 gets 75% of the CPU, then the other two domains will share the
remaining 20% (100% - 75% - 5% for a small reserve) in the requested ratio.
That means they will get 2/10 * 20% = 4% and 8/10 * 20% = 16% of the CPU.
This is guaranteed, and they won't exceed their reservation because they
don't have the extra flag specified!
You could either:
  -reduce the reservation for dom0
  -use the extra flag for your domains vm1 and vm2 to give them any
remaining time (which puts them in weighted extratime mode):
  xm sedf vm1 0 0 0 1 2
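For what it's worth, the share arithmetic above can be checked with a short sketch. This is just an illustration of the split described in this thread, not scheduler code; the 75% dom0 reservation, the ~5% reserve, and the weights 2 and 8 are the numbers from the example:

```python
# Sketch of the weighted-share arithmetic from this thread (assumed
# numbers: dom0 reserves 75%, the scheduler keeps ~5% back, and vm1/vm2
# have weights 2 and 8).

def weighted_shares(dom0_reservation, reserve, weights):
    """Split the CPU time left after dom0 and the reserve by weight."""
    remaining = 1.0 - dom0_reservation - reserve
    total = sum(weights.values())
    return {name: remaining * w / total for name, w in weights.items()}

shares = weighted_shares(0.75, 0.05, {"vm1": 2, "vm2": 8})
print(shares)  # roughly: vm1 4%, vm2 16%
```

That matches the ~4% and ~17% the slurp jobs reported below.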

Hope that helps,

>I enabled the sedf scheduler by applying the patch to Xen testing tree not
>the unstable tree.
>Then I did the following test. I started two user domains (named "vm1" and
>"vm2" respectively). I made the following sedf configurations:
>xm sedf vm1 0 0 0 0 2
>xm sedf vm2 0 0 0 0 8
>My intention is to have vm1 reserve 20% of the available cpu and vm2
>reserve the rest of 80% (please correct me if my understanding about
>sedf here is wrong).
>Then I start "slurp" job in both domains and it will print out the cpu
>share continuously. To my surprise, vm1 takes around 4% of cpu and vm2
>occupies around 17% cpu. I was expecting them to share the cpu something like
>20% and 80%, though the ratio of 4% to 17% is similar to that of 20% to
>80%. BTW, dom0 didn't run any extra job when I ran the test.
>Could you please let me know why only 21% (4%+17%) of the cpu is given to
>vm1 and vm2, rather than 100% minus the share taken by dom0?
>On Wed, 18 May 2005, Stephan Diestelhorst wrote:
>>The new sedf scheduler has been in the xen-unstable repository for a
>>couple of days now. As it may become the default scheduler soon, any
>>testing now is much appreciated!
>>Quick summary can be found in docs/misc/sedf_scheduler_mini-HOWTO.txt
>>Future directions:
>>-effective scheduling of SMP-guests
>>  -clever SMP locking in domains (on the way)
>>  -timeslice donating (under construction)
>>  -identifying gangs and schedule them together
>>  -balancing of domains/ VCPUs
>>Any comments/wishes/ideas/... on that are welcome!
>>  Stephan Diestelhorst

Xen-devel mailing list
