This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] RE: IDLE domain is scheduled more than dom0

To: "Stephan Diestelhorst" <sd386@xxxxxxxxx>
Subject: [Xen-devel] RE: IDLE domain is scheduled more than dom0
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Mon, 11 Jul 2005 18:53:30 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 11 Jul 2005 10:52:22 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcWDz+RgyCb21SARTSiy+RHTAYTM0QCLLaJQ
Thread-topic: IDLE domain is scheduled more than dom0
>From: sd386@xxxxxxxxxxxxxxxx [mailto:sd386@xxxxxxxxxxxxxxxx] On
>Behalf Of Stephan Diestelhorst
>Sent: Friday, July 08, 2005 11:15 PM
>>      An example is: (No DomU created)
>>                       Total cpu time
>> IDLE                   93def9294f
>> Dom0                  5420288a1b
>Hi Kevin,
>  this is quite an interesting result.
>In "sedf_add_task" dom0 is set to 15ms every 20ms, so this means that
>it gets 75% of the cpu_time. During boot time this might vary due to
>lots of I/O, but a ratio of 33% for dom0 and 66% for the idle task is
>strange. Could you try two things:
>a) give the domain extratime by modifying the line
>inf->status = EXTRA_NONE | SEDF_ASLEEP;
>to
>inf->status = EXTRA_AWARE | SEDF_ASLEEP;
>in "sedf_add_task", "case 0", in sched_sedf.c

This doesn't help; the statistics are almost the same:
                          Total cpu time
IDLE                      19CE88F3E7
Dom0                      1040A91728

I'm not sure whether this oddity comes from the fact that IDLE is also
initialized with its period set to WEIGHT_PERIOD (100ms)?

>b) set inf->slice = MILLIESECS(20);

This approach, on the other hand, has exactly the same effect as the
previous BVT scheduler:
                          Total cpu time
IDLE                      48FD92BF
Dom0                      2D61AF8D5D

With this change, IDLE was no longer scheduled after Dom0 came up,
except on explicit request. Although Dom0 is placed on the waitq at the
start of do_schedule, it is moved back to the runq by update_queues.
The reason is that when inf->slice == inf->period, the start of Dom0's
next period is always earlier than NOW(), so the scheduler is always
happy to choose Dom0 over IDLE, since both are on the runq.

So making change b) gives a smooth boot for Dom0, but I'm not sure
whether it will affect the multiple-domain case, since your intent is
to cap Dom0 at a 75% share. How about instead adding a special check in
update_queues for the case where IDLE is the only candidate on the
runq? In that case it would be better to pull a domain from the waitq
that is run-able and is only waiting because its slice is consumed, and
begin its new period immediately...


Xen-devel mailing list