WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-devel

RE: [Xen-devel] [PATCH 1/1] Xen ARINC653 scheduler

Hello George,

In response to your request for justification for #2 and #3 on your list:

None of the existing schedulers allow configuration on the fly like this.
Existing schedulers assume all domains will be runnable, so the domain
control operation is used to fine-tune per-domain parameters (such as the
weight and cap for the standard credit scheduler). Our scheduler is
configurable beyond parameters pertaining to a particular domain: the major
frame period and the complete list of runnable domains are set globally.
Although it would be possible, it would not really make sense to set these
global parameters through an operation designed to tune domain-specific
settings. The new xc_sched_op() call is a useful general-purpose addition
that can be piloted with the ARINC 653 scheduler.
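
For illustration, a dom0 tool could install a schedule through the new
interface roughly as follows. This is only a sketch built from the
structures in the patch (sched_arinc653_sched_set_t,
SCHEDOP_arinc653_sched_set and xc_sched_op()); the guest handle and the
10 ms / 20 ms values are placeholders, and error handling is omitted:

    /* Sketch: install a two-entry ARINC 653 schedule from dom0.
     * Assumes <xenctrl.h>, <xen/sched.h> and <string.h> are included;
     * "guest_handle" is the target domain's UUID handle (placeholder). */
    static int set_example_schedule(int xc_handle,
                                    const xen_domain_handle_t guest_handle)
    {
        sched_arinc653_sched_set_t sched;

        memset(&sched, 0, sizeof(sched));
        sched.major_frame = 20000000LL;      /* 20 ms major frame, in ns */
        sched.num_sched_entries = 2;

        /* Entry 0: domain 0 (all-zero handle from the memset), VCPU 0, 10 ms. */
        sched.sched_entries[0].vcpu_id = 0;
        sched.sched_entries[0].runtime = 10000000LL;

        /* Entry 1: the guest domain's VCPU 0, also 10 ms per major frame. */
        memcpy(sched.sched_entries[1].dom_handle, guest_handle,
               sizeof(xen_domain_handle_t));
        sched.sched_entries[1].vcpu_id = 0;
        sched.sched_entries[1].runtime = 10000000LL;

        return xc_sched_op(xc_handle, SCHEDOP_arinc653_sched_set, &sched);
    }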

Anthony Boorsma
www.DornerWorks.com

-----Original Message-----
From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of George Dunlap
Sent: Wednesday, March 31, 2010 6:55 AM
To: Anthony Boorsma
Cc: Keir.Fraser@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC653 scheduler

And what the patch does:
1) Adds a new scheduler, arinc653.
2) Adds an xc_sched_op() function to the libxc interface, which will
make sched_op hypercalls directly.
3) Adds a new arinc653 SCHEDOP to adjust various parameters for the
arinc653 scheduler.

#1 is fine, but #2 and #3 need justification.

For #3, the standard way to adjust scheduling parameters is to use
XEN_DOMCTL_SCHEDOP_{get,put}info.  Is there any reason not to do that
for this scheduler as well?  And if you do that for #3, is #2
necessary?
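
For reference, the existing per-domain path for the credit scheduler goes
through libxc wrappers around XEN_DOMCTL_SCHEDOP_{get,put}info, roughly like
this (illustrative only; values are placeholders, and the wrapper name and
fields follow the credit scheduler's existing interface):

    /* Set one domain's credit-scheduler parameters via the domctl path,
     * given an already-open xc_handle and a target domid. */
    struct xen_domctl_sched_credit sdom;
    sdom.weight = 512;   /* twice the default weight of 256 */
    sdom.cap    = 0;     /* 0 = no cap on CPU usage */
    rc = xc_sched_credit_domain_set(xc_handle, domid, &sdom);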

 -George

On Tue, Mar 30, 2010 at 11:33 PM, Anthony Boorsma
<Anthony.Boorsma@xxxxxxxxxxxxxxx> wrote:
> Hello George,
>
> Here is a description of the ARINC 653 specification, why it's important, and
> what the patch does:
>
> The ARINC 653 specification (Avionics Application Standard Software 
> Interface) outlines requirements for creating a robust software environment 
> where isolated domains of applications can execute independently of one 
> another.  The domains are isolated both spatially and temporally.  The range 
> of memory in the hardware system is partitioned into sub-ranges that are 
> allocated to the software domains running in the ARINC 653 system.  These 
> domains cannot access memory regions allocated to other domains, preventing 
> them from affecting those domains by altering or reading their memory 
> contents.  Domains in an ARINC 653 system are also partitioned temporally. 
>  They are scheduled deterministically in a fashion that prevents one domain 
> from affecting the performance of the other domains.  Domains should not be 
> aware of each other; they should believe they are the only software running 
> in the system.
>
> The term "domain" is used by the Xen community to describe each virtualized 
> environment (CPU time, memory allocation, and I/O interfaces) that is 
> isolated from the others.  The ARINC 653 specification refers to this concept 
> as a "partition".
>
> The ARINC 653 scheduler added to Xen provides deterministic scheduling of Xen 
> domains.  It is a "credit"-based scheduler that operates on the ARINC 653
> concept of a "major frame."  Within a major frame, every domain in the 
> current schedule is guaranteed to receive a certain amount of CPU time.  When 
> the domain has used its configured amount of processor time for that major 
> frame, it is placed on an "expired" list to wait for the next major frame 
> before it can run again.  Domains that yield or block before using all of 
> their credit for a major frame will be allowed to resume execution later in 
> the major frame until their credits expire.  When a major frame is finished, 
> all domains on the expired list are moved back to the run list and the 
> credits for all domains are restored to their configured amounts.
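>
> In C terms, the bookkeeping described above condenses to roughly the
> following (a simplified sketch of the patch's do_schedule callback, using
> its names; list handling and edge cases are elided):
>
>     if (t >= next_major_frame)                      /* new major frame */
>     {
>         next_major_frame = t + arinc653_major_frame;
>         list_splice_init(&expired_list, &run_list); /* expired -> runnable */
>         /* ... and restore time_left = runtime for every tracked VCPU ... */
>     }
>     else if (AVCPU(current_task) != NULL)           /* same frame, not idle */
>     {
>         AVCPU(current_task)->time_left -=
>             t - AVCPU(current_task)->last_activation_time;
>         if (AVCPU(current_task)->time_left <= 0)    /* credit used up */
>             list_move(&AVCPU(current_task)->list, &expired_list);
>     }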
>
> The current schedule is configurable by the Domain 0 operating system.  It 
> can set the run time for each domain as well as the major frame for the 
> schedule.  The major frame must be long enough to allow all configured 
> domains to use all of their credits.
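>
> When a new schedule is submitted, the hypervisor enforces that constraint
> before installing it; the check in arinc653_sched_set() amounts to roughly
> the following (a condensed sketch, not the verbatim code):
>
>     s_time_t total_runtime = 0;
>     for (int i = 0; i < schedule->num_sched_entries; i++)
>         total_runtime += schedule->sched_entries[i].runtime;
>     if (total_runtime > schedule->major_frame)
>         return -EINVAL;   /* major frame too short for the configured runtimes */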
>
> An ARINC 653 scheduler option in Xen would give developers of safety-critical 
> applications, or developers simply wishing to make use of a 
> deterministically-scheduled environment, an easily-accessible platform in 
> which to test application software.  Such developers, for example, could work 
> out timing or synchronization issues on a Xen PC platform before running the 
> application on the potentially less-available, and more costly, target 
> hardware.  This would expand Xen's user base to include those interested in 
> such deterministic scheduling, while users who are not interested would not 
> need to use the ARINC 653 scheduler.
>
> Anthony Boorsma
> www.DornerWorks.com
>
> -----Original Message-----
> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of George Dunlap
> Sent: Wednesday, March 24, 2010 8:19 PM
> To: Anthony Boorsma
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Keir.Fraser@xxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH 1/1] Xen ARINC653 scheduler
>
> What is ARINC653?  Can you give a summary of what it is, why it's
> important, and what this patch does?
>
> Thanks,
>  -George
>
> On Fri, Mar 19, 2010 at 4:23 PM, Anthony Boorsma
> <Anthony.Boorsma@xxxxxxxxxxxxxxx> wrote:
>> This is a patch to the Xen Hypervisor that adds a scheduler that provides
>> partial ARINC653 CPU scheduling.
>>
>>
>>
>> Anthony Boorsma
>>
>> www.DornerWorks.com
>>
>>
>>
>>   ***  Diffing -rupN
>>
>> diff -rupN a/tools/libxc/xc_core.c b/tools/libxc/xc_core.c
>>
>> --- a/tools/libxc/xc_core.c           2009-08-06 09:57:25.000000000 -0400
>>
>> +++ b/tools/libxc/xc_core.c        2010-03-19 09:07:29.595745100 -0400
>>
>> @@ -321,7 +321,15 @@ elfnote_dump_none(void *args, dumpcore_r
>>
>>      struct xen_dumpcore_elfnote_none_desc none;
>>
>>
>>
>>      elfnote_init(&elfnote);
>>
>> -    memset(&none, 0, sizeof(none));
>>
>> +    /*
>>
>> +     * josh holtrop <DornerWorks.com> - 2009-01-04 - avoid compilation problem
>>
>> +     * with warning "memset used with constant zero length parameter" and
>>
>> +     * warnings treated as errors with the new gcc in Ubuntu 9.10
>>
>> +     */
>>
>> +    if (sizeof(none) > 0)
>>
>> +    {
>>
>> +        memset(&none, 0, sizeof(none));
>>
>> +    }
>>
>>
>>
>>      elfnote.descsz = sizeof(none);
>>
>>      elfnote.type = XEN_ELFNOTE_DUMPCORE_NONE;
>>
>> diff -rupN a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
>>
>> --- a/tools/libxc/xc_misc.c          2009-08-06 09:57:25.000000000 -0400
>>
>> +++ b/tools/libxc/xc_misc.c        2010-03-19 09:09:45.906278500 -0400
>>
>> @@ -2,6 +2,8 @@
>>
>>   * xc_misc.c
>>
>>   *
>>
>>   * Miscellaneous control interface functions.
>>
>> + *
>>
>> + * xc_sched_op function added by DornerWorks <DornerWorks.com>.
>>
>>   */
>>
>>
>>
>>  #include "xc_private.h"
>>
>> @@ -358,6 +360,60 @@ void *xc_map_foreign_pages(int xc_handle
>>
>>      return res;
>>
>>  }
>>
>>
>>
>> +int xc_sched_op(int xc_handle, int sched_op, void * arg)
>>
>> +{
>>
>> +    DECLARE_HYPERCALL;
>>
>> +    int rc;
>>
>> +    int argsize = 0;
>>
>> +
>>
>> +    hypercall.op     = __HYPERVISOR_sched_op;
>>
>> +    hypercall.arg[0] = sched_op;
>>
>> +    hypercall.arg[1] = (unsigned long) arg;
>>
>> +
>>
>> +    switch (sched_op)
>>
>> +    {
>>
>> +    case SCHEDOP_yield:
>>
>> +        argsize = 0;
>>
>> +        break;
>>
>> +    case SCHEDOP_block:
>>
>> +        argsize = 0;
>>
>> +        break;
>>
>> +    case SCHEDOP_shutdown:
>>
>> +        argsize = sizeof(sched_shutdown_t);
>>
>> +        break;
>>
>> +    case SCHEDOP_poll:
>>
>> +        argsize = sizeof(sched_poll_t);
>>
>> +        break;
>>
>> +    case SCHEDOP_remote_shutdown:
>>
>> +        argsize = sizeof(sched_remote_shutdown_t);
>>
>> +        break;
>>
>> +    case SCHEDOP_arinc653_sched_set:
>>
>> +        argsize = sizeof(sched_arinc653_sched_set_t);
>>
>> +        break;
>>
>> +    default:
>>
>> +        PERROR("xc_sched_op(): Unknown scheduler operation.");
>>
>> +        break;
>>
>> +    }
>>
>> +
>>
>> +    if (argsize > 0)
>>
>> +    {
>>
>> +        if ( (rc = lock_pages(arg, argsize)) != 0 )
>>
>> +        {
>>
>> +            PERROR("Could not lock memory");
>>
>> +            return rc;
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    rc = do_xen_hypercall(xc_handle, &hypercall);
>>
>> +
>>
>> +    if (argsize > 0)
>>
>> +    {
>>
>> +        unlock_pages(arg, argsize);
>>
>> +    }
>>
>> +
>>
>> +    return rc;
>>
>> +}
>>
>> +
>>
>>  /*
>>
>>   * Local variables:
>>
>>   * mode: C
>>
>> diff -rupN a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
>>
>> --- a/tools/libxc/xenctrl.h           2009-08-06 09:57:25.000000000 -0400
>>
>> +++ b/tools/libxc/xenctrl.h         2010-03-19 09:16:32.104190500 -0400
>>
>> @@ -7,6 +7,9 @@
>>
>>   *
>>
>>   * xc_gnttab functions:
>>
>>   * Copyright (c) 2007-2008, D G Murray <Derek.Murray@xxxxxxxxxxxx>
>>
>> + *
>>
>> + * xc_sched_op function:
>>
>> + * Copyright (c) 2010, DornerWorks, Ltd. <DornerWorks.com>
>>
>>   */
>>
>>
>>
>>  #ifndef XENCTRL_H
>>
>> @@ -1267,4 +1270,7 @@ int xc_get_vcpu_migration_delay(int xc_h
>>
>>  int xc_get_cpuidle_max_cstate(int xc_handle, uint32_t *value);
>>
>>  int xc_set_cpuidle_max_cstate(int xc_handle, uint32_t value);
>>
>>
>>
>> +/* perform a scheduler operation */
>>
>> +int xc_sched_op(int xc_handle, int sched_op, void * arg);
>>
>> +
>>
>>  #endif /* XENCTRL_H */
>>
>> diff -rupN a/xen/common/Makefile b/xen/common/Makefile
>>
>> --- a/xen/common/Makefile      2009-08-06 09:57:27.000000000 -0400
>>
>> +++ b/xen/common/Makefile   2010-03-18 19:34:05.200130400 -0400
>>
>> @@ -13,6 +13,7 @@ obj-y += page_alloc.o
>>
>>  obj-y += rangeset.o
>>
>>  obj-y += sched_credit.o
>>
>>  obj-y += sched_sedf.o
>>
>> +obj-y += sched_arinc653.o
>>
>>  obj-y += schedule.o
>>
>>  obj-y += shutdown.o
>>
>>  obj-y += softirq.o
>>
>> diff -rupN a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
>>
>> --- a/xen/common/sched_arinc653.c     1969-12-31 19:00:00.000000000 -0500
>>
>> +++ b/xen/common/sched_arinc653.c   2010-03-19 09:12:32.105381300 -0400
>>
>> @@ -0,0 +1,725 @@
>>
>> +/*
>>
>> + * File: sched_arinc653.c
>>
>> + * Copyright (c) 2009, DornerWorks, Ltd. <DornerWorks.com>
>>
>> + *
>>
>> + * Description:
>>
>> + *   This file provides an ARINC653-compatible scheduling algorithm
>>
>> + *   for use in Xen.
>>
>> + *
>>
>> + * This program is free software; you can redistribute it and/or modify it
>>
>> + * under the terms of the GNU General Public License as published by the Free
>>
>> + * software Foundation; either version 2 of the License, or (at your option)
>>
>> + * any later version.
>>
>> + *
>>
>> + * This program is distributed in the hope that it will be useful,
>>
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>>
>> + * See the GNU General Public License for more details.
>>
>> + */
>>
>> +
>>
>> +
>>
>> +/**************************************************************************
>>
>> + * Includes                                                               *
>>
>> + *************************************************************************/
>>
>> +#include <xen/lib.h>
>>
>> +#include <xen/sched.h>
>>
>> +#include <xen/sched-if.h>
>>
>> +#include <xen/timer.h>
>>
>> +#include <xen/softirq.h>
>>
>> +#include <xen/time.h>
>>
>> +#include <xen/errno.h>
>>
>> +#include <xen/sched_arinc653.h>
>>
>> +#include <xen/list.h>
>>
>> +#include <public/sched.h>           /* ARINC653_MAX_DOMAINS_PER_SCHEDULE */
>>
>> +
>>
>> +
>>
>> +/**************************************************************************
>>
>> + * Private Macros                                                         *
>>
>> + *************************************************************************/
>>
>> +
>>
>> +/* Retrieve the idle VCPU for a given physical CPU */
>>
>> +#define IDLETASK(cpu)  ((struct vcpu *) per_cpu(schedule_data, (cpu)).idle)
>>
>> +
>>
>> +/*
>>
>> + * Return a pointer to the ARINC653-specific scheduler data information
>>
>> + * associated with the given VCPU (vc)
>>
>> + */
>>
>> +#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv)
>>
>> +
>>
>> +/**************************************************************************
>>
>> + * Private Type Definitions                                               *
>>
>> + *************************************************************************/
>>
>> +/*
>>
>> + * The sched_entry_t structure holds a single entry of the
>>
>> + * ARINC653 schedule.
>>
>> + */
>>
>> +typedef struct sched_entry_s
>>
>> +{
>>
>> +    /* dom_handle holds the handle ("UUID") for the domain that this
>>
>> +     * schedule entry refers to. */
>>
>> +    xen_domain_handle_t dom_handle;
>>
>> +    /* vcpu_id holds the VCPU number for the VCPU that this schedule
>>
>> +     * entry refers to. */
>>
>> +    int                 vcpu_id;
>>
>> +    /* runtime holds the number of nanoseconds that the VCPU for this
>>
>> +     * schedule entry should be allowed to run per major frame. */
>>
>> +    s_time_t            runtime;
>>
>> +} sched_entry_t;
>>
>> +
>>
>> +/*
>>
>> + * The arinc653_vcpu_t structure holds ARINC653-scheduler-specific
>>
>> + * information for all non-idle VCPUs
>>
>> + */
>>
>> +typedef struct arinc653_vcpu_s
>>
>> +{
>>
>> +    /* runtime stores the number of nanoseconds that this VCPU is allowed
>>
>> +     * to run per major frame. */
>>
>> +    s_time_t            runtime;
>>
>> +    /* time_left stores the number of nanoseconds (its "credit") that this
>>
>> +     * VCPU still has left in the current major frame. */
>>
>> +    s_time_t            time_left;
>>
>> +    /* last_activation_time stores the time that this VCPU was switched
>>
>> +     * to for credit-accounting purposes. */
>>
>> +    s_time_t            last_activation_time;
>>
>> +    /* list holds the linked list information for whichever list this
>>
>> +     * VCPU is stored on. */
>>
>> +    struct list_head    list;
>>
>> +    /* vc points to Xen's struct vcpu so we can get to it from an
>>
>> +     * arinc653_vcpu_t pointer. */
>>
>> +    struct vcpu *       vc;
>>
>> +    /* The active flag tells whether this VCPU is active in the current
>>
>> +     * ARINC653 schedule or not. */
>>
>> +    bool_t              active;
>>
>> +} arinc653_vcpu_t;
>>
>> +
>>
>> +
>>
>> +/**************************************************************************
>>
>> + * Global Data                                                            *
>>
>> + *************************************************************************/
>>
>> +
>>
>> +/*
>>
>> + * This array holds the active ARINC653 schedule.
>>
>> + * When the system tries to start a new VCPU, this schedule is scanned
>>
>> + * to look for a matching (handle, VCPU #) pair. If both the handle ("UUID")
>>
>> + * and VCPU number match, then the VCPU is allowed to run. Its run time
>>
>> + * (per major frame) is given in the third entry of the schedule.
>>
>> + */
>>
>> +static sched_entry_t arinc653_schedule[ARINC653_MAX_DOMAINS_PER_SCHEDULE] = {
>>
>> +    { "", 0, MILLISECS(10) }
>>
>> +};
>>
>> +
>>
>> +/*
>>
>> + * This variable holds the number of entries that are valid in
>>
>> + * the arinc653_schedule table.
>>
>> + * This is not necessarily the same as the number of domains in the
>>
>> + * schedule, since a domain with multiple VCPUs could have a different
>>
>> + * schedule entry for each VCPU.
>>
>> + */
>>
>> +static int num_schedule_entries = 1;
>>
>> +
>>
>> +/*
>>
>> + * arinc653_major_frame holds the major frame time for the ARINC653 schedule.
>>
>> + */
>>
>> +static s_time_t arinc653_major_frame = MILLISECS(10);
>>
>> +
>>
>> +/*
>>
>> + * next_major_frame holds the time that the next major frame starts
>>
>> + */
>>
>> +static s_time_t next_major_frame = 0;
>>
>> +
>>
>> +/* Linked list to store runnable domains in the current schedule with
>>
>> + * time left this major frame */
>>
>> +static LIST_HEAD(run_list);
>>
>> +
>>
>> +/* Linked list to store blocked domains in the current schedule */
>>
>> +static LIST_HEAD(blocked_list);
>>
>> +
>>
>> +/* Linked list to store runnable domains in the current schedule
>>
>> + * that have no time left in this major frame */
>>
>> +static LIST_HEAD(expired_list);
>>
>> +
>>
>> +/* Linked list to store runnable domains not in the current schedule */
>>
>> +static LIST_HEAD(deactivated_run_list);
>>
>> +
>>
>> +/* Linked list to store blocked domains not in the current schedule */
>>
>> +static LIST_HEAD(deactivated_blocked_list);
>>
>> +
>>
>> +
>>
>> +/**************************************************************************
>>
>> + * Scheduler functions                                                    *
>>
>> + *************************************************************************/
>>
>> +
>>
>> +static int dom_handle_cmp(const xen_domain_handle_t h1,
>>
>> +        const xen_domain_handle_t h2)
>>
>> +{
>>
>> +    return memcmp(h1, h2, sizeof(xen_domain_handle_t));
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * This function scans the current ARINC653 schedule and looks
>>
>> + * for an entry that matches the VCPU v.
>>
>> + * If an entry is found, a pointer to it is returned.
>>
>> + * Otherwise, NULL is returned.
>>
>> + */
>>
>> +static sched_entry_t * find_sched_entry(struct vcpu * v)
>>
>> +{
>>
>> +    sched_entry_t * sched_entry = NULL;
>>
>> +    if (v != NULL)
>>
>> +    {
>>
>> +        for (int i = 0; i < num_schedule_entries; i++)
>>
>> +        {
>>
>> +            if (   (v->vcpu_id == arinc653_schedule[i].vcpu_id)
>>
>> +                && (dom_handle_cmp(arinc653_schedule[i].dom_handle,
>>
>> +                        v->domain->handle) == 0))
>>
>> +            {
>>
>> +                sched_entry = &arinc653_schedule[i];
>>
>> +                break;
>>
>> +            }
>>
>> +        }
>>
>> +    }
>>
>> +    return sched_entry;
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * This function is called by the hypervisor when a privileged domain
>>
>> + * invokes the HYPERVISOR_sched_op hypercall with a command of
>>
>> + * SCHEDOP_arinc653_sched_set.
>>
>> + * It returns 0 on success and nonzero upon error.
>>
>> + * This function is only called from do_sched_op(), defined within
>>
>> + * xen/common/schedule.c. The parameter schedule is set to be the
>>
>> + * address of a local variable from within do_sched_op(), so it is
>>
>> + * guaranteed not to be NULL.
>>
>> + */
>>
>> +int arinc653_sched_set(sched_arinc653_sched_set_t * schedule)
>>
>> +{
>>
>> +    int ret = 0;
>>
>> +    s_time_t total_runtime = 0;
>>
>> +    int found_dom0 = 0;
>>
>> +    const static xen_domain_handle_t dom0_handle = {0};
>>
>> +
>>
>> +    /* check for valid major frame and number of schedule entries */
>>
>> +    if ( (schedule->major_frame <= 0)
>>
>> +      || (schedule->num_sched_entries < 1)
>>
>> +      || (schedule->num_sched_entries > ARINC653_MAX_DOMAINS_PER_SCHEDULE) )
>>
>> +    {
>>
>> +        ret = -EINVAL;
>>
>> +    }
>>
>> +    if (ret == 0)
>>
>> +    {
>>
>> +        for (int i = 0; i < schedule->num_sched_entries; i++)
>>
>> +        {
>>
>> +            /*
>>
>> +             * look for domain 0 handle - every schedule must contain
>>
>> +             * some time for domain 0 to run
>>
>> +             */
>>
>> +            if (dom_handle_cmp(schedule->sched_entries[i].dom_handle,
>>
>> +                        dom0_handle) == 0)
>>
>> +            {
>>
>> +                found_dom0 = 1;
>>
>> +            }
>>
>> +            /* check for a valid VCPU id and runtime */
>>
>> +            if ( (schedule->sched_entries[i].vcpu_id < 0)
>>
>> +              || (schedule->sched_entries[i].runtime <= 0) )
>>
>> +            {
>>
>> +                ret = -EINVAL;
>>
>> +            }
>>
>> +            else
>>
>> +            {
>>
>> +                total_runtime += schedule->sched_entries[i].runtime;
>>
>> +            }
>>
>> +        }
>>
>> +    }
>>
>> +    if (ret == 0)
>>
>> +    {
>>
>> +        /* error if the schedule doesn't contain a slot for domain 0 */
>>
>> +        if (found_dom0 == 0)
>>
>> +        {
>>
>> +            ret = -EINVAL;
>>
>> +        }
>>
>> +    }
>>
>> +    if (ret == 0)
>>
>> +    {
>>
>> +        /* error if the major frame is not large enough to run all entries */
>>
>> +        if (total_runtime > schedule->major_frame)
>>
>> +        {
>>
>> +            ret = -EINVAL;
>>
>> +        }
>>
>> +    }
>>
>> +    if (ret == 0)
>>
>> +    {
>>
>> +        arinc653_vcpu_t * avcpu;
>>
>> +        arinc653_vcpu_t * avcpu_tmp;
>>
>> +
>>
>> +        /* copy the new schedule into place */
>>
>> +        num_schedule_entries = schedule->num_sched_entries;
>>
>> +        arinc653_major_frame = schedule->major_frame;
>>
>> +        for (int i = 0; i < schedule->num_sched_entries; i++)
>>
>> +        {
>>
>> +            memcpy(arinc653_schedule[i].dom_handle,
>>
>> +                    schedule->sched_entries[i].dom_handle,
>>
>> +                    sizeof(arinc653_schedule[i].dom_handle));
>>
>> +            arinc653_schedule[i].vcpu_id = schedule->sched_entries[i].vcpu_id;
>>
>> +            arinc653_schedule[i].runtime = schedule->sched_entries[i].runtime;
>>
>> +        }
>>
>> +
>>
>> +        /*
>>
>> +         * The newly installed schedule takes effect immediately.
>>
>> +         * We do not even wait for the current major frame to expire.
>>
>> +         * So, we need to update all of our VCPU lists to reflect the
>>
>> +         * new schedule here.
>>
>> +         */
>>
>> +
>>
>> +        /*
>>
>> +         * There should be nothing in the expired_list when we start the
>>
>> +         * next major frame for the new schedule, so move everything
>>
>> +         * currently there into the run_list.
>>
>> +         */
>>
>> +        list_splice_init(&expired_list, &run_list);
>>
>> +
>>
>> +        /*
>>
>> +         * Process entries on the run_list (this will now include
>>
>> +         * entries that just came from the expired list).
>>
>> +         * If the VCPU is in the current schedule, update its
>>
>> +         * runtime and mark it active.
>>
>> +         * The time_left parameter will be updated upon the next
>>
>> +         * invocation of the do_schedule callback function because a
>>
>> +         * new major frame will start.
>>
>> +         * It is just set to zero here "defensively."
>>
>> +         * If the VCPU is not in the new schedule, mark it inactive
>>
>> +         * and move it to the deactivated_run_list.
>>
>> +         */
>>
>> +        list_for_each_entry_safe(avcpu, avcpu_tmp, &run_list, list)
>>
>> +        {
>>
>> +            sched_entry_t * sched_entry = find_sched_entry(avcpu->vc);
>>
>> +            if (sched_entry != NULL)
>>
>> +            {
>>
>> +                avcpu->active = 1;
>>
>> +                avcpu->runtime = sched_entry->runtime;
>>
>> +                avcpu->time_left = 0;
>>
>> +            }
>>
>> +            else
>>
>> +            {
>>
>> +                avcpu->active = 0;
>>
>> +                list_move(&avcpu->list, &deactivated_run_list);
>>
>> +            }
>>
>> +        }
>>
>> +
>>
>> +        /*
>>
>> +         * Process entries on the blocked_list.
>>
>> +         * If the VCPU is in the current schedule, update its
>>
>> +         * runtime and mark it active.
>>
>> +         * The time_left parameter will be updated upon the next
>>
>> +         * invocation of the do_schedule callback function because a
>>
>> +         * new major frame will start.
>>
>> +         * It is just set to zero here "defensively."
>>
>> +         * If the VCPU is not in the new schedule, mark it inactive
>>
>> +         * and move it to the deactivated_blocked_list.
>>
>> +         */
>>
>> +        list_for_each_entry_safe(avcpu, avcpu_tmp, &blocked_list, list)
>>
>> +        {
>>
>> +            sched_entry_t * sched_entry = find_sched_entry(avcpu->vc);
>>
>> +            if (sched_entry != NULL)
>>
>> +            {
>>
>> +                avcpu->active = 1;
>>
>> +                avcpu->runtime = sched_entry->runtime;
>>
>> +                avcpu->time_left = 0;
>>
>> +            }
>>
>> +            else
>>
>> +            {
>>
>> +                avcpu->active = 0;
>>
>> +                list_move(&avcpu->list, &deactivated_blocked_list);
>>
>> +            }
>>
>> +        }
>>
>> +
>>
>> +        /*
>>
>> +         * Process entries on the deactivated_run_list.
>>
>> +         * If the VCPU is now in the current schedule, update its
>>
>> +         * runtime, mark it active, and move it to the run_list.
>>
>> +         * The time_left parameter will be updated upon the next
>>
>> +         * invocation of the do_schedule callback function because a
>>
>> +         * new major frame will start.
>>
>> +         * It is just set to zero here "defensively."
>>
>> +         * If the VCPU is not in the new schedule, do nothing because
>>
>> +         * it is already in the correct place.
>>
>> +         */
>>
>> +        list_for_each_entry_safe(avcpu, avcpu_tmp, &deactivated_run_list, list)
>>
>> +        {
>>
>> +            sched_entry_t * sched_entry = find_sched_entry(avcpu->vc);
>>
>> +            if (sched_entry != NULL)
>>
>> +            {
>>
>> +                avcpu->active = 1;
>>
>> +                avcpu->runtime = sched_entry->runtime;
>>
>> +                avcpu->time_left = 0;
>>
>> +                list_move(&avcpu->list, &run_list);
>>
>> +            }
>>
>> +        }
>>
>> +
>>
>> +        /*
>>
>> +         * Process entries on the deactivated_blocked_list.
>>
>> +         * If the VCPU is now in the current schedule, update its
>>
>> +         * runtime, mark it active, and move it to the blocked_list.
>>
>> +         * The time_left parameter will be updated upon the next
>>
>> +         * invocation of the do_schedule callback function because a
>>
>> +         * new major frame will start.
>>
>> +         * It is just set to zero here "defensively."
>>
>> +         * If the VCPU is not in the new schedule, do nothing because
>>
>> +         * it is already in the correct place.
>>
>> +         */
>>
>> +        list_for_each_entry_safe(avcpu, avcpu_tmp,
>>
>> +                &deactivated_blocked_list, list)
>>
>> +        {
>>
>> +            sched_entry_t * sched_entry = find_sched_entry(avcpu->vc);
>>
>> +            if (sched_entry != NULL)
>>
>> +            {
>>
>> +                avcpu->active = 1;
>>
>> +                avcpu->runtime = sched_entry->runtime;
>>
>> +                avcpu->time_left = 0;
>>
>> +                list_move(&avcpu->list, &blocked_list);
>>
>> +            }
>>
>> +        }
>>
>> +
>>
>> +        /*
>>
>> +         * Signal a new major frame to begin. The next major frame
>>
>> +         * is set up by the do_schedule callback function when it
>>
>> +         * is next invoked.
>>
>> +         */
>>
>> +        next_major_frame = NOW();
>>
>> +    }
>>
>> +    return ret;
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * Xen scheduler callback function to initialize a virtual CPU (VCPU)
>>
>> + *
>>
>> + * This function should return 0 if the VCPU is allowed to run and
>>
>> + * nonzero if there is an error.
>>
>> + */
>>
>> +static int arinc653_init_vcpu(struct vcpu * v)
>>
>> +{
>>
>> +    int ret = -1;
>>
>> +
>>
>> +    if (is_idle_vcpu(v))
>>
>> +    {
>>
>> +        /*
>>
>> +         * The idle VCPU is created by Xen to run when no domains
>>
>> +         * are runnable or require CPU time.
>>
>> +         * It is similar to an "idle task" or "halt loop" process
>>
>> +         * in an operating system.
>>
>> +         * We do not track any scheduler information for the idle VCPU.
>>
>> +         */
>>
>> +        v->sched_priv = NULL;
>>
>> +        ret = 0;
>>
>> +    }
>>
>> +    else
>>
>> +    {
>>
>> +        v->sched_priv = xmalloc(arinc653_vcpu_t);
>>
>> +        if (AVCPU(v) != NULL)
>>
>> +        {
>>
>> +            /*
>>
>> +             * Initialize our ARINC653 scheduler-specific information
>>
>> +             * for the VCPU.
>>
>> +             * The VCPU starts out on the blocked list.
>>
>> +             * When Xen is ready for the VCPU to run, it will call
>>
>> +             * the vcpu_wake scheduler callback function and our
>>
>> +             * scheduler will move the VCPU to the run_list.
>>
>> +             */
>>
>> +            sched_entry_t * sched_entry = find_sched_entry(v);
>>
>> +            AVCPU(v)->vc = v;
>>
>> +            AVCPU(v)->last_activation_time = 0;
>>
>> +            if (sched_entry != NULL)
>>
>> +            {
>>
>> +                /* the new VCPU is in the current schedule */
>>
>> +                AVCPU(v)->active = 1;
>>
>> +                AVCPU(v)->runtime = sched_entry->runtime;
>>
>> +                AVCPU(v)->time_left = sched_entry->runtime;
>>
>> +                list_add(&AVCPU(v)->list, &blocked_list);
>>
>> +            }
>>
>> +            else
>>
>> +            {
>>
>> +                /* the new VCPU is NOT in the current schedule */
>>
>> +                AVCPU(v)->active = 0;
>>
>> +                AVCPU(v)->runtime = 0;
>>
>> +                AVCPU(v)->time_left = 0;
>>
>> +                list_add(&AVCPU(v)->list, &deactivated_blocked_list);
>>
>> +            }
>>
>> +            ret = 0;
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    return ret;
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * Xen scheduler callback function to remove a VCPU
>>
>> + */
>>
>> +static void arinc653_destroy_vcpu(struct vcpu * v)
>>
>> +{
>>
>> +    if (AVCPU(v) != NULL)
>>
>> +    {
>>
>> +        /* remove the VCPU from whichever list it is on */
>>
>> +        list_del(&AVCPU(v)->list);
>>
>> +        /* free the arinc653_vcpu structure */
>>
>> +        xfree(AVCPU(v));
>>
>> +    }
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * This function searches the run list to find the next VCPU
>>
>> + * to run.
>>
>> + * If a VCPU other than domain 0 is runnable, it will be returned.
>>
>> + * Otherwise, if domain 0 is runnable, it will be returned.
>>
>> + * Otherwise, the idle task will be returned.
>>
>> + */
>>
>> +static struct vcpu * find_next_runnable_vcpu(void)
>>
>> +{
>>
>> +    arinc653_vcpu_t * avcpu;                    /* loop index variable */
>>
>> +    struct vcpu * dom0_task = NULL;
>>
>> +    struct vcpu * new_task = NULL;
>>
>> +    int cpu = smp_processor_id();
>>
>> +
>>
>> +    BUG_ON(cpu != 0);           /* this implementation only supports one CPU */
>>
>> +
>>
>> +    /* select a new task from the run_list to run, choosing any runnable
>>
>> +     * task other than domain 0 first */
>>
>> +    list_for_each_entry(avcpu, &run_list, list)
>>
>> +    {
>>
>> +        if (avcpu->vc->domain->domain_id == 0)
>>
>> +        {
>>
>> +            dom0_task = avcpu->vc;
>>
>> +        }
>>
>> +        else if (vcpu_runnable(avcpu->vc))
>>
>> +        {
>>
>> +            new_task = avcpu->vc;
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    if (new_task == NULL)
>>
>> +    {
>>
>> +        /* no non-dom0 runnable task was found */
>>
>> +        if (dom0_task != NULL && vcpu_runnable(dom0_task))
>>
>> +        {
>>
>> +            /* if domain 0 has credit left and is runnable, run it */
>>
>> +            new_task = dom0_task;
>>
>> +        }
>>
>> +        else
>>
>> +        {
>>
>> +            /* otherwise just run the idle task */
>>
>> +            new_task = IDLETASK(cpu);
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    return new_task;
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * Xen scheduler callback function to select a VCPU to run.
>>
>> + * This is the main scheduler routine.
>>
>> + */
>>
>> +static struct task_slice arinc653_do_schedule(s_time_t t)
>>
>> +{
>>
>> +    arinc653_vcpu_t * avcpu;                    /* loop index variable */
>>
>> +    struct task_slice ret;                      /* hold the chosen domain */
>>
>> +    struct vcpu * new_task = NULL;
>>
>> +
>>
>> +    /* current_task holds a pointer to the currently executing VCPU across
>>
>> +     * do_schedule invocations for credit accounting */
>>
>> +    static struct vcpu * current_task = NULL;
>>
>> +
>>
>> +    if (unlikely(next_major_frame == 0))        /* first run of do_schedule */
>>
>> +    {
>>
>> +        next_major_frame = t + arinc653_major_frame;
>>
>> +    }
>>
>> +    else if (t >= next_major_frame)             /* entered a new major frame */
>>
>> +    {
>>
>> +        next_major_frame = t + arinc653_major_frame;
>>
>> +        /* move everything that had expired last major frame to
>>
>> +         * the run list for this frame */
>>
>> +        list_splice_init(&expired_list, &run_list);
>>
>> +        list_for_each_entry(avcpu, &run_list, list)
>>
>> +        {
>>
>> +            /* restore domain credits for the new major frame */
>>
>> +            avcpu->time_left = avcpu->runtime;
>>
>> +            BUG_ON(avcpu->time_left <= 0);
>>
>> +        }
>>
>> +        list_for_each_entry(avcpu, &blocked_list, list)
>>
>> +        {
>>
>> +            /* restore domain credits for the new major frame */
>>
>> +            avcpu->time_left = avcpu->runtime;
>>
>> +            BUG_ON(avcpu->time_left <= 0);
>>
>> +        }
>>
>> +    }
>>
>> +    else if (AVCPU(current_task) != NULL)       /* not the idle task */
>>
>> +    {
>>
>> +        /*
>>
>> +         * The first time this function is called one of the previous if
>>
>> +         * statements will be true, and so current_task will get set
>>
>> +         * to a non-NULL value before this function is called again.
>>
>> +         * next_major_frame starts out as 0, so if it is not changed
>>
>> +         * the first if statement will be true.
>>
>> +         * If next_major_frame was changed from 0, it must have been
>>
>> +         * changed in the arinc653_sched_set() function, since that
>>
>> +         * is the only function other than this one that changes that
>>
>> +         * variable. In that case, it was set to the time at which
>>
>> +         * arinc653_sched_set() was called, and so when this function
>>
>> +         * is called the time will be greater than or equal to
>>
>> +         * next_major_frame, and so the second if statement will be true.
>>
>> +         */
>>
>> +
>>
>> +        /* we did not enter a new major frame, so decrease the
>>
>> +         * credits remaining for this domain for this frame */
>>
>> +        AVCPU(current_task)->time_left -=
>>
>> +            t - AVCPU(current_task)->last_activation_time;
>>
>> +        if (AVCPU(current_task)->time_left <= 0)
>>
>> +        {
>>
>> +            /* this domain has expended all of its time, so move it
>>
>> +             * to the expired list */
>>
>> +            AVCPU(current_task)->time_left = 0;
>>
>> +            list_move(&AVCPU(current_task)->list, &expired_list);
>>
>> +        }
>>
>> +    }
>>
>> +    else
>>
>> +    {
>>
>> +        /* only the idle VCPU will get here, and we do not do any
>>
>> +         * "credit" accounting for it */
>>
>> +        BUG_ON(!is_idle_vcpu(current_task));
>>
>> +    }
>>
>> +
>>
>> +    new_task = find_next_runnable_vcpu();
>>
>> +    BUG_ON(new_task == NULL);
>>
>> +
>>
>> +    /* if we are switching to a task that we are tracking
>>
>> +     * information for, set its last activation time */
>>
>> +    if (AVCPU(new_task) != NULL)
>>
>> +    {
>>
>> +        AVCPU(new_task)->last_activation_time = t;
>>
>> +        BUG_ON(AVCPU(new_task)->time_left <= 0);
>>
>> +    }
>>
>> +
>>
>> +    /* Check to make sure we did not miss a major frame.
>>
>> +     * This is a good test for robust partitioning. */
>>
>> +    BUG_ON(t >= next_major_frame);
>>
>> +
>>
>> +    ret.time = is_idle_vcpu(new_task)
>>
>> +        ? next_major_frame - t      /* run idle task until next major frame */
>>
>> +        : AVCPU(new_task)->time_left;   /* run for entire slice this frame */
>>
>> +    ret.task = new_task;
>>
>> +    current_task = new_task;
>>
>> +
>>
>> +    BUG_ON(ret.time <= 0);
>>
>> +
>>
>> +    return ret;
>>
>> +}
>>
>> +
>>
>> +/* Xen scheduler callback function to select a CPU for the VCPU to run on */
>>
>> +static int arinc653_pick_cpu(struct vcpu * v)
>>
>> +{
>>
>> +    /* this implementation only supports one physical CPU */
>>
>> +    return 0;
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * Xen scheduler callback function to wake up a VCPU
>>
>> + * Xen may call this scheduler callback function for VCPUs that are
>>
>> + * not in the current ARINC653 run list. We keep track of the fact
>>
>> + * that they have been "woken up" but we still do not let them run.
>>
>> + *
>>
>> + * If the VCPU is not in the current schedule:
>>
>> + *   Move it to the deactivated run list.
>>
>> + * Otherwise:
>>
>> + *   If the VCPU still has credit left for this major frame:
>>
>> + *     Move the VCPU to the run list
>>
>> + *   Otherwise:
>>
>> + *     Move the VCPU to the expired list
>>
>> + */
>>
>> +static void arinc653_vcpu_wake(struct vcpu * vc)
>>
>> +{
>>
>> +    /* boolean flag to indicate first run */
>>
>> +    static bool_t dont_raise_softirq = 0;
>>
>> +
>>
>> +    if (AVCPU(vc) != NULL)  /* check that this is a VCPU we are tracking */
>>
>> +    {
>>
>> +        if (AVCPU(vc)->active)
>>
>> +        {
>>
>> +            /* the VCPU is in the current ARINC653 schedule */
>>
>> +            if (AVCPU(vc)->time_left > 0)
>>
>> +            {
>>
>> +                /* the domain has credit remaining for this frame
>>
>> +                 * so put it on the run list */
>>
>> +                list_move(&AVCPU(vc)->list, &run_list);
>>
>> +            }
>>
>> +            else
>>
>> +            {
>>
>> +                /* otherwise put it on the expired list for next frame */
>>
>> +                list_move(&AVCPU(vc)->list, &expired_list);
>>
>> +            }
>>
>> +        }
>>
>> +        else
>>
>> +        {
>>
>> +            /* VCPU is not allowed to run according to this schedule! */
>>
>> +            list_move(&AVCPU(vc)->list, &deactivated_run_list);
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    /* the first time the vcpu_wake function is called, we should raise
>>
>> +     * a softirq to invoke the do_scheduler callback */
>>
>> +    if (!dont_raise_softirq)
>>
>> +    {
>>
>> +        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
>>
>> +        dont_raise_softirq = 1;
>>
>> +    }
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * Xen scheduler callback function to sleep a VCPU
>>
>> + * This function will remove the VCPU from the run list.
>>
>> + * If the VCPU is in the current schedule:
>>
>> + *   Move it to the blocked list.
>>
>> + * Otherwise:
>>
>> + *   Move it to the deactivated blocked list.
>>
>> + */
>>
>> +static void arinc653_vcpu_sleep(struct vcpu * vc)
>>
>> +{
>>
>> +    if (AVCPU(vc) != NULL)  /* check that this is a VCPU we are tracking */
>>
>> +    {
>>
>> +        if (AVCPU(vc)->active)                  /* if in current schedule */
>>
>> +        {
>>
>> +            list_move(&AVCPU(vc)->list, &blocked_list);
>>
>> +        }
>>
>> +        else
>>
>> +        {
>>
>> +            list_move(&AVCPU(vc)->list, &deactivated_blocked_list);
>>
>> +        }
>>
>> +    }
>>
>> +
>>
>> +    /* if the VCPU being put to sleep is the same one that is currently
>>
>> +     * running, raise a softirq to invoke the scheduler to switch domains */
>>
>> +    if (per_cpu(schedule_data, vc->processor).curr == vc)
>>
>> +    {
>>
>> +        cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
>>
>> +    }
>>
>> +}
>>
>> +
>>
>> +/*
>>
>> + * This structure defines our scheduler for Xen.
>>
>> + * The entries tell Xen where to find our scheduler-specific
>>
>> + * callback functions.
>>
>> + * The symbol must be visible to the rest of Xen at link time.
>>
>> + */
>>
>> +struct scheduler sched_arinc653_def = {
>>
>> +    .name           = "ARINC 653 Scheduler",
>>
>> +    .opt_name       = "arinc653",
>>
>> +    .sched_id       = XEN_SCHEDULER_ARINC653,
>>
>> +
>>
>> +    .init_domain    = NULL,
>>
>> +    .destroy_domain = NULL,
>>
>> +
>>
>> +    .init_vcpu      = arinc653_init_vcpu,
>>
>> +    .destroy_vcpu   = arinc653_destroy_vcpu,
>>
>> +
>>
>> +    .do_schedule    = arinc653_do_schedule,
>>
>> +    .pick_cpu       = arinc653_pick_cpu,
>>
>> +    .dump_cpu_state = NULL,
>>
>> +    .sleep          = arinc653_vcpu_sleep,
>>
>> +    .wake           = arinc653_vcpu_wake,
>>
>> +    .adjust         = NULL,
>>
>> +};
>>
>> diff -rupN a/xen/common/schedule.c b/xen/common/schedule.c
>>
>> --- a/xen/common/schedule.c  2009-08-06 09:57:27.000000000 -0400
>>
>> +++ b/xen/common/schedule.c                2010-03-19 09:13:50.792881300 -0400
>>
>> @@ -7,7 +7,8 @@
>>
>>   *        File: common/schedule.c
>>
>>   *      Author: Rolf Neugebauer & Keir Fraser
>>
>>   *              Updated for generic API by Mark Williamson
>>
>> - *
>>
>> + *              ARINC653 scheduler added by DornerWorks
>>
>> + *
>>
>>   * Description: Generic CPU scheduling code
>>
>>   *              implements support functionality for the Xen scheduler API.
>>
>>   *
>>
>> @@ -25,6 +26,7 @@
>>
>>  #include <xen/timer.h>
>>
>>  #include <xen/perfc.h>
>>
>>  #include <xen/sched-if.h>
>>
>> +#include <xen/sched_arinc653.h>
>>
>>  #include <xen/softirq.h>
>>
>>  #include <xen/trace.h>
>>
>>  #include <xen/mm.h>
>>
>> @@ -58,9 +60,11 @@ DEFINE_PER_CPU(struct schedule_data, sch
>>
>>
>>
>>  extern struct scheduler sched_sedf_def;
>>
>>  extern struct scheduler sched_credit_def;
>>
>> +extern struct scheduler sched_arinc653_def;
>>
>>  static struct scheduler *schedulers[] = {
>>
>>      &sched_sedf_def,
>>
>>      &sched_credit_def,
>>
>> +    &sched_arinc653_def,
>>
>>      NULL
>>
>>  };
>>
>>
>>
>> @@ -657,6 +661,27 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HAN
>>
>>          break;
>>
>>      }
>>
>>
>>
>> +    case SCHEDOP_arinc653_sched_set:
>>
>> +    {
>>
>> +        sched_arinc653_sched_set_t sched_set;
>>
>> +
>>
>> +        if (!IS_PRIV(current->domain))
>>
>> +        {
>>
>> +            ret = -EPERM;
>>
>> +            break;
>>
>> +        }
>>
>> +
>>
>> +        if (copy_from_guest(&sched_set, arg, 1) != 0)
>>
>> +        {
>>
>> +            ret = -EFAULT;
>>
>> +            break;
>>
>> +        }
>>
>> +
>>
>> +        ret = arinc653_sched_set(&sched_set);
>>
>> +
>>
>> +        break;
>>
>> +    }
>>
>> +
>>
>>      default:
>>
>>          ret = -ENOSYS;
>>
>>      }
>>
>> diff -rupN a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>
>> --- a/xen/include/public/domctl.h          2009-08-06 09:57:28.000000000 -0400
>>
>> +++ b/xen/include/public/domctl.h       2010-03-19 09:15:27.229190500 -0400
>>
>> @@ -23,6 +23,8 @@
>>
>>   *
>>
>>   * Copyright (c) 2002-2003, B Dragovic
>>
>>   * Copyright (c) 2002-2006, K Fraser
>>
>> + *
>>
>> + * ARINC653 Scheduler type added by DornerWorks <DornerWorks.com>.
>>
>>   */
>>
>>
>>
>>  #ifndef __XEN_PUBLIC_DOMCTL_H__
>>
>> @@ -297,6 +299,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_max_v
>>
>>  /* Scheduler types. */
>>
>>  #define XEN_SCHEDULER_SEDF     4
>>
>>  #define XEN_SCHEDULER_CREDIT   5
>>
>> +#define XEN_SCHEDULER_ARINC653 6
>>
>>  /* Set or get info? */
>>
>>  #define XEN_DOMCTL_SCHEDOP_putinfo 0
>>
>>  #define XEN_DOMCTL_SCHEDOP_getinfo 1
>>
>> diff -rupN a/xen/include/public/sched.h b/xen/include/public/sched.h
>>
>> --- a/xen/include/public/sched.h            2009-08-06 09:57:28.000000000 -0400
>>
>> +++ b/xen/include/public/sched.h          2010-03-19 09:15:17.682315500 -0400
>>
>> @@ -22,6 +22,8 @@
>>
>>   * DEALINGS IN THE SOFTWARE.
>>
>>   *
>>
>>   * Copyright (c) 2005, Keir Fraser <keir@xxxxxxxxxxxxx>
>>
>> + *
>>
>> + * ARINC653 Schedule set added by DornerWorks <DornerWorks.com>.
>>
>>   */
>>
>>
>>
>>  #ifndef __XEN_PUBLIC_SCHED_H__
>>
>> @@ -108,6 +110,40 @@ DEFINE_XEN_GUEST_HANDLE(sched_remote_shu
>>
>>  #define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
>>
>>  #define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
>>
>>
>>
>> +/*
>>
>> + * Set the ARINC653 schedule. The new schedule takes effect immediately.
>>
>> + * The scheduler does not wait for the current major frame to expire
>>
>> + * before switching to the new schedule.
>>
>> + */
>>
>> +#define SCHEDOP_arinc653_sched_set      5
>>
>> +#define ARINC653_MAX_DOMAINS_PER_SCHEDULE   64
>>
>> +/*
>>
>> + * This structure is used to pass a new ARINC653 schedule from a
>>
>> + * privileged domain (ie dom0) to Xen.
>>
>> + */
>>
>> +struct sched_arinc653_sched_set {
>>
>> +    /* major_frame holds the time for the new schedule's major frame
>>
>> +     * in nanoseconds. */
>>
>> +    int64_t     major_frame;
>>
>> +    /* num_sched_entries holds how many of the entries in the
>>
>> +     * sched_entries[] array are valid. */
>>
>> +    uint8_t     num_sched_entries;
>>
>> +    /* The sched_entries array holds the actual schedule entries. */
>>
>> +    struct {
>>
>> +        /* dom_handle must match a domain's UUID */
>>
>> +        xen_domain_handle_t dom_handle;
>>
>> +        /* If a domain has multiple VCPUs, vcpu_id specifies which one
>>
>> +         * this schedule entry applies to. It should be set to 0 if
>>
>> +         * there is only one VCPU for the domain. */
>>
>> +        int                 vcpu_id;
>>
>> +        /* runtime specifies the amount of time that should be allocated
>>
>> +         * to this VCPU per major frame. It is specified in nanoseconds */
>>
>> +        int64_t             runtime;
>>
>> +    } sched_entries[ARINC653_MAX_DOMAINS_PER_SCHEDULE];
>>
>> +};
>>
>> +typedef struct sched_arinc653_sched_set sched_arinc653_sched_set_t;
>>
>> +DEFINE_XEN_GUEST_HANDLE(sched_arinc653_sched_set_t);
>>
>> +
>>
>>  #endif /* __XEN_PUBLIC_SCHED_H__ */
>>
>>
>>
>>  /*
>>
>> diff -rupN a/xen/include/xen/sched_arinc653.h b/xen/include/xen/sched_arinc653.h
>>
>> --- a/xen/include/xen/sched_arinc653.h              1969-12-31 19:00:00.000000000 -0500
>>
>> +++ b/xen/include/xen/sched_arinc653.h           2010-03-19 08:58:10.346112800 -0400
>>
>> @@ -0,0 +1,44 @@
>>
>> +/*
>>
>> + * File: sched_arinc653.h
>>
>> + * Copyright (c) 2009, DornerWorks, Ltd. <DornerWorks.com>
>>
>> + *
>>
>> + * Description:
>>
>> + *   This file prototypes global ARINC653 scheduler functions.  Scheduler
>>
>> + *   callback functions are static to the sched_arinc653 module and do not
>>
>> + *   need global prototypes here.
>>
>> + *
>>
>> + * This program is free software; you can redistribute it and/or modify it
>>
>> + * under the terms of the GNU General Public License as published by the Free
>>
>> + * software Foundation; either version 2 of the License, or (at your option)
>>
>> + * any later version.
>>
>> + *
>>
>> + * This program is distributed in the hope that it will be useful, but WITHOUT
>>
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>>
>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
>>
>> + * more details.
>>
>> + */
>>
>> +
>>
>> +#ifndef __SCHED_ARINC653_H__
>>
>> +#define __SCHED_ARINC653_H__
>>
>> +
>>
>> +#include <public/sched.h>
>>
>> +
>>
>> +/*
>>
>> + * arinc653_sched_set() is used to put a new schedule in place for the
>>
>> + * ARINC653 scheduler.
>>
>> + * Whereas the credit scheduler allows configuration by changing weights
>>
>> + * and caps on a per-domain basis, the ARINC653 scheduler allows configuring
>>
>> + * a complete list of domains that are allowed to run, and the duration they
>>
>> + * are allowed to run.
>>
>> + * This is a global call instead of a domain-specific setting, so a prototype
>>
>> + * is placed here for the rest of Xen to access. Currently it is only
>>
>> + * called by the do_sched_op() function in xen/common/schedule.c
>>
>> + * in response to a hypercall.
>>
>> + *
>>
>> + * Return values:
>>
>> + *   0          Success
>>
>> + *   -EINVAL    Invalid Parameter
>>
>> + */
>>
>> +int arinc653_sched_set(sched_arinc653_sched_set_t * schedule);
>>
>> +
>>
>> +#endif /* __SCHED_ARINC653_H__ */
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
