Task Scheduling I

Module 6: Embedded System Software
Lesson 29: Real-Time Task Scheduling – Part 1
Version 2 EE IIT, Kharagpur
Specific Instructional Objectives
At the end of this lesson, the student would be able to:
• Understand the basic terminologies associated with Real-Time task scheduling
• Classify the Real-Time tasks with respect to their recurrence
• Get an overview of the different types of schedulers
• Get an overview of the various ways of classifying scheduling algorithms
• Understand the logic of clock-driven scheduling
• Get an overview of table-driven schedulers
• Get an overview of cyclic schedulers
• Work out problems related to table-driven and cyclic schedulers
• Understand how a generalized task scheduler would function
• Compare table-driven and cyclic schedulers
Task Instance: Each time an event occurs, it triggers the task that handles this event to run.
In other words, a task is generated when some specific event occurs. Real-time tasks therefore
normally recur a large number of times at different instants of time depending on the event
occurrence times. It is possible that real-time tasks recur at random instants. However, most
real-time tasks recur with certain fixed periods. For example, a temperature sensing task in a
chemical plant might recur indefinitely with a certain period because the temperature is sampled
periodically, whereas a task handling a device interrupt might recur at random instants. Each
time a task recurs, it is called an instance of the task. The first time a task occurs, it is
called the first instance of the task. The next occurrence of the task is called its second
instance, and so on. The jth instance of a task Ti would be denoted as Ti(j). Each instance of a
real-time task is associated with a deadline by which it needs to complete and produce
results. We shall at times refer to task instances as processes and use these two terms
interchangeably when no confusion arises.
Relative Deadline versus Absolute Deadline: The absolute deadline of a task is the
absolute time value (counted from time 0) by which the results from the task are
expected. Thus, absolute deadline is equal to the interval of time between the time 0 and the
actual instant at which the deadline occurs, as measured by some physical clock. In contrast,
the relative deadline is the time interval between the start of the task and the instant at which
the deadline occurs. In other words, the relative deadline is the time interval between the arrival
of a task and the corresponding deadline. The difference between relative and absolute
deadlines is illustrated in Fig. 29.1. It can be observed from Fig. 29.1 that the relative deadline
of the task Ti(1) is d, whereas its absolute deadline is φ + d.
Response Time: The response time of a task is the time it takes (as measured from the task
arrival time) for the task to produce its results. As already remarked, task instances get generated
due to the occurrence of events; the response time is therefore the duration from the occurrence
of the event generating the task to the time the task produces its results.
For hard real-time tasks, as long as all their deadlines are met, there is no special advantage
of completing the tasks early. However, for soft real-time tasks, average response time of tasks
is an important metric to measure the performance of a scheduler. A scheduler for soft real-
time tasks should try to execute the tasks in an order that minimizes the average response
time of tasks.
Task Precedence: A task is said to precede another task, if the first task must complete
before the second task can start. When a task Ti precedes another task Tj, then each instance of
Ti precedes the corresponding instance of Tj. That is, if T1 precedes T2, then T1(1) precedes
T2(1), T1(2) precedes T2(2), and so on. A precedence order defines a partial order among tasks.
Recollect from a first course on discrete mathematics that a partial order relation is
reflexive, antisymmetric, and transitive. An example partial ordering among tasks is shown in
Fig. 29.2. Here T1 precedes T2, but we cannot relate T1 with either T3 or T4. We shall later
use task precedence relation to develop appropriate task scheduling algorithms.
Fig. 29.2 An Example Partial Ordering Among Tasks T1, T2, T3 and T4
Periodic Task: A periodic task is one that repeats after a certain fixed time interval. The
precise time instants at which periodic tasks recur are usually demarcated by clock interrupts.
For this reason, periodic tasks are sometimes referred to as clock-driven tasks. The fixed time
interval after which a task repeats is called the period of the task. If Ti is a periodic task, then the
time from 0 till the occurrence of the first instance of Ti (i.e. Ti(1)) is denoted by φi, and is
called the phase of the task. The second instance (i.e. Ti(2)) occurs at φi + pi. The third instance
(i.e. Ti(3)) occurs at φi + 2 ∗ pi and so on. Formally, a periodic task Ti can be represented by a
4-tuple (φi, pi, ei, di), where pi is the period of the task, ei is the worst case execution time of the
task, and di is the relative deadline of the task. We shall use this notation extensively in future
discussions.
Fig. 29.3 Track Correction Task (2000 mSec; pi; ei; di) of a Rocket
To illustrate the above notation to represent real-time periodic tasks, let us consider
the track correction task typically found in a rocket control software. Assume the following
characteristics of the track correction task. The track correction task starts 2000 milliseconds
after the launch of the rocket, and recurs periodically every 50 milliseconds then on. Each
instance of the task requires a processing time of 8 milliseconds and its relative deadline is 50
milliseconds. Recall that the phase of a task is defined by the occurrence time of the first
instance of the task. Therefore, the phase of this task is 2000 milliseconds. This task can formally
be represented as (2000 mSec, 50 mSec, 8 mSec, 50 mSec). This task is pictorially shown in Fig.
29.3. When the deadline of a task equals its period (i.e. pi=di), we can omit the fourth parameter.
In this case, we can represent the task as Ti = (2000 mSec, 50 mSec, 8 mSec). This would
automatically mean pi=di=50 mSec. Similarly, when φi = 0, it can be omitted when no confusion
arises. So, Ti = (100 mSec, 20 mSec) would indicate a task with φi = 0, pi=100 mSec, ei=20 mSec,
and di=100mSec. Whenever there is any scope for confusion, we shall explicitly write out the
parameters Ti = (pi=50 mSecs, ei = 8 mSecs, di = 40 mSecs), etc.
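The 4-tuple notation lends itself to a direct implementation. The sketch below (the class name PeriodicTask and its method names are illustrative, not from any standard library) computes instance arrival times and absolute deadlines for the track correction task of Fig. 29.3.

```python
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    """A periodic task (phi, p, e, d): phase, period, worst case
    execution time, and relative deadline."""
    phi: int  # phase: arrival time of the first instance Ti(1)
    p: int    # period
    e: int    # worst case execution time
    d: int    # relative deadline

    def arrival(self, j):
        # Ti(j) arrives at phi + (j - 1) * p, for j = 1, 2, 3, ...
        return self.phi + (j - 1) * self.p

    def absolute_deadline(self, j):
        # absolute deadline = arrival time + relative deadline
        return self.arrival(j) + self.d

# The rocket track correction task (2000 mSec, 50 mSec, 8 mSec, 50 mSec)
track = PeriodicTask(phi=2000, p=50, e=8, d=50)
print(track.arrival(1))            # 2000 mSec: the phase
print(track.arrival(3))            # 2100 mSec, i.e. phi + 2 * p
print(track.absolute_deadline(1))  # 2050 mSec
```

Note how the absolute deadline of an instance is simply its arrival time plus the relative deadline, matching the distinction drawn in Fig. 29.1.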
Sporadic Task: A sporadic task is one that recurs at random instants. A sporadic task Ti
can be represented by a three tuple:
Ti = (ei, gi, di)
where ei is the worst case execution time of an instance of the task, gi denotes the minimum
separation between two consecutive instances of the task, di is the relative deadline. The
minimum separation (gi) between two consecutive instances of the task implies that once an
instance of a sporadic task occurs, the next instance cannot occur before gi time units have
elapsed. That is, gi restricts the rate at which sporadic tasks can arise. As done for
periodic tasks, we shall use the convention that the first instance of a sporadic task Ti is denoted
by Ti(1) and the successive instances by Ti(2), Ti(3), etc.
Many sporadic tasks such as emergency message arrivals are highly critical in nature. For
example, in a robot a task that gets generated to handle an obstacle that suddenly appears is a
sporadic task. In a factory, the task that handles fire conditions is a sporadic task. The time of
occurrence of these tasks can not be predicted.
The criticality of sporadic tasks varies from highly critical to moderately critical. For
example, an I/O device interrupt, or a DMA interrupt is moderately critical. However, a
task handling the reporting of fire conditions is highly critical.
Aperiodic Task: An aperiodic task is in many ways similar to a sporadic task. An aperiodic
task can arise at random instants. However, in case of an aperiodic task, the minimum separation
gi between two consecutive instances can be 0. That is, two or more instances of an
aperiodic task might occur at the same time instant. Also, the deadline for an aperiodic
task is expressed as either an average value or is expressed statistically. Aperiodic tasks are
generally soft real-time tasks.
It is easy to realize why aperiodic tasks need to be soft real-time tasks. Aperiodic
tasks can recur in quick succession. It therefore becomes very difficult to meet the deadlines
of all instances of an aperiodic task. When several aperiodic tasks recur in a quick
succession, there is a bunching of the task instances and it might lead to a few deadline misses.
As already discussed, soft real-time tasks can tolerate a few deadline misses. An example of an
aperiodic task is a logging task in a distributed system. The logging task can be started by
different tasks running on different nodes. The logging requests from different tasks may arrive
at the logger almost at the same time, or the requests may be spaced out in time. Other examples
of aperiodic tasks include operator requests, keyboard presses, mouse movements, etc. In fact,
all interactive commands issued by users are handled by aperiodic tasks.
Valid Schedule: A valid schedule for a set of tasks is one where at most one task is assigned
to a processor at a time, no task is scheduled before its arrival time, and the precedence and
resource constraints of all tasks are satisfied.
Feasible Schedule: A valid schedule is called a feasible schedule, only if all tasks meet their
respective time constraints in the schedule.
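The two definitions above can be expressed as a small checker. This is only a sketch for a uniprocessor: the (task, start, finish, arrival) layout of a schedule entry is our own assumption, and precedence and resource constraints are omitted.

```python
def is_valid(schedule):
    """Validity on a uniprocessor: at most one task runs at a time,
    and no task starts before its arrival.  Each schedule entry is a
    (task, start, finish, arrival) tuple; precedence and resource
    constraints are left out of this sketch."""
    entries = sorted(schedule, key=lambda entry: entry[1])
    for (_, _, finish, _), (_, next_start, _, _) in zip(entries, entries[1:]):
        if finish > next_start:  # two tasks would overlap on the processor
            return False
    return all(start >= arrival for (_, start, _, arrival) in entries)

def is_feasible(schedule, deadline):
    """A valid schedule is feasible if every entry finishes by its
    absolute deadline; `deadline` maps a task name to that value."""
    return is_valid(schedule) and all(
        finish <= deadline[task] for (task, _, finish, _) in schedule)

# T1 runs in [0, 3), T2 in [3, 5); both arrive before they start
sched = [("T1", 0, 3, 0), ("T2", 3, 5, 1)]
print(is_valid(sched))                         # True
print(is_feasible(sched, {"T1": 4, "T2": 5}))  # True
print(is_feasible(sched, {"T1": 2, "T2": 5}))  # False: T1 finishes at 3
```

The last call shows that a schedule can be valid yet infeasible: validity constrains only the processor and arrival times, while feasibility additionally demands that every deadline be met.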
Proficient Scheduler: A task scheduler sch1 is said to be more proficient than another
scheduler sch2, if sch1 can feasibly schedule all task sets that sch2 can feasibly schedule, but not
vice versa. That is, sch1 can feasibly schedule all task sets that sch2 can, but there exists at least
one task set that sch2 can not feasibly schedule, whereas sch1 can. If sch1 can feasibly schedule
all task sets that sch2 can feasibly schedule and vice versa, then sch1 and sch2 are called equally
proficient schedulers.
Optimal Scheduler: A real-time task scheduler is called optimal, if it can feasibly schedule
any task set that can be feasibly scheduled by any other scheduler. In other words, it would
not be possible to find a more proficient scheduling algorithm than an optimal scheduler.
If an optimal scheduler can not schedule some task set, then no other scheduler should be
able to produce a feasible schedule for that task set.
Scheduling Points: The scheduling points of a scheduler are the points on the time line at which
the scheduler makes decisions regarding which task is to be run next. It is important to note that
a task scheduler does not need to run continuously; it is activated by the operating system only at
the scheduling points to decide which task to run next. In a
clock-driven scheduler, the scheduling points are defined at the time instants marked by
interrupts generated by a periodic timer. The scheduling points in an event-driven scheduler are
determined by occurrence of certain events.
Utilization: The processor utilization (or simply utilization) of a task is the average time for
which it executes per unit time interval. In notations: for a periodic task Ti, the utilization ui =
ei/pi, where ei is the execution time and pi is the period of Ti. For a set of n periodic tasks {Ti},
the total utilization due to all tasks is U = ∑i=1..n (ei/pi). It is the objective of any good scheduling
algorithm to feasibly schedule even those task sets that have very high utilization, i.e. utilization
approaching 1. Of course, on a uniprocessor it is not possible to schedule task sets having
utilization more than 1.
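As a quick sketch, the total utilization can be computed directly from the (ei, pi) pairs; the task set used below is the one that appears in Example 2 later in this lesson.

```python
def utilization(tasks):
    """Total utilization U = sum of e_i / p_i over a set of periodic
    tasks, each given as an (e_i, p_i) pair."""
    return sum(e / p for e, p in tasks)

# Task set of Example 2 later in this lesson:
# T1 = (e=1, p=4), T2 = (e=2, p=5), T3 = (e=5, p=20)
U = utilization([(1, 4), (2, 5), (5, 20)])
print(round(U, 3))  # 0.9 -- schedulable on a uniprocessor only if U <= 1
```

Since U = 0.25 + 0.4 + 0.25 = 0.9 does not exceed 1, this task set is not ruled out on utilization grounds; whether a particular scheduler can actually schedule it feasibly is a separate question taken up in the examples.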
Jitter: Jitter is the deviation of a periodic task from its strict periodic behavior. The
arrival time jitter is the deviation of the task from arriving at the precise periodic time of arrival.
It may be caused by imprecise clocks, or other factors such as network congestion. Similarly,
completion time jitter is the deviation of the completion of a task from precise periodic points.
Completion time jitter may be caused by the scheduling algorithm employed, which takes up a
task for scheduling as per convenience and the load at an instant, rather than at strict time
instants. Jitter is undesirable for some applications.
The clock-driven schedulers are those in which the scheduling points are determined by the
interrupts received from a clock. In the event-driven ones, the scheduling points are defined
by the occurrence of certain events other than clock interrupts. The hybrid ones use both clock interrupts
as well as event occurrences to define their scheduling points.
A few important members of each of these three broad classes of scheduling algorithms are
the following:
1. Clock Driven
• Table-driven
• Cyclic
2. Event Driven
• Simple priority-based
• Rate Monotonic Analysis (RMA)
• Earliest Deadline First (EDF)
3. Hybrid
• Round-robin
A major cycle of a set of tasks is an interval of time on the time line such that in each major
cycle, the different tasks recur identically.
In the reasoning we presented above for the computation of the size of a schedule table, one
assumption that we implicitly made is that φi = 0. That is, all tasks are in phase.
Task    Start time in millisecs
T1      0
T2      3
T3      10
T4      12
T5      17
1.5.2. Theorem 1
The major cycle of a set of tasks ST = {T1, T2, … , Tn} is LCM ({p1, p2, … , pn}) even when the
tasks have arbitrary phasing.
Proof: As per our definition of a major cycle, even when tasks have non-zero phasing, task
instances would repeat the same way in each major cycle. Let us consider an example in which
the occurrences of a task Ti in a major cycle are as shown in Fig. 29.4. As shown in the example
of Fig. 29.4, there are k-1 occurrences of the task Ti during a major cycle. The first occurrence
of Ti starts φ time units from the start of the major cycle. The major cycle ends x time units after
the last (i.e. (k-1)th) occurrence of the task Ti in the major cycle. Of course, this must be the
same in each major cycle.
Fig. 29.4 Occurrences of the Task Ti in Two Consecutive Major Cycles of Size M (φ + x = pi)
Assume that the size of each major cycle is M. Then, from an inspection of Fig. 29.4, for the
task to repeat identically in each major cycle:
M = (k-1)pi + φ + x …(2.1)
Now, for the task Ti to have identical occurrence times in each major cycle, φ + x must equal
pi (see Fig. 29.4).
Substituting this in Expr. 2.1, we get, M = (k-1)∗ pi + pi = k∗ pi …(2.2)
So, the major cycle M contains an integral multiple of pi. This argument holds for each task
in the task set irrespective of its phase. Therefore M = LCM ({p1, p2, … , pn}).
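Theorem 1 reduces the computation of the major cycle to an LCM over the periods. A minimal sketch, assuming integer periods (the function name major_cycle is ours):

```python
from math import gcd
from functools import reduce

def major_cycle(periods):
    """Major cycle M = LCM of the task periods.  By Theorem 1 this
    holds even when the tasks have arbitrary phasing."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(major_cycle([4, 5, 20]))  # 20, as used in Examples 1 and 2
```

The pairwise reduction works because lcm(a, b) = a·b / gcd(a, b) and the LCM operation is associative.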
Task Number    Frame Number
T3             f1
T1             f2
T3             f3
T4             f4
The size of the frame to be used by the scheduler is an important design parameter and needs
to be chosen very carefully. A selected frame size should satisfy the following three constraints.
Fig. 29.7 No Full Frame Exists Between the Arrival (t) and Deadline (d) of a Task
3. Satisfaction of Task Deadline: This third constraint on frame size is necessary to meet
the task deadlines. This constraint imposes that between the arrival of a task and its
deadline, there must exist at least one full frame. This constraint ensures that a task does not
miss its deadline merely because it could not be taken up for scheduling before its deadline
became imminent. Consider this: a task can only be taken up for scheduling at the
start of a frame. If between the arrival and completion of a task, not even one frame
exists, a situation as shown in Fig. 29.7 might arise. In this case, the task arrives
sometime after the kth frame has started. Obviously it can not be taken up for
scheduling in the kth frame and can only be taken up in the (k+1)th frame. But, then it
may be too late to meet its deadline, since the execution time of a task can be up to the
size of a full frame. This might result in the task missing its deadline, since the task
might complete only at the end of the (k+1)th frame, much after the deadline d has
passed. We therefore need a full frame to exist between the arrival of a task and its
deadline as shown in Fig. 29.8, so that task deadlines could be met.
Fig. 29.8 A Full Frame Exists Between the Arrival and Deadline of a Task
More formally, this constraint can be formulated as follows: Suppose a task arrives
∆t time units after the start of a frame (see Fig. 29.8). Then, assuming that a
single frame is sufficient to complete the task, the task can complete before its deadline
iff (2F − ∆t) ≤ di, or 2F ≤ (di + ∆t). …(2.4)
Remember that the value of ∆t might vary from one instance of the task to another. The
worst case scenario (where the task is likely to miss its deadline) occurs for the task
instance having the minimum value of ∆t, such that ∆t > 0. This is the worst case
scenario, since under this the task would have to wait the longest before its execution can
start.
It should be clear that if a task arrives just after a frame has started, then the task would
have to wait for the full duration of the current frame before it can be taken up for
execution. If a task at all misses its deadline, then certainly it would be under such
situations. In other words, the worst case scenario for a task to meet its deadline occurs
for its instance that has the minimum separation from the start of a frame. The
determination of the minimum separation value (i.e. min(∆t)) for a task among all
instances of the task would help in determining a feasible frame size. We show by
Theorem 2.2 that min(∆t) is equal to gcd(F, pi). Consequently, this constraint can be
written as:
for every Ti, 2F – gcd(F, pi) ≤ di …(2.5)
Note that this constraint defines an upper-bound on frame size for a task Ti, i.e.,
if the frame size is any larger than the defined upper-bound, then tasks might miss their
deadlines. Expr. 2.5 constrains the frame size from the consideration of one task only.
Now considering all tasks, the frame size must be no larger than min over all Ti of (gcd(F, pi)+di)/2.
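The claim that min(∆t) equals gcd(F, pi), proved as Theorem 2 below, can also be checked by brute force: with zero phasing, instance j of Ti arrives at (j−1)·pi, and its offset from the start of the enclosing frame is that arrival time mod F. The helper below is illustrative only.

```python
from math import gcd

def min_separation(F, p, instances=1000):
    """Smallest positive offset of any arrival of a zero-phasing task
    with period p from the start of its enclosing frame of size F,
    found by brute force over many instances."""
    offsets = {(j * p) % F for j in range(instances)}
    positive = [o for o in offsets if o > 0]
    # When F divides p, every arrival coincides with a frame start;
    # gcd(F, p) is then F itself.
    return min(positive) if positive else F

# Theorem 2 predicts min(delta-t) = gcd(F, p)
for F, p in [(2, 5), (4, 6), (5, 4), (10, 4), (2, 4)]:
    assert min_separation(F, p) == gcd(F, p)
print("min separation equals gcd(F, p) in every trial")
```

For instance, with F = 10 and p = 4 the arrival offsets cycle through 0, 4, 8, 2, 6, …, whose smallest positive value is 2 = gcd(10, 4).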
1.5.4. Theorem 2
The minimum separation of the task arrival from the corresponding frame start time
(min(∆t)), considering all instances of a task Ti, is equal to gcd(F, pi).
Proof: Let g = gcd(F, pi), where gcd is the function determining the greatest common
divisor of its arguments. It follows from the definition of gcd that g must squarely divide each
of F and pi. Let Ti be a task with zero phasing. Now, assume that this theorem is violated for
certain integers m and n, such that Ti(n) occurs in the mth frame and the difference between
1.5.5. Examples
Example 1: A cyclic scheduler is to be used to run the following set of periodic tasks on a
uniprocessor: T1 = (e1=1, p1=4), T2 = (e2=1.5, p2=5), T3 = (e3=1, p3=20), T4 = (e4=2,
p4=20). Select an appropriate frame size.
Solution: For the given task set, an appropriate frame size is one that satisfies all three
required constraints. In the following, we determine a suitable frame size F.
Constraint 1: Let F be an appropriate frame size; then F ≥ max {ei}. From this constraint, we
get F ≥ 2.
Constraint 2: The major cycle M for the given task set is given by M = LCM(4,5,20) = 20.
M should be an integral multiple of the frame size F, i.e., M mod F = 0. This consideration
implies that F can take on the values 2, 4, 5, 10, or 20. A frame size of 1 has been ruled out since
it would violate constraint 1.
Constraint 3: To satisfy this constraint, we need to check whether a selected frame size F
satisfies the inequality 2F − gcd(F, pi) ≤ di for each task Ti.
Let us first try frame size 2.
For F = 2 and task T1:
2 ∗ 2 − gcd(2, 4) ≤ 4 ≡ 4 − 2 ≤ 4
Therefore, for p1 the inequality is satisfied.
Let us try for F = 2 and task T2:
2 ∗ 2 − gcd(2, 5) ≤ 5 ≡ 4 − 1 ≤ 5
Therefore, for p2 the inequality is satisfied.
Let us try for F = 2 and task T3:
2 ∗ 2 − gcd(2, 20) ≤ 20 ≡ 4 − 2 ≤ 20
Therefore, for p3 the inequality is satisfied.
For F = 2 and task T4:
2 ∗ 2 − gcd(2, 20) ≤ 20 ≡ 4 − 2 ≤ 20
For p4 the inequality is satisfied.
Thus, constraint 3 is satisfied by all tasks for frame size 2. So, frame size 2 satisfies all the
three constraints. Hence, 2 is a feasible frame size.
Let us try frame size 4.
For F = 4 and task T1:
2 ∗ 4 − gcd(4, 4) ≤ 4 ≡ 8 − 4 ≤ 4
Therefore, for p1 the inequality is satisfied.
Let us try for F = 4 and task T2:
2 ∗ 4 − gcd(4, 5) ≤ 5 ≡ 8 − 1 ≤ 5
For p2 the inequality is not satisfied. Therefore, we need not look any further. Clearly, F = 4
is not a suitable frame size.
Let us now try frame size 5, to check if that is also feasible.
For F = 5 and task T1, we have
2 ∗ 5 − gcd(5, 4) ≤ 4 ≡ 10 − 1 ≤ 4
The inequality is not satisfied for T1. We need not look any further. Clearly, F = 5 is not a
suitable frame size.
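The three constraints applied in Example 1 can be automated. The sketch below (our own helper, assuming integer periods and deadlines) enumerates every candidate frame size up to the major cycle and keeps those satisfying all three constraints; for the task set of Example 1 it confirms that 2 is the only feasible frame size.

```python
from math import gcd
from functools import reduce

def feasible_frame_sizes(tasks):
    """Enumerate integer frame sizes F that satisfy the three
    constraints of this lesson for periodic tasks given as
    (e_i, p_i, d_i) triples (periods and deadlines are integers):
      1. F >= max e_i                    -- an instance fits in a frame
      2. M mod F == 0, M = LCM(periods)  -- frames fit the major cycle
      3. 2F - gcd(F, p_i) <= d_i for every task      (Expr. 2.5)
    """
    M = reduce(lambda a, b: a * b // gcd(a, b), [p for _, p, _ in tasks])
    e_max = max(e for e, _, _ in tasks)
    return [F for F in range(1, M + 1)
            if F >= e_max and M % F == 0
            and all(2 * F - gcd(F, p) <= d for _, p, d in tasks)]

# Example 1: T1=(1,4,4), T2=(1.5,5,5), T3=(1,20,20), T4=(2,20,20)
result = feasible_frame_sizes([(1, 4, 4), (1.5, 5, 5), (1, 20, 20), (2, 20, 20)])
print(result)  # [2]: frame size 2 is the only feasible choice
```

Running the same helper on the task set of Example 2, T1=(1,4,4), T2=(2,5,5), T3=(5,20,20), returns an empty list, matching the observation there that no frame size satisfies all three constraints.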
Example 2: Consider the following set of periodic real-time tasks to be scheduled by a cyclic
scheduler: T1 = (e1=1, p1=4), T2 = (e2=2, p2=5), T3 = (e3=5, p3=20). Determine a
suitable frame size for the task set.
Solution:
Using the first constraint, we have F ≥ 5.
Using the second constraint, we have the major cycle M = LCM(4, 5, 20) = 20. So, the
permissible values of F are 5, 10 and 20.
Checking for a frame size that satisfies the third constraint, we can find that no value of F is
suitable. To overcome this problem, we need to split the task that is making the task-set not
schedulable.
cyclic-scheduler() {
    current-task T = Schedule-Table[k];
    k = k + 1;
    k = k mod N;                 // N is the total number of tasks in the schedule table
    dispatch-current-task(T);
    schedule-sporadic-tasks();   // current task T completed early, sporadic tasks can be taken up
    schedule-aperiodic-tasks(); // at the end of the frame, the running task is pre-empted if not complete
    idle();                      // no task to run, idle
}
The cyclic scheduler routine cyclic-scheduler() is activated at the end of every frame by a
periodic timer. If the current task is not complete by the end of the frame, then it is
suspended and the task to be run in the next frame is dispatched by invoking the routine
cyclic-scheduler(). If the task scheduled in a frame completes early, then any existing sporadic
or aperiodic task is taken up for execution.
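The frame-advance logic of cyclic-scheduler() can be mimicked in a short simulation. The schedule table below reuses the f1–f4 task-to-frame assignment shown earlier; everything else (function names, the notion of a dispatch log) is illustrative only.

```python
# Task-to-frame assignment for one major cycle, reusing the f1..f4
# table shown earlier in this lesson; names are illustrative.
SCHEDULE_TABLE = ["T3", "T1", "T3", "T4"]

def run_major_cycles(cycles):
    """Mimic cyclic-scheduler(): at each frame boundary the periodic
    timer fires, the next table entry is dispatched, and the index
    wraps around at the end of the major cycle."""
    dispatched = []
    k = 0
    for _ in range(cycles * len(SCHEDULE_TABLE)):
        dispatched.append(SCHEDULE_TABLE[k])  # dispatch-current-task(T)
        k = (k + 1) % len(SCHEDULE_TABLE)     # k = (k + 1) mod N
    return dispatched

print(run_major_cycles(2))
# ['T3', 'T1', 'T3', 'T4', 'T3', 'T1', 'T3', 'T4']
```

The wrap-around of k is what makes the schedule repeat identically in every major cycle, which is exactly the property Theorem 1 relies on.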
1.6. Exercises
1. State whether the following assertions are True or False. Write one or two sentences to
justify your choice in each case.
a. Average response time is an important performance metric for real-time operating
systems handling running of hard real-time tasks.
b. Unlike table-driven schedulers, cyclic schedulers do not need to store a pre-
computed schedule.
If the tasks are to be scheduled using a table-driven scheduler, what is the length of time
for which the schedules have to be stored in the pre-computed schedule table of the
scheduler?
6. A cyclic real-time scheduler is to be used to schedule three periodic tasks T1, T2,
and T3 with the following characteristics:
Suggest a suitable frame size that can be used. Show all intermediate steps in your
calculations.
7. Consider the following set of three independent real-time periodic tasks.
Suppose a cyclic scheduler is to be used to schedule the task set. What is the
major cycle of the task set? Suggest a suitable frame size and provide a feasible schedule
(task to frame assignment for a major cycle) for the task set.