4. CPU Scheduling and Algorithm - Notes
Content:
Scheduling types - Scheduling objectives, CPU and I/O burst cycles,
Pre-emptive and Non-pre-emptive scheduling, Scheduling criteria.
Types of scheduling algorithms - First Come First Served (FCFS),
Shortest Job First (SJF), Shortest Remaining Time (SRTN),
Round Robin (RR), Priority scheduling.
Deadlock - System models, Necessary conditions leading to deadlocks,
Deadlock handling - Prevention, Avoidance.
Scheduling Algorithm
A scheduling algorithm is the algorithm that decides how much CPU time is allocated to each process and in what order the processes run.
Introduction
Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm depends on various factors. Many criteria have been suggested for comparing CPU scheduling algorithms.
The criteria include the following:
1. CPU utilisation
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from about 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by CPU is the number of processes being executed and completed
per unit time. This is called throughput. The throughput may vary depending upon the length or
duration of processes.
3. Turnaround time
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the time of completion is known as the
turnaround time. Turn-around time is the sum of times spent waiting to get into memory, waiting in
ready queue, executing in CPU, and waiting for I/O.
4. Waiting time
A scheduling algorithm does not affect the amount of CPU time required to complete a process once it starts execution; it only affects the waiting time of a process, i.e., the time the process spends waiting in the ready queue.
5. Response time
In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus, another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time.
Types of Scheduling
1. Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running state to ready state or
from the waiting state to ready state.
The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and
then taken away, and the process is again placed back in the ready queue if that process still has CPU
burst time remaining. That process stays in the ready queue till it gets its next chance to execute.
2. Non-Preemptive Scheduling:
In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution; instead, it waits till the process completes its CPU burst, and only then allocates the CPU to another process.
Important Concepts
Arrival Time (A.T): The time at which a process enters the ready queue.
Completion Time (C.T): The time at which a process completes its execution.
Burst Time (B.T): The CPU time required by a process for its execution.
Turnaround Time (T.A.T): The time difference between completion time and arrival time.
Waiting Time (W.T): The time difference between turnaround time and burst time.
Gantt Chart:
It is a bar-type chart devised by Henry Gantt around 1910. It is used to illustrate a project or process schedule; in CPU scheduling it shows which process holds the CPU during each interval of time.
1. First Come First Served (FCFS) Algorithm
In FCFS scheduling, when a process enters the system its PCB is linked to the tail of the ready queue, and the CPU is allocated to the process at the head of the queue.
Ex.1. Consider the following example. Calculate the average waiting time and turn around time.
Process   Burst Time
JOB 1     24
JOB 2     3
JOB 3     5
Solution:
Gantt chart:
| JOB 1 | JOB 2 | JOB 3 |
0       24      27      32
1. Waiting time
Waiting time of JOB 1 = 0
Waiting time of JOB 2 = 24
Waiting time of JOB 3 = 27
Average waiting time = (0 + 24 + 27) / 3 = 17
2. Turnaround time
Turnaround time of JOB 1 = 24
Turnaround time of JOB 2 = 27
Turnaround time of JOB 3 = 32
Average turnaround time = (24 + 27 + 32) / 3 = 27.67
Ex.2. Consider the following example. Calculate the average waiting time and turn around time.
Process   Burst Time
P1        8
P2        4
P3        9
P4        5
Solution:
Gantt chart:
| P1 | P2 | P3 | P4 |
0    8    12   21   26
1. Waiting time
Waiting time of P1 = 0
Waiting time of P2 = 8
Waiting time of P3 = 12
Waiting time of P4 = 21
Average waiting time = (0 + 8 + 12 + 21) / 4 = 10.25
2. Turnaround time
Turnaround time of P1 = 8
Turnaround time of P2 = 12
Turnaround time of P3 = 21
Turnaround time of P4 = 26
Average turnaround time = (8 + 12 + 21 + 26) / 4 = 16.75
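As a rough illustration, the following Python sketch computes FCFS waiting and turnaround times, assuming all processes arrive at time 0; the burst times are the ones from Ex.2 above, so the output reproduces the same values.

# Minimal FCFS sketch: all processes are assumed to arrive at time 0
# and run to completion in the order in which they appear in the list.
def fcfs(bursts):
    waiting, turnaround = {}, {}
    clock = 0
    for name, burst in bursts:
        waiting[name] = clock        # time spent waiting before the CPU is allocated
        clock += burst               # the process runs to completion (non-preemptive)
        turnaround[name] = clock     # with arrival = 0, turnaround time = completion time
    return waiting, turnaround

if __name__ == "__main__":
    w, t = fcfs([("P1", 8), ("P2", 4), ("P3", 9), ("P4", 5)])
    print(w)                          # {'P1': 0, 'P2': 8, 'P3': 12, 'P4': 21}
    print(t)                          # {'P1': 8, 'P2': 12, 'P3': 21, 'P4': 26}
    print(sum(w.values()) / len(w))   # average waiting time = 10.25
    print(sum(t.values()) / len(t))   # average turnaround time = 16.75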
Ex.3. Consider the following example. Calculate the average waiting time and turn around time.
Solution:
Gantt chart:
| P0 | P1 | P2 | P3 |
0    5    8    16   22
Ex.4. Consider the following example. Calculate the average waiting time and turn around time.
Solution:
Gantt chart:
| P4 | P3 | P1 | P5 | P2 |
0    3    11   17   21   23
Advantages of FCFS
1. It is the simplest CPU scheduling algorithm and is easy to implement.
2. Processes are served strictly in the order in which they arrive, so every process eventually gets the CPU.
Disadvantages of FCFS
1. It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated to the
CPU, it will never release the CPU until it finishes executing.
2. The Average Waiting Time is high.
3. Short processes that are at the back of the queue have to wait for the long process at the front to
finish.
4. Not an ideal technique for time-sharing systems.
5. Because of its simplicity, FCFS is not very efficient.
2. Shortest Job First (SJF) Algorithm
In SJF scheduling, the process with the smallest CPU burst time is selected for execution next. SJF can be implemented in a non-preemptive or a preemptive form.
1. Non-Preemptive SJF:
In non-preemptive SJF, once the CPU is allocated to a process, the process holds the CPU until it enters the waiting state or terminates.
Ex.1. Consider the following example. Calculate the average waiting time and turn around time.
Process   Burst Time
P1        21
P2        3
P3        6
P4        2
Solution:
Gantt chart:
| P4 | P2 | P3 | P1 |
0    2    5    11   32
1. Waiting time
Waiting time of P1 = 11
Waiting time of P2 = 2
Waiting time of P3 = 5
Waiting time of P4 = 0
Average waiting time = (11 + 2 + 5 + 0) / 4 = 4.5
2. Turnaround time
Turnaround time of P1 = 32
Turnaround time of P2 = 5
Turnaround time of P3 = 11
Turnaround time of P4 = 2
Average turnaround time = (32 + 5 + 11 + 2) / 4 = 12.5
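A minimal Python sketch of non-preemptive SJF follows; it assumes all processes arrive at time 0, so the scheduler simply serves them in increasing order of burst time. The burst times are those of Ex.1 above, and the output reproduces the same waiting and turnaround times.

# Minimal non-preemptive SJF sketch: with all arrivals at time 0,
# SJF serves the processes in increasing order of burst time.
def sjf(bursts):
    waiting, turnaround = {}, {}
    clock = 0
    for name, burst in sorted(bursts, key=lambda p: p[1]):
        waiting[name] = clock        # time spent waiting in the ready queue
        clock += burst
        turnaround[name] = clock     # with arrival = 0, turnaround = completion time
    return waiting, turnaround

if __name__ == "__main__":
    w, t = sjf([("P1", 21), ("P2", 3), ("P3", 6), ("P4", 2)])
    print(w)   # {'P4': 0, 'P2': 2, 'P3': 5, 'P1': 11}  -> average 4.5
    print(t)   # {'P4': 2, 'P2': 5, 'P3': 11, 'P1': 32} -> average 12.5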
Ex.2. Consider the following example. Calculate the average waiting time and turn around time.
Solution:
Gantt chart:
| P1 | P3 | P2 | P0 |
0    5    7    16   24
2. Preemptive SJF (Shortest Remaining Time Next)
In preemptive SJF, also called Shortest Remaining Time Next (SRTN) scheduling, processes are placed in the ready queue as they arrive, and the CPU is always given to the process with the smallest remaining burst time. If a new process arrives whose CPU burst is shorter than the remaining time of the currently running process, the running process is preempted.
Consider the following five processes each having its own unique burst time and arrival time.
Ex.3. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P4 | P1 | P5 | P2 | P5 | P1 | P3 |
0    3    4    5    7    10   15   23
Ex.4. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P1 | P2 | P3 | P2 | P4 | P1 |
0    1    2    4    7    10   15
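A minimal Python sketch of SRTN (preemptive SJF) follows. It simulates the schedule one time unit at a time; the arrival and burst times in the sample data are illustrative values, not taken from the examples above.

# Minimal SRTN (preemptive SJF) sketch: at every time unit, the process
# with the shortest remaining burst time among the arrived processes runs.
def srtn(processes):                     # processes: {name: (arrival, burst)}
    remaining = {n: b for n, (a, b) in processes.items()}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1          # run the chosen process for one time unit
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    # turnaround = completion - arrival, waiting = turnaround - burst
    return {n: (completion[n] - a, completion[n] - a - b)
            for n, (a, b) in processes.items()}

if __name__ == "__main__":
    # illustrative (arrival, burst) data
    print(srtn({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}))
    # {'P1': (16, 9), 'P2': (5, 1), 'P3': (1, 0), 'P4': (6, 2)}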
Advantages of SJF
It reduces the average waiting time over FIFO (First in First Out) algorithm.
SJF method gives the lowest average waiting time for a specific set of processes.
It is appropriate for the jobs running in batch, where run times are known in advance.
For the batch system of long-term scheduling, a burst time estimate can be obtained from the
job description.
For Short-Term Scheduling, we need to predict the value of the next burst time.
Disadvantages/Cons of SJF
SJF cannot be implemented exactly for short-term CPU scheduling, because there is no way to know the length of the next CPU burst in advance; it can only be estimated.
It can lead to starvation of processes with long burst times, which increases their turnaround time.
Elapsed time has to be recorded, which results in more overhead on the processor.
3. Priority Algorithm
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come first served basis.
Priority can be decided based on memory requirements, time requirements or any other resource requirement.
Ex.1. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P3 | P1 | P0 | P2 |
0    6    9    14   22
Waiting time of P0 = 9
Waiting time of P1 = 6
Waiting time of P2 = 14
Waiting time of P3 = 0
Average waiting time = (9 + 6 + 14 + 0) / 4 = 7.25
Ex.2. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P4 | P5 | P2 | P1 | P3 |
0    4    6    9    13   20
Waiting time of P1 = 9
Waiting time of P2 = 6
Waiting time of P3 = 13
Waiting time of P4 = 0
Waiting time of P5 = 4
Average waiting time = (9 + 6 + 13 + 0 + 4) / 5 = 6.4
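A minimal Python sketch of non-preemptive priority scheduling follows. It assumes all processes arrive at time 0 and that a lower priority number means higher priority; the sample data is illustrative, not taken from the examples above.

# Minimal non-preemptive priority sketch: all arrivals at time 0,
# lower priority number = higher priority, ties broken FCFS (stable sort).
def priority_schedule(processes):        # processes: [(name, burst, priority)]
    waiting, clock = {}, 0
    for name, burst, _ in sorted(processes, key=lambda p: p[2]):
        waiting[name] = clock            # time spent waiting before getting the CPU
        clock += burst
    return waiting

if __name__ == "__main__":
    # illustrative (name, burst, priority) data
    print(priority_schedule([("P1", 5, 2), ("P2", 3, 1), ("P3", 8, 3)]))
    # {'P2': 0, 'P1': 3, 'P3': 8}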
Advantages of Priority Scheduling
1. It is an easy-to-use scheduling method.
2. Processes are executed on the basis of priority, so a high-priority process does not need to wait for long, which saves time.
3. This method provides a good mechanism where the relative importance of each process may be precisely defined.
Disadvantages of Priority Scheduling
1. If the system eventually crashes, all low-priority processes get lost.
2. If high-priority processes take lots of CPU time, then the lower-priority processes may starve and be postponed for an indefinite time.
3. This scheduling algorithm may leave some low-priority processes waiting indefinitely.
4. A process will be blocked when it is ready to run but has to wait for the CPU because some other process is currently running.
5. If new higher-priority processes keep arriving in the ready queue, then a process which is in the waiting state may need to wait for a long duration of time.
4. Round-Robin Algorithm
The name of this algorithm comes from the round-robin principle, where each person gets an
equal share of something in turns. It is the oldest, simplest scheduling algorithm, which is mostly used
for multitasking.
In Round-robin scheduling, each ready task runs turn by turn only in a cyclic queue for a limited
time slice. This algorithm also offers starvation free execution of processes.
Characteristics of Round-Robin Scheduling
1. Round robin is a preemptive algorithm.
2. The CPU is shifted to the next process after a fixed interval of time, called the time quantum or time slice.
3. The process that is preempted is added to the end of the ready queue.
4. Round robin is a hybrid, clock-driven model.
5. The time slice should be the minimum amount that is assigned to a specific task that needs to be processed; however, it may differ from OS to OS.
6. It is a real-time algorithm which responds to an event within a specific time limit.
Ex.1. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P1 | P2 | P3 | P1 | P2 | P3 | P3 |
0    2    4    6    8    9    11   12
Ex.2. Consider the following example. Calculate the average waiting time.
Solution:
Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P6 | P1 | P2 | P5 |
0    4    8    11   12   16   20   21   23   24
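A minimal Python sketch of Round Robin scheduling with a FIFO ready queue follows. It assumes all processes arrive at time 0. Reading the Gantt chart of Ex.1 above as burst times 4, 3 and 5 with a time quantum of 2 (values inferred from that chart), the sketch reproduces the same completion times.

from collections import deque

# Minimal Round-Robin sketch: all processes arrive at time 0 and take
# turns on the CPU for at most `quantum` time units each.
def round_robin(bursts, quantum):        # bursts: [(name, burst)]
    queue = deque(bursts)
    completion, clock = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for one time slice (or less, if finishing)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: go to the back of the queue
        else:
            completion[name] = clock
    return completion

if __name__ == "__main__":
    # burst times and quantum inferred from the Gantt chart of Ex.1 above
    print(round_robin([("P1", 4), ("P2", 3), ("P3", 5)], quantum=2))
    # {'P1': 8, 'P2': 9, 'P3': 12}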
Advantages of Round-Robin Scheduling
If you know the total number of processes on the run queue, then you can also estimate the worst-case response time for a process.
This scheduling method does not depend upon burst time, so it is easy to implement on a system.
Once a process has executed for a specific period of time, it is preempted and another process executes for that time period.
It allows the OS to use the context switching method to save the states of preempted processes.
Disadvantages of Round-Robin Scheduling
It decreases comprehension, since it is harder to predict when a particular process will complete.
A lower time quantum results in higher context-switching overhead in the system.
Deadlock
A deadlock occurs in an operating system when two or more processes each need a resource that is held by another of the processes in order to complete their execution.
For example, suppose process 1 holds resource 1 and needs to acquire resource 2, while process 2 holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are then in deadlock, as each of them needs the other's resource to complete its execution, but neither of them is willing to relinquish the resource it holds.
Deadlock conditions
The following four conditions must hold simultaneously for a deadlock to occur:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
1. Mutual Exclusion
There should be a resource that can only be held by one process at a time. For example, if there is only a single instance of Resource 1, it can be held by only one process (say Process 1) at a time.
2. Hold and Wait
A process can hold one or more resources and, at the same time, wait for additional resources that are currently held by other processes.
3. No Preemption
A resource cannot be taken away from a process by force; a process can only release a resource voluntarily. For example, Process 2 cannot preempt Resource 1 from Process 1: it will only be released when Process 1 relinquishes it voluntarily after its execution is complete.
4. Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held by the
first process. This forms a circular chain. For example, Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Advantages of Deadlock Prevention
It works well for processes which perform a single burst of activity.
No preemption is needed.
It is a convenient method when applied to resources whose state can be saved and restored easily.
It is feasible to enforce via compile-time checks.
It needs no run-time computation, since the problem is solved during system design.
Deadlock Prevention
It is important to prevent a deadlock before it can occur. The system examines every resource request before granting it, to make sure that it cannot lead to a deadlock; if an operation could lead to a deadlock in the future, the process is never allowed to execute it.
Deadlock prevention is a set of methods for ensuring that at least one of the four necessary conditions cannot hold:
1. No Mutual Exclusion
2. No Hold and Wait
3. Removal of No Preemption
4. Removal of Circular Wait
1. No Mutual Exclusion
It would mean allowing more than one process to access a single resource at the same time. In practice this is not possible: if multiple processes access the same resource simultaneously, there will be chaos and no process will complete correctly. So this approach is not feasible, and the OS cannot avoid mutual exclusion.
2. No Hold and Wait
To avoid hold and wait, one approach is to make a process acquire all the required resources before it starts execution. But this is not always feasible, because a process uses only one resource at a time, so resource utilisation would be very low.
Moreover, before starting execution, a process often does not know how many resources it will require to complete. In addition, the time at which a process will complete and free a resource is also unknown.
Another way is if a process is holding a resource and wants to have additional resources, then it
must free the acquired resources. This way, we can avoid the hold and wait condition, but it can result
in starvation.
3. Removal of No Preemption
One of the conditions that cause deadlock is no preemption: the CPU cannot forcefully take acquired resources from a process, even when that process is in a waiting state. If we remove the no preemption condition and forcefully take resources from a waiting process, we can avoid the deadlock. This is an implementable approach to avoiding deadlock.
For example, it is like taking the bowl from Jones and giving it to Jack when Jack comes to have soup. Assume Jones came first, acquired the bowl (the resource) and then went into a waiting state. When Jack arrives, the caterer takes the bowl from Jones forcefully and tells him not to hold the bowl while he is in a waiting state.
4. Removal of Circular Wait
In a circular wait, processes are stuck in the waiting state for resources held by each other. To avoid circular wait, we assign a numerical integer value to every resource, and a process must acquire resources only in increasing (or only in decreasing) order of these values.
If the process acquires resources in increasing order, it’ll only have access to the new additional
resource if that resource has a higher integer value. And if that resource has a lesser integer value, it
must free the acquired resource before taking the new resource and vice-versa for decreasing order.
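A minimal Python sketch of this resource-ordering idea follows, using threading locks; the resource names and their integer ranks are illustrative.

import threading

# Breaking circular wait: every resource gets a fixed integer rank and every
# process acquires the resources it needs in increasing rank order, so a
# cycle of processes waiting for each other's resources can never form.
RESOURCE_RANK = {"resource1": 1, "resource2": 2}
LOCKS = {name: threading.Lock() for name in RESOURCE_RANK}

def acquire_in_order(names):
    # sort the requested resources by rank before locking them
    ordered = sorted(names, key=lambda n: RESOURCE_RANK[n])
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release(names):
    for name in reversed(names):
        LOCKS[name].release()

# Every process asks for the resources it needs, but always locks
# resource1 before resource2, so the circular wait of the earlier example cannot occur.
held = acquire_in_order(["resource2", "resource1"])
release(held)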
Banker's Algorithm
Banker's algorithm is a deadlock avoidance algorithm. It is named so because this algorithm is
used in banking systems to determine whether a loan can be granted or not.
Consider there are n account holders in a bank and the sum of the money in all of their
accounts is S. Every time a loan has to be granted by the bank, it subtracts the loan amount from
the total money the bank has. Then it checks if that difference is greater than S. It is done because,
only then, the bank would have enough money even if all the n account holders draw all their money
at once.
Some data structures that are used to implement the banker's algorithm are:
1. Available
It is an array of length m. It represents the number of available resources of each type.
If Available[j] = k, then there are k instances available, of resource type Rj.
2. Max
It is an n x m matrix which represents the maximum number of instances of each resource that a process can request. If Max[i][j] = k, then process Pi can request at most k instances of resource type Rj.
3. Allocation
It is an n x m matrix which represents the number of resources of each type currently allocated
to each process. If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource
type Rj.
4. Need
It is a two-dimensional array. It is an n x m matrix which indicates the remaining resource needs
of each process. If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to
complete its task.
The Banker's algorithm consists of two parts:
1. Safety algorithm
2. Resource request algorithm
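A minimal Python sketch of the safety algorithm follows, using the Available, Allocation and Need structures described above; the sample data for three processes and two resource types is illustrative.

# Minimal sketch of the Banker's safety algorithm: a state is safe if every
# process can finish in some order using only Work = Available plus the
# resources released by the processes that have already finished.
def is_safe(available, allocation, need):
    n = len(allocation)                  # number of processes
    m = len(available)                   # number of resource types
    work = available[:]                  # Work starts as a copy of Available
    finish = [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can finish: it releases everything it currently holds
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                safe_sequence.append(i)
                progressed = True
        if not progressed:
            return False, []             # some processes can never finish -> unsafe state
    return True, safe_sequence

if __name__ == "__main__":
    available  = [3, 3]
    allocation = [[0, 1], [2, 0], [3, 0]]
    max_demand = [[4, 3], [3, 2], [5, 1]]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(2)] for i in range(3)]
    print(is_safe(available, allocation, need))   # (True, [1, 2, 0]) -> the state is safe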