UNIT 4
4.1 Scheduling Types: Scheduling Objectives, CPU and I/O Burst Cycle, Preemptive and Non-Preemptive Scheduling
CPU Scheduling in Operating Systems
Scheduling of processes/work is done so that the work finishes on time.
Below are the different times defined with respect to a process.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time
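To make the formulas concrete, here is a minimal Python sketch (the arrival, burst and completion values are hypothetical):

    # Hypothetical process: arrives at t=2, needs 5 units of CPU, completes at t=9
    arrival, burst, completion = 2, 5, 9

    turnaround = completion - arrival   # Turn Around Time = Completion Time - Arrival Time
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst Time

    print(turnaround, waiting)          # prints: 7 2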
Why do we need scheduling?
A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS,
time spent waiting for I/O is wasted and the CPU sits idle during this time. In multiprogramming
systems, one process can use the CPU while another is waiting for I/O. This is possible only with
process scheduling.
Objectives of Process Scheduling Algorithm
Max CPU utilization [Keep CPU as busy as possible]
Fair allocation of CPU.
Max throughput [Number of processes that complete their execution per time unit]
Min turnaround time [Time taken by a process to finish execution]
Min waiting time [Time a process waits in ready queue]
Min response time [Time from submission until a process produces its first response]
Preemptive and Non-Preemptive Scheduling
1. Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running state to ready state or from
the waiting state to ready state. The resources (mainly CPU cycles) are allocated to the process for a
limited amount of time and then taken away, and the process is again placed back in the ready
queue if that process still has CPU burst time remaining. That process stays in the ready queue till it
gets its next chance to execute.
Algorithms based on preemptive scheduling are: Round Robin (RR), Shortest Remaining Time First
(SRTF), Priority (preemptive version), etc.
2. Non-Preemptive Scheduling:
Non-preemptive Scheduling is used when a process terminates, or a process switches from running
to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process,
the process holds the CPU till it terminates or reaches a waiting state. Non-preemptive scheduling
does not interrupt a process running on the CPU in the middle of its execution.
Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to
another process.
Algorithms based on non-preemptive scheduling are: Shortest Job First (SJF, in its basic
non-preemptive form) and Priority (non-preemptive version), etc.
4.2 Types of Algorithms: FCFS, SJF, SRTN, RR, Priority Scheduling, Multilevel Queue Scheduling
FCFS CPU Scheduling
Given n processes with their burst times, the task is to find the average waiting time and average
turnaround time using the FCFS scheduling algorithm.
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling
algorithm. FIFO simply queues processes in the order that they arrive in the ready queue.
In this, the process that comes first is executed first, and the next process starts only after the
previous one has fully executed.
Here, first consider the case where the arrival time of every process is 0; then turnaround time
and completion time are the same, and the formulas above apply directly.
Important Points:
Non-preemptive
Average Waiting Time is not optimal
Cannot utilize resources in parallel: results in the convoy effect. (Consider a situation with
many I/O-bound processes and one CPU-bound process. Once the CPU-bound process acquires the
CPU, the I/O-bound processes must wait for it, even though they would have needed the CPU only
briefly before moving on to their I/O devices.)
Advantages of FCFS
Simple
Easy
First come, first served
Disadvantages of FCFS
The scheduling method is non-preemptive; once started, a process runs to completion.
Due to the non-preemptive nature of the algorithm, short processes may wait a very long time
behind long ones.
Although it is easy to implement, it is poor in performance, since the average waiting time is
higher than in other scheduling algorithms.
Example
Let's take an example of the FCFS scheduling algorithm. In the following schedule, there are 5
processes with process IDs P0, P1, P2, P3 and P4. P0 arrives at time 0, P1 at time 1, P2 at time 2,
P3 arrives at time 3 and process P4 arrives at time 4 in the ready queue. The processes and their
respective arrival and burst times are given in the following table.
The turnaround time and the waiting time are calculated using the following formulas.
Turn Around Time = Completion Time – Arrival Time
Waiting Time = Turn Around Time – Burst Time
The average waiting time is determined by summing the waiting times of all the processes and
dividing the sum by the total number of processes.
Avg Waiting Time = 31/5 = 6.2
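The example's table is not reproduced in these notes, so the following Python sketch uses illustrative arrival and burst times (they do not reproduce the 31/5 result above) to show how FCFS completion, turnaround and waiting times are computed:

    def fcfs(processes):
        """processes: list of (name, arrival, burst), sorted by arrival time."""
        time, rows = 0, []
        for name, arrival, burst in processes:
            time = max(time, arrival) + burst            # run to completion in arrival order
            tat = time - arrival                         # turnaround = completion - arrival
            rows.append((name, time, tat, tat - burst))  # waiting = turnaround - burst
        return rows

    procs = [("P0", 0, 4), ("P1", 1, 3), ("P2", 2, 1), ("P3", 3, 2), ("P4", 4, 5)]
    rows = fcfs(procs)
    for name, ct, tat, wt in rows:
        print(name, ct, tat, wt)
    print("Avg waiting time =", sum(r[3] for r in rows) / len(rows))  # 19/5 = 3.8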
Shortest Job First (SJF) CPU Scheduling (Non-preemptive)
Shortest job first (SJF), also known as shortest job next (SJN), is a scheduling policy that selects
the waiting process with the smallest execution time to execute next. It can be preemptive or
non-preemptive.
Characteristics of SJF Scheduling
Shortest Job first has the advantage of having a minimum average waiting time among all
scheduling algorithms.
It is a Greedy Algorithm.
It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of aging.
It is practically difficult to implement, as the operating system may not know burst times in
advance and therefore cannot sort processes by them. While exact execution time cannot be
known beforehand, several methods can be used to estimate it, such as a weighted average of
previous execution times.
SJF can be used in specialized environments where accurate estimates of running time are
available.
Algorithm:
Sort all the processes according to arrival time.
Then select the process with the minimum arrival time and minimum burst time.
After a process completes, form a pool of the processes that arrived during its execution, and
select from that pool the process with the minimum burst time.
Example
In the following example, there are five jobs named P1, P2, P3, P4 and P5. Their arrival times and
burst times are given in the table below.
Process   Arrival Time   Burst Time
P1        1              7
P2        3              3
P3        6              2
P4        7              10
P5        9              8
Since no process arrives at time 0, there will be an empty slot in the Gantt chart from time 0
to 1 (the time at which the first process arrives).
According to the algorithm, the OS schedules the process with the lowest burst time among the
available processes in the ready queue.
So far there is only one process (P1) in the ready queue, hence the scheduler schedules it
regardless of its burst time.
P1 executes until time 8. By then, three more processes have arrived in the ready queue, so the
scheduler chooses among them the process with the lowest burst time.
P3 is executed next, since it has the lowest burst time among the available processes.
That is how the procedure continues in the shortest job first (SJF) scheduling algorithm.
Avg Waiting Time = 27/5 = 5.4
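A minimal Python sketch of this non-preemptive SJF procedure, using the arrival and burst times from the example above:

    def sjf(processes):
        """processes: list of (name, arrival, burst); non-preemptive SJF."""
        remaining = sorted(processes, key=lambda p: p[1])   # sort by arrival time
        time, schedule = 0, []
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:                                   # CPU idle until next arrival
                time = min(p[1] for p in remaining)
                continue
            job = min(ready, key=lambda p: p[2])            # smallest burst time wins
            remaining.remove(job)
            name, arrival, burst = job
            time += burst                                   # run to completion
            schedule.append((name, time, time - arrival, time - arrival - burst))
        return schedule

    sched = sjf([("P1", 1, 7), ("P2", 3, 3), ("P3", 6, 2), ("P4", 7, 10), ("P5", 9, 8)])
    print(sched)   # execution order: P1, P3, P2, P5, P4
    print("Avg waiting time =", sum(s[3] for s in sched) / len(sched))  # 27/5 = 5.4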
Advantages of SJF:
SJF is better than the first come, first served (FCFS) algorithm as it reduces the average
waiting time.
SJF is generally used for long-term scheduling.
It is suitable for jobs that run in batches, where run times are already known.
SJF is provably optimal in terms of average turnaround time when all jobs are available
simultaneously.
Disadvantages of SJF:
SJF may cause very long turnaround times or starvation for long processes.
Job completion times must be known in advance, but they are hard to predict.
It is often complicated to estimate the length of the upcoming CPU request.
Shortest Remaining Time First (Preemptive SJF) Scheduling Algorithm
In the Shortest Remaining Time First (SRTF) scheduling algorithm, the process with the smallest
amount of time remaining until completion is selected to execute. Since the currently executing
process is the one with the shortest amount of time remaining by definition, and since that time
should only reduce as execution progresses, processes will always run until they complete or a new
process is added that requires a smaller amount of time.
Advantages:
Short processes are handled very quickly.
The system also requires very little overhead since it only makes a decision when a process
completes or a new process is added.
When a new process is added the algorithm only needs to compare the currently executing
process with the new process, ignoring all other processes currently waiting to execute.
Disadvantages:
Like shortest job first, it has the potential for process starvation.
Long processes may be held off indefinitely if short processes are continually added.
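A unit-time simulation sketch of SRTF in Python (the four processes are hypothetical):

    def srtf(processes):
        """processes: dict name -> (arrival, burst); preemptive shortest remaining time."""
        remaining = {name: burst for name, (arrival, burst) in processes.items()}
        completion, time = {}, 0
        while remaining:
            ready = [n for n in remaining if processes[n][0] <= time]
            if not ready:
                time += 1                                  # CPU idle, nothing has arrived
                continue
            n = min(ready, key=lambda x: remaining[x])     # shortest remaining time first
            remaining[n] -= 1                              # run it for one time unit
            time += 1
            if remaining[n] == 0:
                completion[n] = time
                del remaining[n]
        return completion

    print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
    # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}: short jobs preempt the long P1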
Round Robin Scheduling
Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot in a
cyclic way.
It is simple, easy to implement, and starvation-free as all processes get fair share of CPU.
It is one of the most commonly used techniques in CPU scheduling.
It is preemptive as processes are assigned CPU only for a fixed slice of time at most.
Its disadvantage is the added overhead of context switching.
The Round Robin scheduling algorithm is one of the most popular scheduling algorithms and is
actually implemented in most operating systems. It is the preemptive version of first come, first
served scheduling, and it focuses on time sharing. In this algorithm, every process gets executed
in a cyclic way. A certain time slice, called the time quantum, is defined in the system. Each
process in the ready queue is assigned the CPU for that time quantum; if the process finishes
within that time it terminates, otherwise it goes back to the ready queue and waits for its next
turn to complete its execution.
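A minimal Round Robin sketch in Python, assuming all processes are in the ready queue at time 0 and a time quantum of 2 (both illustrative choices):

    from collections import deque

    def round_robin(processes, quantum):
        """processes: list of (name, burst), all assumed to arrive at time 0."""
        queue, time, completion = deque(processes), 0, {}
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)                 # run for at most one quantum
            time += run
            if remaining > run:
                queue.append((name, remaining - run))     # back to the ready queue
            else:
                completion[name] = time                   # finished within this slice
        return completion

    print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
    # {'P3': 5, 'P2': 8, 'P1': 9}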
Priority Scheduling
In priority scheduling, a priority number is assigned to each process. In some systems the lower
the number, the higher the priority, while in others the higher the number, the higher the
priority. The process with the highest priority among the available processes is given the CPU.
Two types of priority scheduling algorithms exist: preemptive priority scheduling and
non-preemptive priority scheduling.
The priority number assigned to a process may or may not change. If the priority does not change
throughout the life of the process, it is called static priority; if it changes at regular intervals,
it is called dynamic priority.
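A sketch of non-preemptive priority scheduling in Python, assuming the lower-number-is-higher-priority convention (the process data is illustrative):

    def priority_schedule(processes):
        """processes: list of (name, arrival, burst, priority); lower number = higher priority."""
        remaining, time, order = list(processes), 0, []
        while remaining:
            ready = [p for p in remaining if p[1] <= time]
            if not ready:
                time = min(p[1] for p in remaining)   # CPU idle until next arrival
                continue
            job = min(ready, key=lambda p: p[3])      # highest-priority ready process
            remaining.remove(job)
            time += job[2]                            # non-preemptive: run to completion
            order.append(job[0])
        return order

    print(priority_schedule([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 1, 3)]))
    # ['P1', 'P2', 'P3']: at t=4 both P2 and P3 are ready, and P2 outranks P3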
Multilevel Queue (MLQ) CPU Scheduling
It may happen that processes in the ready queue can be divided into different classes where each
class has its own scheduling needs. For example, a common division is a foreground (interactive)
process and a background (batch) process. These two classes have different scheduling needs. For
this kind of situation Multilevel Queue Scheduling is used. Now, let us see how it works.
The ready queue is divided into separate queues, one for each class of processes. For example,
take three different types of processes: system processes, interactive processes, and batch
processes. Each type of process has its own queue.
The three classes of processes are described as follows:
System Processes: The operating system itself has processes to run, which are generally termed
system processes.
Interactive Processes: An interactive process is one that interacts with the user and therefore
needs quick responses.
Batch Processes: Batch processing is generally a technique in the Operating system that
collects the programs and data together in the form of the batch before the processing
starts.
Each queue has its own scheduling algorithm. For example, queue 1 and queue 2 may use Round
Robin while queue 3 uses FCFS to schedule their processes.
Scheduling among the queues: What will happen if all the queues have some processes? Which
process should get the CPU? To determine this, scheduling among the queues is necessary. There
are two ways to do so –
1. Fixed priority preemptive scheduling method – Each queue has absolute priority over the
lower-priority queues. Consider the priority order queue 1 > queue 2 > queue 3. According to
this method, no process in the batch queue (queue 3) can run unless queues 1 and 2 are
empty. If a batch process (queue 3) is running and a system process (queue 1) or interactive
process (queue 2) enters the ready queue, the batch process is preempted.
2. Time slicing – In this method, each queue gets a certain portion of CPU time and can use it to
schedule its own processes. For instance, queue 1 may take 50 percent of the CPU time,
queue 2 30 percent, and queue 3 the remaining 20 percent.
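A minimal sketch of the fixed-priority method in Python: the scheduler always serves the highest-priority non-empty queue (the queue names and contents are illustrative):

    from collections import deque

    # Priority order: queue 1 (system) > queue 2 (interactive) > queue 3 (batch)
    queues = {
        "system":      deque(["S1"]),
        "interactive": deque(["I1", "I2"]),
        "batch":       deque(["B1"]),
    }
    ORDER = ["system", "interactive", "batch"]

    def pick_next():
        """Scan queues from highest to lowest priority, take the first waiting process."""
        for qname in ORDER:
            if queues[qname]:
                return qname, queues[qname].popleft()
        return None

    while (nxt := pick_next()) is not None:
        print("running", nxt[1], "from the", nxt[0], "queue")
    # Runs S1, then I1 and I2, then B1: batch work runs only when the other queues are empty.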
Advantages:
The processes are permanently assigned to their queues, so this approach has the advantage of
low scheduling overhead.
Disadvantages:
Processes in a lower-priority queue may starve for the CPU if the higher-priority queues never
become empty.
It is inflexible in nature.
Starvation and Aging in Operating Systems
Related to priority scheduling
We have already discussed priority scheduling above. It is one of the most common scheduling
algorithms in batch systems. Each process is assigned a priority; the process with the highest
priority is executed first, and so on.
Here we discuss a major problem related to priority scheduling and its solution.
Starvation, or indefinite blocking, is a phenomenon associated with priority scheduling
algorithms, in which a process that is ready to run can wait indefinitely because of its low
priority. In a heavily loaded computer system, a steady stream of higher-priority processes can
prevent a low-priority process from ever getting the CPU.
Rumor has it that when the IBM 7094 at MIT was shut down in 1973, a low-priority process was
found that had been submitted in 1967 and had not yet been run.
As this suggests, processes with higher priority always get the CPU first. Consider a scenario in
which one process has a very low priority (for example 127) while high-priority processes keep
arriving: the low-priority process may wait for the CPU indefinitely, which is starvation. The
solution to starvation is discussed below.
Differences between Deadlock and Starvation in OS:
Deadlock occurs when none of the processes in a set is able to move ahead because each holds
resources required by some other process in the set, whereas starvation occurs when a process
waits for an indefinite period of time to get the resource it requires.
Deadlock is also called circular waiting; starvation is sometimes called livelock.
When deadlock occurs, no process can make progress, while in starvation, processes other than
the victim can still proceed.
Solution to Starvation: Aging
Aging is a technique of gradually increasing the priority of processes that wait in the system for a
long time. For example, if priorities range from 127 (low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15 minutes. Eventually even a process with an initial
priority of 127 would take no more than about 32 hours to age into a priority-0 process.
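A tiny sketch of this aging rule in Python (the 15-minute step and the 127-to-0 range are from the example above):

    STEP_MINUTES = 15   # promote a waiting process one priority level every 15 minutes

    def aged_priority(initial_priority, minutes_waited):
        """Priority after aging; 0 is the highest priority, so aging subtracts."""
        return max(0, initial_priority - minutes_waited // STEP_MINUTES)

    print(aged_priority(127, 0))         # 127: just arrived
    print(aged_priority(127, 60))        # 123: four 15-minute steps have passed
    print(aged_priority(127, 127 * 15))  # 0: after 1905 minutes, roughly 32 hours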
4.3 Critical Section Problem
Critical section is a code segment that can be accessed by only one process at a time.
Critical section contains shared variables which need to be synchronized to maintain consistency of
data variables.
Each process's code can be structured into an entry section, the critical section itself, and an
exit section. The entry section handles the entry into the critical section: it acquires the
resources needed for execution by the process. The exit section handles the exit from the critical
section: it releases the resources and also informs the other processes that the critical section is
free.
The critical section problem needs a solution to synchronize the different processes. The solution
to the critical section problem must satisfy the following conditions:
Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed
to execute in the critical section.
Progress: If a process is not using the critical section, it should not stop any other process from
accessing it. In other words, any process can enter the critical section if it is free.
Bounded Waiting: Each process must have a limited waiting time; it should not wait endlessly to
access the critical section.
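A minimal sketch of this entry/exit structure in Python, using a mutex lock as the entry and exit sections (the shared counter is illustrative):

    import threading

    counter = 0                      # shared variable updated inside the critical section
    lock = threading.Lock()

    def worker():
        global counter
        for _ in range(100_000):
            lock.acquire()           # entry section: wait until the critical section is free
            counter += 1             # critical section: one thread executes this at a time
            lock.release()           # exit section: signal that the critical section is free

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                   # always 400000; without the lock, updates could be lost

The lock guarantees mutual exclusion; progress and bounded waiting depend on the fairness of the underlying lock implementation.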
4.4 Deadlock: System Model, Necessary Conditions Leading to Deadlock, Deadlock Handling:
Prevention, Avoidance and Recovery
Introduction of Deadlock in Operating System
A process in an operating system uses a resource in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track and there
is only one track: neither train can move once they are in front of each other. A similar situation
occurs in operating systems when two or more processes each hold some resources and wait for
resources held by the other(s). For example, Process 1 may be holding Resource 1 and waiting for
Resource 2, which is held by Process 2, while Process 2 is waiting for Resource 1.
Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
Hold and Wait: A process is holding at least one resource and waiting for additional resources
held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
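A minimal Python demonstration of how these conditions combine: two threads each hold one lock and request the other's, forming a circular wait (timeouts are used here so the example terminates instead of hanging forever):

    import threading, time

    r1, r2 = threading.Lock(), threading.Lock()   # two non-shareable resources

    def p1():
        with r1:                                  # hold Resource 1 ...
            time.sleep(0.1)
            if r2.acquire(timeout=1):             # ... while waiting for Resource 2
                r2.release()
            else:
                print("P1 stuck: circular wait, giving up")

    def p2():
        with r2:                                  # hold Resource 2 ...
            time.sleep(0.1)
            if r1.acquire(timeout=1):             # ... while waiting for Resource 1
                r1.release()
            else:
                print("P2 stuck: circular wait, giving up")

    a, b = threading.Thread(target=p1), threading.Thread(target=p2)
    a.start(); b.start(); a.join(); b.join()
    # With plain blocking acquires (no timeout), both threads would wait forever.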
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to never let the system enter a deadlocked
state. Prevention is done by negating one of the above-mentioned necessary conditions for
deadlock.
Avoidance is forward-looking in nature: with this strategy, we must assume that all information
about the resources a process will need is known before the process executes. We use the
Banker's algorithm (which is in turn a gift from Dijkstra) in order to avoid deadlock.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once
occurred.
3) Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
Deadlock Detection And Recovery
Deadlock Detection:
1. If resources have a single instance –
In this case, we can run an algorithm to check for a cycle in the resource allocation graph. The
presence of a cycle in the graph is a sufficient condition for deadlock.
For example, if resources R1 and R2 each have a single instance and the graph contains the cycle
R1 → P1 → R2 → P2 → R1, deadlock is confirmed.
2. If there are multiple instances of resources –
In this case, detection of a cycle is a necessary but not a sufficient condition for deadlock: the
system may or may not be in deadlock, depending on the situation.
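For the single-instance case, deadlock detection reduces to cycle detection. A sketch in Python, with the graph stored as adjacency lists (the edges encode the R1 → P1 → R2 → P2 → R1 example above):

    def has_cycle(graph):
        """Depth-first search for a cycle; graph maps each node to its successors."""
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {node: WHITE for node in graph}

        def dfs(node):
            color[node] = GRAY                    # node is on the current DFS path
            for nxt in graph.get(node, []):
                if color[nxt] == GRAY:            # back edge: cycle found
                    return True
                if color[nxt] == WHITE and dfs(nxt):
                    return True
            color[node] = BLACK                   # fully explored, no cycle through here
            return False

        return any(color[n] == WHITE and dfs(n) for n in graph)

    # An R -> P edge means "R is allocated to P"; a P -> R edge means "P requests R".
    rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
    print(has_cycle(rag))   # True: with single-instance resources, deadlock is confirmed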
Deadlock Recovery:
A traditional operating system such as Windows doesn't deal with deadlock recovery, as it is a
time- and space-consuming process; real-time operating systems use deadlock recovery.
Killing the process –
Either kill all the processes involved in the deadlock, or kill them one by one: after killing each
process, check for deadlock again, and repeat until the system recovers. Killing the processes
one by one breaks the circular wait condition.
Resource Preemption –
Resources are preempted from the processes involved in the deadlock and allocated to other
processes, so that there is a possibility of recovering the system from deadlock. In this case, the
preempted processes may go into starvation.
Deadlock Prevention And Avoidance
Deadlock Characteristics
A deadlock has the following characteristics:
Mutual Exclusion
Hold and Wait
No preemption
Circular wait
Deadlock Prevention
We can prevent Deadlock by eliminating any of the above four conditions.
Eliminate Mutual Exclusion
It is generally not possible to violate mutual exclusion, because some resources, such as a tape
drive or printer, are inherently non-shareable.
Eliminate Hold and wait
1. Allocate all required resources to the process before it starts executing; this eliminates the
hold-and-wait condition but leads to low device utilization. For example, if a process requires a
printer only at a later time and the printer is allocated before execution starts, the printer
remains blocked until the process has completed its execution.
2. A process must release its current set of resources before requesting new ones. This solution
may lead to starvation.
Eliminate No Preemption
Preempt resources from a process when they are required by higher-priority processes.
Eliminate Circular Wait
Each resource is assigned a number, and a process may request resources only in increasing
order of numbering; out-of-order requests are not granted (see the sketch below).
For example, if process P1 has been allocated resource R5, a subsequent request by P1 for R4 or
R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher
than R5 will be.
Deadlock Avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state of the
system doesn't cause deadlock in the system. The state of the system will continuously be checked
for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it
may request in order to complete its execution.
The simplest and most useful approach states that the process should declare the maximum number
of resources of each type it may ever need. The Deadlock avoidance algorithm examines the
resource allocations so that there can never be a circular wait condition.
Safe and Unsafe States
The resource allocation state of a system can be defined by the instances of available and allocated
resources, and the maximum instance of the resources demanded by the processes.
A state of the system is called safe if the system can allocate all the resources requested by all
the processes without entering deadlock.
If the system cannot fulfill the request of all processes then the state of the system is called unsafe.
The key to the deadlock avoidance approach is that a resource request is approved only if the
resulting state is also a safe state.
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests
every resource request made by a process: it checks whether, after granting the request, the
system remains in a safe state. If it does, the request is allowed; otherwise the request is denied.
Inputs to Banker’s Algorithm:
1. Max need of resources by each process.
2. Currently, allocated resources by each process.
3. Max free available resources in the system.
The request will only be granted under the below condition:
1. The request made by the process is less than or equal to the max need of that process.
2. The request made by the process is less than or equal to the freely available resources in the
system.