Operating Systems
(BCS-401)
CPU Scheduling
(UNIT-3)
Course Instructor:
Dr. Sanjeev Kumar
(Associate Professor, IT Department, KIET, Ghaziabad)
Operating Systems Syllabus (Unit-3)
OPERATING SYSTEMS (BCS-401)
(Dr. Sanjeev Kumar, Associate Professor, IT, KIET)
Program Concept
A program is an executable file residing on secondary storage.
It is a set of instructions stored on a secondary storage device that
is intended to carry out a specific job.
To execute a program, it must be brought into primary memory.
Therefore a program is termed a passive entity: it exists in
secondary storage persistently, even if the machine reboots.
A few examples:
On MS Windows: \windows\system32\[Link]
On a Linux system: the ls program is available at /bin/ls
Process Concept
An executing instance of a program is called a process.
Some operating systems use the terms "task" or "job" for a program
being executed.
We call all of them processes.
A system consists of a collection of processes: operating system
processes executing system code and user processes executing user
code.
A process is more than the program code. A program is just a text
section.
A program is a passive entity and a process is an active entity. A
process is a program in execution.
Process Memory Space
A process is a sequential program in execution.
A process defines the fundamental unit of
computation for the computer.
A process, when in primary memory, has four sections:
Text: contains the program code
Data: contains global variables
Heap: used for dynamic memory allocation
Stack: contains temporary data (function parameters, return addresses, and local
variables)
Process States
As a process executes, it changes state.
A process may be in one of the following states:
New. The process is being created.
Running. Instructions are being executed.
Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
Ready. The process is waiting to be assigned to a processor.
Terminated. The process has finished execution.
Process Control Block
A PCB is a data structure used to store
information about a process, which the
CPU uses at run time.
Each process is represented in the
operating system by a process control
block (PCB)—also called a task
control block.
When a process from the job pool is
admitted to the ready queue, a PCB is
allocated to it.
Components of the PCB:
Process state. The state may be new,
ready, running, waiting, halted, and so on.
Process Control Block
CPU registers. The registers vary in
number and type, depending on the
computer architecture. They include
accumulators, index registers, stack
pointers, and general-purpose
registers, plus any condition-code
information. Along with the program
counter, this state information must be
saved when an interrupt occurs, to
allow the process to be continued
correctly afterward.
Process Control Block
Program counter. The counter indicates the
address of the next instruction to be executed
for this process.
CPU-scheduling information. This
information includes a process priority, pointers
to scheduling queues, and any other scheduling
parameters.
Memory-management information. This
information may include such items as the
value of the base and limit registers and the
page tables, or the segment tables, depending
on the memory system used by the operating
system. Memory limits refer to the maximum
amount of memory that a process can utilize, as
defined by the operating system.
Process Control Block
Accounting information. This
information includes the amount of
CPU and real time used, time limits, job
or process numbers, and so on.
I/O status information. This
information includes the list of I/O
devices allocated to the process, a list
of open files, and so on.
“The PCB simply serves as the
repository for any information that
may vary from process to process”
Process identification Information(Process number or id)
Each process is identified by a unique positive integer called the process
ID, or simply PID. The init process (which always has a PID of 1) serves
as the root parent process for all user processes.
The kernel usually limits the maximum process ID to 32767, although this
is configurable. When the PID counter reaches this limit, it wraps around
(recycling), and unused process IDs are then assigned to newly created
processes.
The system call getpid() returns the process ID of the calling process.
Each process is created by a creator process, called the parent process.
The parent's PID (PPID) can be obtained through the getppid() call.
CPU Scheduling : Basic Concepts
Maximum CPU utilization is obtained with multiprogramming.
CPU–I/O burst cycle: process execution consists of a cycle of CPU
execution and I/O wait.
A CPU burst is followed by an I/O burst.
The CPU burst distribution is of main concern.
CPU Scheduler
The CPU scheduler selects from among the processes
in the ready queue and allocates a CPU core to one of them.
The queue may be ordered in various ways.
CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Process Scheduling
The objective of multiprogramming is to have some process running at
all times, to maximize CPU utilization.
To meet this objective, the process scheduler selects an available
process (possibly from a set of several available processes) for
program execution on the CPU.
Scheduling Queues: Job Queue & Ready Queue
As processes enter the system, they are put into a job queue, which
consists of all processes in the system.
The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue.
The list of processes waiting for a particular I/O device is called a
device queue.
The ready queue and various I/O device queues
Queueing-diagram of process scheduling
CPU scheduling
CPU scheduling is the basis of multiprogrammed
operating systems; by switching the CPU among
processes, the operating system makes the computer
more productive.
The objective of multiprogramming is to maximize
CPU utilization.
When one process has to wait, the operating
system takes the CPU away from that process and
gives the CPU to another process.
CPU–I/O Burst Cycle
The success of CPU scheduling depends on an
observed property of processes: process execution
consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states.
Process execution begins with a CPU burst. That is
followed by an I/O burst, which is followed by another
CPU burst, then another I/O burst, and so on.
Eventually, the final CPU burst ends with a system
request to terminate execution
I/O-bound and CPU-bound
An I/O-bound program typically has many short CPU
bursts.
A CPU-bound program might have a few long CPU
bursts.
This distribution can be important in the selection of an
appropriate CPU-scheduling algorithm.
Types of Schedulers
The different schedulers used for process scheduling are:
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long Term Scheduler (or job scheduler)
The job scheduler or long-term scheduler selects processes from the
storage pool in the secondary memory and loads them into the ready
queue in the main memory for execution.
The long-term scheduler controls the degree of multiprogramming.
It must select a careful mixture of I/O bound and CPU bound
processes to yield optimum system throughput. If it selects too
many CPU bound processes then the I/O devices are idle and if it
selects too many I/O bound processes then the processor has
nothing to do.
Short Term Scheduler
The short-term scheduler selects one of the processes from the
ready queue and schedules it for execution.
A scheduling algorithm is used to decide which process will be
scheduled for execution next.
The short-term scheduler executes much more frequently than the
long-term scheduler as a process may execute only for a few
milliseconds.
The choices of the short term scheduler are very important. If it
selects a process with a long burst time, then all the processes after
that will have to wait for a long time in the ready queue. This is
known as starvation and it may happen if a wrong decision is made
by the short-term scheduler.
Medium Term Scheduler
The medium-term scheduler swaps out a process from main
memory. It can again swap in the process later from the point it
stopped executing. This can also be called as suspending and
resuming the process.
This is helpful in reducing the degree of multiprogramming.
Swapping is also useful to improve the mix of I/O bound and CPU
bound processes in the memory.
Long-Term Vs. Short Term Vs. Medium-Term
| Aspect | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler |
|---|---|---|---|
| Also known as | Job scheduler | CPU scheduler | Swapping scheduler |
| Time-sharing systems | Either absent or minimal | Insignificant | An integral element |
| Speed | Slower than the short-term scheduler | Fastest of the three | Medium speed |
| Selection | Selects processes from the job pool and loads them into memory | Selects only processes that are in the ready state | Swaps processes out of memory and back in |
| Control | Offers full control | Offers less control | Reduces the degree of multiprogramming |
CPU Scheduler / short-term scheduler
Whenever the CPU becomes idle, the operating system
must select one of the processes in the ready queue to be
executed.
The selection process is carried out by the short-term
scheduler, or CPU scheduler.
The CPU scheduler selects a process from the processes
in memory that are ready to execute and allocates the
CPU to that process.
Note that…
The ready queue is not necessarily a first-in, first-out
(FIFO) queue.
A ready queue can be implemented as a FIFO queue, a
priority queue, a tree, or simply an unordered linked
list.
Dispatcher
Dispatcher module gives control of
the CPU to the process selected by
the CPU scheduler; this involves:
Switching context
Switching to user mode
Jumping to the proper location in
the user program to restart that
program
Dispatch latency – time it takes
for the dispatcher to stop one
process and start another running
Preemptive and Non-preemptive Scheduling
Preemptive scheduling is a CPU scheduling method in which
the CPU is allocated to a process for a limited time; preemption
may occur when a process switches from the running state to the
ready state or from the waiting state to the ready state.
Under non-preemptive scheduling, once the CPU has been allocated
to a process, the process keeps it until it terminates or switches
to the waiting state.
Scheduling Criteria
CPU utilization: keep the CPU as busy as possible
Throughput: the number of processes that are completed per time unit
Turnaround time: The interval from the time of submission of a process
to the time of completion. It is the sum of the periods spent waiting to get
into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
Waiting time: the sum of the periods spent waiting in the ready queue.
Response time: It is the time it takes to start responding, not the time it
takes to output the response.
“It is desirable to maximize CPU utilization and throughput and to minimize
turnaround time, waiting time, and response time”
Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the
processes in the ready queue is to be allocated the CPU. There are
many different CPU scheduling algorithms.
First-Come, First-Served Scheduling (FCFS)
Shortest-Job-First Scheduling (SJF)
Shortest-Remaining-Time-First (SRTF)
Priority Scheduling
Round-Robin Scheduling
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
Note that…
Gantt chart: A bar chart that illustrates a particular schedule, including
the start and finish times of each of the participating processes.
Completion Time (CT): This is the time when the process
completes its execution.
Arrival Time (AT): This is the time when the process has arrived in
the ready state.
Burst Time (BT): This is the time required by the process for its
execution.
Turnaround Time: TAT = CT – AT
Waiting Time: WT = TAT - BT
FCFS Scheduling : Example-1
What is the average waiting and average turnaround time of
the following processes using FCFS scheduling:
Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

| Process | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---|---|---|---|---|
| P1 | 24 | 24 | 24 | 0 |
| P2 | 3 | 27 | 27 | 24 |
| P3 | 3 | 30 | 30 | 27 |

Avg TAT = 27, Avg WT = 17
FCFS Scheduling : Example-2
What is the average waiting and average turnaround time of the following processes
(arriving as P2, P3, P1) using FCFS scheduling.
Gantt chart: | P2 (0–3) | P3 (3–6) | P1 (6–30) |

| Process | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---|---|---|---|---|
| P1 | 24 | 30 | 30 | 6 |
| P2 | 3 | 3 | 3 | 0 |
| P3 | 3 | 6 | 6 | 3 |

Avg TAT = 13, Avg WT = 3
FCFS Scheduling : Example-3
What is the average waiting and average turnaround time of
the following processes using FCFS scheduling:
FCFS Scheduling : Example-3 (Solution)
What is the average waiting and average turnaround time of
the following processes using FCFS scheduling:
| Process | Arrival Time | Burst Time | CT | TAT | WT |
|---|---|---|---|---|---|
| P1 | 0 | 8 | 8 | 8 | 0 |
| P2 | 1 | 4 | 12 | 11 | 7 |
| P3 | 2 | 9 | 21 | 19 | 10 |
| P4 | 3 | 5 | 26 | 23 | 18 |
FCFS Scheduling : Example-4
What is the average waiting and average turnaround time of
the following processes using FCFS scheduling:
FCFS Scheduling : Example-4 (Solution)
FCFS Scheduling : Advantages
Advantages of FCFS
The simplest form of a CPU scheduling algorithm
Easy to program
First come first served
Disadvantages of FCFS
It is a Non-Preemptive CPU scheduling algorithm, so after the process has been allocated
to the CPU, it will never release the CPU until it finishes executing.
The average waiting time is often high.
There is a convoy effect as all the other processes wait for the one big process to get off
the CPU.
Not an ideal technique for time-sharing systems.
SJF Scheduling
Shortest Job First (SJF) is an algorithm in which the
process having the smallest execution time is chosen for
the next execution.
This scheduling method can be preemptive or non-preemptive.
It significantly reduces the average waiting time for other
processes awaiting execution.
There are basically two types of SJF methods:
Non-Preemptive SJF
Preemptive SJF (SRTF)
SJF Scheduling: Example-5
Gantt chart: | P4 (0–3) | P1 (3–9) | P3 (9–16) | P2 (16–24) |

| Process | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---|---|---|---|---|
| P1 | 6 | 9 | 9 | 3 |
| P2 | 8 | 24 | 24 | 16 |
| P3 | 7 | 16 | 16 | 9 |
| P4 | 3 | 3 | 3 | 0 |

Avg WT = 7, Avg TAT = 13
SJF Scheduling Advantages
SJF is frequently used for long term scheduling.
It reduces the average waiting time over FIFO (First in First Out)
algorithm.
SJF method gives the lowest average waiting time for a specific set of
processes.
Provably optimal with respect to average waiting time, for a set of
processes available simultaneously.
SJF Scheduling Disadvantages
Job completion time must be known earlier, but it is hard to predict.
It is often used in a batch system for long term scheduling.
SJF can’t be implemented for CPU scheduling for the short term. It is
because there is no specific method to predict the length of the upcoming
CPU burst.
It sometimes leads to starvation of long processes.
shortest-remaining-time-first (SRTF)
A preemptive SJF algorithm will preempt the
currently executing process, whereas a
nonpreemptive SJF algorithm will allow the
currently running process to finish its CPU burst.
Preemptive SJF scheduling is called shortest-remaining-time-first
scheduling.
SRTF: Example-6
SRTF: Example-6 (Solution)
SRTF Scheduling Advantages & Disadvantages
Advantages-
SRTF is optimal and guarantees the minimum average waiting time.
It provides a standard for other algorithms since no other algorithm
performs better than it.
Disadvantages-
It cannot be implemented practically, since the burst times of
processes cannot be known in advance.
It leads to starvation for processes with larger burst times.
Priorities cannot be set for the processes.
Processes with larger burst time have poor response time.
Priority Scheduling (Example-7)
Consider the following set of processes, assumed to have
arrived at time 0 in the order P1, P2, · · ·, P5, with the
length of the CPU burst given in milliseconds:
Priority Scheduling (Example-7) Solution
Preemptive Priority Scheduling (Example-7a)
If a new process arrives with a higher priority than the
currently executing process, the running process is
preempted and moved to the ready state.
Priority Scheduling
Advantages:
This provides a good mechanism where the relative
importance of each process may be precisely defined.
Disadvantages:
If high-priority processes use up a lot of CPU time, lower-priority
processes may starve and be postponed
indefinitely. The situation where a process never gets
scheduled to run is called starvation.
Another problem is deciding which process gets which
priority level assigned to it.
Aging in Priority Scheduling
A solution to the problem of indefinite blockage of low-
priority processes is aging. Aging involves gradually
increasing the priority of processes that wait in the system
for a long time.
Solve the previous question using Non-Preemptive priority.
Round-Robin Scheduling
Each process is assigned a fixed time slot in a cyclic way. It is
basically the preemptive version of the First-Come, First-Served
CPU scheduling algorithm.
Round Robin CPU Algorithm generally focuses on Time Sharing
technique.
The period of time for which a process or job is allowed to run in a
pre-emptive method is called time quantum.
Each process in the ready queue is assigned the CPU for that time
quantum. If the process completes its execution within the quantum,
it terminates; otherwise it goes back to the tail of the ready queue
and waits for its next turn.
Round-Robin Scheduling (Problem-8)
Assume quantum = 4.

| Process | Burst Time |
|---|---|
| P1 | 24 |
| P2 | 3 |
| P3 | 3 |
Gantt chart: | P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–30) |

| Process | Burst Time | Completion Time | Turnaround Time | Waiting Time |
|---|---|---|---|---|
| P1 | 24 | 30 | 30 | 6 |
| P2 | 3 | 7 | 7 | 4 |
| P3 | 3 | 10 | 10 | 7 |

Average TAT ≈ 15.67, Average WT ≈ 5.67
Apply Round Robin assuming Quantum=2
Hint: also maintain the ready queue explicitly when solving RR problems.
Round-Robin Scheduling: Point to Note
If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time
in chunks of at most q time units.
Each process must wait no longer than (n - 1) × q time
units until its next time quantum.
For example, with five processes and a time quantum of
20 milliseconds, each process will get up to 20
milliseconds every 100 milliseconds.
Round-Robin Scheduling
Advantages:
Every process gets an equal share of the CPU.
RR is cyclic in nature, so there is no starvation.
Disadvantages:
Setting the quantum too short increases the overhead and
lowers the CPU efficiency, but setting it too long may
cause a poor response to short processes.
The average waiting time under the RR policy is often
long.
If the time quantum is very large, RR degrades to FCFS.
(Problem-9)
In this problem, both priority and arrival time are given. For
SRTF, the smaller burst time is given preference, and ties are
broken by FCFS order. But since priority is also given, in case
of a tie the higher-priority job is preferred over plain FCFS
order.
Solution (Problem-9)
Multilevel Queue Scheduling
In this scheduling, processes are classified into different groups.
A common division is made between foreground (interactive)
processes and background (batch) processes.
A multilevel queue scheduling algorithm partitions the ready queue
into several separate queues.
The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size,
process priority, or process type.
Each queue has its own scheduling algorithm.
In addition, there must be scheduling among the queues, which is
commonly implemented as fixed-priority preemptive scheduling.
Multilevel Queue Scheduling (Example)
Multilevel Feedback Queue Scheduling
In multilevel queue scheduling (without feedback), processes do not
move from one queue to another, since processes do not change
their foreground or background nature.
The multilevel feedback queue scheduling algorithm, in contrast,
allows a process to move between queues.
If a process uses too much CPU time, it will be moved to a
lower-priority queue.
In addition, a process that waits too long in a lower-priority queue
may be moved to a higher-priority queue. This form of aging
prevents starvation.
Multilevel Feedback Queue Scheduling (Example)
Multilevel Feedback Queue Scheduling (Point to note)
A process entering the ready queue is put in queue 0. A
process in queue 0 is given a time quantum of 8
milliseconds. If it does not finish within this time, it is
moved to the tail of queue 1.
Similarly, the process at the head of queue 1 is given a quantum of
16 milliseconds. If it does not complete, it is preempted
and put into queue 2.
Processes in queue 2 are run on an FCFS basis, but only when
queues 0 and 1 are empty.
Threads and Their Management
A thread is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a
register set, and a stack.
It shares with other threads belonging to the same
process its code section, data section, and other
operating-system resources, such as open files and
signals.
A traditional (or heavyweight) process has a single
thread of control. If a process has multiple threads
of control, it can perform more than one task at a
time.
Single-threaded process vs a Multithreaded process
More on Multi-Threaded Systems
Most software applications that run on modern computers are
multithreaded.
An application typically is implemented as a separate process with
several threads of control.
For example, a web browser might have one thread display images
or text while another thread retrieves data from the network.
A word processor may have a thread for displaying graphics,
another thread for responding to keystrokes from the user, and a
third thread for performing spelling and grammar checking in the
background.
Multi-Threaded Systems Vs Process-creation
The process-creation method was in common use before threads became
popular. However, process creation is time consuming and resource
intensive.
If the web-server process is multithreaded, the server will create a
separate thread that listens for client requests. When a request is
made, rather than creating another process, the server creates a new
thread to service the request and resumes listening for additional
requests.
Most operating-system kernels are now multithreaded. Several
threads operate in the kernel, and each thread performs a specific
task, such as managing devices, managing memory, or interrupt
handling.
Thread vs Process
| Aspect | Process | Thread |
|---|---|---|
| Definition | A process is a program in execution. | A thread is a lightweight process. |
| Resources | Each process has its own memory space. | Threads within the same process share the memory space. |
| Creation | Created using system calls such as fork() or CreateProcess(). | Created within a process using calls such as pthread_create() or other thread APIs. |
| Overhead | Higher overhead due to separate memory spaces. | Lower overhead, since threads share memory space and resources. |
| Communication | Requires inter-process communication (IPC). | Threads can communicate directly through shared memory. |
| Isolation | Processes are isolated from each other. | Threads within the same process share resources and memory. |
| Independence | Processes are independent entities. | Threads within the same process are dependent on each other. |
| Parallelism | Processes can run in parallel on multi-core systems. | Threads within the same process can also run in parallel on multi-core systems. |
| Failure impact | If one process fails, it does not affect other processes. | If one thread crashes, it can potentially crash the entire process. |
| Example | A web browser running multiple instances of itself. | A web server handling multiple client requests concurrently using threads. |
Benefits of Multi-Threaded Systems
Responsiveness: Multithreading an interactive application may
allow a program to continue running even if part of it is blocked or
is performing a lengthy operation
Resource sharing: Threads share the memory and the resources of
the process to which they belong by default.
Economy: Allocating memory and resources for process creation is
costly. Because threads share the resources of the process to which
they belong, it is more economical to create and context-switch
threads.
Scalability: The benefits of multithreading can be even greater in a
multiprocessor architecture, where threads may be running in
parallel on different processing cores.
User Level Threads And Kernel Level Threads
Support for threads may be provided either at the user level, for user
threads, or by the kernel, for kernel threads.
User threads are supported above the kernel and are managed without
kernel support, whereas kernel threads are supported and managed
directly by the operating system.
Ultimately, a relationship must exist between user threads and kernel
threads.
Multi-threading Models
Many-to-One Model:
o The many-to-one model maps many user-level threads to one kernel
thread.
Thread management is done in user space. When a thread makes a
blocking system call, the entire process blocks. Because only one
thread can access the kernel at a time, multiple threads are unable
to run in parallel on multiprocessors.
This model is used when thread libraries are implemented entirely in
user space, without kernel support for threads.
Multi-threading Models
One-to-One Model:
There is a one-to-one relationship between user-level threads and
kernel-level threads. This model provides more concurrency than the
many-to-one model: it allows another thread to run when a thread
makes a blocking system call, and it allows multiple threads to
execute in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread
requires creating the corresponding kernel thread. OS/2, Windows NT,
and Windows 2000 used the one-to-one model.
Multi-threading Models
Many to Many Model:
In this model, many user-level threads are multiplexed onto a
smaller or equal number of kernel threads. The number of kernel
threads may be specific to either a particular application or a
particular machine.
Developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
The Solaris operating system supported the many-to-many (two-level)
model in versions older than Solaris 9.
fork()
Fork system call is used for creating a new process, which is called child process,
which runs concurrently with the process that makes the fork() call (parent
process).
After a new child process is created, both processes will execute the next
instruction following the fork() system call.
The child process begins with a copy of the parent's program counter and
CPU registers, and it shares the open files that the parent process is
currently using.
After fork() system call, two separate processes exist: the parent process and the
child process. The fork() system call returns different values in each process:
In the parent process, pid will be the process ID of the child process.
In the child process, pid will be 0.
Fork example with pid
Since both the parent and child
processes print a message, the output
might not be deterministic in terms of
which message appears first, as it
depends on the scheduling behavior
of the operating system
fork()
Please note that the programs below don't compile on Windows, since fork() is POSIX-specific.
fork() : Another Example
fork() : Child Creation
The number of times 'hello' is printed equals the number of processes
created.
Total number of processes = 2^n, where n is the number of fork() system
calls. Here n = 3, so 2^3 = 8. Let us put some label names for the three lines:
fork() : Tree hierarchy
So there are a total of eight processes (seven new child
processes and one original process).
Tree hierarchy:
The main process: P0
Processes created by the 1st fork: P1
Processes created by the 2nd fork: P2, P3
Processes created by the 3rd fork: P4, P5, P6, P7
Main Process (P0): This is the original process. It executes the initial code before any fork
calls. It then forks, creating a child process.
First Fork (in P0): After the first fork, there's one child process created, let's call it P1. At this
point, both P0 (parent) and P1 (child) continue executing the code from the fork call. They are
identical copies at this point, with the same code and execution state.
Second Fork (in both P0 and P1): Now, both P0 and P1 execute the second fork call. Each
of them creates a new child process. So, P0 creates P2, and P1 creates P3. Now, we have
four processes: P0, P1, P2, and P3.
Third Fork (in P0, P1, P2, and P3): At this stage, all four processes execute the third fork call.
Each creates one new child, resulting in four more processes: P0 creates P4, P1 creates P5,
P2 creates P6, and P3 creates P7.
What does the fork() system call do in a Unix-like operating system?
a) Terminates the current process
b) Creates a new process
c) Suspends the current process
d) Resets the system clock
What is the return value of the fork() system call in the child process?
a) -1
b) 0
c) Parent process ID
d) Child process ID
Which header file is required to use the fork() system call in C
programming?
a) <stdlib.h>
b) <unistd.h>
c) <sys/types.h>
Multiple-Processor Scheduling
In multiple-processor scheduling, multiple CPUs are
available, so load sharing becomes possible.
When the processors are identical (homogeneous) in terms
of their functionality, any available processor can be used
to run any process in the queue.
We already know from Unit-1 that a multiprocessor
system may be a symmetric multiprocessing system or an
asymmetric multiprocessing system.
Multiple-Processor Scheduling (Asymmetric Multiprocessing & Symmetric Multiprocessing)
One approach is to have all scheduling decisions and I/O
processing handled by a single processor, called
the master server, while the other processors execute only user
code. This is simple and reduces the need for data sharing. This
scheme is called Asymmetric Multiprocessing.
A second approach uses Symmetric Multiprocessing where each
processor is self scheduling. All processes may be in a common
ready queue or each processor may have its own private queue for
ready processes. The scheduling proceeds further by having the
scheduler for each processor examine the ready queue and select a
process to execute.
Multiple-Processor Scheduling (Processor Affinity)
When a process runs on a specific processor, there are certain effects on
the cache memory. The data most recently accessed by the process
populate the cache for that processor, and as a result successive memory
accesses by the process are often satisfied from cache memory.
If the process migrates to another processor, the contents of the cache
memory must be invalidated for the first processor and the cache for the
second processor must be repopulated.
Because of the high cost of invalidating and repopulating caches, most
of the SMP(symmetric multiprocessing) systems try to avoid migration of
processes from one processor to another and try to keep a process running
on the same processor. This is known as PROCESSOR AFFINITY.
Multiple-Processor Scheduling (Processor Affinity)
There are two types of processor affinity:
Soft Affinity – When an operating system has a policy of
attempting to keep a process running on the same
processor but not guaranteeing it will do so, this situation
is called soft affinity.
Hard Affinity – Hard Affinity allows a process to specify
a subset of processors on which it may run.
DEADLOCK
Deadlock
Other scenarios:
Telephone conversation
Chess game
Deadlock
In a multiprogramming environment, several processes
may compete for a finite number of resources.
A process requests resources; if the resources are not
available at that time, the process enters a waiting state.
Sometimes, a waiting process is never again able to
change state, because the resources it has requested are
held by other waiting processes. This situation is called a
deadlock.
System Model
A system consists of a finite
number of resources to be
distributed among a number of
competing processes.
CPU cycles, files, and I/O devices
(such as printers and DVD drives)
are examples of resource types.
The resources may be partitioned
into several types (or classes), each
consisting of some number of
identical instances.
System Model (Cont.)
If a process requests an instance of a resource type, the
allocation of any instance of the type should satisfy the
request.
A process must request a resource before using it and
must release the resource after using it.
A process may request as many resources as it requires to
carry out its designated task. Obviously, the number of
resources requested may not exceed the total number of
resources available in the system. In other words, a
process cannot request three printers if the system has
only two.
System Model (Cont.)
Under the normal mode of operation, a process may utilize a
resource in only the following sequence:
1. Request. The process requests the resource. If the request
cannot be granted immediately (for example, if the resource is
being used by another process), then the requesting process
must wait until it can acquire the resource.
2. Use. The process can operate on the resource (for example,
if the resource is a printer, the process can print on the printer).
3. Release. The process releases the resource.
Deadlock Characterization : Necessary Conditions
A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
Mutual exclusion: At least one resource must be held in a
nonsharable mode.
Hold and wait: A process must be holding at least one resource
and waiting to acquire additional resources
No preemption: Resources cannot be preempted
Circular wait: A set {P0, P1, ..., Pn} of waiting processes must
exist such that P0 is waiting for a resource held by P1, P1 is
waiting for a resource held by P2, ..., Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.
Deadlock Characterization : Resource-Allocation Graph (RAG)
Deadlocks can be described more
precisely in terms of a directed
graph called a system resource-
allocation graph.
A directed edge Pi → Rj is called a
request edge; a directed edge Rj →
Pi is called an assignment edge.
Note that a request edge points to
only the rectangle Rj, whereas an
assignment edge must also
designate one of the dots in the
rectangle.
Deadlock Detection (by RAG)
If the graph contains no cycles, then no process in the system is
deadlocked. This holds whether the resources are single-instance
or multiple-instance resources.
Deadlock Detection (by RAG)
If each resource type has exactly one instance, then a cycle implies
that a deadlock has occurred. Each process involved in the cycle is
deadlocked. In this case, a cycle in the graph is both a necessary
and a sufficient condition for the existence of deadlock.
Deadlock Detection (by RAG)
If each resource type has several instances, then a cycle does not
necessarily imply that a deadlock has occurred. In this case, a cycle
in the graph is a necessary but not a sufficient condition for the
existence of deadlock.
No Deadlock
Deadlock
RAG Summary
In summary,
“If a resource-allocation graph does not have a
cycle, then the system is not in a deadlocked
state. If there is a cycle, then the system may or
may not be in a deadlocked state.”
Methods for Handling Deadlocks
Generally speaking, we can deal with the deadlock
problem in one of three ways:
We can use a protocol to prevent or avoid deadlocks,
ensuring that the system will never enter a deadlocked
state.
We can allow the system to enter a deadlocked state,
detect it, and recover.
We can ignore the problem altogether and pretend that
deadlocks never occur in the system.
Deadlock Prevention
For a deadlock to occur, following four necessary
conditions must hold simultaneously;
Mutual Exclusion
Hold and Wait
No Preemption
Circular Wait
By ensuring that at least one of these conditions cannot
hold, we can prevent the occurrence of a deadlock.
Deadlock Prevention (cont.)
1. Mutual Exclusion:
Make sure that resources are sharable wherever possible. Sharable
resources (such as read-only files) do not require mutually exclusive
access and thus cannot be involved in a deadlock. Note, however, that
some resources are intrinsically nonsharable, so this condition cannot
always be denied.
2. Hold and Wait
To ensure that the hold-and-wait condition never occurs in
the system, we must guarantee that, whenever a process
requests a resource, it does not hold any other resources.
Deadlock Prevention (cont.)
3. No Preemption
If a process is holding some resources and requests another
resource that cannot be immediately allocated to it (that is,
the process must wait), then all resources the process is
currently holding are preempted. In other words, these
resources are implicitly released.
4. Circular Wait
One way to ensure that this condition never holds is to
impose a total ordering of all resource types and to require
that each process requests resources in an increasing order of
enumeration.
Deadlock Avoidance
Possible side effects of deadlock prevention are low
device utilization and reduced system throughput. An
alternative method for avoiding deadlocks is to require
additional information about how resources are to be
requested.
A deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that a circular-wait
condition can never exist.
Safe State
A state is safe state if the system can allocate resources to each
process (up to its maximum) in some order and still avoid a
deadlock.
More formally, a system is in a safe state only if there exists a safe
sequence.
A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the
current allocation state if, for each Pi, the resource requests that Pi
can still make can be satisfied by the currently available resources
plus the resources held by all Pj, with j < i.
If no such sequence exists, then the system state is said to be
unsafe.
Note that…
A safe state is not a
deadlocked state.
Conversely, a deadlocked
state is an unsafe state.
Not all unsafe states are
deadlocks.
An unsafe state may lead
to a deadlock.
Example
We consider a system with twelve magnetic tape drives and three
processes: P0, P1, and P2. Process P0 requires ten tape drives,
process P1 may need as many as four tape drives, and process P2
may need up to nine tape drives. Suppose that, at time t0, process
P0 is holding five tape drives, process P1 is holding two tape
drives, and process P2 is holding two tape drives. (Thus, there are
three free tape drives.)
Process | Maximum Needs | Allocated
P0 | 10 | 5
P1 | 4 | 2
P2 | 9 | 2
Example (cont.)
Is there any safe sequence? (There are 3 free tape drives.)
Yes. At time t0, the system is in a safe state: the
sequence <P1, P0, P2> satisfies the safety condition.
Example (cont.)
A system can go from a safe state to an unsafe state. Suppose that,
at time t1, process P2 requests and is allocated one more tape drive.
The system is no longer in a safe state.
At this point, only process P1 can be allocated all its tape drives.
When it returns them, the system will have only four available tape
drives.
Neither P0 nor P2 can then be allocated all its required tape drives, so a deadlock may result.
Example (cont.) : Question
What went wrong? Why did the state become unsafe?
Our mistake was in granting the request from process P2
for one more tape drive. If we had made P2 wait until
either of the other processes had finished and released its
resources, then we could have avoided the deadlock.
Simple idea of avoidance algorithms
The idea is simply to ensure that the system will always
remain in a safe state.
Initially, the system is in a safe state.
Whenever a process requests a resource that is currently
available, the system must decide whether the resource
can be allocated immediately or whether the process must
wait.
The request is granted only if the allocation leaves the
system in a safe state.
Deadlock Avoidance algorithms
1. Resource-Allocation-Graph Algorithm:
If we have a resource-allocation system with only one
instance of each resource type, we can use a variant of
the resource-allocation graph
2. Banker's Algorithm:
It works with multiple instance resources. The
resource-allocation-graph algorithm is not applicable to a
resource allocation system with multiple instances of each
resource type.
Resource-Allocation-Graph Algorithm
In addition to the request and
assignment edges already described,
there is a new type of edge, called a
claim edge.
A claim edge Pi → Rj indicates that
process Pi may request resource Rj at
some time in the future.
Before process Pi starts executing, all
its claim edges must already appear in
the resource-allocation graph.
Resource-Allocation-Graph Algorithm
Now suppose that process Pi requests
resource Rj.
The request can be granted only if
converting the request edge Pi → Rj to
an assignment edge Rj → Pi does not
result in the formation of a cycle in the
resource-allocation graph.
Suppose that P2 requests R2. Although
R2 is currently free, we cannot allocate
it to P2, since this action will create a
cycle (unsafe state) in the graph.
Banker’s Algorithm
We need the following data structures, where n is the number of processes in the
system and m is the number of resource types:
Available. A vector of length m indicates the number of available resources of
each type. If Available[j] equals k, then k instances of resource type Rj are
available.
Max. An n × m matrix defines the maximum demand of each process. If
Max[i][j] equals k, then process Pi may request at most k instances of resource
type Rj.
Allocation. An n × m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i][j] equals k, then process Pi is
currently allocated k instances of resource type Rj.
Need. An n × m matrix indicates the remaining resource need of each process. If
Need[i][j] equals k, then process Pi may need k more instances of resource type
Rj to complete its task. Note that Need[i][j] equals Max[i][j] - Allocation[i][j].
Banker’s Algorithm: Data Structures Example
Banker’s Algorithm: Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state.
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work =
Available and Finish[i] = false for i = 0, 1, ..., n - 1.
2. Find an index i such that both
a. Finish[i] == false
b. Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state; otherwise, it is unsafe.
Banker’s Algorithm: Resource-Request Algorithm
The algorithm for determining whether requests can be safely granted.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded
its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process Pi by modifying the state
as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is allocated
its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the old
resource-allocation state is restored.
Banker’s Algorithm: An Illustrative Example
Resource type A has ten instances, resource type B has five
instances, and resource type C has seven instances.
Suppose that, at time T0, the following snapshot of the system has
been taken:
Banker’s Algorithm: An Illustrative Example
Is the system safe at time T0?
The system is currently in a safe state. Indeed, the sequence <P1,
P3, P4, P2, P0> satisfies the safety criteria.
Banker’s Algorithm: Run Resource Request Algorithm
Suppose now that process P1 requests one additional instance of resource
type A and two instances of resource type C, so Request1 = (1,0,2).
We first check that Request1 ≤ Need1, and then that Request1 ≤ Available, that is, (1,0,2) ≤ (3,3,2), which is
true.
Time T0 Time T1 (Pretend)
Banker’s Algorithm: Is system safe?
We execute our safety algorithm and find that the sequence <P1,
P3, P4, P0, P2> satisfies the safety requirement. Hence, we can
immediately grant the request of process P1.
The pretended allocation then becomes permanent, giving the snapshot at time T1.
Time T1
Banker’s Algorithm: Answer the questions…
Can request for (3,3,0) by P4 be granted?
Can request for (0,2,0) by P0 be granted?
Time T1
Practice Question-1
Short solution
Practice Question-2
Brief Solution
Practice Question-3
0,3,4,1,2
Deadlock Detection & Recovery
If a system does not employ either a deadlock-prevention or a
deadlock avoidance algorithm, then a deadlock situation may occur.
In this environment, the system may provide:
An algorithm that examines the state of the system to determine
whether a deadlock has occurred
An algorithm to recover from the deadlock
Note that a detection-and-recovery scheme requires overhead that
includes not only the run-time costs of maintaining the necessary
information and executing the detection algorithm but also the
potential losses inherent in recovering from a deadlock.
Deadlock Detection: Single Instance of Each Resource Type
If all resources have only a single instance, then we can
define a deadlock detection algorithm that uses a variant
of the resource-allocation graph, called a wait-for graph.
As before, a deadlock exists in the system if and only if
the wait-for graph contains a cycle.
Note that the wait-for graph scheme is not applicable to a
resource-allocation system with multiple instances of each resource
type.
Deadlock Detection: Single Instance of Each Resource Type
Implementation challenges
The detection algorithm must be run on the wait-for graph to look for a
cycle.
How often should the algorithm be invoked?
Invoking it every time a new resource request is made detects deadlocks
immediately, but leads to increased overhead and computation time.
An alternative is to invoke the algorithm at defined intervals (for
example, when CPU utilization drops below 40%, or once per fixed time
period). With this approach, however, the system may remain deadlocked
until the next invocation, and by then the graph may contain several
cycles, making it difficult to tell which process caused the deadlock.
Deadlock Detection: Several Instances of a Resource Type (Variant of Banker's)
Deadlock Detection: Several Instances of a Resource Type
We will find that the sequence <P0, P2, P3, P1> results in
Finish[i] == true for all i, so the system is not deadlocked.
The system will be deadlocked, however, if P2 requests one additional
instance of resource type C.
Recovery from Deadlock
Process Termination
1. Abort all deadlocked processes (costly?).
2. Abort one process at a time until the deadlock cycle is eliminated,
re-running deadlock detection after each abort (overhead?).
3. Abort the process whose termination incurs the minimum cost.
Many factors may affect which process is chosen to abort, including:
What the priority of the process is
How long the process has computed and how much longer the process
will compute before completing its designated task
How many and what types of resources the process has used
How many more resources the process needs in order to complete
How many processes will need to be terminated
Whether the process is interactive or batch
Recovery from Deadlock
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some
resources from processes and give these resources to other processes until the
deadlock cycle is broken.
Three issues need to be addressed:
Selecting a victim. Which resources and which processes are to be preempted? We must
determine the order of preemption so as to minimize cost. Cost factors may include the
number of resources a deadlocked process is holding and the time it has so far consumed.
Rollback. If we preempt a resource from a process, what should be done with that process?
Clearly, it cannot continue with its normal execution; it is missing some needed resource.
We must roll back the process to some safe state and restart it from that state.
Starvation: How do we ensure that starvation will not occur? That is, how can we guarantee
that resources will not always be preempted from the same process?
Identify whether a deadlock exists in the given RAG
Yes
GATE Q1
Consider three processes (process id 0, 1, 2 respectively)
with compute time bursts 2, 4 and 8 time units. All processes
arrive at time zero. Consider the longest remaining time first
(LRTF) scheduling algorithm. In LRTF ties are broken by
giving priority to the process with the lowest process id. The
average turnaround time is:
Solution
GATE Q2
Consider three processes, all arriving at time zero, with total
execution time of 10, 20 and 30 units, respectively. Each process
spends the first 20% of execution time doing I/O, the next 70%
of time doing computation, and the last 10% of time doing I/O
again. The operating system uses a shortest remaining compute
time first scheduling algorithm and schedules a new process either
when the running process gets blocked on I/O or when the
running process finishes its compute burst. Assume that all I/O
operations can be overlapped as much as possible. For what
percentage of time does the CPU remain idle?
Solution
GATE Q3
Consider three CPU-intensive processes, which require 10,
20 and 30 time units and arrive at times 0, 2 and 6,
respectively. How many context switches are needed if the
operating system implements a shortest remaining time first
scheduling algorithm? Do not count the context switches at
time zero and at the end.
GATE-4
GATE-5
Process P0 is allocated processor at 0 ms as there is no other
process in ready queue. P0 is preempted after 1 ms as P1
arrives at 1 ms and burst time for P1 is less than remaining
time of P0. P1 runs for 4ms. P2 arrived at 2 ms but P1
continued as burst time of P2 is longer than P1. After P1
completes, P0 is scheduled again as the remaining time for
P0 is less than the burst time of P2.
P0 waits for 4 ms, P1 waits for 0 ms and P2 waits for 11 ms.
So average waiting time is (0+4+11)/3 = 5.
GATE-6
Solution 6
GATE-7
Solution 7
GATE-8 (Unit-2)
GATE-10