Process and Thread Management in OS

The document discusses process and thread management in operating systems, defining a process as an active program with various attributes such as memory and CPU state. It covers process states, types of processes, the Process Control Block (PCB), context switching, and the differences between processes and threads. Additionally, it outlines process scheduling types, including short-term, long-term, and medium-term scheduling strategies.


OPERATING SYSTEM

Unit 2

PROCESS AND THREAD MANAGEMENT

Process

➢ A process is a program in execution. A process is not the same as the program code, but a lot more than that. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. Attributes held by a process include the hardware state, memory, CPU state, etc.
➢ Process management for a single-tasking or batch-processing system is easy, as only one process is active at a time. With multiple processes active, process management becomes complex, as the CPU must be shared efficiently among them.
➢ Multiple active processes may share resources like memory and may communicate with each other.

Process

➢ This further complicates matters, as the operating system has to perform process synchronization.
➢ The advantages of multiprogramming are system responsiveness and better CPU utilization.
➢ We can run multiple processes in an interleaved manner on a single CPU.
➢ For example, when the current process becomes busy with I/O, we assign the CPU to some other process.

Process

➢ Process memory is divided into four sections for efficient working:

1. The text section is made up of the compiled program code, read in from non-volatile storage when the program is launched.

2. The data section is made up of the global and static variables, allocated and initialized prior to executing main.

3. The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.

4. The stack is used for local variables. Space on the stack is reserved for local variables when they are declared.


Process States and Transitions Diagram

A process may live in any of the following states during its life cycle:

Process States and Transitions Diagram

1. Process Creation (New): Process creation in an operating system (OS) is the act of generating a new process. This new process is an instance of a program that can execute independently.

2. Ready/Scheduling: Once a process is ready to run, it enters the "ready queue." The scheduler's job is to pick a process from this queue and start its execution.

3. Execution/Running: Execution means the CPU starts working on the process.

4. Waiting: A process moves to a waiting queue if it needs to perform an I/O operation; if a higher-priority process needs the CPU, the running process is preempted and returns to the ready state.

5. Killing/Termination: After the process finishes its tasks, the operating system ends it and removes its Process Control Block (PCB).

Types of Process

➢ Processes are classified based on their functionality. The different types of processes are:

1. CPU-Bound Processes
2. I/O-Bound Processes
3. Independent Processes
4. Cooperating Processes
5. Zombie Process
6. Orphan Process
7. Daemon Process

Types of Process

1. CPU-Bound Processes: A CPU-bound process requires more CPU time, i.e., it spends more time in the running state.
2. I/O-Bound Processes: An I/O-bound process requires more I/O time and less CPU time; it spends more time in the waiting state.
3. Independent Processes: Independent processes do not affect, and cannot be affected by, other processes running within the operating system, nor do they share data with any other processes or systems.
4. Cooperating Processes: Cooperating processes can be affected by other processes and, in turn, affect other processes within the operating system.

Types of Process

5. Zombie Process:
➢ A zombie process is a process that has completed its execution but still remains in the process table because its parent process has not yet read its exit status.
➢ It is called a "zombie" because it is no longer active or running, but it still exists as a placeholder in the system.
➢ The entry of the child process remains in the process table until the parent process retrieves its exit status.
➢ During this time, the child process is referred to as a zombie process.
➢ This happens because the operating system keeps the process-table entry to allow the parent to gather information about the terminated child.
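
On Unix-like systems this can be observed directly. The sketch below (a minimal illustration, not from the slides; the exit code 42 is arbitrary) forks a child that exits immediately; until the parent calls waitpid(), the child sits in the process table as a zombie:

```python
import os
import time

def reap_child():
    """Fork a child, let it linger as a zombie briefly, then reap it."""
    pid = os.fork()
    if pid == 0:
        os._exit(42)               # child terminates at once
    time.sleep(0.2)                # child is now a zombie (<defunct> in ps)
    _, status = os.waitpid(pid, 0) # parent reads exit status; table entry removed
    return os.WEXITSTATUS(status)
```

Running `ps` during the sleep would show the child marked `<defunct>`; after waitpid() returns, the entry is gone.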

Types of Process

6. Orphan Process:
➢ An orphan process is a child process that is still executing but whose parent has finished its execution and terminated without waiting for the child to finish.
➢ Orphan processes are still active and continue to run normally, but they no longer have their original parent process to monitor or control them.

Types of Process

7. Daemon Process:
➢ A daemon process is a background process that runs independently of any user control and performs specific tasks for the system.
➢ Daemons are usually started when the system starts, and they run until the system stops.
➢ A daemon process typically performs system services and is available at all times to more than one task or user.
➢ Daemon processes are usually started by the root user or a root shell and can typically be stopped only by the root user.

Process Control Block

➢ The Process Control Block (PCB) is also known as a Task Control Block. It represents a process in the operating system.
➢ A process control block (PCB) is a data structure used by the operating system to store all information about a process. It is also called the process descriptor.
➢ When a process is created, the operating system creates a PCB for it.
➢ When context-switching between two processes, the following steps occur:
➢ The state of the current process must be saved for rescheduling.
➢ The process state, including registers, credentials, and operating-system-specific information, is stored in the PCB.
➢ The operating system suspends the execution of the current process and selects a process from the ready queue, using its PCB.
➢ The OS loads the selected PCB's program counter and continues execution in the selected process.

Process Control Block

With respect to a process, the PCB stores the following information:

Process Control Block

Process State: This specifies the process state, i.e., new, ready, running, waiting, or terminated.

Process ID and Parent Process ID: These identify the particular process and its parent.

Program Counter: This contains the address of the next instruction to be executed in the process.

Registers: This specifies the registers used by the process. They may include accumulators, index registers, stack pointers, general-purpose registers, etc.

List of Open Files: These are the different files that are associated with the process.
Process Control Block

CPU Scheduling Information: The process priority, pointers to scheduling queues, etc. make up the CPU scheduling information contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information: This includes the page tables or segment tables, depending on the memory system used. It also contains the values of the base registers, limit registers, etc.

Accounting Information: The time limits, account numbers, amount of CPU used, process numbers, etc. are all part of the PCB's accounting information.

I/O Status Information: This includes the list of I/O devices used by the process, the list of open files, etc.
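
The fields above can be collected into a single record. The sketch below is a simplified, hypothetical PCB (the field names are illustrative, not an actual operating-system structure):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block holding per-process state."""
    pid: int                                        # process ID
    ppid: int                                       # parent process ID
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # CPU scheduling information
    page_table: dict = field(default_factory=dict)  # memory-management information
    cpu_time_used: float = 0.0                      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=101, ppid=1)
pcb.state = "ready"     # state transitions are recorded in the PCB
```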


Context Switching of Process

Context Switching of Process:
The process of saving the context of one process and loading the context of another process is known as context switching. In simple terms, it is like unloading one process from the running state and loading another from the ready state.

When does context switching happen?
1. When a high-priority process arrives in the ready state (i.e., one with higher priority than the running process).
2. When an interrupt occurs.
3. On a user-mode/kernel-mode switch.
4. When preemptive CPU scheduling is used.
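
The save/restore sequence can be simulated in a few lines. This is a toy model (the PCB class and the cpu dictionary are invented for illustration; a real context switch happens in kernel code at the register level):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, current, nxt):
    """Save the running process's context into its PCB, then load the next one's."""
    current.program_counter = cpu["pc"]    # 1. save context of current process
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    cpu["pc"] = nxt.program_counter        # 2. load context of next process
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "running"

p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=500, registers={"ax": 9})
cpu = {"pc": 120, "regs": {"ax": 3}}
context_switch(cpu, p1, p2)    # p1 -> ready (context saved), p2 -> running
```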

THREAD IN OS

➢ A thread is an execution unit which consists of its own program counter, a stack, and a set of registers.
➢ Threads are also known as lightweight processes. Threads are a popular way to improve application performance through parallelism.
➢ The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
➢ As each thread has its own independent execution context (program counter, stack, registers), multiple tasks within a process can be executed in parallel by increasing the number of threads.
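
A minimal sketch of several threads running within one process and sharing its memory (the counter and worker function are illustrative; the lock synchronizes access to the shared data):

```python
import threading

counter = 0                      # shared by all threads of the process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # protect the shared counter
            counter += 1

# Four threads, each with its own stack, all updating shared memory.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # 4 threads x 1000 increments each
```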

THREAD IN OS

Single vs. Multithreaded Process

THREAD IN OS

There are two types of threads:

1. User Threads

2. Kernel Threads

1. User threads are implemented above the kernel, without kernel support. These are the threads that application programmers use in their programs.

2. Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.

THREAD IN OS

Advantages of Threads:

Resource sharing: Threads share the resources of their process, allowing better utilization of resources.

Economy: Creating and managing threads is cheaper than creating and managing processes.

Scalability: One thread runs on one CPU. In multithreaded processes, threads can be distributed over a series of processors to scale.

Context switching: Switching between threads is smooth and cheap. Context switching refers to the procedure followed by the CPU to change from one task to another.

Process Vs Thread

Parameter         | Process                                    | Thread
Definition        | A process is a program in execution.       | A thread is a segment of a process.
Lightweight       | The process is not lightweight.            | Threads are lightweight.
Termination time  | A process takes more time to terminate.    | A thread takes less time to terminate.
Creation time     | It takes more time for creation.           | It takes less time for creation.
Communication     | Communication between processes needs more time compared to threads. | Communication between threads requires less time compared to processes.
Context switching | It takes more time for context switching.  | It takes less time for context switching.
Resources         | Processes consume more resources.          | Threads consume fewer resources.
Process Vs Thread

Parameter       | Process                                               | Thread
Treatment by OS | Different processes are treated separately by the OS. | All user-level peer threads are treated as a single task by the OS.
Memory          | The process is mostly isolated.                       | Threads share memory.
Sharing         | It does not share data.                               | Threads share data with each other.

Process Scheduling

➢ Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems.
➢ An operating system uses two types of process scheduling: preemptive and non-preemptive.

1. Preemptive scheduling: In a preemptive scheduling policy, a low-priority process has to suspend its execution if a high-priority process is waiting in the same queue for execution.
2. Non-preemptive scheduling: In a non-preemptive scheduling policy, processes are executed on a first-come, first-served basis, which means the next process is executed only when the currently running process finishes its execution.
Preemptive Vs Non-Preemptive Scheduling

Parameter     | Preemptive Scheduling | Non-Preemptive Scheduling
Basic         | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
Interrupt     | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up.
Starvation    | If a high-priority process frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, a later-arriving process with a shorter burst time may starve.
Overhead      | It has the overhead of scheduling the processes. | It does not have this overhead.
Flexibility   | Flexible. | Rigid.
Cost          | Cost associated. | No cost associated.
Response time | Preemptive scheduling response time is lower. | Non-preemptive scheduling response time is higher.
Preemptive Vs Non-Preemptive Scheduling

Parameter        | Preemptive Scheduling | Non-Preemptive Scheduling
Process control  | The OS has greater control over the scheduling of processes. | The OS has less control over the scheduling of processes.
Overhead         | Higher overhead due to frequent context switching. | Lower overhead since context switching is less frequent.
Concurrency risk | More, as a process might be preempted while it was accessing a shared resource. | Less, as a process is never preempted.
Examples         | Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First. | Examples of non-preemptive scheduling are First Come First Serve and Shortest Job First.

Process Scheduling

➢ Types of process schedulers:

1. Short-Term Scheduler
2. Long-Term Scheduler
3. Medium-Term Scheduler


Process Scheduling

1. Short-Term Scheduler
➢ The operating system's short-term scheduler is commonly referred to as the CPU scheduler.
➢ It controls how the central processing unit (CPU) is allotted to processes.
➢ The short-term scheduler's major objective is to make sure that the CPU is constantly utilized effectively and efficiently.
➢ The short-term scheduler operates by continuously keeping track of the status of all the system's processes.
➢ The scheduler chooses a process from the ready queue when the CPU is free and allocates the CPU to it.
➢ The process then continues to run until it either completes its work or runs into an I/O activity that blocks it.
Process Scheduling

2. Long-Term Scheduler
➢ It selects the processes that are to be placed in the ready queue.
➢ The long-term scheduler basically decides the priority in which processes must be placed in main memory.
➢ A long-term scheduler, also known as a job scheduler, is an operating system component that determines which processes should be admitted to the system and when.
➢ Processes selected by the long-term scheduler are placed in the ready state, where they wait to be picked for execution by the CPU. Because these admission decisions are made relatively infrequently, it is known as the long-term scheduler.

Process Scheduling

2. Long-Term Scheduler
➢ It is used in batch processing systems and operates at a high level.
➢ The long-term scheduler is in charge of admitting processes based on their needs and priorities for resources such as processor time and memory.
➢ It also determines the order in which processes are admitted and manages the execution of processes that may take a long time to complete, such as batch jobs or background tasks.
➢ Because it operates at a higher level and does not need to make scheduling decisions in real time, it runs far less frequently than the short-term scheduler.

Process Scheduling

3. Medium-Term Scheduler
➢ It places blocked and suspended processes in the secondary memory of a computer system.
➢ The task of moving a process from main memory to secondary memory is called swapping out.
➢ The task of moving a swapped-out process back from secondary memory to main memory is known as swapping in.
➢ The swapping of processes is performed to ensure the best utilization of main memory.
➢ The medium-term scheduler, also referred to as a mid-term scheduler, manages this suspension and resumption of processes.

Process Scheduling

3. Medium-Term Scheduler
➢ Based on a set of predetermined criteria and priorities, this kind of scheduler decides which processes should be resumed next.
➢ Typically, processes that are blocked or waiting are managed by the medium-term scheduler.
➢ These processes are not running right now; they are awaiting the occurrence of an event in order to start running.
➢ The medium-term scheduler decides which of these blocked processes should be unblocked and allowed to continue running.

Process Scheduling

Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
Speed is lesser than the short-term scheduler. | Speed is fastest among the three. | Speed is in between the short- and long-term schedulers.
It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce a process into memory so that its execution can be continued.
Various Operations on Process

1. Process Creation:
➢ This is the initial step of process execution activity.
➢ Process creation means the construction of a new process for execution.
➢ This might be performed by the system, a user, or an old process itself. There are several events that lead to process creation.
➢ Some such events are the following:

1. When we start the computer, the system creates several background processes.
2. A user may request the creation of a new process.
3. A process can itself create a new process while executing.
4. A batch system takes on the initiation of a batch job.

Various Operations on Process

2. Scheduling/Dispatching:
➢ The event or activity in which the state of the process is changed from ready to running.
➢ It means the operating system puts the process from the ready state into the running state.
➢ Dispatching is done by the operating system when the resources are free or the process has higher priority than the ongoing process.
➢ There are various other cases in which the process in the running state is preempted and a process in the ready state is dispatched by the operating system.
Various Operations on Process

3. Blocking:
➢ When a process invokes an input-output system call, the call blocks the process and the operating system puts it in blocked mode.
➢ Blocked mode is basically a mode where the process waits for input-output.
➢ Hence, on the demand of the process itself, the operating system blocks the process and dispatches another process to the processor.
➢ Hence, in the process blocking operation, the operating system puts the process in the 'waiting' state.

Various Operations on Process

4. Preemption:
➢ When a timeout occurs, meaning the process has not finished in the allotted time interval and the next process is ready to execute, the operating system preempts the process.
➢ This operation is only valid where CPU scheduling supports preemption.
➢ Basically this happens in priority scheduling, where on the arrival of a high-priority process the ongoing process is preempted.
➢ Hence, in the process preemption operation, the operating system puts the process in the 'ready' state.

Various Operations on Process

5. Termination:
➢ Process termination is the activity of ending the process. In other words, process termination is the release of the computer resources taken by the process for its execution.
➢ There may be several events that lead to process termination. Some of them are:

1. The process completes its execution fully and indicates to the OS that it has finished.

2. The operating system itself terminates the process due to service errors.

3. There may be a problem in hardware that terminates the process.

4. One process can be terminated by another process.

Inter Process Communication

➢ Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions.
➢ The communication between these processes can be seen as a method of cooperation between them.
➢ Inter-process communication (IPC) allows different processes running on a computer to share information with each other.
➢ IPC allows processes to communicate by using different techniques like sharing memory, sending messages, or using files.
➢ It ensures that processes can work together without interfering with each other.
➢ The two fundamental models of inter-process communication are:
1. Shared Memory
2. Message Passing

Inter Process Communication

1. Shared Memory:
➢ Communication between processes using shared memory requires the processes to share some variable, and it completely depends on how the programmer implements it.
➢ Suppose process 1 and process 2 are executing simultaneously and they share some resources or use some information from another process.
➢ Process 1 generates information about certain computations or resources being used and keeps it as a record in shared memory.
➢ When process 2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process 1, and acts accordingly.
➢ Processes can use shared memory for extracting information as a record from another process as well as for delivering any specific information to other processes.
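
A minimal Unix sketch of this pattern (an anonymous shared mapping plus fork; the 4-byte integer layout and the value written are illustrative):

```python
import mmap
import os
import struct

def shared_memory_demo():
    """One process reads a record that another process wrote into shared memory."""
    buf = mmap.mmap(-1, 4)                 # anonymous, shared 4-byte mapping
    pid = os.fork()
    if pid == 0:                           # writer: record a result
        buf[:4] = struct.pack("i", 99)
        os._exit(0)
    os.waitpid(pid, 0)                     # reader: wait, then inspect the record
    return struct.unpack("i", buf[:4])[0]
```

In a real application the two sides would synchronize (e.g. with a semaphore) instead of waiting for the writer to exit.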
Inter Process Communication

2. Message Passing:
➢ IPC through message passing is a method where processes communicate by sending and receiving messages to exchange data.
➢ In this method, one process sends a message and the other process receives it, allowing them to share information.
➢ Message passing can be achieved through different mechanisms like sockets, message queues, or pipes.
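
A pipe is the simplest of these mechanisms on Unix. The sketch below sends one message from a child process to its parent (the message contents are arbitrary):

```python
import os

def pipe_demo():
    """Child sends a message through a pipe; parent receives it."""
    r, w = os.pipe()                 # one read end, one write end
    pid = os.fork()
    if pid == 0:                     # sender
        os.close(r)
        os.write(w, b"hello")
        os._exit(0)
    os.close(w)                      # receiver
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)
    return msg
```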

Inter Process Communication

Purpose of IPC:
1. Data Transfer
2. Sharing Data
3. Event Notification
4. Resource Sharing
5. Process Control
6. Preventing Race Conditions

Scheduling Criteria

1. CPU utilization: To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).

2. Throughput: The total number of processes completed per unit time, or rather the total amount of work done in a unit of time. This may range from 10/second to 1/hour depending on the specific processes.

3. Turnaround time: The amount of time taken to execute a particular process, i.e., the interval from the time of submission of the process to the time of completion of the process (wall-clock time).


Scheduling Criteria

4. Waiting time: The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.

5. Load average: The average number of processes residing in the ready queue waiting for their turn to get onto the CPU.

6. Response time: The amount of time it takes from when a request was submitted until the first response is produced. Remember, it is the time till the first response, not the completion of process execution (the final response).

Scheduling Algorithm

Different scheduling algorithms are:

1. First-Come, First-Served (FCFS) Scheduling

2. Shortest-Job-First (SJF) Scheduling

3. Priority Scheduling

4. Round Robin (RR) Scheduling

5. SJF Preemptive Scheduling

Scheduling Algorithm

Before discussing various scheduling algorithms, let us discuss some terms related to them:
1. Arrival Time: The time at which a process arrives in the ready queue for execution.
2. Start Time: The time at which a process starts execution.
3. Finish Time: The time at which a process finishes execution.
4. CPU/Burst Time: The time for which a process executes on the CPU.
5. Turnaround Time: The time elapsed from the submission of a process to its completion.
Turnaround Time = Completion Time - Arrival Time

Scheduling Algorithm

6. Waiting Time: The time spent by a process waiting in the ready queue.

Waiting Time = Turnaround Time - Burst Time
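
The two formulas above can be written directly as helper functions (the function names are our own):

```python
def turnaround_time(completion, arrival):
    """Turnaround Time = Completion Time - Arrival Time."""
    return completion - arrival

def waiting_time(tat, burst):
    """Waiting Time = Turnaround Time - Burst Time."""
    return tat - burst

# Example: a process arriving at t=2 with burst time 5 that completes at t=8
# spends 6 units in the system, of which 1 unit is spent waiting.
tat = turnaround_time(8, 2)
wt = waiting_time(tat, 5)
```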

FCFS Algorithms

➢ First Come, First Serve (FCFS) is one of the simplest CPU scheduling algorithms.
➢ It is exactly what it sounds like: processes are attended to in the order in which they arrive in the ready queue, much like customers lining up at a grocery store.
➢ FCFS scheduling is a non-preemptive algorithm, meaning that once a process starts running, it cannot be stopped until it voluntarily relinquishes the CPU.
➢ It is easy to understand and implement.
➢ Its implementation is based on a FIFO queue.
➢ It is poor in performance, as the average wait time is high.

FCFS Algorithms

➢ How does FCFS work? The mechanics of FCFS are straightforward:
➢ Arrival: Processes enter the system and are placed in a queue in the order they arrive.
➢ Execution: The CPU takes the first process from the front of the queue, executes it until it is complete, and then removes it from the queue.
➢ Repeat: The CPU takes the next process in the queue and repeats the execution process.

FCFS Algorithms

➢ Advantages of FCFS:
1. The simplest and most basic form of CPU scheduling algorithm.
2. Every process gets a chance to execute in the order of its arrival. This ensures that no process is arbitrarily prioritized over another.
3. Easy to implement; it doesn't require complex data structures.
4. Since processes are executed in the order they arrive, there is no risk of starvation.
5. It is well suited for batch systems, where longer time periods for each process are often acceptable.

FCFS Algorithms

➢ Disadvantages of FCFS:
➢ As it is a non-preemptive CPU scheduling algorithm, FCFS can result in long waiting times, especially if a long process arrives before a shorter one.
➢ The average waiting time in FCFS is much higher than in the other algorithms.
➢ Since FCFS processes tasks in the order they arrive, short jobs may have to wait a long time if they arrive after longer tasks, which leads to poor performance in systems with a mix of long and short tasks.
➢ Processes at the end of the queue have to wait longer to finish.
➢ It is not suitable for time-sharing operating systems, where each process should get the same amount of CPU time.
FCFS Algorithms Example-1

➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3 (processes with the same arrival time):

Process | Arrival Time | Burst Time
P1      | 0            | 5
P2      | 0            | 3
P3      | 0            | 8

Step-by-step execution:
1. P1 starts first and runs for 5 units of time (from 0 to 5).
2. P2 starts next and runs for 3 units of time (from 5 to 8).
3. P3 runs last, executing for 8 units (from 8 to 16).

FCFS Algorithms Example-1

➢ Let us compute TAT, WT, average TAT & average WT.

Gantt chart: | P1 | P2 | P3 |
             0    5    8    16

Processes | AT | BT | ST | CT | TAT       | WT
P1        | 0  | 5  | 0  | 5  | 5-0 = 5   | 5-5 = 0
P2        | 0  | 3  | 5  | 8  | 8-0 = 8   | 8-3 = 5
P3        | 0  | 8  | 8  | 16 | 16-0 = 16 | 16-8 = 8

Average turnaround time = 9.67
Average waiting time = 4.33
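
The hand computation above can be checked with a short FCFS simulator (a sketch; the process tuples and function name are our own):

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), in arrival order.
    Returns {name: (completion, turnaround, waiting)}."""
    clock, result = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)     # CPU may sit idle until arrival
        completion = start + burst
        tat = completion - arrival      # Turnaround Time = CT - AT
        result[name] = (completion, tat, tat - burst)  # WT = TAT - BT
        clock = completion
    return result

table = fcfs([("P1", 0, 5), ("P2", 0, 3), ("P3", 0, 8)])
```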

FCFS Algorithms Example-2

➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3 (processes with different arrival times):

Process | Burst Time | Arrival Time
P1      | 5 ms       | 2 ms
P2      | 3 ms       | 0 ms
P3      | 4 ms       | 4 ms

Step-by-step execution:
1. P2 arrives at time 0 and runs for 3 units, so its completion time is: Completion Time of P2 = 0 + 3 = 3
2. P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and runs for 5 units. Its completion time is: Completion Time of P1 = 3 + 5 = 8
3. P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and runs for 4 units. Its completion time is: Completion Time of P3 = 8 + 4 = 12
FCFS Algorithms Example-2

➢ Let us compute TAT, WT, average TAT & average WT.

Gantt chart: | P2 | P1 | P3 |
             0    3    8    12

Processes | AT   | BT   | ST   | CT    | TAT  | WT
P1        | 2 ms | 5 ms | 3 ms | 8 ms  | 6 ms | 1 ms
P2        | 0 ms | 3 ms | 0 ms | 3 ms  | 3 ms | 0 ms
P3        | 4 ms | 4 ms | 8 ms | 12 ms | 8 ms | 4 ms

Average turnaround time = 5.67
Average waiting time = 1.67

FCFS Algorithms Example-3

➢ Consider the following table of burst times for three processes P1, P2 and P3 (all arriving at time 0):

Process | Burst Time
P1      | 20
P2      | 3
P3      | 4

Step-by-step execution:
1. P1 arrives at time 0 and runs for 20 units, so its completion time is: Completion Time of P1 = 0 + 20 = 20
2. P2 arrives at time 0 but has to wait for P1 to finish. P2 starts at time 20 and runs for 3 units. Its completion time is: Completion Time of P2 = 20 + 3 = 23
3. P3 arrives at time 0 but has to wait for P2 to finish. P3 starts at time 23 and runs for 4 units. Its completion time is: Completion Time of P3 = 23 + 4 = 27
FCFS Algorithms Example-3

➢ Let us compute TAT, WT, average TAT & average WT.

Gantt chart: | P1 | P2 | P3 |
             0    20   23   27

Processes | AT | BT | ST | CT | TAT | WT
P1        | 0  | 20 | 0  | 20 | 20  | 0
P2        | 0  | 3  | 20 | 23 | 23  | 20
P3        | 0  | 4  | 23 | 27 | 27  | 23

Average turnaround time = 23.33
Average waiting time = 14.33

FCFS Algorithms Example-4

➢ Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 & P5:

Process | Burst Time | Arrival Time
P1      | 3          | 0
P2      | 5          | 1
P3      | 2          | 3
P4      | 5          | 9
P5      | 5          | 12

➢ Let us compute TAT, WT, average TAT & average WT, and draw the Gantt chart.

FCFS Algorithms Example-4

Gantt chart: | P1 | P2 | P3 | P4 | P5 |
             0    3    8    10   15   20

Processes | AT | BT | ST | CT | TAT | WT
P1        | 0  | 3  | 0  | 3  | 3   | 0
P2        | 1  | 5  | 3  | 8  | 7   | 2
P3        | 3  | 2  | 8  | 10 | 7   | 5
P4        | 9  | 5  | 10 | 15 | 6   | 1
P5        | 12 | 5  | 15 | 20 | 8   | 3

Average turnaround time = 6.2
Average waiting time = 2.2
SJF(SJN) Algorithms

➢ This is also known as Shortest Job First, or SJF.
➢ This scheduling algorithm comes in both non-preemptive and preemptive variants (the preemptive variant is Shortest Remaining Time First).
➢ Shortest Job First (SJF) or Shortest Job Next (SJN) is a scheduling policy that selects the waiting process with the smallest execution time to execute next.
➢ It is the best approach to minimize waiting time.
➢ It is easy to implement in batch systems, where the required CPU time is known in advance.
➢ It is impossible to implement in interactive systems, where the required CPU time is not known.
➢ The processor should know in advance how much time each process will take.
SJF(SJN) Algorithms

➢ Shortest Job First (SJF) or Shortest Job Next (SJN) selects the waiting process with the smallest execution time to execute next.
➢ This scheduling method may or may not be preemptive.
➢ It significantly reduces the average waiting time for other processes waiting to be executed.

SJF(SJN) Algorithms

Advantages of SJF Scheduling:

1. SJF is better than the First Come, First Served (FCFS) algorithm, as it reduces the average waiting time.

2. SJF is generally used for long-term scheduling.

3. It is suitable for jobs running in batches, where run times are already known.

4. SJF is provably optimal in terms of average turnaround time (TAT).

SJF(SJN) Algorithms

Disadvantages of SJF Scheduling:

1. SJF may cause very long turnaround times or starvation.

2. In SJF, job completion time must be known in advance.

3. Many times it becomes complicated to predict the length of the upcoming CPU request.

.
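The selection rule described above — at each scheduling point, run the arrived job with the smallest burst time to completion — can be sketched in Python as follows (a minimal sketch, not from the slides; names are illustrative):

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst);
    returns {name: (CT, TAT, WT)}."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU idle: jump to the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among arrived
        name, at, bt = job
        time += bt                     # run it to completion (no preemption)
        done[name] = (time, time - at, time - at - bt)
        pending.remove(job)
    return done

# Example-1 from the slides (all arrive at time 0): P2, then P3, then P1.
print(sjf([("P1", 0, 20), ("P2", 0, 3), ("P3", 0, 4)]))
```

Because the algorithm is non-preemptive, the `ready` set is re-examined only when the running job completes.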
SJF(SJN) Algorithms Example-1
67
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with same Arrival Time)
Process Burst Time Arrival Time

p1 20 ms 0

p2 3 ms 0

p3 4 ms 0

Step-by-Step Execution:
1. P2 arrives at time 0 and, having the shortest burst time, runs first for 3 units. Its
completion time is: Completion Time of P2 = 0 + 3 = 3
2. P3 arrives at time 0 but has to wait for P2 to finish. P3 starts at time 3 and runs for 4
units. Its completion time is: Completion Time of P3 = 3 + 4 = 7
3. P1 arrives at time 0 but has to wait for P2 and P3 to finish. P1 starts at time 7 and
runs for 20 units. Its completion time is: Completion Time of P1 = 7 + 20 = 27
.
SJF(SJN) Algorithms Example-1
68
➢ Let us compute TAT, WT, Average TAT & Average WT

P2 P3 P1 Gantt Chart
0 3 7 27

Processes AT BT ST CT TAT WT

P1 0 20 ms 7 27 27 7

P2 0 3 ms 0 3 3 0

P3 0 4 ms 3 7 7 3

Average 12.33 3.33

 Average Turn around time = 12.33


 Average waiting time = 3.33

.
SJF(SJN) Algorithms Example-2
69
➢ Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4, & P5
Process Burst Time Arrival Time

p1 3 0

p2 5 1

p3 2 3

P4 5 9

p5 5 12

➢ Let us compute TAT, WT, Average TAT & Average WT


➢ And let us draw gantt chart

.
SJF(SJN) Algorithms Example-2
70

P1 P3 P2 P4 P5 Gantt Chart
0 3 5 10 15 20
Processes AT BT ST CT TAT WT

P1 0 3 0 3 3 0

P2 1 5 5 10 9 4

P3 3 2 3 5 2 0

P4 9 5 10 15 6 1

P5 12 5 15 20 8 3

Average 5.6 1.6

 Average Turn around time = 5.6


 Average waiting time = 1.6
.
SJF(SJN) Algorithms Example-3
71
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with Different arrival Time)
Process Burst Time Arrival Time
P1 6 ms 0 ms

P2 8 ms 2 ms

P3 3 ms 4 ms

Step-by-Step Execution:
1. P1 arrives at time 0 and, being the only process at arrival, runs for 6 units. Its
completion time is: Completion Time of P1 = 0 + 6 = 6
2. P2 arrives at time 2 and P3 at time 4, but both must wait for P1 to finish. P3, with
the shortest CPU time, starts at time 6 and runs for 3 units. Its completion time is:
Completion Time of P3 = 6 + 3 = 9
3. P2 has to wait for P1 and P3 to finish. P2 starts at time 9 and runs for 8 units. Its
completion time is: Completion Time of P2 = 9 + 8 = 17
.
SJF(SJN) Algorithms Example-3
72
➢ Let us compute TAT, WT, Average TAT & Average WT

P1 P3 P2 Gantt Chart
0 6 9 17

Processes AT ST BT CT TAT WT
P1 0 ms 0 6 ms 6 6 ms 0 ms

P2 2 ms 9 8 ms 17 15 ms 7 ms

P3 4 ms 6 3 ms 9 5 ms 2 ms

Average 8.67 3

 Average Turn around time = 8.67


 Average waiting time = 3

.
RR Algorithms
73

➢ Round Robin is the preemptive process scheduling algorithm.


➢ Each process is provided a fix time to execute, it is called a quantum.
➢ Once a process is executed for a given time period, it is preempted and
other process executes for a given time period.
➢ Context switching is used to save states of preempted processes.

➢ Round Robin Scheduling is a method used by operating systems to manage


the execution time of multiple processes that are competing for CPU
attention.
➢ It is called "round robin" because the system rotates through all the
processes, allocating each of them a fixed time slice or "quantum",
regardless of their priority.

.
RR Algorithms
74

➢ The primary goal of this scheduling method is to ensure that all processes are
given an equal opportunity to execute, promoting fairness among tasks.

.
RR Algorithms
75

 Advantages of Round Robin Scheduling

1. Fairness: Each process gets an equal share of the CPU.

2. Simplicity: The algorithm is straightforward and easy to implement.

3. Responsiveness: Round Robin can handle multiple processes without

significant delays, making it ideal for time-sharing systems.

.
RR Algorithms
76

 Disadvantages of Round Robin Scheduling

1. Overhead: Switching between processes can lead to high overhead,


especially if the quantum is too small.

2. Underutilization: If the quantum is too large, Round Robin degenerates toward
FCFS, and the system can feel unresponsive while each process uses its full time slice.

.
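The quantum-based rotation described above can be sketched with a FIFO ready queue. This is a minimal illustration, not the only possible implementation; in particular, it assumes the common convention that a job arriving during a time slice joins the queue ahead of the job just preempted:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst); returns {name: (CT, TAT, WT)}."""
    procs = sorted(processes, key=lambda p: p[1])   # order by arrival
    arrive = {n: a for n, a, _ in procs}
    burst = {n: b for n, _, b in procs}
    remaining = dict(burst)
    queue, done, time, i = deque(), {}, 0, 0
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= time:   # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                                   # idle until next arrival
            time = procs[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])             # one time slice
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during the slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] > 0:
            queue.append(name)                          # back of the queue
        else:
            done[name] = (time, time - arrive[name],
                          time - arrive[name] - burst[name])
    return done
```

Running this on Example-2 below (quantum = 2) reproduces the Gantt chart and the averages of 7.2 and 3.2.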
RR Algorithms Example-1
77
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with same arrival Time) if
Quantum=5
Process Burst Time Arrival Time
P1 20 ms 0 ms

P2 3 ms 0 ms

P3 4 ms 0 ms
Step-by-Step Execution:
1. P1 arrives at time 0 and requires 20 units of CPU time. It runs first for the quantum of 5 units.
2. P2 arrives at time 0 and requires 3 units of CPU time. It runs for 3 units (less than the
quantum) and finishes.
3. P3 arrives at time 0 but has to wait for P1 and P2. It requires 4 units of CPU time, so it
runs for 4 units (less than the quantum) and finishes.
4. P1 again gets a chance to execute for 5 units (the quantum); since all other processes
have finished, it continues to execute until it finishes.
.
RR Algorithms Example-1
78
➢ Let us compute TAT, WT, Average TAT & Average WT

P1 P2 P3 P1 P1 P1 Gantt Chart
0 5 8 12 17 22 27

Processes AT ST BT CT TAT WT
P1 0 ms 0 20 ms 27 27 7

P2 0 ms 5 3 ms 8 8 5

P3 0 ms 8 4 ms 12 12 8

Average 15.67 6.67

 Average Turn around time = 15.67


 Average waiting time = 6.67

.
RR Algorithms Example-2
79
➢ Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4, & P5; consider quantum = 2.
Process Burst Time Arrival Time

p1 3 0

p2 5 1

p3 2 3

P4 5 9

p5 5 12

➢ Let us compute TAT, WT, Average TAT & Average WT


➢ And let us draw gantt chart

.
RR Algorithms Example-2
80

Time Ready Queue Next Job to execute

0    P1           P1 executes for 2 units of time
2    P2, P1       P2 executes for 2 units of time
4    P1, P3, P2   P1 executes for 1 unit of time & finishes
5    P3, P2       P3 executes for 2 units of time & finishes
7    P2           P2 executes for 2 units of time
9    P4, P2       P4 executes for 2 units of time
11   P2, P4       P2 executes for 1 unit of time & finishes
12   P4, P5       P4 executes for 2 units of time
14   P5, P4       P5 executes for 2 units of time
16   P4, P5       P4 executes for 1 unit of time & finishes
17   P5           P5 executes for 2 units of time
19   P5           P5 executes for 1 unit of time & finishes
.
RR Algorithms Example-2
81

P1 P2 P1 P3 P2 P4 P2 P4 P5 P4 P5 P5 Gantt Chart

0 2 4 5 7 9 11 12 14 16 17 19 20
Processes AT BT ST CT TAT WT

P1 0 3 0 5 5 2

P2 1 5 2 12 11 6

P3 3 2 5 7 4 2

P4 9 5 9 17 8 3

P5 12 5 14 20 8 3

Average 7.2 3.2

 Average Turn around time = 7.2


 Average waiting time = 3.2
.
SJF Preemption Algorithms
82

➢ It is a pre-emptive version of Shortest Job First (SJF) scheduling, called Shortest


Remaining Time First (SRTF).

➢ In SRTF, the process with the least time left to finish is selected to run.

➢ The running process will continue until it finishes or a new process with a
shorter remaining time arrives.

➢ This way, the process that can finish the fastest is always given priority.

.
SJF Preemption Algorithms
83

 Advantages of SRTF Scheduling

1. Minimizes Average Waiting Time: SRTF reduces the average waiting time by

prioritizing processes with the shortest remaining execution time.

2. Efficient for Short Processes: Shorter processes get completed faster, improving

overall system responsiveness.

3. Ideal for Time-Critical Systems: It ensures that time-sensitive processes are

executed quickly.

.
SJF Preemption Algorithms
84

 Disadvantages of SRTF Scheduling

1. Starvation of Long Processes: Longer processes may be delayed indefinitely if shorter

processes keep arriving.

2. Difficult to Predict Burst Times: Accurate prediction of process burst times is

challenging and affects scheduling decisions.

3. High Overhead: Frequent context switching can increase overhead and slow down

system performance.

4. Not Suitable for Real-Time Systems: Real-time tasks may suffer delays due to frequent

preemptions.

.
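The SRTF rule above — always run the arrived job with the least remaining time, re-deciding whenever a new job arrives — can be sketched as a unit-by-unit simulation. A minimal Python illustration (not from the slides; names and the one-unit granularity are assumptions):

```python
def srtf(processes):
    """Preemptive SJF (SRTF), simulated one time unit at a time.
    processes: list of (name, arrival, burst); returns {name: (CT, TAT, WT)}."""
    arrive = {n: a for n, a, _ in processes}
    burst = {n: b for n, _, b in processes}
    remaining = dict(burst)
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrive[n] <= time]
        if not ready:                  # nothing has arrived yet
            time += 1
            continue
        name = min(ready, key=lambda n: remaining[n])   # least remaining time
        remaining[name] -= 1           # run exactly one unit, then re-decide
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            done[name] = (time, time - arrive[name],
                          time - arrive[name] - burst[name])
    return done
```

Advancing one unit at a time is what makes the algorithm preemptive: a job that arrives mid-execution with a shorter remaining time wins the very next scheduling decision.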
SJF Preemption Algorithms Example-1
85
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with same arrival Time)
Process Burst Time Arrival Time
P1 10 ms 0 ms

P2 8 ms 0 ms

P3 7 ms 0 ms

Step-by-Step Execution:
1. P3 arrives at time 0 and, having the shortest burst time, runs for 7 units and finishes at time 7.
2. P2 arrives at time 0 and requires 8 units of CPU time. It runs from time 7 to 15 and finishes.
3. P1 arrives at time 0 and requires 10 units of CPU time. It runs from time 15 to 25 and finishes.

.
SJF Preemption Algorithms Example-1
86
➢ Let us compute TAT, WT, Average TAT & Average WT

P3 P2 P1 Gantt Chart
0 7 15 25

Processes AT ST BT CT TAT WT
P1 0 ms 15 10 ms 25 25 15

P2 0 ms 7 8 ms 15 15 7

P3 0 ms 0 7 ms 7 7 0

Average 15.67 7.33

 Average Turn around time = 15.67


 Average waiting time = 7.33

.
SJF Preemption Algorithms Example-2
87
➢ Consider the following table of arrival time and burst time for three processes
P1, P2 and P3 (Processes with different arrival Time)
Process Burst Time Arrival Time

P1 6 ms 0 ms

P2 3 ms 1 ms

P3 7 ms 2 ms

Step-by-Step Execution:
Time 0-1 (P1): P1 runs for 1 ms (5 ms left), as it has the shortest remaining time.
Time 1-4 (P2): P2 runs for 3 ms (0 ms left), as it has the shortest remaining time among
P1 and P2.
Time 4-9 (P1): P1 runs for 5 ms (0 ms left), as it has the shortest remaining time among
P1 and P3.
Time 9-16 (P3): P3 runs for 7 ms (0 ms left), as it has the shortest remaining time.

.
SJF Preemption Algorithms Example-2
88
➢ Let us compute TAT, WT, Average TAT & Average WT

P1 P2 P1 P3 Gantt Chart
0 1 4 9 16

Process AT ST BT CT TAT WT
P1 0 0 6 9 9-0 = 9 9-6 = 3

P2 1 1 3 4 4-1 = 3 3-3 = 0

P3 2 9 7 16 16-2 = 14 14-7 = 7

Average 8.67 3.33

 Average Turn around time = 8.67


 Average waiting time = 3.33

.
SJF Preemption Algorithms Example-3
89
➢ Consider the following table of arrival time and burst time for five
processes P1, P2, P3, P4, & P5 (SRTF uses no quantum)
Process Burst Time Arrival Time

p1 3 0

p2 5 1

p3 2 3

P4 5 9

p5 5 12

➢ Let us compute TAT, WT, Average TAT & Average WT


➢ And let us draw gantt chart

.
SJF Preemption Algorithms Example-3
90

Time Ready Queue Next Job to execute

0    P1       P1 executes; when P2 arrives at time 1, P1 (2 units remaining) is still shortest
1    P1, P2   P1 executes for its remaining 2 units of time & finishes at time 3
3    P2, P3   P3 (2 units) is shortest; it executes for 2 units of time & finishes
5    P2       P2 executes for 4 units of time; 1 unit remains when P4 arrives at time 9
9    P2, P4   P2 executes for its remaining 1 unit of time & finishes
10   P4       P4 executes for 2 units of time; 3 units remain when P5 arrives at time 12
12   P4, P5   P4 executes for its remaining 3 units of time & finishes at time 15
15   P5       P5 executes for 5 units of time & finishes at time 20
.
SJF Preemption Algorithms Example-3
91

P1 P1 P3 P2 P2 P4 P4 P5 Gantt Chart

0 1 3 5 9 10 12 15 20
Processes AT BT ST CT TAT WT

P1 0 3 0 3 3 0

P2 1 5 5 10 9 4

P3 3 2 3 5 2 0

P4 9 5 10 15 6 1

P5 12 5 15 20 8 3

Average 5.6 1.6

 Average Turn around time = 5.6


 Average waiting time = 1.6
.
Priority Algorithms
92

➢ Priority scheduling can be either preemptive or non-preemptive, and it is one

of the most common scheduling algorithms in batch systems.

➢ Each process is assigned a priority. Process with highest priority is to be


executed first and so on.

➢ Processes with same priority are executed on first come first served basis.

➢ Priority can be decided based on memory requirements, time requirements


or any other resource requirement.

➢ It can be further divided into Preemptive and Non-Preemptive types.

.
Priority Algorithms
93

➢ Non-Preemptive Priority Scheduling


➢ In Non-Preemptive Priority Scheduling, the CPU is not taken away from the
running process. Even if a higher-priority process arrives, the currently
running process will complete first.

➢ Ex: A high-priority process must wait until the currently running process
finishes.

.
Non-Preemptive Priority Algorithms Example
94
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3. Note: Lower number represents higher priority.
Process Arrival Time Burst Time Priority

P1 0 4 2

P2 1 2 1

P3 2 6 3

Step-by-Step Execution:
1. At Time 0: Only P1 has arrived. P1 starts execution as it is the only available
process, and it will continue executing till t = 4 because it is a non-preemptive
approach.
2. At Time 4: P1 finishes execution. Both P2 and P3 have arrived. Since P2 has the
highest priority (Priority 1), it is selected next.
3. At Time 6: P2 finishes execution. The only remaining process is P3, so it starts
execution.
4. At Time 12: P3 finishes execution.
.
Non-Preemptive Priority Algorithms Example
95

Process AT BT CT TAT WT

P1 0 4 4 4 0

P2 1 2 6 5 3

P3 2 6 12 10 4

Average 6.33 2.33

Average Turn around time = 6.33


Average waiting time = 2.33

.
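The non-preemptive policy in the example above can be sketched as follows. This is a minimal illustration (not from the slides), using the same convention as the example: a lower number means higher priority, with FCFS breaking ties:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; LOWER number = HIGHER priority.
    processes: list of (name, arrival, burst, priority);
    returns {name: (CT, TAT, WT)}."""
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival
    time, done = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                         # idle until the next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: (p[3], p[1]))  # best priority, FCFS tie-break
        name, at, bt, _ = job
        time += bt                            # run to completion, no preemption
        done[name] = (time, time - at, time - at - bt)
        pending.remove(job)
    return done

# The example above: P1 runs first (only arrival), then P2 (priority 1), then P3.
print(priority_np([("P1", 0, 4, 2), ("P2", 1, 2, 1), ("P3", 2, 6, 3)]))
```

Note that P2, despite the best priority, cannot interrupt P1: priorities are consulted only when the CPU becomes free.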
Priority Algorithms
96

➢ Preemptive Priority Scheduling


➢ In Preemptive Priority Scheduling, the CPU can be taken away from the
currently running process if a new process with a higher priority arrives.

➢ Ex: A low-priority process is running, and a high-priority process arrives; the


CPU immediately switches to the high-priority process.

.
Preemptive Priority Algorithms Example
97
➢ Consider the following table of arrival time and burst time for three processes P1, P2
and P3: Note: Higher number represents higher priority.
Process Arrival Time Burst Time Priority

P1 0 7 2

P2 0 4 1

P3 0 6 3
Step-by-Step Execution:
1. At Time 0: All processes arrive at the same time. P3 has the highest priority (Priority 3), so it
starts execution.
2. At Time 6: P3 completes execution. Among the remaining processes, P1 (Priority 2) has a
higher priority than P2, so P1 starts execution.
3. At Time 13: P1 completes execution. The only remaining process is P2 (Priority 1), so it starts
execution.
4. At Time 17: P2 completes execution. All processes are now finished
Prepared By Mr. Vipin K. Wani
Preemptive Priority Algorithms Example
98

Process AT BT CT TAT (CT - AT) WT (TAT - BT)

P1 0 7 13 13 6

P2 0 4 17 17 13

P3 0 6 6 6 0

Average 12 6.33

Average Turn around time = 12


Average waiting time = 6.33

.
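The preemptive policy can be sketched as a unit-by-unit simulation, so a newly arrived high-priority job takes the CPU at the next time unit. A minimal illustration (not from the slides), using this example's convention that a higher number means higher priority:

```python
def priority_preemptive(processes):
    """Preemptive priority scheduling; HIGHER number = HIGHER priority here.
    processes: list of (name, arrival, burst, priority);
    returns {name: (CT, TAT, WT)}."""
    arrive = {n: a for n, a, _, _ in processes}
    burst = {n: b for n, _, b, _ in processes}
    prio = {n: p for n, _, _, p in processes}
    remaining = dict(burst)
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrive[n] <= time]
        if not ready:                  # nothing has arrived yet
            time += 1
            continue
        name = max(ready, key=lambda n: prio[n])   # highest priority number wins
        remaining[name] -= 1           # run one unit, then re-decide
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            done[name] = (time, time - arrive[name],
                          time - arrive[name] - burst[name])
    return done

# The example above: P3 (priority 3), then P1 (priority 2), then P2 (priority 1).
print(priority_preemptive([("P1", 0, 7, 2), ("P2", 0, 4, 1), ("P3", 0, 6, 3)]))
```

Since all three processes arrive at time 0, no preemption actually occurs in this particular example; the simulation would show it if a higher-priority job arrived mid-run.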
Thank You…!
99

Any Questions
