Process and Thread Management in OS
Unit-2

Process
1. The text section comprises the compiled program code, read in from non-volatile storage when the program is launched.
2. The data section is made up of the global and static variables, allocated and initialized prior to executing main.
3. The heap is used for dynamic memory allocation, and is managed via calls to new, delete, malloc, free, etc.
4. The stack is used for local variables. Space on the stack is reserved for
local variables when they are declared.
Process States and Transitions diagram
A process may be in any of the following states during its life cycle:
1. Process Creation (New): Process creation in an operating system (OS) is the act of generating a new process. This new process is an instance of a program that can execute independently.
2. Ready/Scheduling: Once a process is ready to run, it enters the ready queue. The scheduler's job is to pick a process from this queue and start its execution.
3. Running: The dispatcher assigns the CPU to the selected process, and its instructions are executed.
4. Waiting/Blocked: The process waits for an event such as I/O completion; once the event occurs, it moves back to the ready queue.
5. Termination: After the process finishes its tasks, the operating system ends it and removes its Process Control Block (PCB).
Types of Process
1. CPU-Bound Processes
2. I/O-Bound Processes
3. Independent processes
4. Cooperating processes
5. Zombie Process
6. Orphan Process
7. Daemon Process
1. CPU-Bound Processes: spend most of their time performing computations and need the CPU for long stretches.
2. I/O-Bound Processes: spend most of their time waiting for I/O operations and use the CPU only in short bursts.
3. Independent Processes: execute without affecting, or being affected by, other processes, and share no data with them.
4. Cooperating Processes: can affect or be affected by other processes, typically by sharing data or resources.
5. Zombie Process:
➢ A zombie process is a process that has completed its execution but still remains in
the process table because its parent process has not yet read its exit status.
➢ It is called a "zombie" because it is no longer active or running, but it still exists as
a placeholder in the system.
➢ The entry of the child process remains in the process table until the parent process retrieves the exit status.
➢ During this time, the child process is referred to as a zombie process.
➢ This happens because the operating system keeps the process table entry to allow
the parent to gather information about the terminated child.
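The zombie lifecycle above can be reproduced with a short sketch (assumes a POSIX system, since it relies on fork; the exit code 7 is illustrative):

```python
import os
import time

def fork_and_reap():
    """Create a child that exits immediately, leave it briefly as a
    zombie, then reap it with waitpid and return its exit status."""
    pid = os.fork()
    if pid == 0:
        os._exit(7)              # child terminates; its table entry remains
    time.sleep(0.2)              # during this window the child is a zombie
    _, status = os.waitpid(pid, 0)   # parent reads the exit status,
    return os.WEXITSTATUS(status)    # removing the zombie's table entry
```

Running `ps` during the sleep would show the child marked `<defunct>`; after `waitpid` returns, its process-table entry is gone.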
6. Orphan Process:
➢ An orphan process is a child process that is still executing while its parent process has already terminated without waiting for the child to finish.
➢ Orphan processes remain active and continue to run normally, but they no longer have their original parent process to monitor or control them.
7. Daemon Process:
➢ A daemon process is a background process that runs independently of any user
control and performs specific tasks for the system.
➢ Daemons are usually started when the system starts, and they run until the system
stops.
➢ A daemon process typically performs system services and is available at all times
to more than one task or user.
➢ Daemon processes are started by the root user or root shell and can be stopped
only by the root user.
Process Control Block
➢ The Process Control Block (PCB) is also known as a Task Control Block; it represents a process in the operating system.
➢ A PCB is a data structure used by the operating system to store all information about a process. It is also called the process descriptor.
➢ When a process is created, the operating system creates a PCB for it.
➢ During a context switch between two processes, the scheduler uses the PCBs of the processes in the ready queue. The steps are as follows:
➢ The state of the current process must be saved so that it can be rescheduled later.
➢ The process state, registers, credentials, and other operating-system-specific information are recorded in the PCB.
➢ The operating system suspends execution of the current process and selects another process by consulting its PCB.
➢ The selected process's program counter is loaded from its PCB, and execution continues in that process.
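The save/restore steps above can be sketched with a toy PCB; the field names and PC values here are illustrative, not an actual kernel layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block with a few typical fields."""
    pid: int
    state: str = "ready"              # new / ready / running / waiting / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

def context_switch(current: PCB, nxt: PCB, cpu_pc: int) -> int:
    """Save the running process's state into its PCB, then resume the
    selected process from its saved program counter."""
    current.program_counter = cpu_pc  # save state for rescheduling
    current.state = "ready"
    nxt.state = "running"
    return nxt.program_counter        # load the saved PC and continue

a = PCB(pid=1, state="running", program_counter=120)
b = PCB(pid=2, program_counter=300)
new_pc = context_switch(a, b, cpu_pc=150)  # a is preempted at PC 150
```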
Process State: This specifies the process state, i.e. new, ready, running, waiting, or terminated.
Process ID and Parent Process ID: These uniquely identify the process and its parent process.
Program Counter: This contains the address of the next instruction to be executed in the process.
Registers: This specifies the registers used by the process, which may include accumulators, index registers, stack pointers, general-purpose registers, etc.
List of Open Files: These are the different files associated with the process.
Accounting Information: The time limits, account numbers, amount of CPU used, process numbers, etc. are all part of the PCB's accounting information.
I/O Status Information: This includes the list of I/O devices used by the process.
THREAD IN OS
➢ As each thread has its own independent execution context, multiple parts of a program can execute in parallel by increasing the number of threads.
There are two types of threads :
1. User Threads
2. Kernel Threads
1. User threads are implemented above the kernel and managed without kernel support. These are the threads that application programmers use in their programs.
2. Kernel threads are supported within the kernel of the OS itself. All modern OSs
support kernel level threads, allowing the kernel to perform multiple simultaneous tasks
and/or to service multiple kernel system calls simultaneously.
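In Python, the threading module illustrates this split: Thread objects are the handles the programmer works with, and CPython maps each one onto a kernel thread. A minimal sketch (the worker function and names are illustrative):

```python
import threading

results = {}   # shared by all threads of this process

def worker(name, n):
    # both threads run inside one process and share `results`
    results[name] = sum(range(n))

t1 = threading.Thread(target=worker, args=("t1", 10))
t2 = threading.Thread(target=worker, args=("t2", 5))
t1.start(); t2.start()   # the kernel may schedule these concurrently
t1.join(); t2.join()     # wait for both to finish
```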
Advantages of Threads:
Scalability: A single-threaded process runs on one CPU, whereas the threads of a multithreaded process can be distributed over multiple processors, allowing the application to scale.
Process Vs Thread
Parameter         | Process                                                              | Thread
Termination time  | The process takes more time to terminate.                            | The thread takes less time to terminate.
Creation time     | It takes more time for creation.                                     | It takes less time for creation.
Communication     | Communication between processes needs more time compared to threads. | Communication between threads requires less time compared to processes.
Context switching | It takes more time for context switching.                            | It takes less time for context switching.
Resources         | Processes consume more resources.                                    | Threads consume fewer resources.
Treatment by OS   | Different processes are treated separately by the OS.                | All user-level peer threads are treated as a single task by the OS.
Memory sharing    | The process is mostly isolated; it does not share data.              | Threads share memory and data with each other.
Process Scheduling
➢ Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems.
➢ The long-term scheduler decides which processes are admitted to main memory, and in what order.
➢ Processes selected by the long-term scheduler are placed in the ready state, where they wait for the CPU. Because admission to the system happens relatively infrequently, it is known as the long-term scheduler.
2. Long-Term Scheduler
➢ It is used in batch processing systems and operates at a high level.
➢ The long-term scheduler is in charge of allocating resources such as processor
time and memory to processes based on their needs and priorities.
➢ It also determines the order in which processes are executed and manages the
execution of processes that may take a long time to complete, such as batch jobs
or background tasks.
➢ Because it operates at a higher level, it does not need to make scheduling decisions in real time.
3. Medium-Term Scheduler
➢ It places the blocked and suspended processes in the secondary memory of a
computer system.
➢ The task of moving from main memory to secondary memory is called
swapping out.
➢ The task of moving back a swapped out process from secondary memory to
main memory is known as swapping in.
➢ The swapping of processes is performed to ensure the best utilization of main
memory.
➢ The long-term execution of processes in a computer system is managed by a
medium-term scheduler, also referred to as a mid-term scheduler.
➢ Based on a set of predetermined criteria and priorities, this kind of scheduler
decides which processes should be executed next.
➢ Typically, processes that are blocked or waiting must be managed by the
medium-term scheduler.
➢ These processes are not running right now, but they are still awaiting the
occurrence of an event in order to start running.
➢ Which of these blocked processes should be unblocked and allowed to continue
running is up to the medium-term scheduler to decide.
Services of Operating System
1. Process Creation:
➢ This is the initial step of process execution activity.
➢ Process creation means the construction of a new process for the
execution.
➢ This might be performed by the system, a user, or an existing process itself.
➢ There are several events that lead to process creation, such as system initialization, a user request to start a new process, or an existing process spawning a child.
Various Operations on Process
2. Scheduling/Dispatching:
➢ The event or activity in which the state of the process is changed from
ready to running.
➢ It means the operating system puts the process from ready state into the
running state.
➢ There are various other cases in which a process in the running state leaves the CPU, described below.
3. Blocking:
➢ When a process invokes an input-output system call that blocks it, the operating system moves the process to the blocked (waiting) state.
4. Preemption:
➢ When a timeout occurs, meaning the process has not finished within the allotted time interval and the next process is ready to execute, the operating system preempts the process.
5. Termination:
➢ Process termination is the activity of ending the process; in other words, the operating system reclaims the process's resources and deletes its PCB.
➢ There may be several events that lead to process termination, such as normal completion or a fatal error.
Inter Process Communication
1. Shared Memory:
➢ Communication between processes using shared memory requires
processes to share some variable and it completely depends on how the
programmer will implement it.
➢ Suppose process 1 and process 2 are executing simultaneously and they
share some resources or use some information from another process.
➢ Process 1 generates information about certain computations or resources being used and keeps it as a record in shared memory.
➢ When process 2 needs to use the shared information, it will check in the
record stored in shared memory and take note of the information
generated by process 1 and act accordingly.
➢ Processes can use shared memory for extracting information as a record
from another process as well as for delivering any specific information to
other processes.
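This producer/consumer pattern can be sketched with multiprocessing.Value, which places an integer in a region both processes can see (the value 42 and the function names are illustrative):

```python
from multiprocessing import Process, Value

def producer(shared):
    # process 1 records its result in shared memory
    with shared.get_lock():
        shared.value = 42

def read_record():
    shared = Value('i', 0)                 # an int living in shared memory
    p = Process(target=producer, args=(shared,))
    p.start()
    p.join()                               # wait until process 1 has written
    return shared.value                    # process 2 reads the record
```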
2. Message Passing
➢ IPC through Message Passing is a method where processes communicate
by sending and receiving messages to exchange data.
➢ In this method, one process sends a message, and the other process
receives it, allowing them to share information.
➢ Message Passing can be achieved through different methods like Sockets,
Message Queues or Pipes.
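The send/receive flow can be sketched with a pipe from Python's multiprocessing module (the message text is illustrative):

```python
from multiprocessing import Pipe, Process

def sender(conn):
    conn.send("hello via IPC")   # one process sends a message
    conn.close()

def exchange():
    parent_end, child_end = Pipe()
    p = Process(target=sender, args=(child_end,))
    p.start()
    msg = parent_end.recv()      # the other process receives it
    p.join()
    return msg
```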
Purpose of IPC
1. Data Transfer
2. Sharing Data
3. Event Notification
4. Resource Sharing
5. Process Control
6. Preventing Race Conditions
Scheduling Criteria
1. CPU Utilization: To make the best use of the CPU and not waste any CPU cycle, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU utilization should range from 40% (lightly loaded) to 90% (heavily loaded).
2. Throughput: The number of processes completed per unit of time.
3. Turnaround Time: The total time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion.
4. Waiting Time: The sum of the periods spent waiting in the ready queue, i.e. the amount of time a process has been waiting in the ready queue to get control of the CPU.
Scheduling Algorithm
1. First Come First Serve (FCFS)
2. Shortest Job First (SJF)
3. Priority Scheduling
4. Round Robin (RR)
5. Shortest Remaining Time First (SRTF)
FCFS Algorithms
➢ First Come, First Serve (FCFS) is one of the simplest types of CPU
scheduling algorithms.
➢ It is exactly what it sounds like: processes are attended to in the order in
which they arrive in the ready queue, much like customers lining up at a
grocery store.
➢ FCFS scheduling is a non-preemptive algorithm, meaning once a process starts running, it cannot be stopped until it voluntarily relinquishes the CPU.
➢ It is easy to understand and implement.
➢ Advantages of FCFS
1. The simplest and basic form of CPU Scheduling algorithm
2. Every process gets a chance to execute in the order of its arrival. This
ensures that no process is arbitrarily prioritized over another.
3. Easy to implement, it doesn't require complex data structures.
4. Since processes are executed in the order they arrive, there’s no risk of
starvation
5. It is well suited for batch systems where the longer time periods for each
process are often acceptable.
➢ Disadvantages of FCFS
➢ As it is a Non-preemptive CPU Scheduling Algorithm, FCFS can result in
long waiting times, especially if a long process arrives before a shorter
one.
➢ The average waiting time in FCFS is often much higher than in other scheduling algorithms.
➢ Since FCFS processes tasks in the order they arrive, short jobs may
have to wait a long time if they arrive after longer tasks, which leads to
poor performance in systems with a mix of long and short tasks.
➢ Processes that are at the end of the queue, have to wait longer to finish.
➢ It is not suitable for time-sharing operating systems where each
process should get the same amount of CPU time.
FCFS Algorithms Example-1
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with Same Arrival Time)
Process | Arrival Time | Burst Time
P1      | 0            | 5
P2      | 0            | 3
P3      | 0            | 8
Step-by-Step Execution:
1.P1 will start first and run for 5 units of time (from 0 to 5).
2.P2 will start next and run for 3 units of time (from 5 to 8).
3.P3 will run last, executing for 8 units (from 8 to 16).
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P1 | P2 | P3 |
             0    5    8    16

Process | AT | BT | ST | CT | TAT       | WT
P1      | 0  | 5  | 0  | 5  | 5-0 = 5   | 5-5 = 0
P2      | 0  | 3  | 5  | 8  | 8-0 = 8   | 8-3 = 5
P3      | 0  | 8  | 8  | 16 | 16-0 = 16 | 16-8 = 8

Average TAT = (5+8+16)/3 = 9.67; Average WT = (0+5+8)/3 = 4.33
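The bookkeeping in the table above can be reproduced with a small FCFS simulator (a sketch; each process is a (name, arrival, burst) tuple):

```python
def fcfs(processes):
    """First Come First Serve: run processes in arrival order.
    Returns {name: (completion, turnaround, waiting)}."""
    time, result = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst        # CPU idles until arrival if needed
        tat = time - arrival                     # turnaround = completion - arrival
        result[name] = (time, tat, tat - burst)  # waiting = turnaround - burst
    return result

table = fcfs([("P1", 0, 5), ("P2", 0, 3), ("P3", 0, 8)])
```

For Example-1 this yields P1 → (5, 5, 0), P2 → (8, 8, 5), P3 → (16, 16, 8), matching the table.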
FCFS Algorithms Example-2
➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3 (processes with different arrival times):

Process | Burst Time | Arrival Time
P1      | 5 ms       | 2 ms
P2      | 3 ms       | 0 ms
P3      | 4 ms       | 4 ms
Step-by-Step Execution:
1.P2 arrives at time 0 and runs for 3 units, so its completion time is: Completion Time
of P2=0+3=3
2.P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and runs for 5
units. Its completion time is: Completion Time of P1=3+5=8
3.P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and runs for 4
units. Its completion time is: Completion Time of P3=8+4=12
➢ Let us compute TAT, WT, Average TAT & Average WT

Gantt Chart: | P2 | P1 | P3 |
             0    3    8    12

Process | AT   | BT   | ST | CT | TAT  | WT
P1      | 2 ms | 5 ms | 3  | 8  | 6 ms | 1 ms
P2      | 0 ms | 3 ms | 0  | 3  | 3 ms | 0 ms
P3      | 4 ms | 4 ms | 8  | 12 | 8 ms | 4 ms

Average TAT = (6+3+8)/3 = 5.67 ms; Average WT = (1+0+4)/3 = 1.67 ms
FCFS Algorithms Example-3
➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3 (all arriving at time 0):

Process | Burst Time | Arrival Time
P1      | 20         | 0
P2      | 3          | 0
P3      | 4          | 0
Step-by-Step Execution:
1.P1 arrives at time 0 and runs for 20 units, so its completion time is: Completion
Time of P1=0+20=20
2. P2 arrives at time 0 but has to wait for P1 to finish. P2 starts at time 20 and runs for 3 units. Its completion time is: Completion Time of P2 = 20+3 = 23.
3.P3 arrives at time 0 but has to wait for P2 to finish. P3 starts at time 23 and runs for
4 units. Its completion time is: Completion Time of P3=23+4=27
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P1 | P2 | P3 |
             0    20   23   27

Process | AT | BT | ST | CT | TAT | WT
P1      | 0  | 20 | 0  | 20 | 20  | 0
P2      | 0  | 3  | 20 | 23 | 23  | 20
P3      | 0  | 4  | 23 | 27 | 27  | 23

Average TAT = (20+23+27)/3 = 23.33; Average WT = (0+20+23)/3 = 14.33
FCFS Algorithms Example-4
➢ Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 and P5:

Process | Burst Time | Arrival Time
P1      | 3          | 0
P2      | 5          | 1
P3      | 2          | 3
P4      | 5          | 9
P5      | 5          | 12
Gantt Chart: | P1 | P2 | P3 | P4 | P5 |
             0    3    8    10   15   20

Process | AT | BT | ST | CT | TAT | WT
P1      | 0  | 3  | 0  | 3  | 3   | 0
P2      | 1  | 5  | 3  | 8  | 7   | 2
P3      | 3  | 2  | 8  | 10 | 7   | 5
P4      | 9  | 5  | 10 | 15 | 6   | 1
P5      | 12 | 5  | 15 | 20 | 8   | 3

Average TAT = (3+7+7+6+8)/5 = 6.2; Average WT = (0+2+5+1+3)/5 = 2.2
SJF(SJN) Algorithms
➢ Shortest Job First (SJF), or Shortest Job Next (SJN), is a scheduling algorithm that selects the waiting process with the smallest execution time to execute next.
➢ This scheduling method may or may not be preemptive.
➢ Significantly reduces the average waiting time for other processes waiting
to be executed.
Advantages of SJF:
1. SJF is better than the First Come First Serve (FCFS) algorithm as it reduces the average waiting time.
2. It is suitable for jobs running in batches, where run times are already known.
Disadvantages of SJF:
➢ The real difficulty with SJF is knowing the length of the next CPU request.
SJF(SJN) Algorithms Example-1
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with same Arrival Time)
Process | Burst Time | Arrival Time
P1      | 20 ms      | 0
P2      | 3 ms       | 0
P3      | 4 ms       | 0
Step-by-Step Execution:
1. P2, having the shortest burst time, runs first at time 0 for 3 units. Its completion time is: Completion Time of P2 = 0+3 = 3.
2. P3 arrives at time 0 but has to wait for P2 to finish. P3 starts at time 3 and runs for 4 units. Its completion time is: Completion Time of P3 = 3+4 = 7.
3. P1 arrives at time 0 but has to wait for P3 to finish. P1 starts at time 7 and runs for 20 units. Its completion time is: Completion Time of P1 = 7+20 = 27.
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P2 | P3 | P1 |
             0    3    7    27

Process | AT | BT    | ST | CT | TAT | WT
P1      | 0  | 20 ms | 7  | 27 | 27  | 7
P2      | 0  | 3 ms  | 0  | 3  | 3   | 0
P3      | 0  | 4 ms  | 3  | 7  | 7   | 3

Average TAT = (27+3+7)/3 = 12.33; Average WT = (7+0+3)/3 = 3.33
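Non-preemptive SJF can be simulated the same way, always picking the shortest burst among the processes that have already arrived (a sketch with (name, arrival, burst) tuples):

```python
def sjf(processes):
    """Non-preemptive Shortest Job First.
    Returns {name: (completion, turnaround, waiting)}."""
    pending = sorted(processes, key=lambda p: p[1])
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst among ready
        name, arrival, burst = job
        time += burst
        tat = time - arrival
        result[name] = (time, tat, tat - burst)
        pending.remove(job)
    return result
```

For Example-1 this reproduces the order P2, P3, P1 and the values in the table above.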
SJF(SJN) Algorithms Example-2
➢ Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 and P5:

Process | Burst Time | Arrival Time
P1      | 3          | 0
P2      | 5          | 1
P3      | 2          | 3
P4      | 5          | 9
P5      | 5          | 12
Gantt Chart: | P1 | P3 | P2 | P4 | P5 |
             0    3    5    10   15   20

Process | AT | BT | ST | CT | TAT | WT
P1      | 0  | 3  | 0  | 3  | 3   | 0
P2      | 1  | 5  | 5  | 10 | 9   | 4
P3      | 3  | 2  | 3  | 5  | 2   | 0
P4      | 9  | 5  | 10 | 15 | 6   | 1
P5      | 12 | 5  | 15 | 20 | 8   | 3

Average TAT = (3+9+2+6+8)/5 = 5.6; Average WT = (0+4+0+1+3)/5 = 1.6
SJF(SJN) Algorithms Example-3
➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3:

Process | Burst Time | Arrival Time
P1      | 6 ms       | 0 ms
P2      | 8 ms       | 2 ms
P3      | 3 ms       | 4 ms
Step-by-Step Execution:
1. P1 arrives at time 0 and, having the shortest burst time at arrival, runs for 6 units. Its completion time is: Completion Time of P1 = 0+6 = 6.
2. P2 (arrived at 2) and P3 (arrived at 4) have to wait for P1 to finish. P3, with the shortest CPU time, starts at time 6 and runs for 3 units. Its completion time is: Completion Time of P3 = 6+3 = 9.
3. P2 has to wait for P1 and P3 to finish. P2 starts at time 9 and runs for 8 units. Its completion time is: Completion Time of P2 = 9+8 = 17.
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P1 | P3 | P2 |
             0    6    9    17

Process | AT   | BT   | ST | CT | TAT   | WT
P1      | 0 ms | 6 ms | 0  | 6  | 6 ms  | 0 ms
P2      | 2 ms | 8 ms | 9  | 17 | 15 ms | 7 ms
P3      | 4 ms | 3 ms | 6  | 9  | 5 ms  | 2 ms

Average TAT = 8.67 ms; Average WT = 3 ms
RR Algorithms
➢ Round Robin (RR) scheduling gives each process in the ready queue a fixed time slice (quantum) of CPU time in turn.
➢ The primary goal of this scheduling method is to ensure that all processes are given an equal opportunity to execute, promoting fairness among tasks.
2. Underutilization: If the quantum is too large, the system can feel unresponsive, since other processes must wait for the running process to finish its long time slice.
RR Algorithms Example-1
➢ Consider the following table of arrival time and burst time for three processes P1, P2 and P3 (processes with the same arrival time), with quantum = 5:

Process | Burst Time | Arrival Time
P1      | 20 ms      | 0 ms
P2      | 3 ms       | 0 ms
P3      | 4 ms       | 0 ms
Step-by-Step Execution:
1. P1 arrives at time 0 and requires 20 units of CPU time. It runs first for one quantum (5 units).
2. P2 arrives at time 0 and requires 3 units. It runs for 3 units (less than the quantum) and terminates.
3. P3 arrives at time 0 and requires 4 units. It runs for 4 units (less than the quantum) and terminates.
4. P1 gets the CPU again for a quantum (5 units); since the other processes have finished, it keeps the CPU in successive quanta until it completes.
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P1 | P2 | P3 | P1 | P1 | P1 |
             0    5    8    12   17   22   27

Process | AT   | BT    | ST | CT | TAT | WT
P1      | 0 ms | 20 ms | 0  | 27 | 27  | 7
P2      | 0 ms | 3 ms  | 5  | 8  | 8   | 5
P3      | 0 ms | 4 ms  | 8  | 12 | 12  | 8

Average TAT = (27+8+12)/3 = 15.67; Average WT = (7+5+8)/3 = 6.67
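Round Robin bookkeeping is easy to get wrong by hand, so a small simulator helps check the Gantt chart (a sketch; processes are (name, arrival, burst) tuples):

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin scheduling.
    Returns {name: (completion, turnaround)}."""
    procs = sorted(processes, key=lambda p: p[1])
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: at for name, at, _ in procs}
    queue, time, i, result = deque(), 0, 0, {}
    while len(result) < len(procs):
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0])        # admit arrivals
            i += 1
        if not queue:
            time = procs[i][1]               # CPU idle until next arrival
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])  # run one slice
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0])        # arrivals during the slice
            i += 1
        if remaining[name] == 0:
            result[name] = (time, time - arrival[name])
        else:
            queue.append(name)               # preempted: back of the queue
    return result
```

For Example-1 with quantum = 5 this gives P2 finishing at 8, P3 at 12, and P1 at 27, as in the table above.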
RR Algorithms Example-2
➢ Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 and P5, with quantum = 2:

Process | Burst Time | Arrival Time
P1      | 3          | 0
P2      | 5          | 1
P3      | 2          | 3
P4      | 5          | 9
P5      | 5          | 12
Gantt Chart: | P1 | P2 | P1 | P3 | P2 | P4 | P2 | P4 | P5 | P4 | P5 | P5 |
             0    2    4    5    7    9    11   12   14   16   17   19   20

Process | AT | BT | ST | CT | TAT | WT
P1      | 0  | 3  | 0  | 5  | 5   | 2
P2      | 1  | 5  | 2  | 12 | 11  | 6
P3      | 3  | 2  | 5  | 7  | 4   | 2
P4      | 9  | 5  | 9  | 17 | 8   | 3
P5      | 12 | 5  | 14 | 20 | 8   | 3

Average TAT = (5+11+4+8+8)/5 = 7.2; Average WT = (2+6+2+3+3)/5 = 3.2
SJF Preemption Algorithms
➢ In SRTF (Shortest Remaining Time First), the process with the least time left to finish is selected to run.
➢ The running process will continue until it finishes or a new process with a
shorter remaining time arrives.
➢ This way, the process that can finish the fastest is always given priority.
Advantages of SRTF:
1. Minimizes Average Waiting Time: SRTF reduces the average waiting time by always running the process that is closest to completion.
2. Efficient for Short Processes: Shorter processes get completed faster, improving responsiveness, since newly arrived short jobs are executed quickly.
Disadvantages of SRTF:
3. High Overhead: Frequent context switching can increase overhead and slow down
system performance.
4. Not Suitable for Real-Time Systems: Real-time tasks may suffer delays due to frequent
preemptions.
SJF Preemption Algorithms Example-1
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3 (Processes with same arrival Time)
Process | Burst Time | Arrival Time
P1      | 10 ms      | 0 ms
P2      | 8 ms       | 0 ms
P3      | 7 ms       | 0 ms
Step-by-Step Execution (with equal arrival times, SRTF behaves like non-preemptive SJF):
1. P3, having the shortest burst time, runs first for 7 units and finishes at time 7.
2. P2 runs next for 8 units and finishes at time 15.
3. P1 runs last for 10 units and finishes at time 25.
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P3 | P2 | P1 |
             0    7    15   25

Process | AT   | BT    | ST | CT | TAT | WT
P1      | 0 ms | 10 ms | 15 | 25 | 25  | 15
P2      | 0 ms | 8 ms  | 7  | 15 | 15  | 7
P3      | 0 ms | 7 ms  | 0  | 7  | 7   | 0

Average TAT = (25+15+7)/3 = 15.67; Average WT = (15+7+0)/3 = 7.33
SJF Preemption Algorithms Example-2
➢ Consider the following table of arrival time and burst time for three processes
P1, P2 and P3 (Processes with different arrival Time)
Process | Burst Time | Arrival Time
P1      | 6 ms       | 0 ms
P2      | 3 ms       | 1 ms
P3      | 7 ms       | 2 ms
Step-by-Step Execution:
1. Time 0-1 (P1): P1 runs for 1 ms (time left: 5 ms), as it has the shortest remaining time.
2. Time 1-4 (P2): P2 runs for 3 ms (time left: 0 ms), as it has the shortest remaining time among P1 and P2.
3. Time 4-9 (P1): P1 runs for 5 ms (time left: 0 ms), as it has the shortest remaining time among P1 and P3.
4. Time 9-16 (P3): P3 runs for 7 ms (time left: 0 ms), as it is the only process left.
➢ Let us compute TAT, WT, Average TAT & Average WT
Gantt Chart: | P1 | P2 | P1 | P3 |
             0    1    4    9    16

Process | AT | BT | ST | CT | TAT       | WT
P1      | 0  | 6  | 0  | 9  | 9-0 = 9   | 9-6 = 3
P2      | 1  | 3  | 1  | 4  | 4-1 = 3   | 3-3 = 0
P3      | 2  | 7  | 9  | 16 | 16-2 = 14 | 14-7 = 7

Average TAT = (9+3+14)/3 = 8.67; Average WT = (3+0+7)/3 = 3.33
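SRTF can be simulated one time unit at a time, re-evaluating the shortest remaining burst at every tick (a sketch; ties go to whichever process is listed first):

```python
def srtf(processes):
    """Shortest Remaining Time First (preemptive SJF), 1-unit ticks.
    processes: list of (name, arrival, burst).
    Returns {name: (completion, turnaround)}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    time, result = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                      # CPU idle until something arrives
            continue
        name = min(ready, key=lambda n: remaining[n])  # least time left
        remaining[name] -= 1               # run one tick
        time += 1
        if remaining[name] == 0:
            del remaining[name]
            result[name] = (time, time - arrival[name])
    return result
```

For Example-2 this reproduces the schedule above: P2 completes at 4, P1 at 9, P3 at 16.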
SJF Preemption Algorithms Example-3
➢ Consider the following table of arrival time and burst time for five processes P1, P2, P3, P4 and P5:

Process | Burst Time | Arrival Time
P1      | 3          | 0
P2      | 5          | 1
P3      | 2          | 3
P4      | 5          | 9
P5      | 5          | 12
Gantt Chart: | P1 | P1 | P3 | P2 | P2 | P4 | P4 | P5 |
             0    1    3    5    9    10   12   15   20

Process | AT | BT | ST | CT | TAT | WT
P1      | 0  | 3  | 0  | 3  | 3   | 0
P2      | 1  | 5  | 5  | 10 | 9   | 4
P3      | 3  | 2  | 3  | 5  | 2   | 0
P4      | 9  | 5  | 10 | 15 | 6   | 1
P5      | 12 | 5  | 15 | 20 | 8   | 3

Average TAT = (3+9+2+6+8)/5 = 5.6; Average WT = (0+4+0+1+3)/5 = 1.6
Priority Algorithms
➢ In priority scheduling, each process is assigned a priority, and the CPU is allocated to the process with the highest priority.
➢ Processes with the same priority are executed on a first come, first served basis.
➢ In non-preemptive priority scheduling, a high-priority process that arrives while another process is running must wait until the currently running process finishes.
Non-Preemptive Priority Algorithms Example
➢ Consider the following table of arrival time and burst time for three
processes P1, P2 and P3. Note: Lower number represents higher priority.
Process | Arrival Time | Burst Time | Priority
P1      | 0            | 4          | 2
P2      | 1            | 2          | 1
P3      | 2            | 6          | 3
Step-by-Step Execution:
1. At Time 0: Only P1 has arrived. P1 starts execution as it is the only available
process, and it will continue executing till t = 4 because it is a non-preemptive
approach.
2. At Time 4: P1 finishes execution. Both P2 and P3 have arrived. Since P2 has the
highest priority (Priority 1), it is selected next.
3. At Time 6: P2 finishes execution. The only remaining process is P3, so it starts
execution.
4. At Time 12: P3 finishes execution.
Gantt Chart: | P1 | P2 | P3 |
             0    4    6    12

Process | AT | BT | CT | TAT | WT
P1      | 0  | 4  | 4  | 4   | 0
P2      | 1  | 2  | 6  | 5   | 3
P3      | 2  | 6  | 12 | 10  | 4

Average TAT = (4+5+10)/3 = 6.33; Average WT = (0+3+4)/3 = 2.33
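The same simulation pattern works for non-preemptive priority scheduling, picking the highest-priority ready process (lower number = higher priority, as in this example; a sketch):

```python
def priority_np(processes):
    """Non-preemptive priority scheduling; lower number = higher priority.
    processes: list of (name, arrival, burst, priority).
    Returns {name: (completion, turnaround, waiting)}."""
    pending = list(processes)
    time, result = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[3])   # highest priority wins
        name, arrival, burst, _ = job
        time += burst                          # runs to completion (no preemption)
        tat = time - arrival
        result[name] = (time, tat, tat - burst)
        pending.remove(job)
    return result
```

For the example above it yields P1 → (4, 4, 0), P2 → (6, 5, 3), P3 → (12, 10, 4).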
Preemptive Priority Algorithms Example
➢ Consider the following table of arrival time and burst time for three processes P1, P2
and P3: Note: Higher number represents higher priority.
Process | Arrival Time | Burst Time | Priority
P1      | 0            | 7          | 2
P2      | 0            | 4          | 1
P3      | 0            | 6          | 3
Step-by-Step Execution:
1. At Time 0: All processes arrive at the same time. P3 has the highest priority (Priority 3), so it
starts execution.
2. At Time 6: P3 completes execution. Among the remaining processes, P1 (Priority 2) has a
higher priority than P2, so P1 starts execution.
3. At Time 13: P1 completes execution. The only remaining process is P2 (Priority 1), so it starts
execution.
4. At Time 17: P2 completes execution. All processes are now finished.
Prepared By Mr. Vipin K. Wani
Preemptive Priority Algorithms Example
Gantt Chart: | P3 | P1 | P2 |
             0    6    13   17

Process | AT | BT | CT | TAT | WT
P1      | 0  | 7  | 13 | 13  | 6
P2      | 0  | 4  | 17 | 17  | 13
P3      | 0  | 6  | 6  | 6   | 0

Average TAT = 12; Average WT = 6.33
Thank You…!
Any Questions?