
Operating System

Mattu University
Engineering and Technology College
Department of Computer Science

Operating System

Target Dept.:- Computer Science 3rd year Regular

By: Tekalign B.
Lecture Two

Processes and Process Management

Engineering and Technology College Mattu University


Outline
3

The process and threads concept

Multithreading and thread usage

Process scheduling

Inter-process communication

Deadlock
Detection
Prevention
Avoidance
What is a Process?
4

 Process is a program in execution.

 It can be defined as an execution unit where a program runs.

 It is the entity that can be assigned to and executed on a processor.

 The OS creates, schedules, and terminates the processes that use the CPU.

 A process created by another (parent) process is called a child process.

 Only one process can be running on a given CPU (core) at any instant!

The Process model
5
 Multiprogramming of four
programs
 Conceptual model
 4 independent processes
 Processes run sequentially
 Only one program active at any
instant!
 That instant can be very short…

Process Control Blocks (PCBs)/Process Descriptors
6
 Process operations are controlled with the help of the process control block (PCB).
 A PCB typically includes information such as:
 Process identification number (PID)
 Process state – running, waiting, etc.
 Program counter – address of the next instruction to execute
 CPU scheduling information – priorities, scheduling pointers
 A pointer to the process’s parent process
 Pointers to the process’s child processes
 Pointers to allocated resources
 Memory-management information – memory allocated to the process
 I/O status information – I/O devices allocated to the process, list of open files
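The fields listed above can be pictured as a C structure. The sketch below is purely illustrative; the field names, array sizes, and types are assumptions made for this note, not the layout of any real kernel (Linux, for instance, keeps this information in a much larger structure called task_struct).

```c
/* Illustrative PCB sketch only; sizes and field names are assumptions. */
#include <stddef.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;             /* process identification number (PID) */
    enum proc_state  state;           /* running, waiting, ready, ...        */
    uint64_t         program_counter; /* address of next instruction         */
    int              priority;        /* CPU-scheduling information          */
    struct pcb      *parent;          /* pointer to the parent process       */
    struct pcb      *children[16];    /* pointers to child processes         */
    void            *memory_base;     /* memory-management information       */
    size_t           memory_limit;
    int              open_files[32];  /* I/O status: list of open files      */
};
```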
Cont’d…
7

 Process Table
 The OS maintains pointers to each process’s PCB in a system-wide or per-user process table
 Allows for quick access to PCBs
 When a process is terminated, the OS removes the process from the process table and frees all of the process’s resources
What is Thread?
8

 A thread is the unit of execution within a process.

 Threads belong to a process.
 No thread exists outside a process.
 Two examples
 Three processes, each with one thread
 One process with three threads
Cont’d…
9

Thread Example
 Suppose that the word processor is written as a two-threaded program.
 One thread interacts with the user and
 the other handles reformatting in the background.
 As soon as a sentence is deleted from page 1, the interactive thread tells the reformatting thread to reformat the whole book.

 Meanwhile, the interactive thread continues to listen to the


keyboard and mouse and responds to simple commands like
scrolling page 1 while the other thread is computing madly in the
background.
Cont’d…
10

 While we are at it, why not add a third thread?


 Many word processors have a feature of automatically saving the entire
file to disk every few minutes to protect the user against losing a day's work
in the event of a program crash, system crash, or power failure.
 The third thread can handle the disk backups without interfering
with the other two
Why use Threads?
11

 Allow a single application to do


many things at once
 Simpler programming model
 Less waiting
 Threads are faster to create or
destroy
 No separate address space
 Overlap computation and I/O
 Could be done without threads, but it’s
harder
 Example: word processor
 Thread to read from keyboard
 Thread to format document
 Thread to write to disk
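As a hedged illustration of this example (not taken from the original slides), the following POSIX-threads sketch creates three threads inside one process; the thread bodies are stand-ins that only print what the real word-processor threads would be doing.

```c
/* Minimal pthreads sketch: three threads inside one process.
   Compile with: gcc wordproc.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static void *keyboard_thread(void *arg) { puts("reading keyboard");     return NULL; }
static void *format_thread(void *arg)   { puts("formatting document");  return NULL; }
static void *backup_thread(void *arg)   { puts("saving file to disk");  return NULL; }

int main(void) {
    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, keyboard_thread, NULL);  /* interactive thread  */
    pthread_create(&t2, NULL, format_thread,   NULL);  /* reformatting thread */
    pthread_create(&t3, NULL, backup_thread,   NULL);  /* disk-backup thread  */
    pthread_join(t1, NULL);   /* wait for all three threads to finish */
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}
```

All three threads share the process's address space, so they can all see the same document data without any copying.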
Cont’d…
12

Single and Multithreaded Processes


Cont’d…
13
Differences between processes and threads
Multithreading Models
14

 When computers were first invented, they could execute only one program at a time.
 Once one program had finished executing, the next one was picked for execution, and so on.
 Programs, however, often need to do more than one thing at the same time (this is where multithreading comes in).
 A browser, for example, might want to download one file in one window, while it is trying to upload another and print some other file.
 This ability of a program to do multiple things simultaneously is implemented through threads (multithreading).
 Each thread shares with its peer threads the process’s code section, data section, and operating-system resources such as open files and signals.
Multithreaded Web Server
15
Process Management

 A process needs certain resources, including CPU time,


memory, files, and I/O devices, to accomplish its task.
 The operating system is responsible for the following activities
in connection with process management
 Process Creation and Deletion

 Process Suspension and Resumption

 Provision of mechanisms for:


 Process Synchronization

 Process Communication
Cont’d…
17

Process Creation
When is a new process created?
 System initialization: one or more processes are created when the OS starts up
 Execution of a process-creation system call by a running process: something explicitly asks for a new process
 A user request to create a process (system call executed from a user shell)
 Initiation of a batch job
 New processes are thus created by already-running processes, user programs, or system daemons
Cont’d…
18

Process Creation …
 Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
 Generally, a process is identified and managed via a process identifier (pid)
 Resource-sharing options
 Parent and children share all resources
 Children share a subset of the parent’s resources
 Parent and child share no resources
 Execution options
 Parent and children execute concurrently
 Parent waits until children terminate
 (The original slide also showed a typical Linux process tree: init (pid 1) at the root, with children such as login, kthreadd, and sshd, which in turn create bash, khelper, pdflush, and further sshd sessions, and finally user programs such as emacs, tcsch, and ps.)
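A minimal sketch, assuming a POSIX system, of how one process creates a child: fork() duplicates the caller, and a tree like the one shown on the original slide is built out of exactly such calls.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */
    if (pid < 0) {
        perror("fork");                 /* creation failed         */
        return 1;
    } else if (pid == 0) {
        /* child branch */
        printf("child:  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        /* parent branch: parent and child may run concurrently;
           here the parent chooses to wait for the child          */
        wait(NULL);
        printf("parent: pid=%d created child %d\n", (int)getpid(), (int)pid);
    }
    return 0;
}
```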
Cont’d…
19

Process Termination
When is a process terminated?
 Conditions that terminate processes can be
1. Voluntary
 Normal Exit
 Error Exit

2. Involuntary
 Fatal error (only sort of involuntary)
 Killed by another process
Cont’d…
20

Process Termination …
 Process executes last statement and then asks the operating system
to delete it using the exit() system call.
 Returns status data from child to parent (via wait())

 Parent may terminate the execution of children processes using the


abort() system call. Some reasons for doing so:
 Child has exceeded allocated resources

 Task assigned to child is no longer required
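Complementing the fork() sketch above, a minimal POSIX example of the termination side: the child calls exit() and the parent collects the status via wait().

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        exit(42);                          /* child asks the OS to delete it  */
    } else if (pid > 0) {
        int status;
        wait(&status);                     /* status data returned to parent  */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}
```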


Cont’d…
21

Process State
 As a process executes, it changes state

 new: The process is being created

 running: Instructions are being executed

 waiting: The process is waiting for some event to occur

 ready: The process is waiting to be assigned to a processor:


(Waiting for CPU)

 terminated: The process has finished execution


Cont’d…

 The original slides illustrate these states with several figures (not reproduced in this text version): the diagram of process state transitions, the process states overview, the five-state process model, and queuing models with a single blocked queue and with multiple blocked queues.


Process Scheduling
27
 Why schedule processes?
 Bursts of CPU usage alternate with periods of I/O wait
 Some processes are CPU-bound: they don’t make many I/O
requests
 Other processes are I/O-bound and make many kernel
requests
Cont’d…
28
 CPU Scheduler
 Selects from among the processes in memory that are ready to
execute, and allocates the CPU to one of them
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is non-preemptive
 All other scheduling is preemptive – implications for data
sharing between threads/processes
Cont’d…
29
 CPU Scheduler …
 The OS makes three types of scheduling decisions with respect to the execution of processes:
1. Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue
 It determines when new processes are admitted to the system
 New → Ready, Blocked/Suspend → Ready/Suspend
2. Medium-term scheduler: part of the swapping function
• It determines whether a program is brought partially or fully into main memory so that it may be executed
• It is performed when swapping is done
• Ready/Suspend → Ready
• Blocked/Suspend → Blocked
Cont’d…
30

 CPU Scheduler …

3. Short-term scheduler (CPU scheduler)
 The decision of which ready process to execute next
 It determines which ready process will get processor time next
 Ready → Running
Cont’d…
31
 There are two main characteristics of short-term scheduling algorithms:
1. Selection Function
 It determines which process, among ready processes, is selected for
execution
 It may be based on priority, resource requirements, or execution behavior
2. Decision mode:
 It specifies the instants in time at which the selection function is exercised
 There are two types of decision mode:
 Pre-emptive: a process that is logically runnable may be temporarily suspended and moved back to the ready state.
 Non-pre-emptive: run-to-completion method: once a process is in the running state, it continues to execute until it terminates or blocks itself to wait for some event
Cont’d…
32
 Scheduling Goals
 All systems
 Fairness: give each process a fair share of the CPU
 Enforcement: ensure that the stated policy is carried out
 Balance: keep all parts of the system busy
 Batch systems
 Throughput: maximize jobs per unit time (hour)
 Turnaround time: minimize time users wait for jobs
 CPU utilization: keep the CPU as busy as possible
 Interactive systems
 Response time: respond quickly to users’ requests
 Proportionality: meet users’ expectations
 Real-time systems
 Meet deadlines: missing deadlines is a system failure!
 Predictability: same type of behavior for each time slice
Cont’d…
33
 Scheduling Criteria (measuring scheduling performance)
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
• Higher throughput usually means better utilized system
 Turnaround time – amount of time to execute a particular process
• Like response time, but for batch jobs (response is the completion of the
process)
 Waiting time – amount of time a process has been waiting in the ready
queue
 Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-sharing
environment)
• Can measure average, variance, minimum, maximum, …
• May be more useful to measure time spent waiting
 Usually not possible to optimize for all metrics with the
same scheduling algorithm
Cont’d…
34
 Scheduling algorithm Optimization Criteria
 Max CPU utilization

 Max throughput

 Min turnaround time

 Min waiting time

 Min response time


Cont’d…
35
1. First-Come, First-Served (FCFS) Scheduling
 Goal: do jobs in the order they
arrive
 Fair in the same way a bank teller line is
fair
 Simple algorithm!
 FIFO
 The FCFS algorithm is non-
preemptive
 Problem: long jobs delay every job
after them
 Many processes may wait for a single
long job
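A minimal sketch of how FCFS waiting and turnaround times can be computed; the burst times (24, 3, 3) are made up for illustration and all jobs are assumed to arrive at time 0.

```c
#include <stdio.h>

int main(void) {
    /* CPU burst times of three jobs, listed in arrival (FIFO) order */
    int burst[] = {24, 3, 3};
    int n = 3, elapsed = 0;
    double total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int wait = elapsed;               /* job waits for everything before it */
        int turnaround = wait + burst[i]; /* waiting time + its own burst       */
        printf("job %d: waiting=%d turnaround=%d\n", i, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        elapsed += burst[i];
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           total_wait / n, total_turnaround / n);
    return 0;
}
```

With these numbers the long first job pushes the average waiting time up to 17, which is exactly the "long jobs delay every job after them" problem described above.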
Cont’d…
43

2. Shortest Job First (SJF) Scheduling Algorithm


 Goal: do the shortest job first
 Short jobs complete first
 Long jobs delay every job after them
 Jobs sorted in increasing order of
execution time
 Ordering of ties doesn’t matter
 Shortest Remaining Time First
(SRTF): preemptive form of SJF
 Problem: how does the scheduler
know how long a job will take?
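Under the same made-up burst times, a hedged sketch of non-preemptive SJF: sort the ready jobs by burst time, then accumulate waiting times as in FCFS. With bursts of 24, 3, and 3 the average waiting time drops from 17 to 3.

```c
#include <stdio.h>
#include <stdlib.h>

/* comparison function for qsort: ascending burst time */
static int by_burst(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {24, 3, 3};
    int n = 3, elapsed = 0;
    double total_wait = 0;

    qsort(burst, n, sizeof burst[0], by_burst);   /* shortest job first        */
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;                    /* waiting time of this job  */
        elapsed += burst[i];
    }
    printf("average waiting time = %.2f\n", total_wait / n);
    return 0;
}
```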
Cont’d…
49
3. Shortest Remaining Time (SRT) Scheduling Algorithm

 The scheduler selects the process that has the shortest expected remaining processing time.
 Its selection function is the remaining execution time.
 It uses the preemptive decision mode.
Cont’d…
55
4. Round Robin (RR) Scheduling Algorithm
 Round Robin scheduling
 Give each process a fixed time slot (quantum)
 Rotate through “ready” processes
 Each process makes some progress
 Oldest, simplest, fairest algorithm
 What’s a good quantum?
 Too short: many process switches hurt efficiency
 Too long: poor response to interactive requests
 Typical length: 10–100 ms
 Advantages of RR
 It does not suffer from starvation
 It treats all processes equally, without priorities
Cont’d…
58

 Advantages of RR
 It does not suffer from starvation
 All jobs get a fair allocation of CPU time
 It treats all processes equally, without priorities

 Disadvantages of RR
 Priorities cannot be set for the processes
 Finding a correct time quantum is quite a difficult task
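A minimal Round Robin simulation sketch (the burst times and the quantum of 4 time units are made-up values): each pass over the ready list gives every unfinished job at most one quantum.

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};        /* remaining burst time per job   */
    int finish[3]   = {0};               /* completion time of each job    */
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {    /* rotate through the ready jobs  */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;               /* job runs for at most one quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = time;        /* job has completed              */
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("job %d finished at t=%d\n", i, finish[i]);
    return 0;
}
```

With bursts 24, 3, 3 the short jobs finish at t = 7 and t = 10 instead of waiting behind the long one, which is exactly the responsiveness RR aims for.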
Cont’d…
59
5. Priority Scheduling Algorithm
 Assign a priority to each process
 “Ready” process with highest priority
allowed to run
 Running process may be interrupted after its
quantum expires
 Priorities may be assigned dynamically
 Reduced when a process uses CPU time
 Increased when a process waits for I/O
 SJF is a special case of priority scheduling where the priority is the predicted next CPU burst time (a shorter burst means a higher priority).
 Problem => Starvation – low-priority processes may never execute.
 Solution => Aging – as time progresses, increase the priority of the waiting process.
Inter-Process Communication
61
 A mechanism for processes to communicate and to synchronize their actions.
 Very briefly, there are three issues here:
 The first: how one process can pass information to another.
 The second: making sure two or more processes do not get in each
other's way,
• for example, two processes in an airline reservation system each
trying to grab the last seat on a plane for a different customer.
 The third: proper sequencing when dependencies are present:
• if process A produces data and process B prints them, B has to
wait until A has produced some data before starting to print.
Cont’d…
62

 Why is Inter-Process Communication (IPC) needed?

 Resource Sharing: IPC enables multiple processes to share resources, such as memory and file systems.

 Coordination and Synchronization: IPC provides a way for processes to coordinate their activities and synchronize access to shared resources.

 Communication: IPC enables processes to communicate with each other.

 Modularity: IPC enables the development of modular software, where processes can be developed and executed independently.

 Flexibility: IPC allows processes to run on different hosts or nodes in a network.
Synchronization in Inter-Process Communication
63

 Semaphores: a synchronization tool used to control access to shared resources by multiple processes.
 Semaphores are essentially counters that are used to regulate access to shared resources.
 Mutexes: Mutexes (short for "mutual exclusion") are a
synchronization mechanism used to ensure that only one process
can access a shared resource at a time.
 Monitors: Monitors are a synchronization mechanism used to
regulate access to shared resources in a multi-process environment.
 Condition Variables: Condition variables are used to coordinate
access to shared resources by multiple processes.
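A hedged sketch of the semaphore idea using POSIX unnamed semaphores (assumed available, e.g. on Linux): the counter is initialized to the number of available resource slots; sem_wait() decrements it and blocks at zero, while sem_post() increments it.

```c
/* POSIX semaphore sketch. Compile with: gcc sem_demo.c -lpthread */
#include <semaphore.h>
#include <stdio.h>

int main(void) {
    sem_t slots;
    sem_init(&slots, 0, 2);   /* 2 identical resource instances available    */

    sem_wait(&slots);         /* acquire one slot (counter: 2 -> 1)          */
    sem_wait(&slots);         /* acquire the second slot (1 -> 0)            */
    /* a third sem_wait() here would block until someone calls sem_post()    */

    sem_post(&slots);         /* release a slot (0 -> 1)                     */
    sem_post(&slots);         /* release the other slot (1 -> 2)             */
    sem_destroy(&slots);
    puts("semaphore demo finished");
    return 0;
}
```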
Methods of Inter-Process Communication
64

 Pipe:
 A pipe is a unidirectional data channel.
 Two pipes can be used to create a two-way data channel between two processes. Pipes are provided by both UNIX-like and Windows operating systems.
 Socket:
 A socket is an endpoint for sending or receiving data across a network.
 Most operating systems use sockets for IPC.
 File: a file on disk that one process writes and another reads.
 Signal: a signal is a way of notifying a process that an event has occurred.
 Shared Memory: memory that can be simultaneously accessed by multiple processes; it is supported by most operating systems, including Windows.
 Message Passing and Message Queues
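A minimal sketch, assuming a POSIX system, of pipe-based IPC: the parent writes a message into the write end of a unidirectional pipe and the child reads it from the read end.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: reads from the pipe          */
        close(fd[1]);                 /* close the unused write end          */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                          /* parent: writes into the pipe        */
        close(fd[0]);                 /* close the unused read end           */
        write(fd[1], "hello", 6);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}
```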
Race Conditions
65
 A race condition is a potential IPC problem that occurs in an operating system (OS) when two or more processes or threads execute concurrently and access shared data.
 A race condition is a situation that may occur inside a critical section.
 example
 If two threads are simultaneously accessing and changing the
same shared resource, such as a variable or a file, the final
state of that resource depends on the order in which the
threads execute.
 If the threads are not correctly synchronized, they can
overwrite each other's changes, causing incorrect results or
even system crashes.
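A hedged sketch of such a race using threads: two threads increment a shared counter with no synchronization, so increments can be lost and the printed total is frequently less than the expected 200000 (most visibly when compiled without optimization).

```c
/* Race-condition sketch. Compile with: gcc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared resource */

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("expected 200000, got %ld\n", counter);  /* often less: lost updates */
    return 0;
}
```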
Cont’d…
67
Effects of Race Condition in OS
 Deadlocks: This can happen if two processes try to access the
same resource simultaneously, and the OS doesn't have a
mechanism to ensure that only one can access the resource at a
time.
 Data Corruption
 Security Vulnerability
 Performance Degradation
Cont’d…
68

How do you prevent race conditions?


 Avoid shared states.
 This means reviewing code to ensure that, when shared resources are part of a system or process, atomic operations are in place that run independently of other processes, and that locking is used to enforce atomic execution of critical sections of code. Immutable objects, which cannot be altered once created, can also be used.
 Use thread synchronization.
 Here, only one thread at a time is allowed to execute a given part of the program.
Critical Section (Critical Region)
69
 A Critical Section of a program is where global shared memory is
being accessed.
 Sometimes a process has to access shared memory or files, or do
other critical things that can lead to races. That part of the
program where the shared memory is accessed is called the
critical region.
 How do we avoid race conditions? Mutual exclusion is a solution: some way of making sure that if one process is using shared data, the other processes are excluded from doing the same thing.
Mutual Exclusion
70
 In this section we will examine various proposals for achieving
mutual exclusion, so that while one process is busy updating
shared memory in its critical region, no other process will enter its
critical region and cause trouble.
 Four conditions to provide mutual exclusion:
1. No two processes simultaneously in critical region
2. No assumptions made about speeds or numbers of CPUs
3. No process running outside its critical region may block another process
4. No process must wait forever to enter its critical region
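A minimal sketch of enforcing mutual exclusion with a POSIX mutex: the shared-counter update (the critical region) is wrapped in lock/unlock, which guarantees that no two threads are in the critical region at the same time and repairs the lost-update race sketched earlier.

```c
/* Mutual-exclusion sketch. Compile with: gcc mutex.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical region               */
        counter++;                    /* only one thread is ever in here     */
        pthread_mutex_unlock(&lock);  /* leave critical region               */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 now */
    return 0;
}
```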
Deadlock
72

 A set of blocked processes each holding a resource and waiting to


acquire a resource held by another process in the set.
 A process requests resources; if a resource is not available at that time, the process enters the waiting state.
 A waiting process may never change its state because the resources it has requested are held by other waiting processes.
Cont’d…
73

Example
 Two trains approach each other at a crossing; both come to a full stop and neither shall start until the other has gone.
 Bridge crossing: traffic flows in only one direction.
 Each section of the bridge can be viewed as a resource.
 If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).
 Several cars may have to be backed up if a deadlock occurs.
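The same kind of situation expressed in code, as a hedged sketch: each thread holds one mutex and then waits for the other's, satisfying hold-and-wait and circular wait, so the program usually hangs.

```c
/* Deadlock sketch: usually never terminates.
   Compile with: gcc deadlock.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    pthread_mutex_lock(&r1);     /* A holds resource 1 ...            */
    sleep(1);
    pthread_mutex_lock(&r2);     /* ... and waits for resource 2      */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

static void *thread_b(void *arg) {
    pthread_mutex_lock(&r2);     /* B holds resource 2 ...            */
    sleep(1);
    pthread_mutex_lock(&r1);     /* ... and waits for resource 1      */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);       /* never returns once deadlocked */
    pthread_join(b, NULL);
    puts("no deadlock this time");
    return 0;
}
```

Making every thread acquire the two locks in the same order removes the circular wait and, with it, the deadlock; this is the idea behind the prevention techniques discussed below.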
Conditions for deadlock
75
The following conditions must all be present for a deadlock to be possible:
1. Mutual Exclusion – only one process at a time can use a resource.
 If another process requests that resource, the requesting process must wait until the resource has been released.
2. No preemption – a resource can be released only voluntarily by the process holding it.
3. Hold and Wait – a process holding at least one resource is waiting to acquire additional resources held by other processes.
4. Circular Wait – a set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, P(n–1) is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Methods for handling Deadlock
76

 Ensure that the system will never enter a deadlock state


(traffic lights)
 Allow the system to enter a deadlock state and then recover
(back up cars)
 Ignore the problem and pretend that deadlocks never occur
in the system; used by most operating systems, including
UNIX
Deadlock Prevention
77

 The OS is designed in such a way as to exclude a priori the

possibility of deadlock

 Indirect methods of deadlock prevention:

 to disallow one of the 3 policy conditions

 Direct methods of deadlock prevention:

 to prevent the occurrence of circular wait


Cont’d…
78

Indirect methods of deadlock prevention:


• Mutual Exclusion
 – cannot be disallowed
 – ex: only 1 process at a time can write to a file
• Hold-and-Wait
 – can be disallowed by requiring that a process request all its required resources at one time
 – block the process until all requests can be granted simultaneously
• No preemption
 – can be prevented in several ways. But whenever a process must release a resource whose usage is in progress, the state of this resource must be saved for later resumption.
 – hence: practical only when the state of a resource can be easily saved and restored later, such as the processor.
Deadlock Avoidance
79
• We allow the 3 policy conditions but make judicious choices to assure
that the deadlock point is never reached

• Allows more concurrency than prevention


• Two approaches:
– do not start a process if its demand might lead to deadlock
– do not grant an incremental resource request if this allocation might
lead to deadlock
• In both cases: maximum requirements of each resource must
be stated in advance
Deadlock Detection
80

• Resource requests are granted to processes whenever possible. The OS needs:
– an algorithm to check if deadlock is present
– an algorithm to recover from deadlock
• The deadlock check can be performed at every
resource request
• Such frequent checks consume CPU time
Deadlock Recovery
81
Needed when deadlock is detected. The following approaches are possible:
 Process Termination–
 Abort all deadlocked processes.
 Abort one process at a time until the deadlock cycle is eliminated.
 In which order should we choose to abort?
 Priority of the process.
 How long process has computed, and how much longer to completion.
 Resources the process has used.
 Resources process needs to complete.

 Resource Pre-emption
 Selecting a victim - minimize cost.
 Rollback - return to some safe state, restart process from that state.
 Starvation – the same process may always be picked as the victim; include the number of rollbacks in the cost factor.
Examples of Process Management OS Commands and Tools
82

 ps: provides information about running processes


 top: shows the current state of the system and the processes
running on it
 htop: an interactive process viewer and system monitor
 kill: used to terminate a process
 renice: used to change the priority of a process
 nice: used to set the priority of a process at launch
 Task Manager (Windows): a graphical user interface for managing processes
 Activity Monitor (macOS): a graphical user interface for managing processes
End of Chapter Two
