
Module 3 Concurrent Processes

Outline
• Process Concept
• Principle of Concurrency
• Producer / Consumer Problem
• Mutual Exclusion
• Critical Section Problem
• Dekker’s solution
• Peterson’s solution
• Semaphores, Test and Set operation
• Classical Problems in Concurrency: Dining Philosophers Problem, Sleeping Barber Problem
• Inter Process Communication models and Schemes, Process generation.
• Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection,
Recovery from deadlock.

2
Introduction to Inter-process communication

Processes in a system can be independent or cooperating.

▪ Independent processes: cannot affect or be affected by the execution of another process.
▪ Cooperating processes: can affect or be affected by the execution of another process.

3
Introduction to IPC
▪ Inter-process communication (IPC) is a set of
methods for the exchange of data among
multiple threads in one or more processes.
▪ Processes may be running on one or more computers
connected by a network.
▪ IPC may also be referred to as inter-thread
communication and inter-application communication.

4
Why IPC ?
There are several reasons for providing an environment that
allows process cooperation:
▪ Information sharing
▪ Computational speedup
▪ Modularity
▪ Convenience

5
Methods of communication
▪ The communication between these processes can be seen as a method of co-operation between
them. Processes can communicate with each other in two ways:
a. Message Passing (Process A sends the message to the kernel, and the kernel then delivers
that message to Process B)
b. Shared Memory (Process A puts the message into shared memory, and Process B then reads
that message from shared memory)

6
Concurrent Processes in Operating System
• Concurrency is the execution of a set of multiple instruction sequences at
the same time.
• This occurs when there are several process threads running in parallel.
• These threads communicate with the other threads/processes through a
concept of shared memory or through message passing.
• Concurrency results in resource sharing, which causes issues like
deadlocks and resource scarcity.
• It aids techniques such as process coordination, memory allocation,
and execution scheduling to maximize throughput.

7
Principle of Concurrency
• Today's technology, like multi-core processors and parallel processing,
allows multiple processes and threads to be executed simultaneously.
• Multiple processes and threads can access the same memory space, the
same declared variable in code, or even read or write to the same file.
• Interleaving (on a single processor) and overlapping (on multiple processors) are the two
forms of concurrent execution, and both present the same problems.
• It is impossible to predict the relative speed of execution, which is determined by the
following factors:
• The way the operating system handles interrupts
• The activities of other processes
• The operating system's scheduling policies
8
Types of concurrency processing in operating systems

• Process scheduling − This is the most basic form of concurrency processing, in which
the operating system executes multiple processes one after the other, with each process
given a time slice to execute before being suspended and replaced by the next process
in the queue.
• Multi-threading − This involves the use of threads within a process, with each thread
executing a different task concurrently.
• Parallel processing − This involves the use of multiple processors or cores within a
system to execute multiple tasks simultaneously.
• Distributed processing − This involves the use of multiple computers or nodes
connected by a network to execute a single task.

9
Problems in Concurrency
▪ There are various problems in concurrency:
▪ Locating programming errors: It is difficult to spot a programming error because
results are usually not repeatable, since the state of the shared components differs each
time the code is executed.
▪ Sharing Global Resources: Sharing global resources is difficult. If two processes
utilize a global variable and both alter the variable's value, the order in which the many
changes are executed is critical.
▪ Locking the channel : It could be inefficient for the OS to lock the resource and prevent
other processes from using it.
▪ Optimal Allocation of Resources: It is challenging for the OS to handle resource
allocation properly.

10
Advantages of Concurrency in OS
▪ Better Performance: It improves the operating system's performance.
When one application uses only the processor and another uses only the
disk drive, running both applications simultaneously takes less time than
running them sequentially.
▪ Better Resource Utilization: It enables resources that are not being used
by one application to be used by another.
▪ Running Multiple Applications: It enables you to execute multiple
applications simultaneously.

11
Disadvantages of Concurrency in OS
▪ It is necessary to protect multiple applications from each other.
▪ It is necessary to use extra techniques to coordinate several applications.
▪ Additional performance overheads and complexities in OS are needed for
switching between applications.

12
Process Synchronization in OS
▪ When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution and inappropriate outputs
can be produced.
▪ A cooperative process is one which can affect the execution of another process or can
be affected by the execution of another process.
▪ Such processes need to be synchronized so that their order of execution can be
guaranteed.
▪ The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization.
▪ There are various synchronization mechanisms that are used to synchronize the
processes.

13
Example: Process synchronization
Initially shared = 10
Say there are two processes P1 and P2 which
share a common variable (shared = 10); both
processes are in the ready queue waiting for
their turn to execute. Suppose process P1
comes under execution first: the CPU copies the
shared variable (shared = 10) into P1's local
variable (X = 10) and increments it by 1
(X = 11). When the CPU then reads the line
sleep(1), it switches from the current process P1
to process P2 in the ready queue, and P1 goes
into a waiting state for 1 second.

14
Continue…
• The CPU now executes process P2 line by line: it copies the shared variable (shared = 10)
into P2's local variable (Y = 10) and decrements Y by 1 (Y = 9). When the CPU reads
sleep(1), P2 also goes into a waiting state, and the CPU stays idle for some time as there is
no process in the ready queue.
• After P1's 1 second expires, P1 returns to the ready queue; the CPU takes it under
execution and executes its remaining code, storing the local variable (X = 11) back into the
shared variable (shared = 11). The CPU then idles again until P2's 1 second expires, after
which the CPU executes P2's remaining code, storing the local variable (Y = 9) back into
the shared variable (shared = 9).
• Note: We expect the final value of the shared variable to be 10 (Process P1 increments
the variable (shared = 10) by 1 and Process P2 decrements it by 1, so it should end at
shared = 10). But we get an undesired value (shared = 9) because P2 read the variable
before P1 wrote its result back, due to a lack of proper synchronization.
15
Race Condition
A race condition is an undesirable situation that occurs when a device
or system attempts to perform two or more operations at the same
time.
In other words: situations like the one above, where processes access
the same data concurrently and the outcome of execution depends on
the particular order in which the accesses take place, are called race
conditions.

16
Producer Consumer problem
▪ The Producer-Consumer problem is a classical multi-process
synchronization problem; that is, we are trying to achieve synchronization
between more than one process.
▪ There is one Producer, which produces items, and one Consumer, which
consumes the items produced by the Producer. Both share the same
fixed-size memory buffer.
▪ The task of the Producer is to produce the item, put it into the memory
buffer, and again start producing items. Whereas the task of the Consumer
is to consume the item from the memory buffer.

17
The Producer-Consumer Problem

▪ It is also known as the Bounded Buffer Problem.
▪ Consider two processes, Producer and Consumer, who share a common, fixed-size
buffer.
▪ The Producer puts information into the buffer.
▪ The Consumer consumes this information from the buffer.
▪ Trouble arises when the producer wants to put a new item in the buffer, but it
is already full,
▪ or the consumer wants to remove an item from the buffer, but it is already empty.

18
Producer Consumer problem
#define BUFFER_SIZE 10
typedef struct {
    DATA data;
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;

/* Producer */
item nextProduced;
while (TRUE) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE);      /* busy-wait while the buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

/* Consumer */
item nextConsumed;
while (TRUE) {
    while (counter == 0);                /* busy-wait while the buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
19
Producer Consumer problem
Note that counter++; is NOT what it seems!!
It really is:
    register = counter
    register = register + 1
    counter = register
At the machine level, the following interleaving could occur using this code:

T0: Producer executes register1 = counter        (register1 = 5)
T1: Producer executes register1 = register1 + 1  (register1 = 6)
T2: Consumer executes register2 = counter        (register2 = 5)
T3: Consumer executes register2 = register2 - 1  (register2 = 4)
T4: Producer executes counter = register1        (counter = 6)
T5: Consumer executes counter = register2        (counter = 4)
20
The Producer-Consumer Problem
int count = 0;

void producer()
{
    int item;
    while (TRUE)
    {
        produce_item(item);
        while (count == n);        /* buffer full */
        buffer[in] = item;
        in = (in + 1) mod n;
        count = count + 1;
    }
}

void consumer()
{
    int item;
    while (TRUE)
    {
        while (count == 0);        /* buffer empty */
        item = buffer[out];
        out = (out + 1) mod n;
        count = count - 1;
        consume_item(item);
    }
}
21
Basic Definitions: Critical Section
▪ Critical Section: The part of the program where the shared resource is accessed is
called the critical section or critical region.

22
Critical Section or Critical Region
▪ Sometimes a process has to access shared memory or files, or do other critical
things that can lead to races.
▪ If no two processes are ever in their critical regions at the same time, then races
can be avoided.

23
Solving Critical-Section Problem
1. Mutual Exclusion
✓ Only one process at an instant of time can enter the critical section.
✓ That is, the critical section must be accessed in a mutually exclusive fashion.

2. Progress
✓ If a process is not using the critical section then it should not stop any other
process from accessing it. In other words any process can enter a critical section if
it is free
3. Bounded Wait
✓ There should be a maximum bound up to which a process can wait.
✓ No process should have to wait forever to enter a critical section.

24
Types of solution
• Software solutions
-Algorithms whose correctness does not rely on any assumptions
other than positive processing speed.

• Hardware solutions
- Rely on some special machine instructions.

• Operating system solutions


- Extending hardware solutions to provide some functions and
data structure support to the programmer.

25
Software Solutions

• Initial Attempts to Solve Problem.


• Only 2 processes, P0 and P1.
• General structure of process Pi (other process Pj )
repeat
entry section
critical section
exit section
remainder section
until false;
• Processes may share some common variables to synchronize
their actions.
26
Busy Waiting Solutions for CS Problem

A list of proposals to achieve mutual exclusion


1. Lock variables
2. Strict alternation
3. Test and Set Instruction
4. Peterson's solution.

27
Lock Variable
▪ In this mechanism, a Lock variable lock is used. Two values of lock can be
possible, either 0 or 1.
▪ Lock value 0 means that the critical section is vacant while the lock value 1
means that it is occupied.
▪ A process which wants to get into the critical section first checks the value of
the lock variable.
▪ If it is 0 then it sets the value of lock as 1 and enters into the critical section,
otherwise it waits.

28
Lock variable

Entry Section →
    while (lock != 0);    /* busy-wait until the lock is free */
    lock = 1;
// Critical Section
Exit Section →
    lock = 0;

29
Lock variable
Problem - Race condition
• Suppose that one process reads the lock and sees that it is 0. Before
it can set the lock to 1, another process is scheduled, runs, and sets
the lock to 1.
• When the first process runs again, it will also set the lock to 1, and
two processes will be in their critical regions at the same time.

30
Strict Alternation (Turn Variable)
▪ An integer variable ‘turn’ keeps track of whose turn it is to enter the critical section.
Initially turn = 0.
▪ Process 0 inspects turn, finds it to be 0, and enters its critical section.
▪ Process 1 also finds it to be 0 and therefore sits in a loop, continually testing ‘turn’ to
see when it becomes 1.
▪ Continuously testing a variable until some value appears is called busy waiting.
▪ When process 0 exits the critical region it sets turn to 1; now process 1 finds it to be 1
and enters its critical region.
▪ In this way, both processes get alternate turns to enter the critical region.

31
Strict Alternation

/* Process P0 */
while (TRUE)
{
    while (turn != 0);      /* trap */
    critical_region();
    turn = 1;               /* P0 exits CS */
    noncritical_region();
}

/* Process P1 */
while (TRUE)
{
    while (turn != 1);      /* trap */
    critical_region();
    turn = 0;               /* P1 exits CS */
    noncritical_region();
}
32
Test and Set Instruction

33
Test and Set Instruction

34
Peterson’s Solution for Critical-section Problem
• A Classic software-based solution to the critical-section problem known as
Peterson’s solution.
• It may not work correctly on modern computer architectures.
• However, it provides a good algorithmic description of solving the critical-
section problem and illustrates some of the complexities involved in
designing software that addresses the requirements of mutual exclusion,
progress, and bounded waiting.
• Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.

35
Peterson’s Solution for Critical-section Problem
• It ensures that if a process is in the critical section, no other process must be
allowed to enter it. This property is termed mutual exclusion.
• If more than one process wants to enter the critical section, the process that
should enter the critical region first must be established. This is termed
progress.
• There is a limit to the number of times other processes can enter the critical
region after a process has requested to enter and before that request is
granted. This is termed bounded waiting.
• It provides platform neutrality as this solution is developed to run in user
mode, which doesn't require any permission from the kernel.

36
Peterson’s Solution for Critical-section Problem
▪ Advantages of Peterson’s Solution
✓ Peterson's solution allows two processes to share and access a resource without
conflict.
✓ Every process gets a chance of execution.
✓ It is simple to implement and uses simple and powerful logic.
✓ It can be used on any hardware as it is completely software based and executed in
user mode.
✓ Prevents the possibility of a deadlock.
▪ Disadvantages of Peterson’s Solution
✓ The process may take a long time to wait for the other processes to come out of the
critical region. It is termed as Busy waiting.
✓ This algorithm may not work on systems having multiple CPUs.
✓ The Peterson solution is restricted to only two processes at a time.
37
Peterson’s Solution
int flag[2] = {false, false};  // flag[i] is true when process i wants to enter the critical region
int turn;                      // whose turn it is to enter the critical region
void Entry_Section(int i)      // i represents this process (0 or 1)
{
    int j;
    j = 1 - i;                 // j represents the other process
    flag[i] = true;            // announce that process i wants to enter
    turn = j;                  // give the other process priority
    // loop until process j leaves the critical region or gives up its turn
    while (flag[j] == true && turn == j);
}
void Exit_section(int i)
{   // allow the other process to enter
    flag[i] = false;
38 }
Hardware Solutions
• Many systems provide hardware support for critical section code
• Uniprocessors – could disable interrupts
– Currently running code would execute without preemption
– Generally too inefficient on multiprocessor systems
• Operating systems using this not broadly scalable

• Modern machines provide special atomic hardware instructions


• Atomic = non-interruptible
– Either test memory word and set value
– Or swap contents of two memory words

39
Solution - Producer-Consumer Problem

▪ Solution for producer:

• The producer either goes to sleep or discards data if the buffer is full.
• Once the consumer removes an item from the buffer, it notifies
(wakes up) the producer to put data into the buffer.

▪ Solution for consumer:

• The consumer can go to sleep if the buffer is empty.
• Once the producer puts data into the buffer, it notifies (wakes up) the
consumer to remove (use) data from the buffer.
40
Producer Consumer problem using Sleep & Wakeup

#define N 4                          /* number of slots in the buffer */
int count = 0;

void producer(void)
{
    int item;
    while (true) {
        item = produce_item();
        if (count == N) sleep();     /* buffer full: go to sleep */
        insert_item(item);
        count = count + 1;
        if (count == 1)
            wakeup(consumer);        /* buffer was empty: wake the consumer */
    }
41 }
Producer Consumer problem using Sleep & Wakeup

void consumer(void)
{
    int item;
    while (true) {
        if (count == 0) sleep();     /* buffer empty: go to sleep */
        item = remove_item();
        count = count - 1;
        if (count == N - 1)
            wakeup(producer);        /* buffer was full: wake the producer */
        consume_item(item);
    }
42 }
Sleep and Wake up
▪ To avoid busy waiting we have IPC primitives ( pair of sleep and
wakeup)
▪ Solution : Replace busy waiting by blocking calls
• Sleep (blocks process): it is a system call that causes the caller
to block, that is, be suspended until another process wakes it
up.

• Wakeup (unblocks process): The wakeup call has one


parameter, the process to be awakened.

43
The problem with sleep and wake-up calls
▪ The buffer is empty and count == 0. The consumer reads count but is
preempted by the producer before it can go to sleep.
▪ The producer produces something, increments count, and sends a wakeup
to the consumer.
▪ The consumer is not yet asleep, so it misses the wakeup; when it resumes
it still thinks count == 0 and goes to sleep.
▪ The producer eventually fills the buffer and also goes to sleep. Producer
and consumer sleep forever. So the problem is lost wake-up calls.

44
Semaphore
▪ A semaphore is a special integer variable that, apart from
initialization, is accessed only through two standard operations.
It is a synchronization tool that does not require busy waiting.
• A semaphore is an integer flag, indicating that it is safe to proceed.
• Two standard operations modify S: wait() and signal()
• Originally called P() and V()
• Less complicated
• Can only be accessed via two indivisible (atomic) operations.

45
Semaphore Implementation with no Busy waiting
• Implementation of wait:

wait(S) {
    S.value--;
    if (S.value < 0) {
        /* add this process to S's waiting queue */
        block();
    }
}

• Implementation of signal:

signal(S) {
    S.value++;
    if (S.value <= 0) {
        /* remove a process P from S's waiting queue */
        wakeup(P);
    }
46 }
Wait() & Signal() Operation
A simple way to understand wait() and signal() operations are:
▪ wait(): Decrements the value of the semaphore variable by 1. If the
value becomes negative, the process executing wait() is blocked
(like sleep), i.e., added to the semaphore's waiting queue.
▪ signal(): Increments the value of the semaphore variable by 1. After
the increment, if the pre-increment value was negative (meaning
there are processes waiting for the resource), it transfers a blocked
process from the semaphore's waiting queue to the ready queue
(like wakeup).

47
Usage of Semaphore

▪ Semaphore is used for –


1. For solving CS problem ( Initial Value = 1)
2. For deciding the order of execution among processes
3. For managing resources
▪ Example: P1, P2, P3, and the order of execution that we want is P2, P1, P3.
We use two semaphores, S1 = 0 and S2 = 0:
P1: wait(S1); [P1 runs]; signal(S2)
P2: [P2 runs]; signal(S1)
P3: wait(S2); [P3 runs]
48
Types of Semaphores
▪ There are two types of semaphores as follows:
1. Binary Semaphore
2. Counting Semaphore
▪ Semaphores which are restricted to the values 0 and 1 (or
locked/unlocked, unavailable/available, up/down) are
called binary semaphores (same functionality that mutexes have).
▪ Semaphores which allow an arbitrary resource count are called counting
semaphores(can have possible values more than two)

49
Problem on counting Semaphore
1. A counting semaphore S is initialized to 10. Then, 6 P operations and 4 V
operations are performed on S. What is the final value of S?
• P operation also called as wait operation decrements the value of semaphore variable
by 1.
• V operation also called as signal operation increments the value of semaphore variable
by 1.
• Final value of semaphore variable S= 10 – (6 x 1) + (4 x 1) = 10 – 6 + 4 = 8

2. A counting semaphore S is initialized to 7. Then, 20 P operations and 15 V


operations are performed on S. What is the final value of S?
• P operation also called as wait operation decrements the value of semaphore variable
by 1.
• V operation also called as signal operation increments the value of semaphore variable
by 1.
• Final value of semaphore variable S= 7 – (20 x 1) + (15 x 1)= 7 – 20 + 15= 2
50
Semaphore
▪ In the Producer-Consumer problem, semaphores are used for
two purpose:
▪ mutual exclusion and
▪ synchronization.
▪ In the following example there are three semaphores, full, used
for counting the number of slots that are full; empty, used for
counting the number of slots that are empty; and mutex, used to
enforce mutual exclusion.
▪ BufferSize = 3;
semaphore mutex = 1;          // controls access to the critical section
semaphore empty = BufferSize; // counts the number of empty buffer slots
semaphore full = 0;           // counts the number of full buffer slots

51
Solution to the Producer-Consumer problem using Semaphores
▪ Producer() {
    while (TRUE) {
        make_new(item);       // produce a new item
        wait(&empty);         // decrement the empty semaphore
        wait(&mutex);         // enter critical section
        put_item(item);       // put the item into the buffer
        signal(&mutex);       // leave critical section
        signal(&full);        // increment the full semaphore
    }
}

52
Solution to the Producer-Consumer problem using Semaphores
Consumer() {
    while (TRUE) {
        wait(&full);          // decrement the full semaphore
        wait(&mutex);         // enter critical section
        remove_item(item);    // take an item from the buffer
        signal(&mutex);       // leave critical section
        signal(&empty);       // increment the empty semaphore
        consume_item(item);   // consume the item
    }
}

53
Difficulties with Semaphore
• wait(S) and signal(S) are scattered among several processes;
therefore it is difficult to understand their effect.
• Usage must be correct in all the processes.
• One bad process or one program error can kill the whole
system.

54
Deadlock and starvation problems
• Deadlock – two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes
• Let S and Q be two semaphores initialized to 1
    P0              P1
  wait (S);       wait (Q);
  wait (Q);       wait (S);
    ...             ...
  signal (S);     signal (Q);
  signal (Q);     signal (S);

• Starvation: indefinite blocking.


• A process may never be removed from the semaphore queue in which
it is suspended.
55
Classical Inter-process communication Problems
▪ Readers and Writers Problem
▪ Dining Philosophers Problem
▪ Sleeping Barber Problem

56
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
– Readers – only read the data set; they do not perform any updates.
– Writers – can both read and write.

• Problem – allow multiple readers to read at the same time.


• Only one single writer can access the shared data at the same time.
• Shared Data
– Data set
– Semaphore mutex initialized to 1.
– Semaphore wrt initialized to 1.
– Integer readcount initialized to 0.

57
Readers-Writers Problem (Cont.)
• The structure of a writer process

while (true) {
wait (wrt) ;

// writing is performed

signal (wrt) ;
}

58
Readers-Writers Problem (Cont.)

• The structure of a reader process

while (true) {
    wait (mutex) ;
    readcount ++ ;
    if (readcount == 1) wait (wrt) ;    // the first reader locks out writers

    signal (mutex) ;

    // reading is performed

    wait (mutex) ;
    readcount -- ;
    if (readcount == 0) signal (wrt) ;  // the last reader lets writers back in
    signal (mutex) ;
}

59
Dining-Philosophers Problem
▪ The dining philosophers problem is a classical synchronization problem,
which says that five philosophers are sitting around a circular table.
▪ Their job is to think and eat alternately.
▪ A bowl of noodles is placed at the center of the table, along with five
chopsticks, one for each of the philosophers.
▪ To eat, a philosopher needs both the left and the right chopstick.
▪ A philosopher can eat only if both the immediate left and right chopsticks
are available.
▪ If both immediate left and right chopsticks are not available, the philosopher
puts down the chopstick held (either left or right) and starts thinking again.
60
Dining Philosophers Problem
[Figure: five philosophers P1 to P5 seated around a circular table, with five forks F1 to F5 placed between them.]

▪ Philosophers eat and think.
1. To eat, they must first acquire a left fork and then a right fork (or vice versa).
2. Then they eat.
3. Then they put down the forks.
4. Then they think.
5. Go to 1.
61
Dining Philosophers Problem
▪ Problems:

1. Deadlock

2. Starvation

62
Dining Philosophers Problem
• The structure of Philosopher i:

while (true) {
    wait ( chopstick[i] );
    wait ( chopstick[ (i + 1) % 5] );

    // eat

    signal ( chopstick[i] );
    signal ( chopstick[ (i + 1) % 5] );

    // think
}
63
Sleeping Barber Problem
▪ One barber, one barber
chair, and N seats for
waiting.
▪ No customers so barber
sleeps.
▪ Customer comes in &
wakes up barber.
▪ More customers come in.
• If there is an empty
seat, they take a seat.
• Otherwise, they leave.
64
Sleeping Barber Problem
semaphore customer = 0;       // number of customers waiting for a haircut
semaphore barber = 0;         // barber waiting for customers (sleeping)
semaphore mutex = 1;          // mutual exclusion over the chair count
int NumofEmptyChairs = N;     /* total number of seats in the barber shop */

/* Barber */
while(TRUE) {
    wait(customer);           /* sleep until a customer arrives */
    wait(mutex);              /* protect the number of available seats */
    NumofEmptyChairs++;       /* a chair gets free */
    signal(barber);           /* bring the customer in for a haircut */
    signal(mutex);            /* release the lock */
    /* barber is cutting hair */
}
65
Sleeping Barber Problem
/* Customer */
while(TRUE) {
    wait(mutex);              /* protect the number of available seats */
    if (NumofEmptyChairs > 0)
    {
        NumofEmptyChairs--;   /* sit down */
        signal(customer);     /* notify the barber */
        signal(mutex);        /* release the lock */
        wait(barber);         /* wait in the waiting room if the barber is busy */
    }
    else
        signal(mutex);        /* release the lock; the customer leaves */
}
66
Message Passing
▪ Semaphores, monitors, and event counters are all designed to function
within a single system (that is, a system with a single primary
memory).
▪ They do not support synchronization of processes running on
separate machines connected to a network (Distributed System).
▪ Messages, which can be sent across the network, can be used to
provide synchronization.
▪ So message passing is a strategy for inter-process communication in a
distributed environment.

67
Typical Message-Passing Functions
▪ The source and destination addresses must identify the machine and
the process being addressed; message is a generic data item to
be delivered.
✔ send (destination, &message);
The calling process sends message to destination without
blocking.
✔ receive (source, &message);
The calling process receives the next sequential message from
source, blocking until it is available.
68
Producer-Consumer using Message Passing

In this solution, each message has two components:


▪ an empty/full flag, and a data component being passed from the
producer to the consumer.
▪ Initially, the consumer sends N messages marked as “ empty” to the
producer.
▪ The producer receives an empty message, blocking until one is
available, fills it, and sends it to the consumer.
▪ The consumer receives a filled message, blocking if necessary,
processes the data it contains, and returns the empty message to the producer.

69
Producer-Consumer using Message Passing

#define N 100 //number of slots in the buffer


void producer()
{
int item;
message m; // message buffer
while (TRUE)
{
item = produce_item( ); // generate something to put in buffer
receive(consumer, &m); // wait for an empty to arrive
build_message (&m, item); // construct a message to send
send(consumer, &m); // send item to consumer
}
}
70
Producer-Consumer using Message Passing
void consumer()
{
int item, i;
message m;
for (i = 0; i < N; i++) send(producer, &m); //send N empty messages to producer
while (TRUE)
{
receive(producer, &m); // get message containing item
item = extract_item(&m); //extract item from message
send(producer, &m); // send back empty reply
consume_item(item); // do something with the item
}
}

71
Mutex
▪ When the full counting ability of a semaphore is not needed, a simplified
version of the semaphore, called a mutex, is used.
▪ Mutexes are good only for managing mutual exclusion on some shared
resource.
▪ A mutex is a variable that can be in one of two states (only one bit
is required to represent it), with 0 meaning unlocked and 1 meaning
locked:
• Unlocked
• Locked
▪ Two procedures are used for a Mutex:
• Mutex_Lock   // to enter the CS
• Mutex_Unlock // to exit from the CS
72
Monitors
▪ Tony Hoare (in 1974) and Per Brinch Hansen (in 1975) proposed a
new synchronization structure called a monitor.
▪ Monitors are features to be included in high level programming
languages.
▪ A monitor is a collection of procedures, variables, and data
structures that are all grouped together in a special kind of module
or package.
▪ Processes may call the procedures in a monitor whenever they want
to, but they cannot directly access the monitor’s internal data
structures from procedures declared outside the monitor.

73
Monitors
▪ Monitors have an important property that makes them useful for achieving
mutual exclusion:
▪ Only one process can be active in a monitor at any instant.
▪ When a process calls a monitor procedure, the first few instructions of the
procedure will check to see, if any other process is currently active within the
monitor.
▪ If so, the calling process will be suspended until the other process has left the
monitor.
▪ If no other process is using the monitor, the calling process may enter.

74
Monitors

▪ The solution proposes condition variables, along with two operations on
them: wait and signal.

▪ When a monitor procedure discovers that it cannot continue (e.g., the


producer finds the buffer full), it does a wait on some condition variable,
say, full.

▪ This action causes the calling process to block. It also allows another
process that had been previously prohibited from entering the monitor to
enter now.

75
Monitors
▪ This other process, for example, the consumer, can wake up its
sleeping partner by doing a signal on the condition variable that its
partner is waiting on.

▪ To avoid having two active processes in the monitor at the same time a
signal statement may appear only as the final statement in a monitor
procedure.

▪ If a signal is done on a condition variable on which several processes
are waiting, only one of them, determined by the system scheduler, is
revived (restored).

76
Monitor Syntax
monitor Demo // Name of monitor
Variables;
condition variables;
procedure p1();
.....
end;
procedure p2();
.....
end;
end monitor;
Producer Consumer Problem using Monitors
monitor ProducerConsumer
  condition full, empty;
  integer count;

  procedure insert(item: integer);
  begin
    if count = N then wait(full);
    insert_item(item);
    count := count + 1;
    if count = 1 then signal(empty)
  end;
Producer Consumer Problem using Monitors
  function remove: integer;
  begin
    if count = 0 then wait(empty);
    remove := remove_item;
    count := count - 1;
    if count = N - 1 then signal(full)
  end;

  count := 0;
end monitor;
Producer Consumer Problem using Monitors
procedure producer;
begin
  while true do
  begin
    item := produce_item;
    ProducerConsumer.insert(item)
  end
end;

procedure consumer;
begin
  while true do
  begin
    item := ProducerConsumer.remove;
    consume_item(item)
  end
end;
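The same monitor logic can be approximated in a mainstream language: a class whose methods all run under one lock, plus two condition variables for full/empty. A Python sketch (Python's conditions are Mesa-style, so the waits sit in `while` loops rather than the `if` used in the pseudocode; the class and buffer size are illustrative):

```python
import threading

N = 4  # buffer capacity (arbitrary for this sketch)

class ProducerConsumer:
    def __init__(self):
        self.buffer = []
        self.lock = threading.Lock()                 # the monitor lock
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def insert(self, item):
        with self.lock:                              # only one thread active in the monitor
            while len(self.buffer) == N:             # buffer full -> wait
                self.not_full.wait()
            self.buffer.append(item)
            self.not_empty.notify()                  # wake a sleeping consumer

    def remove(self):
        with self.lock:
            while not self.buffer:                   # buffer empty -> wait
                self.not_empty.wait()
            item = self.buffer.pop(0)                # FIFO removal
            self.not_full.notify()                   # wake a sleeping producer
            return item

pc = ProducerConsumer()
results = []

def producer():
    for i in range(20):
        pc.insert(i)

def consumer():
    for _ in range(20):
        results.append(pc.remove())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
print(results)  # [0, 1, ..., 19] -- FIFO order is preserved
```

Because both `Condition` objects share the one lock, at most one thread is ever active inside `insert`/`remove`, which is exactly the monitor property described above.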
Deadlock
▪ A deadlock consists of a set of blocked processes, each holding a resource
and waiting to acquire a resource held by another process in the set.
For Example:
▪ two processes each want to record a scanned document on a CD.
▪ process 1 requests for scanner & gets it
▪ process 2 requests for CD writer & gets it
▪ process 1 requests CD writer but is blocked
▪ process 2 requests scanner but is blocked.
▪ At this point both processes are blocked and will remain so forever,
▪ This situation is called a deadlock
Scenario of Deadlock
▪ Let us assume that there are three processes P1, P2 and P3.
▪ There are three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned
to P2 and R3 is assigned to P3.
▪ After some time, P1 demands R2, which is being used by P2. P1 halts its execution
since it can't complete without R2. P2 then demands R3, which is being used by P3,
so P2 also stops its execution because it can't continue without R3. P3 in turn
demands R1, which is being used by P1, therefore P3 also stops its execution.
Deadlock
▪ The processes are the cars.
▪ The resources are the spaces occupied by the cars
Figure: traffic example. (a) Deadlock possible. (b) Deadlock occurred.
Sequence of events to use a Resource
1. Request the resource.
2. Use the resource.
3. Release the resource.
Deadlock Characterization
1. Mutual exclusion:
▪ At least one resource type in the system must be usable only in non-shareable
mode (i.e., by one process at a time), for example a printer.
▪ Other resources can be shared by more than one process at a time
(e.g., a memory location).
▪ In the diagram below, there is a single instance of Resource 1 and it is held by
Process 1 only.
Deadlock Characterization
2. Hold and Wait:
▪ Processes are allowed to request for new resources without releasing the
resources they are currently holding.
▪ In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is
requesting the Resource 1 which is held by Process 1.
Deadlock Characterization
3. No pre-emption:
▪ A resource cannot be preempted from a process by force. A process can
only release a resource voluntarily.
▪ In the diagram below, Process 2 cannot preempt Resource 1 from
Process 1. It will only be released when Process 1 relinquishes it
voluntarily after its execution is complete.
Deadlock Characterization
4. Circular wait: Two or more processes must form a circular chain in which each process is
waiting for a resource that is held by the next member of the chain.
▪ All four of these conditions must hold simultaneously in a system for a deadlock to
occur.
Handling Deadlocks
▪ Handling of deadlocks in distributed systems is more complex than in
centralized systems.
▪ Because the resources, the processes, and other relevant information are
scattered on different nodes of the system.
▪ Strategies to handle deadlocks:
1. Prevention – constraints are imposed on the way in which processes
request resources in order to prevent deadlocks.
2. Avoidance – resources are carefully allocated to avoid deadlocks.
3. Detection and recovery – deadlocks are allowed to occur and a
detection algorithm is used to detect them. After deadlock is detected it is
resolved by certain means.
4. Ignore – do nothing, just ignore the problem.
Deadlock Prevention
▪ Deadlock can be prevented by violating the one of the four
conditions that leads to deadlock.
▪ Attacking the Mutual Exclusion Condition
▪ Attacking the Hold and Wait Condition
▪ Attacking the No Preemption Condition
▪ Attacking the Circular Wait Condition
Violation of Mutual Exclusion Condition
▪ No deadlock if each resource can be assigned to more than one
process.
▪ This can violate the hardware properties of a resource.
▪ We cannot assign some resources, such as a printer, to more than one
process at a time.
▪ Not feasible.
Violation of Hold and Wait Condition
1. Conservative Approach: a process is allowed to start
execution if and only if it has already acquired all the resources
it needs (simple and deadlock-free, but inefficient).
▪ A process is allowed to run only if all the resources it needs are
available. Otherwise nothing is allocated and it just waits.
▪ The problem with this strategy is that a process may not know its
required resources at the start of its run.
▪ Resources will not be used optimally.
▪ It may cause starvation of a process that needs many resources.
Violation of Hold and Wait Condition
2. Do not Hold: a process may acquire all desired resources, but
before making any fresh request it must release all the
resources it currently holds (efficient, implementable).

3. Wait Timeout: we place a maximum time up to which a
process can wait for a resource; when the timeout expires, the
process must release all the resources it is holding.
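The "conservative" and "wait timeout" ideas can be sketched with try-locks: a process either obtains every lock it needs, or releases what it already got and reports failure so it can retry later. A Python sketch (the helper name and timeout are illustrative):

```python
import threading

def acquire_all_or_nothing(locks, timeout=0.1):
    """Try to take every lock; on any failure, release everything and report False."""
    taken = []
    for lk in locks:
        if lk.acquire(timeout=timeout):
            taken.append(lk)
        else:
            for held in taken:       # back off: release what we already hold
                held.release()
            return False
    return True

r1, r2 = threading.Lock(), threading.Lock()

first = acquire_all_or_nothing([r1, r2])   # both free -> both acquired
for lk in (r1, r2):
    lk.release()

r1.acquire()                               # simulate another process holding r1
second = acquire_all_or_nothing([r1, r2])  # r1 busy -> nothing is kept
print(first, second)  # True False
```

Because a failed attempt leaves the process holding nothing, the hold-and-wait condition never arises.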
Violation of No Preemption Condition
Forceful Preemption: we allow a resource to be forcibly taken away
from the process holding it.
▪ For example, when a process requests a resource that is
currently not available, all the resources held by that process
are taken away from it and the process is blocked.
▪ This method may be used by a high-priority process or a
system process.
▪ Processes in the waiting state should be selected as victims
rather than processes in the running state.
Violation of Circular Wait Condition
▪ Provide a global numbering of all the resources.
▪ Processes can request resources whenever they want to, but all requests must be
made in increasing order of resource number.
▪ A process need not acquire them all at once.
▪ Circular wait is prevented because a process holding resource n can only request a
resource m with m > n, so no cycle of waits can form.
1. Printer, 2. Scanner, 3. Plotter, 4. Tape drive, 5. CD ROM
▪ A process may first request a tape drive (4) and then a CD ROM (5), but it may not
first request a CD ROM (5) and then a tape drive (4).
▪ The resource graph can never have a cycle.

▪ P1 may request R1, then R2, then R4 (increasing order).
▪ P1 requesting R1, then R4, then R2 is NOT POSSIBLE.
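In everyday concurrent code this same rule appears as lock ordering: give every lock a global number and always acquire in increasing order, whatever order the caller asked for. A sketch (the numbering and names are illustrative):

```python
import threading

# global numbering of resources: 1. printer, 2. scanner, 3. plotter,
# 4. tape drive, 5. CD ROM
resources = {n: threading.Lock() for n in range(1, 6)}

def acquire_in_order(wanted):
    """Lock in increasing resource number, regardless of the requested order."""
    order = sorted(wanted)           # enforce the global ordering
    for n in order:
        resources[n].acquire()
    return order

held = acquire_in_order([5, 2, 4])   # ask for CD ROM, scanner, tape drive
print(held)  # [2, 4, 5] -- acquisition order is forced to be increasing
for n in held:
    resources[n].release()
```

Since every process climbs the numbering in the same direction, no process can ever hold a higher-numbered lock while waiting for a lower-numbered one, so a circular wait is impossible.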
Deadlock Avoidance
▪ In most systems, however, resources are requested one at
a time. The system must be able to decide whether
granting a resource is safe or not and only make the
allocation when it is safe.
Safe and Unsafe state
▪ A state is said to be safe if it is not deadlocked and there
is some scheduling order in which every process can run
to completion if all of them suddenly request their
maximum number of resources immediately.
Deadlock Avoidance
▪ The resource allocation state of a system can be defined by the instances of available
and allocated resources, and the maximum instance of the resources demanded by the
processes.
▪ A state of a system recorded at some random time is shown below.
▪ Resources Assigned
Deadlock Avoidance
Resources still needed

▪ E = (7 6 8 4)
▪ P = (6 2 8 3)
▪ A = (1 4 0 1)
Deadlock Avoidance
▪ Above tables and vector E, P and A describes the resource allocation state of a system.
▪ There are 4 processes and 4 types of the resources in a system.
▪ Table 1 shows the instances of each resource assigned to each process.
▪ Table 2 shows the instances of the resources, each process still needs.
▪ Vector E is the representation of total instances of each resource in the system.
▪ Vector P represents the instances of resources that have been assigned to processes.
▪ Vector A represents the number of resources that are not in use.
▪ A state of the system is called safe if the system can allocate all the resources requested
by all the processes without entering into deadlock.
• If the system cannot fulfill the request of all processes then the state of the system is
called unsafe.
• The key of Deadlock avoidance approach is when the request is made for resources then
the request must only be approved in the case if the resulting state is also a safe state.
Deadlock Modeling
▪ For Deadlock modeling a directed graph, called Resource
Allocation Graph (RAG) is used.
▪ Notation used for representation of RAG -
Resource Allocation Graph(RAG)
▪ The resource allocation graph is the pictorial representation of the state
of a system.
▪ It also contains the information about all the instances of all the resources
whether they are available or being used by the processes.
▪ In Resource allocation graph, the process is represented by a Circle while
the Resource is represented by a rectangle.
▪ Edges in RAG are also of two types, one represents assignment and other
represents the wait of a process for a resource.
▪ A resource is shown as assigned to a process if the tail of the arrow is
attached to an instance of the resource and the head is attached to the
process.
▪ A process is shown as waiting for a resource if the tail of the arrow is
attached to the process while the head points towards the resource.
Construction of Resource Allocation Graph
Figure: Resource allocation graphs.
(a) Holding a resource. (b) Requesting a resource. (c) Deadlock.
Resource Allocation Graph
Problem : Find the system is in a deadlock state or not?
Let’s consider 3 processes P1, P2 and P3, and two
types of resources R1 and R2. The resources are
having 1 instance each.
According to the graph, R1 is being used by P1,
P2 is holding R2 and waiting for R1, P3 is
waiting for R1 as well as R2.
The graph is deadlock free since no cycle is being
formed in the graph.
Dead Lock Detection using RAG
▪ If a cycle is being formed in a Resource allocation graph where all the resources have
the single instance then the system is deadlocked.
▪ In Case of Resource allocation graph with multi-instanced resource types, Cycle is a
necessary condition of deadlock but not the sufficient condition.
▪ The following example contains three processes P1, P2, P3 and three resources R1,
R2, R3. All the resources have a single instance each.
Allocated Matrix
Allocation matrix can be formed by using the Resource allocation graph of a
system. In Allocation matrix, an entry will be made for each of the resource
assigned. For Example, in the following matrix, entry is being made in front
of P1 and below R3 since R3 is assigned to P1.
Requested Matrix
In request matrix, an entry will be made for each of the resource requested.
As in the following example, P1 needs R1 therefore an entry is being made
in front of P1 and below R1.

Avail = (0, 0, 0)
Bankers Algorithm
▪ Banker algorithm used to avoid deadlock and allocate
resources safely to each process in the computer system.
▪ The 'S-state' check examines all possible allocation sequences before deciding
whether an allocation can be granted to a process.
▪ The banker's algorithm is so named because it models a banker deciding whether a
loan can be sanctioned safely: resources are handed out only if every customer
(process) can still be satisfied.
▪ The banker's algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating the allocation of the
predetermined maximum possible amounts of all resources, then makes an
's-state' check to test for possible activities, before deciding whether
allocation should be allowed to continue.
Bankers Algorithm
▪ When working with the banker's algorithm, it needs to know three things:
1. How much of each resource each process can request. This is denoted by
the [MAX] request.
2. How much of each resource each process is currently holding. This is denoted by
the [ALLOCATED] resources.
3. How much of each resource is currently available in the system. This is
denoted by the [AVAILABLE] resources.
▪ The Banker's Algorithm is the combination of the safety algorithm and the
resource-request algorithm, which together control the processes and avoid
deadlock in a system.
Bankers Algorithm
Suppose n is the number of processes, and m is the number of each type of resource used in a
computer system.
1. Available: It is an array of length 'm' that defines each type of resource available in the system. When
Available[j] = K, means that 'K' instances of Resources type R[j] are available in the system.
2. Max: It is an [n x m] matrix that indicates the maximum number of instances of each
resource type R[j] that process P[i] may request.
3. Allocation: It is an [n x m] matrix that indicates the resources currently allocated to
each process in the system. When Allocation[i, j] = K, it means that process P[i] is
currently allocated K instances of resource type R[j].
4. Need: It is an [n x m] matrix representing the number of remaining resources for each
process. When Need[i][j] = K, process P[i] may require K more instances of resource
type R[j] to complete its assigned work. Need[i][j] = Max[i][j] - Allocation[i][j].
5. Finish: It is a vector of length n. It includes a Boolean value (true/false) indicating
whether the process has been allocated its requested resources and has released all
resources after finishing its task.
Safety Algorithm
▪ The algorithm for finding out whether or not a system is in a safe state
can be described as follows:
▪ 1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
▪ Initialize: Work = Available
▪ Finish[i] = false; for i=1, 2, 3, 4….n
▪ 2) Find an i such that both
▪ a) Finish[i] = false
▪ b) Needi <= Work
▪ if no such i exists goto step (4)
▪ 3) Work = Work + Allocation[i]
▪ Finish[i] = true
▪ goto step (2)
▪ 4) If Finish[i] = true for all i, then the system is in a safe state.
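The safety algorithm translates almost line for line into code. A Python sketch (the vector/matrix input format is assumed; the data at the end is the five-process, three-resource example worked later in this module):

```python
def is_safe(available, max_need, allocation):
    """Safety algorithm: Work = Available; repeatedly pick an unfinished
    process whose Need <= Work, pretend it finishes, and reclaim its allocation."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)                  # step 1: Work = Available
    finish = [False] * n                    #         Finish[i] = false
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):                  # step 2: Finish[i] = false and Need_i <= Work
            if not finish[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # step 3
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence            # step 4: safe iff all Finish[i] = true

# data from the worked example (processes P1..P5, resource types A, B, C)
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
mx    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, seq = is_safe([3, 3, 2], mx, alloc)
print(safe, ["P%d" % (i + 1) for i in seq])  # True ['P2', 'P4', 'P5', 'P1', 'P3']
```

The returned sequence matches the safe sequence derived by hand in the numerical example.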
Resources Request Algorithm
▪ Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:
▪ 1) If Requesti <= Needi
▪ Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its maximum claim.
▪ 2) If Requesti <= Available
▪ Goto step (3); otherwise, Pi must wait, since the resources are not available.
▪ 3) Have the system pretend to have allocated the requested resources to process Pi
by modifying the state as follows:
▪ Available = Available – Requesti
▪ Allocationi = Allocationi + Requesti
▪ Needi = Needi – Requesti
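Steps 1-3 can be sketched as follows (a full Banker's implementation would then run the safety algorithm on the pretended state and roll these updates back if it turns out unsafe; the function and variable names are illustrative):

```python
def try_request(pid, request, available, allocation, need):
    """Steps 1-3 of the resource-request algorithm: validate, then tentatively allocate."""
    if any(r > n for r, n in zip(request, need[pid])):       # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(request, available)):       # step 2: must wait
        return False
    for j, r in enumerate(request):                          # step 3: pretend to allocate
        available[j] -= r
        allocation[pid][j] += r
        need[pid][j] -= r
    return True

avail = [3, 3, 2]
alloc = [[0, 1, 0], [2, 0, 0]]
need  = [[7, 4, 3], [1, 2, 2]]
granted = try_request(0, [1, 0, 2], avail, alloc, need)
print(granted, avail)  # True [2, 3, 0]
```

After the tentative grant, P1's need drops from (7, 4, 3) to (6, 4, 1), exactly the bookkeeping the three update equations describe.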
Numerical on Bankers Algorithm
▪ Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and the
three resource types A, B and C. Following are the resources types: A has 10, B has 5
and the resource type C has 7 instances.
Numerical on Bankers Algorithm
▪ Answer the following questions using the banker's algorithm:
▪ What is the content of the need matrix?
▪ Determine whether the system is safe or not.
▪ If process P1 makes a resource request of (1, 0, 2), can the system accept this
request immediately?
▪ Content of the need matrix is as follows:
▪ Need [i] = Max [i] - Allocation [i]
▪ Need for P1: (7, 5, 3) - (0, 1, 0) = 7, 4, 3
▪ Need for P2: (3, 2, 2) - (2, 0, 0) = 1, 2, 2
▪ Need for P3: (9, 0, 2) - (3, 0, 2) = 6, 0, 0
▪ Need for P4: (2, 2, 2) - (2, 1, 1) = 0, 1, 1
▪ Need for P5: (4, 3, 3) - (0, 0, 2) = 4, 3, 1
▪ Available Resources of A, B and C are 3, 3, and 2.
▪ Now we check if each type of resource request is available for each process.
Numerical on Bankers Algorithm
▪ Step 1: For Process P1:
▪ Need <= Available
▪ 7, 4, 3 <= 3, 3, 2 condition is false.
▪ So, we examine another process, P2.
▪ Step 2: For Process P2:
▪ Need <= Available
▪ 1, 2, 2 <= 3, 3, 2 condition true
• New available = available + Allocation
• (3, 3, 2) + (2, 0, 0) => 5, 3, 2
• Similarly, we examine another process P3.
• Step 3: For Process P3:
• P3 Need <= Available
• 6, 0, 0 < = 5, 3, 2 condition is false.
• Similarly, we examine another process, P4.
Numerical on Bankers Algorithm
▪ Step 4: For Process P4:
▪ P4 Need <= Available
▪ 0, 1, 1 <= 5, 3, 2 condition is true
▪ New Available resource = Available + Allocation
▪ 5, 3, 2 + 2, 1, 1 => 7, 4, 3
▪ Similarly, we examine another process P5.
▪ Step 5: For Process P5:
▪ P5 Need <= Available
▪ 4, 3, 1 <= 7, 4, 3 condition is true
▪ New available resource = Available + Allocation
▪ 7, 4, 3 + 0, 0, 2 => 7, 4, 5
▪ Now, we again examine each type of resource request for processes P1 and P3.
Numerical on Bankers Algorithm
▪ Step 6: For Process P1:
▪ P1 Need <= Available
▪ 7, 4, 3 <= 7, 4, 5 condition is true
▪ New Available Resource = Available + Allocation
▪ 7, 4, 5 + 0, 1, 0 => 7, 5, 5
▪ Similarly, we examine the remaining process P3.
▪ Step 7: For Process P3:
▪ P3 Need <= Available
▪ 6, 0, 0 <= 7, 5, 5 condition is true
▪ New Available Resource = Available + Allocation
▪ 7, 5, 5 + 3, 0, 2 => 10, 5, 7
▪ Hence, the banker's algorithm finds the state safe, with safe sequence P2, P4, P5, P1,
P3.
▪ For granting the request (1, 0, 2), first we check that Request <= Available, that is
(1, 0, 2) <= (3, 3, 2); since the condition is true, the request of process P1 can be
granted immediately.
Numerical - The Banker’s Algorithm
▪ A single processor system has three resource types X, Y and Z, which are shared
by three processes. There are 5 units of each resource type. Consider the
following scenario, where the column allocated denotes the number of units of
each resource type allocated to each process, and the column request denotes the
number of units of each resource type requested by a process in order to
complete execution. Which of these processes will finish LAST?
(A) P0
(B) P1
(C) P2
(D) None of the above, since the system is in a deadlock
Advantages of Bankers Algorithm
▪ It contains various resources that meet the requirements of each
process.
▪ Each process should provide information to the operating system for
upcoming resource requests, the number of resources, and how long the
resources will be held.
▪ It helps the operating system manage and control process requests for
each type of resource in the computer system.
▪ The algorithm has a Max attribute that indicates the maximum number of
resources each process can hold in the system.
Disadvantages of Bankers Algorithm
▪ It requires a fixed number of processes, and no additional processes can
be started in the system while executing the process.
▪ The algorithm does not allow a process to change its maximum need
while processing its tasks.
▪ Each process has to know and state their maximum resource
requirement in advance for the system.
▪ Every resource request must be granted within a finite time, and all
allocated resources must be returned within a finite time.
Deadlock Detection
▪ When this technique is used, the system does not attempt to prevent
deadlocks from occurring.
▪ Instead, it lets them occur, tries to detect when this happens, and then
takes some action to recover after the fact.
▪ For deadlock detection, the system must provide
• An algorithm that examines the state of the system to detect
whether a deadlock has occurred
• And an algorithm to recover from the deadlock
Deadlock Detection: one Resource of each type
▪ There are seven processes, A to G, and six resources, R through W.
• Process A hold R and wants S.
• Process B holds nothing but wants T.
• Process C holds nothing but wants S.
• Process D holds U and wants S and T.
• Process E holds T and wants V.
• Process F holds W and wants S.
Is this system deadlocked? And if so, which processes are involved?
Deadlock Detection: one Resource of each type
Algorithm for detecting Deadlock
1. For each node, N in the graph, perform the following five steps with N as the
starting node.
2. Initialize L to the empty list, designate all arcs as unmarked.
3. Add current node to end of L, check to see if node now appears in L two
times. If it does, graph contains a cycle (listed in L), algorithm terminates.
4. From given node, see if any unmarked outgoing arcs. If so, go to step 5; if
not, go to step 6.
5. Pick an unmarked outgoing arc at random and mark it. Then follow it to the
new current node and go to step 3.
6. If this is initial node, graph does not contain any cycles, algorithm terminates.
Otherwise, it is a dead end. Remove it, go back to the previous node, make that
the current node, and go to step 3.
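The algorithm above is a depth-first search for a cycle. A Python sketch, run on the wait-for graph of the A-G example (processes and resources are both nodes; an arc process→resource means "wants", resource→process means "held by"):

```python
def find_cycle(graph):
    """DFS cycle search: follow arcs, and if a node reappears on the current
    path, return the cycle (mirrors steps 1-6 of the algorithm above)."""
    def dfs(node, path):
        if node in path:                       # node appears twice -> cycle found
            return path[path.index(node):]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None                            # dead end: backtrack

    for start in graph:                        # try every node as the start
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

# wait-for graph of the example: P -> R ("wants"), R -> P ("held by")
graph = {"A": ["S"], "R": ["A"], "B": ["T"], "C": ["S"],
         "D": ["S", "T"], "U": ["D"], "E": ["V"], "T": ["E"],
         "F": ["S"], "W": ["F"], "G": ["U"], "V": ["G"]}

cycle = find_cycle(graph)
print(cycle)  # ['T', 'E', 'V', 'G', 'U', 'D'] -- deadlocked processes: D, E, G
```

The cycle T → E → V → G → U → D → T is exactly the one discovered in the trace on the next slide, so processes D, E, and G are deadlocked.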
Deadlock Detection: one Resource of each type
• We start at R and initialize L to the empty list. Then we add R to the list and
move to the only possibility, A, and add it to L, giving L = [R, A].
• From A we go to S, giving L = [R, A, S]. Dead end.
• We restart the algorithm starting at A, resetting L to the empty list. This search,
too, quickly stops, so we start again at B.
• From B we get to D, at which time L = [B, T, E, V, G, U, D].
• If we pick S we come to a dead end and backtrack to D. The second time we pick T
and update L to [B, T, E, V, G, U, D, T], at which point we discover the cycle and
stop the algorithm.
Deadlock Detection: Multiple Resources of Each Type
▪ Each process is initially unmarked. As the algorithm progresses, processes
are marked, indicating that they are able to complete and are thus not
deadlocked. When the algorithm terminates, any unmarked processes are
known to be deadlocked.
1. Look for an unmarked process, Pi, for which the i-th row of R (its request)
is less than or equal to A.
2. If such a process is found, add the i-th row of C (its allocation) to A, mark
the process, and go back to step 1.
3. If no such process exists, the algorithm terminates.
Deadlock Detection: Multiple Resources of Each Type
▪ There are three processes and four classes of devices.
Deadlock Detection: Multiple Resources of Each Type
▪ Checking the request matrix: the request of process 0 cannot be satisfied,
and neither can the request of process 1.
▪ The request of process 2 can be satisfied, so its resources are granted;
now A = (0 0 0 0).
▪ After process 2 completes, A = (2 2 2 0), so we can now assign resources
to process 1.
▪ After process 1 completes, A = (4 2 2 1), and process 0 can finish as well.
▪ So there is no deadlock.
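The marking algorithm in code. The slide's tables are shown as a figure, so the matrices below are the standard textbook values for this example, chosen because they reproduce the narrated run (A becomes (2 2 2 0) after process 2 and (4 2 2 1) after process 1):

```python
def detect_deadlock(available, allocation, request):
    """Marking algorithm: a process whose request row is <= A can finish;
    reclaim its allocation into A and repeat. Unmarked processes are deadlocked."""
    a = list(available)
    n = len(allocation)
    marked = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not marked[i] and all(r <= w for r, w in zip(request[i], a)):
                a = [w + c for w, c in zip(a, allocation[i])]  # reclaim resources
                marked[i] = True
                progress = True
    return [i for i in range(n) if not marked[i]]              # the deadlocked set

C = [[0, 0, 1, 0], [2, 0, 0, 1], [0, 1, 2, 0]]   # current allocation (assumed values)
R = [[2, 0, 0, 1], [1, 0, 1, 0], [2, 1, 0, 0]]   # outstanding requests (assumed values)
deadlocked = detect_deadlock([2, 1, 0, 0], C, R)
print(deadlocked)  # [] -- every process can be marked, so there is no deadlock
```

On this data, process 2 is marked first, then process 1, then process 0, matching the order narrated above.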
Deadlock Recovery
▪ There are three approaches:
• Recovery through preemption
• Recovery through rollback
• Recovery through killing processes
Deadlock Recovery
▪ Recovery through preemption
▪ In some cases, it may be possible to temporarily take a resource away
from its current owner and give it to another process.
▪ Recovering this way is frequently difficult or impossible.
▪ Choosing the process to suspend depends largely on which ones have
resources that can easily be taken back.
Deadlock Recovery
▪ Recovery through killing processes
• The simplest way to break a deadlock is to kill one or more processes.
• Kill all the process involved in deadlock.
• Kill process one by one.
• After killing each process check for deadlock.
• If deadlock recovered then stop killing more process.
• Otherwise kill another process.
Deadlock Recovery
▪ Recovery through rollback
▪ Checkpoint a process periodically.
▪ Checkpointing a process means that its state is written to a file so
that it can be restarted later.
▪ When deadlock is detected, rollback the preempted process up to
the previous safe state before it acquired that resource.