SPOS_Unit 5

Synchronization & Concurrency Control
By: Mohammed Asad
Concurrency: Introduction

• Concurrency is the execution of multiple instruction sequences at the same time.

• It happens in the operating system when there are several process threads running in parallel.

• The running process threads always communicate with each other through shared memory or message passing.
Concurrency: Introduction

• Concurrency results in sharing of resources, which may lead to problems like deadlock and resource starvation.

• It helps in techniques like coordinating the execution of processes, memory allocation, and execution scheduling to maximize throughput.
Principle of Concurrency
• Both interleaved and overlapped processes can be viewed as examples of concurrent processes; they both present the same problems.

• The relative speed of execution cannot be predicted. It depends on the following:

1. The activities of other processes
2. The way the operating system handles interrupts
3. The scheduling policies of the operating system
Problems in Concurrency
1. Sharing global resources:
• Sharing of global resources safely is difficult.
• If two processes both make use of a global variable and both perform reads and writes on that variable, then the order in which the reads and writes are executed is critical.

2. Optimal allocation of resources:
• It is difficult for the operating system to manage the allocation of resources optimally.
Problems in Concurrency

3. Locating programming errors:
• It is very difficult to locate a programming error because reports are usually not reproducible.

4. Locking the channel:
• It may be inefficient for the operating system to simply lock the channel and prevent its use by other processes.
Advantages of Concurrency
1. Running of multiple applications:
• It enables running multiple applications at the same time.

2. Better resource utilization:
• Resources that are unused by one application can be used by other applications.

3. Better average response time:
• Without concurrency, each application has to run to completion before the next one can be run.
Disadvantages of Concurrency
1. It is required to protect multiple applications from one another.

2. It is required to coordinate multiple applications through additional mechanisms.

3. Additional performance overheads and complexities in operating systems are required for switching among applications.

4. Sometimes running too many applications concurrently leads to severely degraded performance.
Issues of Concurrency

• Non-atomic:
• Operations that are non-atomic but interruptible by multiple processes can cause problems.

• Race conditions:
• A race condition occurs when the outcome depends on which of several processes reaches a point first.
Issues of Concurrency
• Blocking:
• Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal.
• If the process is required to periodically update some data, this would be very undesirable.

• Starvation: It occurs when a process does not obtain the service it needs to make progress.

• Deadlock: It occurs when two processes are blocked and hence neither can proceed to execute.
Synchronization
• On the basis of synchronization, processes are categorized as one of the following two types:

1. Independent Process: Execution of one process does not affect the execution of other processes.

2. Cooperative Process: Execution of one process affects the execution of other processes.

• "The procedure involved in preserving the appropriate order of execution of cooperative processes is known as Process Synchronization."
Synchronization Mechanism
• Race Condition:
• A race condition typically occurs when two or more threads try to read, write, and possibly make decisions based on memory that they are accessing concurrently.

• Critical Section:
• The regions of a program that try to access shared resources and may cause race conditions are called critical sections.
• To avoid race conditions among processes, we need to ensure that only one process at a time can execute within the critical section, as the sketch below illustrates.
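• A minimal Python sketch of a race condition on a shared counter and the lock that removes it (the counter, thread count, and iteration count are illustrative, not from the slides):

import threading

counter = 0                     # shared variable
lock = threading.Lock()         # guards the critical section

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1            # read-modify-write is not atomic; updates can be lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread at a time inside the critical section
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # always 200000 with the lock; may be less if unsafe_increment is used

• Whether the unsafe version actually loses updates on a given run depends on thread scheduling; the lock makes the outcome deterministic.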
Critical Section Problem
• Critical Section is the part of a program which tries to access shared resources.

• That resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device.

• The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes to enter the critical section.

• The critical section problem is to design a set of protocols which can ensure that a race condition among the processes will never arise.
Critical Section Problem
• Critical section is a code segment that can be accessed by only one process at a time.

• The critical section contains shared variables which need to be synchronized to maintain consistency of data variables.
Critical Section Problem
• In the entry section, the process requests entry into the critical section (see the sketch of this structure below).

• Any solution to the critical section problem must satisfy three requirements:

1. Mutual exclusion
2. Progress
3. Bounded waiting
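• A minimal Python sketch of the structure each process follows around its critical section, using a lock as the entry/exit mechanism (the shared dictionary is illustrative):

import threading

lock = threading.Lock()            # shared by all threads/processes

def process(shared):
    # entry section: request entry into the critical section
    lock.acquire()
    try:
        # critical section: only one thread executes this at a time
        shared["value"] = shared.get("value", 0) + 1
    finally:
        # exit section: allow another waiting thread to enter
        lock.release()
    # remainder section: work that does not touch shared data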
Requirements of Synchronization mechanisms
Primary
• Mutual Exclusion
• Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one process is executing inside the critical section, then no other process may enter the critical section.

• Progress
• Progress means that if one process doesn't need to execute in the critical section, then it should not stop other processes from getting into the critical section.
Mutual Exclusion (Mutex)
Requirements of Synchronization mechanisms
Secondary
• Bounded Waiting
• We should be able to predict the waiting time for every process to get into the critical section.
• A process must not wait endlessly to get into the critical section.

• Architectural Neutrality
• Our mechanism must be architecturally neutral. It means that if our solution works on one architecture, then it should also run on other architectures.
Interprocess Communication (IPC)
• In computer science, Interprocess Communication (IPC) allows communicating processes to exchange data and information.

• There are two methods of IPC:

1. Shared memory
2. Message passing
Interprocess Communication (IPC)
• Shared memory:
• In this method, processes interact with each other through shared variables; they exchange information by reading and writing data using the shared variables.

• Message Passing:
• In this method, instead of reading or writing shared data, processes send and receive messages (a minimal sketch follows).
• Send and receive functions are implemented in the OS.

SEND (B, message)
RECEIVE (A, memory address)
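• A minimal message-passing sketch in Python, using a multiprocessing.Queue as the channel; the sender/receiver functions are illustrative stand-ins for the SEND and RECEIVE primitives above:

from multiprocessing import Process, Queue

def sender(q):
    q.put("hello from A")              # SEND: place a message in the channel

def receiver(q):
    msg = q.get()                      # RECEIVE: blocks until a message arrives
    print("B received:", msg)

if __name__ == "__main__":
    q = Queue()                        # kernel-backed message channel
    a = Process(target=sender, args=(q,))
    b = Process(target=receiver, args=(q,))
    a.start(); b.start()
    a.join(); b.join()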
Semaphore in IPC
• In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multiprogramming operating system.

• A semaphore is simply a variable. It is used to solve the critical section problem and to achieve process synchronization in a multiprocessing environment.

• Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems.
Types of Semaphores
• The two most common kinds of semaphores are counting semaphores and binary semaphores.

• A counting semaphore can take non-negative integer values, while a binary semaphore can take only the values 0 and 1.

• Semaphores which allow an arbitrary resource count are called counting semaphores.
• Semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.
Primitives of Semaphores
• Two types of primitives:
1. Wait():
• A semaphore is initialized to a non-negative value.
• The wait operation decrements the semaphore value. If the value becomes negative, then the process is blocked and is put in the waiting queue.

2. Signal():
• The signal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a wait() operation is removed from the waiting queue and sent to the ready queue (a minimal sketch of both primitives follows).
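• A minimal Python sketch of how wait() and signal() could be realized with a mutex and a queue of blocked threads, following the convention above that a negative value counts the waiting processes (this is an illustration, not the actual OS implementation):

import threading
from collections import deque

class Semaphore:
    def __init__(self, value=0):
        self.value = value                      # initialized to a non-negative value
        self._mutex = threading.Lock()
        self._waiting = deque()                 # queue of blocked threads

    def wait(self):
        with self._mutex:
            self.value -= 1                     # decrement the semaphore value
            ev = threading.Event() if self.value < 0 else None
            if ev:
                self._waiting.append(ev)        # negative value: the caller must block
        if ev:
            ev.wait()                           # block until signalled

    def signal(self):
        with self._mutex:
            self.value += 1                     # increment the semaphore value
            if self.value <= 0:                 # some process is still waiting
                self._waiting.popleft().set()   # move one blocked process to the ready queue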
Monitor
• A monitor is a synchronization construct that allows threads to have both mutual exclusion and the ability to wait (block) for a certain condition to become true.
• Monitors also have a mechanism for signaling other threads that their condition has been met.
• A monitor consists of a mutex (lock) object and condition variables. A condition variable is basically a container of threads that are waiting for a certain condition.
• Monitors provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met, before regaining exclusive access and resuming their task.
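• A minimal monitor-style sketch in Python: one lock plus a condition variable guarding a bounded counter (the class and its limit are illustrative):

import threading

class BoundedCounter:
    def __init__(self, limit):
        self._lock = threading.Lock()                      # the monitor's mutex
        self._not_full = threading.Condition(self._lock)   # container of waiting threads
        self._count = 0
        self._limit = limit

    def increment(self):
        with self._not_full:                 # acquire exclusive access to the monitor
            while self._count >= self._limit:
                self._not_full.wait()        # give up the lock and block until signalled
            self._count += 1

    def decrement(self):
        with self._not_full:
            if self._count > 0:
                self._count -= 1
                self._not_full.notify()      # signal one waiting thread that its condition may now hold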
Classical Problems of Synchronization
Producer Consumer Problem
• In computing, the producer–consumer problem (also known as the bounded-buffer problem) is a classic example of a multiprocess synchronization problem.

• The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue.

• The producer's job is to generate data, put it into the buffer, and start again.

• At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Producer Consumer Problem
• The problem is to make sure that the producer won't try to add data into the
buffer if it's full and that the consumer won't try to remove data from an empty
buffer.

• The solution for the producer is to either go to sleep or discard data if the buffer
is full.

• The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again.

• In the same way, the consumer can go to sleep if it finds the buffer to be empty.
Producer Consumer Problem
• The next time the producer puts data into the buffer, it wakes up the sleeping consumer.

• The solution can be reached by means of inter-process communication, typically using semaphores (a semaphore-based sketch follows).

• An inadequate solution could result in a deadlock where both processes are waiting to be awakened.

• The problem can also be generalized to have multiple producers and consumers.
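• A minimal semaphore-based bounded-buffer sketch in Python: empty counts free slots, full counts filled slots, and mutex protects the buffer itself (the buffer size and item count are illustrative):

import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
mutex = threading.Semaphore(1)             # binary semaphore protecting the buffer
empty = threading.Semaphore(BUFFER_SIZE)   # counts free slots
full = threading.Semaphore(0)              # counts filled slots

def producer():
    for item in range(10):
        empty.acquire()          # sleep if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()           # wake a consumer waiting for data

def consumer():
    for _ in range(10):
        full.acquire()           # sleep if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()          # wake a producer waiting for space
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()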
Reader Writer Problem
• The R-W problem is another classic problem for which design of
synchronization and concurrency mechanisms can be tested.
• Definition
• There is a data area that is shared among a number of processes.
• Any number of readers may simultaneously read from the data area.
• Only one writer at a time may write to the data area.
• If a writer is writing to the data area, no reader may read it.
• If there is at least one reader reading the data area, no writer may write to it.
• Readers only read and writers only write.
• A process that reads and writes to a data area must be considered a writer (a semaphore-based sketch follows).
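• A minimal Python sketch of the readers-preference solution: read_count tracks active readers, rc_mutex protects it, and rw_lock keeps writers out while any reader is active, and readers out while a writer writes (the shared list is illustrative):

import threading

rw_lock = threading.Semaphore(1)   # held by a writer, or by the first reader on behalf of all readers
rc_mutex = threading.Lock()        # protects read_count
read_count = 0
shared_data = []

def reader():
    global read_count
    with rc_mutex:
        read_count += 1
        if read_count == 1:
            rw_lock.acquire()      # the first reader locks writers out
    data = list(shared_data)       # any number of readers may read concurrently
    with rc_mutex:
        read_count -= 1
        if read_count == 0:
            rw_lock.release()      # the last reader lets writers in again
    return data

def writer(item):
    with rw_lock:                  # only one writer, and no readers, at a time
        shared_data.append(item)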
Dining Philosopher Problem
• The Dining Philosopher Problem states that K philosophers are seated around a circular table with one chopstick between each pair of adjacent philosophers.

• There is one chopstick between each pair of philosophers.

• A philosopher may eat if he can pick up the two chopsticks adjacent to him.

• One chopstick may be picked up by only one of its two adjacent philosophers at a time, not both (a deadlock-free sketch follows).
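• A minimal Python sketch in which each philosopher picks up the lower-numbered chopstick first, so a circular wait (and hence deadlock) cannot form; K and the single round of eating are illustrative:

import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]   # one chopstick between each pair

def philosopher(i):
    left, right = i, (i + 1) % K
    # pick up the lower-numbered chopstick first to avoid circular wait
    first, second = (left, right) if left < right else (right, left)
    with chopsticks[first]:
        with chopsticks[second]:
            print(f"philosopher {i} is eating")     # holds both adjacent chopsticks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads: t.start()
for t in threads: t.join()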
Deadlock
• A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.
Deadlock Condition
• Mutual Exclusion: One or more resources are non-sharable (only one process can use a resource at a time).

• Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.
Deadlock Condition
• No Preemption: A resource cannot be taken from a process unless the process releases the resource.

• Circular Wait: A set of processes are waiting for each other in circular form.
Methods of Handling Deadlock
• There are three ways to handle deadlock:
1. Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state.

2. Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it once it has occurred.

3. Ignore the problem altogether: If deadlock is very rare, then let it happen and reboot the system. This is the approach that both Windows and UNIX take.
Deadlock Prevention
• We can prevent deadlock by eliminating any of the above four conditions.
• Eliminate Mutual Exclusion:
• It is not possible to violate mutual exclusion because some resources, such as the tape drive and printer, are inherently non-shareable.

• Eliminate Hold and Wait:
• Allocate all required resources to the process before the start of its execution; this eliminates the hold and wait condition, but it will lead to low device utilization.
Deadlock Prevention
• For example, if a process requires the printer only at a later time and we have allocated the printer before the start of its execution, the printer will remain blocked till the process has completed its execution.

• Alternatively, the process will make a new request for resources only after releasing the current set of resources.

• This solution may lead to starvation.


Deadlock Prevention
• Eliminate No Preemption:
• Preempt resources from a process when the resources are required by another higher-priority process.

• Eliminate Circular Wait:
• Each resource is assigned a number. A process can request resources only in increasing order of numbering.
• For example, if process P1 is allocated resource R5, then a later request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted (a lock-ordering sketch follows).
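• A minimal Python sketch of the numbering rule: locks stand in for resources and are always acquired in increasing order of their assigned number, so a circular wait cannot form (the resource numbers and helper functions are illustrative):

import threading

# each resource gets a fixed number; here R3 < R4 < R5
resources = {3: threading.Lock(), 4: threading.Lock(), 5: threading.Lock()}

def acquire_in_order(*numbers):
    # requests are granted only in increasing order of resource number
    held = []
    for n in sorted(numbers):
        resources[n].acquire()
        held.append(n)
    return held

def release_all(held):
    for n in reversed(held):
        resources[n].release()

held = acquire_in_order(5, 3, 4)   # acquired as R3, R4, R5: no circular wait is possible
release_all(held)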
Deadlock Avoidance
• Banker’s Algorithm
• Banker's algorithm is a deadlock avoidance algorithm. It is named so because
this algorithm is used in banking systems to determine whether a loan can be
granted or not.

• Consider there are n account holders in a bank and the sum of the money in
all of their accounts is S.

• Every time a loan has to be granted by the bank, it subtracts the loan amount from the total money the bank has.
Deadlock Avoidance
• Then it checks whether that difference is still at least S.

• This is done because only then would the bank have enough money even if all n account holders draw all their money at once.

• Banker's algorithm works in a similar way in computers.

• Whenever a new process is created, it must specify exactly the maximum instances of each resource type that it needs (a sketch of the safety check follows).
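• A minimal Python sketch of the safety check at the heart of Banker's algorithm: it tries to find an order in which every process can finish using the currently available resources plus what finished processes return (the matrices in the example are illustrative):

def is_safe(available, allocation, maximum):
    n, m = len(allocation), len(available)    # n processes, m resource types
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion and release its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)              # safe only if every process could finish

print(is_safe(available=[3, 3, 2],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              maximum=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))   # True: a safe sequence exists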
Deadlock Detection
• If Resources Have a Single Instance:
• In this case, for deadlock detection we can run an algorithm to check for a cycle in the Resource Allocation Graph.
• The presence of a cycle in the graph is a sufficient condition for deadlock.

• If There are Multiple Instances of Resources:
• Detection of a cycle is a necessary but not a sufficient condition for deadlock; in this case, whether the system is actually in deadlock varies according to the situation.
Deadlock Detection
• Wait-For Graph Algorithm
• The Wait-For Graph Algorithm is a deadlock detection algorithm used in systems where each resource type has a single instance.

• The algorithm works by constructing a Wait-For Graph, a directed graph whose edges represent which process is waiting for a resource held by which other process; a cycle in this graph indicates a deadlock (a cycle-detection sketch follows).
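• A minimal Python sketch of detecting a deadlock by finding a cycle in a wait-for graph, where an edge P1 -> P2 means P1 is waiting for a resource held by P2 (the example graph is illustrative):

def has_cycle(wait_for):
    # wait_for: {process: [processes it is waiting for]}
    WHITE, GREY, BLACK = 0, 1, 2                  # unvisited / on current DFS path / done
    colour = {p: WHITE for p in wait_for}

    def visit(p):
        colour[p] = GREY
        for q in wait_for.get(p, []):
            if colour.get(q, WHITE) == GREY:      # back edge: a cycle, hence deadlock
                return True
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True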
Deadlock Recovery
• Killing The Process:
• Killing all the processes involved in the deadlock.

• Killing the processes one by one.

• After killing each process, check for deadlock again and keep repeating the process till the system recovers from deadlock.

• Killing the processes one by one helps the system break the circular wait condition.
Deadlock Recovery
• Resource Preemption:
• Resources are preempted from the processes involved in the deadlock, and the preempted resources are allocated to other processes so that there is a possibility of recovering the system from the deadlock.
• In this case, the preempted processes may go into starvation.

• Concurrency Control:
• Concurrency control mechanisms are used to prevent data inconsistencies in systems with multiple concurrent processes.
• These mechanisms ensure that concurrent processes do not access the same data at the same time in conflicting ways, which can lead to inconsistencies and errors.
Deadlock Recovery
• Deadlocks can occur in concurrent systems when two or more processes are blocked, waiting for each other to release the resources they need.

• This can result in a system-wide stall, where no process can make progress.

• Concurrency control mechanisms can help prevent deadlocks by managing access to shared resources and ensuring that concurrent processes do not interfere with each other.
THANK YOU…!!!
