Unit - 1
Process Synchronization
Process Synchronization is the coordination of execution of multiple
processes in a multi-process system to ensure that they access shared
resources in a controlled and predictable manner. It aims to resolve the
problem of race conditions and other synchronization issues in a concurrent
system.
Types of Process Synchronization
The two primary types of process Synchronization in an Operating System
are:
1. Competitive: Two or more processes are said to be in Competitive
Synchronization if and only if they compete for access to a
shared resource.
Lack of synchronization among competing processes may lead to
either inconsistency or data loss.
2. Cooperative: Two or more processes are said to be in Cooperative
Synchronization if and only if they affect each other, i.e., the
execution of one process affects the other.
Lack of synchronization among cooperating processes may lead to
deadlock.
Critical Section: It is the part of the program where shared resources are
accessed. Only one process can execute the critical section at a given point
in time. If there are no shared resources, then there is no need for
synchronization mechanisms.
Critical Section Problem
A critical section is a code segment that can be accessed by only one
process at a time. The critical section contains shared variables that need to
be synchronized to maintain the consistency of data variables. So the critical
section problem means designing a protocol that lets cooperating processes
access shared resources without creating data inconsistencies.
For example, in a bank account program, the operations that read and update
the balance variable should be placed in the critical sections of both deposit
and withdraw, as sketched below.
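A minimal C sketch of this idea, assuming POSIX threads (the account variables
and function names here are illustrative, not from the original example):

#include <pthread.h>

long balance = 0;                                        // shared variable
pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(long amount) {
    pthread_mutex_lock(&balance_lock);    // entry section
    balance += amount;                    // critical section: read-modify-write
    pthread_mutex_unlock(&balance_lock);  // exit section
}

void withdraw(long amount) {
    pthread_mutex_lock(&balance_lock);    // entry section
    if (balance >= amount)                // critical section
        balance -= amount;
    pthread_mutex_unlock(&balance_lock);  // exit section
}

Without the lock, two concurrent calls could both read the same old balance,
and one of the updates would be lost.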
In the entry section, the process requests permission to enter the critical section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other
processes are waiting outside the critical section, then only those
processes that are not executing in their remainder section can participate
in deciding which will enter the critical section next, and the selection
cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has
made a request to enter its critical section and before that request is
granted.
What is Peterson's Algorithm?
Peterson's Algorithm is a well-known solution for ensuring mutual exclusion
in process synchronization. It is designed to manage access to shared
resources between two processes in a way that prevents conflicts or data
corruption. The algorithm ensures that only one process can enter the critical
section at any given time while the other process waits its turn. Peterson's
Algorithm uses two simple shared variables: one to indicate whose turn it is to
access the critical section, and another to show whether a process is ready to enter.
This method is often used in scenarios where two processes need to share
resources or data without interfering with each other. It is simple, easy to
understand, and serves as a foundational concept in process
synchronization.
Algorithm for process Pi:
do {
    flag[i] = true;               // Pi wants to enter
    turn = j;                     // give priority to Pj
    while (flag[j] && turn == j); // busy wait while Pj wants in and has priority
    // critical section
    flag[i] = false;              // exit section
    // remainder section
} while (true);

Algorithm for process Pj:
do {
    flag[j] = true;               // Pj wants to enter
    turn = i;                     // give priority to Pi
    while (flag[i] && turn == i); // busy wait while Pi wants in and has priority
    // critical section
    flag[j] = false;              // exit section
    // remainder section
} while (true);
Peterson's Algorithm Explanation
Peterson's Algorithm is a mutual exclusion solution used to ensure that two
processes do not enter into the critical sections at the same time. The
algorithm uses two main components: a turn variable and a flag array.
The turn variable is an integer that indicates whose turn it is to enter the
critical section.
The flag array contains Boolean values for each process, indicating
whether a process wants to enter the critical section.
Here’s how Peterson’s Algorithm works step-by-step:
Initial Setup: Initially, both processes set their respective flag values
to false, meaning neither wants to enter the critical section. The turn
variable is set to the ID of one of the processes (either 0 or 1), indicating
that it’s that process's turn to enter.
Intention to Enter: When a process wants to enter the critical section, it
sets its flag value to true signaling its intent to enter.
Set the Turn: The process then sets the turn variable to the other
process's ID, deferring to the other process (in the code above, Pi sets
turn = j). If both write turn at nearly the same time, the process that
wrote last is the one that waits.
Waiting Loop: Both processes enter a loop where they check the flag of
the other process and the turn variable:
o If the other process wants to enter (i.e., flag[1 - processID] ==
true), and
o It’s the other process’s turn (i.e., turn == 1 - processID ), then the
process waits, allowing the other process to enter the critical
section.
This loop ensures that only one process can enter the critical section at a
time, preventing a race condition.
Critical Section: Once a process successfully exits the loop, it enters the
critical section, where it can safely access or modify the shared resource
without interference from the other process.
Exiting the Critical Section: After finishing its work in the critical section,
the process resets its flag to false. This signals that it no longer wants to
enter the critical section, and the other process can now have its turn.
By alternating turns and using these checks, Peterson’s algorithm
ensures mutual exclusion, meaning only one process can access the critical
section at a time, and both processes get an equal opportunity to do so.
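A compilable C11 sketch of the algorithm for two threads (the shared counter
and thread setup are illustrative assumptions; sequentially consistent atomics
stand in for the plain reads and writes above, since ordinary memory accesses
can be reordered on modern hardware):

#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

atomic_bool flag[2];      // flag[i]: process i wants to enter
atomic_int  turn;         // which process should yield
long counter = 0;         // shared resource (illustrative)

void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int n = 0; n < 100000; n++) {
        atomic_store(&flag[i], true);   // intention to enter
        atomic_store(&turn, j);         // give priority to the other process
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           // busy wait
        counter++;                      // critical section
        atomic_store(&flag[i], false);  // exit section
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0);
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
}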
Example of Peterson's Algorithm
Peterson’s solution is often used as a simple example of mutual exclusion in
concurrent programming. Here are a few scenarios where it can be applied:
Accessing a shared printer: Peterson's solution ensures that only one
process can access the printer at a time when two processes are trying to
print documents.
Reading and writing to a shared file: It can be used when two
processes need to read from and write to the same file, preventing
concurrent access issues.
Competing for a shared resource: When two processes are competing
for a limited resource, such as a network connection or critical hardware,
Peterson’s solution ensures mutual exclusion to avoid conflicts.
Dekker's Algorithm
Dekker's Algorithm is one of the earliest known algorithms for mutual exclusion in concurrent
programming. It allows two processes to share a single-use resource without conflict, using
only shared memory for communication.
// Shared: flag[0] = flag[1] = false; turn = 0;
// Entry protocol for P0
flag[0] = true;                  // P0 wants to enter
while (flag[1]) {                // check if P1 also wants to enter
    if (turn != 0) {             // if it's not P0's turn
        flag[0] = false;         // withdraw intent
        while (turn != 0);       // wait until it's P0's turn
        flag[0] = true;          // try again
    }
}
// --- Critical Section ---
... do work ...
// -------------------------
turn = 1;                        // give the turn to P1
flag[0] = false;                 // P0 is done
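By symmetry, P1 runs the mirrored protocol (a sketch with the indices and the
turn test swapped):

// Entry protocol for P1
flag[1] = true;                  // P1 wants to enter
while (flag[0]) {
    if (turn != 1) {
        flag[1] = false;         // withdraw intent
        while (turn != 1);       // wait until it's P1's turn
        flag[1] = true;          // try again
    }
}
// --- Critical Section ---
turn = 0;                        // give the turn to P0
flag[1] = false;                 // P1 is done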
Example:
Let’s assume two processes:
P0: Trying to access a printer.
P1: Also trying to access the same printer.
Case 1: Only P0 wants to enter
flag[0] = true
flag[1] = false
while (flag[1]) is false → P0 enters critical section.
Case 2: Both want to enter at the same time
Both set flag[0] = true and flag[1] = true
Both enter the while(flag[other]) loop
Suppose turn = 0 → P1 sees turn ≠ 1, so it waits.
P0 proceeds to critical section.
After P0 finishes:
turn = 1, flag[0] = false
P1 now gets its turn and enters.
Limitations:
Only works for 2 processes
Relies on busy waiting, which is complex and inefficient on modern systems
Largely superseded by Peterson's algorithm and by hardware primitives such as
test-and-set, semaphores, and mutexes.
Lamport's Bakery Algorithm
Lamport’s Bakery Algorithm 🧁 is a classical solution to the mutual exclusion problem in
concurrent programming, especially when there are more than two processes.
Algorithm for Process i
// Shared: boolean choosing[n] = {false}; int number[n] = {0};

// Entry Section
choosing[i] = true;
number[i] = 1 + max(number[0], ..., number[n-1]);   // take the next ticket
choosing[i] = false;
for (j = 0; j < n; j++) {
    while (choosing[j]);   // wait if j is choosing a number
    while (number[j] != 0 && (number[j] < number[i] ||
           (number[j] == number[i] && j < i)));
    // wait if j has a smaller ticket, or the same ticket but a smaller ID
}

// --- Critical Section ---
// Only one process is here at a time

// Exit Section
number[i] = 0;   // discard the ticket
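A C11 sketch of the same ticket scheme (the N = 4 size and the lock/unlock
wrapper names are illustrative assumptions; sequentially consistent atomics
approximate the atomic reads and writes the algorithm assumes):

#include <stdatomic.h>
#include <stdbool.h>

#define N 4                     // number of processes (illustrative)
atomic_bool choosing[N];
atomic_int  number[N];

void bakery_lock(int i) {
    atomic_store(&choosing[i], true);
    int max = 0;                              // take a ticket one larger
    for (int k = 0; k < N; k++) {             // than any ticket seen
        int v = atomic_load(&number[k]);
        if (v > max) max = v;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], false);
    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;                                 // wait while j is still choosing
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;                                 // smaller ticket (or tie, smaller ID) goes first
    }
}

void bakery_unlock(int i) {
    atomic_store(&number[i], 0);              // discard the ticket
}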
Lamport’s Bakery Algorithm Properties

Property               | Guaranteed?
Mutual Exclusion       | ✅ Yes
Bounded Waiting        | ✅ Yes
Fairness               | ✅ Yes (ordered by ticket)
Starvation-Free        | ✅ Yes
Supports > 2 processes | ✅ Yes
Simple Hardware Needs  | Only shared memory
Limitations:
Requires shared memory
Comparatively slow for high-performance systems
Assumes read/write atomicity
Semaphores
A semaphore is a synchronization tool used to control access to a shared resource in a
concurrent system, such as a multithreaded program.
Types of Semaphores in Operating Systems
Semaphores can be broadly categorized into:
1. Binary Semaphore
2. Counting Semaphore
1. Binary Semaphore (Also called Boolean Semaphore)
💡 Definition:
A binary semaphore can take only two values: 0 or 1.
It is used to implement mutual exclusion (a lock/unlock mechanism), e.g. for:
o Protecting critical sections
o Ensuring only one process/thread accesses a resource at a time
💻 Example:
semaphore mutex = 1;
wait(mutex);
// critical section
signal(mutex);
When a process calls wait(mutex), the value becomes 0 → no other process can enter.
signal(mutex) resets it to 1 → allows others in.
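The same pattern with POSIX semaphores, where sem_wait and sem_post play the
roles of wait and signal (the shared counter demo is an illustrative assumption):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                 // binary semaphore, initialized to 1
long shared = 0;

void *worker(void *arg) {
    for (int n = 0; n < 100000; n++) {
        sem_wait(&mutex);    // wait(mutex): value 1 -> 0, or block
        shared++;            // critical section
        sem_post(&mutex);    // signal(mutex): value 0 -> 1
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);  // pshared = 0: shared between threads only
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared = %ld (expected 200000)\n", shared);
    sem_destroy(&mutex);
}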
2. Counting Semaphore
💡 Definition:
A counting semaphore allows its value to be any non-negative integer.
Represents the number of available units of a resource.
It is typically used for managing a pool of resources, such as:
o N printers
o N database connections
o N empty/full slots in buffer
💻 Example: Buffer of size N = 5
semaphore empty = 5; // Initially, 5 empty slots
semaphore full = 0; // Initially, no full slots
// Producer
wait(empty); // decrease empty slot count
// produce item
signal(full); // increase full slot count
// Consumer
wait(full); // wait until there's something to consume
// consume item
signal(empty); // indicate one more empty slot
All semaphore operations must be atomic (no interruption during wait() or signal()).
Improper ordering of wait() and signal() can lead to:
Deadlock
Starvation
Priority inversion
Semaphore Operations:
wait(S):
    while (S <= 0);   // busy wait (for a basic semaphore)
    S = S - 1;
signal(S):
    S = S + 1;
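A C11 sketch of these two operations (the compare-and-swap retry loop is an
added assumption here, used to make the check-and-decrement atomic as the
definition requires):

#include <stdatomic.h>

void sem_wait_busy(atomic_int *S) {
    for (;;) {
        int v = atomic_load(S);
        if (v > 0 && atomic_compare_exchange_weak(S, &v, v - 1))
            return;               // decremented S atomically
        // else S <= 0, or another thread won the race; retry (busy wait)
    }
}

void sem_signal(atomic_int *S) {
    atomic_fetch_add(S, 1);       // S = S + 1, atomically
}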
Example: Producer-Consumer Problem
Shared buffer of size N
Semaphores: mutex = 1 (buffer lock), empty = N (free slots), full = 0 (filled slots)
Producer adds an item to the buffer
Consumer removes an item from the buffer
Producer
wait(empty);
wait(mutex);
// add item to buffer
signal(mutex);
signal(full);
Consumer
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
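A runnable POSIX sketch of this solution (the buffer size, item count, and
ring-buffer indexing are illustrative assumptions; a pthread mutex plays the
role of the mutex semaphore):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5                         // buffer size (illustrative)
int buffer[N], in = 0, out = 0;     // simple ring buffer

sem_t empty_slots, full_slots;      // counting semaphores
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);         // wait(empty)
        pthread_mutex_lock(&lock);      // wait(mutex)
        buffer[in] = item;              // add item to buffer
        in = (in + 1) % N;
        pthread_mutex_unlock(&lock);    // signal(mutex)
        sem_post(&full_slots);          // signal(full)
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int n = 0; n < 20; n++) {
        sem_wait(&full_slots);          // wait(full)
        pthread_mutex_lock(&lock);      // wait(mutex)
        int item = buffer[out];         // remove item from buffer
        out = (out + 1) % N;
        pthread_mutex_unlock(&lock);    // signal(mutex)
        sem_post(&empty_slots);         // signal(empty)
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);   // N empty slots initially
    sem_init(&full_slots, 0, 0);    // no full slots initially
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
}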
Limitations:
Programmer must carefully place wait() and signal()
Improper use may lead to deadlock or starvation
Difficult to debug
Concurrent Programming
Monitors:
A monitor is a high-level synchronization construct used in concurrent programming to control access
to shared resources by multiple threads or processes.
A monitor is an abstract data type (like a class or module) that includes:
Shared variables (resource being protected)
Procedures (methods) to operate on the shared data
Synchronization primitives (like wait() and signal())
➡️ Only one process can be active inside the monitor at a time, ensuring mutual
exclusion automatically.
Structure of monitor:
monitor ExampleMonitor {
// shared data
procedure entry1(...) {
// synchronized code
}
procedure entry2(...) {
// synchronized code
}
condition x;
procedure waitX() {
x.wait(); // process waits on condition x
}
procedure signalX() {
x.signal(); // resumes one waiting process on x
}
}
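C has no monitor construct, but the same discipline can be sketched with a
mutex plus a condition variable (the bounded-counter data and procedure names
are illustrative assumptions):

#include <pthread.h>

/* A monitor-like bounded counter: all access goes through these entry
   procedures, and the mutex enforces "one process inside at a time". */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
static int count = 0;               // shared data protected by the monitor

void increment(void) {              // entry procedure
    pthread_mutex_lock(&m);         // enter the monitor
    count++;
    pthread_cond_signal(&nonzero);  // x.signal(): wake one waiting process
    pthread_mutex_unlock(&m);       // leave the monitor
}

void decrement_when_positive(void) {      // entry procedure
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&nonzero, &m);  // x.wait(): release the lock and sleep
    count--;
    pthread_mutex_unlock(&m);
}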
Advantages of Monitors
Simpler than semaphores
Encapsulates synchronization with data
Automatically ensures mutual exclusion
Reduces errors like deadlocks, race conditions
Limitations
Only works within the same process (not across systems)
Most programming languages don't have built-in monitor constructs (exceptions:
Java, Modula-2, Concurrent Pascal)
May require support from compiler/runtime
Monitors vs. Semaphores

Feature                   | Monitors                                                          | Semaphores
Definition                | High-level construct that encapsulates shared data, procedures, and synchronization | Low-level primitive based on integer counters
Mutual Exclusion          | Implicit: only one thread can execute inside a monitor at a time | Explicit: programmer must use wait() and signal() manually
Level of Abstraction      | High-level abstraction                                            | Low-level, closer to hardware
Ease of Use               | Easier: automatically manages access and synchronization         | Harder: prone to programmer errors like deadlock or race conditions
Synchronization Mechanism | Condition variables with wait() and signal()                     | wait() and signal() (also called P() and V())
Atomicity                 | Handled automatically by the monitor construct                    | Programmer must ensure atomic execution
Data Encapsulation        | Yes: data and procedures are encapsulated in a single unit       | No encapsulation: operates on global/shared semaphores
Language Support          | Built in to Java, Modula-2, etc.                                 | General concept, implemented using libraries or OS primitives
Fairness                  | More fair: managed by the monitor's internal mechanism           | Can be unfair: may lead to starvation if not carefully handled
Reusability & Modularity  | High, since logic is encapsulated                                | Low: logic is spread out and less reusable
Deadlock Avoidance        | Easier to avoid deadlocks                                        | Harder: the programmer must take care manually
Multi-process Support     | Generally limited to a single process/thread context             | Can be used across multiple processes (with OS support)
Condition Waiting         | Uses wait() / signal() with condition variables                  | Uses semaphore wait() / signal()
Message Passing
What is Message Passing?
Message passing involves sending and receiving messages between concurrent entities
(threads/processes), typically in a distributed system or systems without shared memory.
Instead of sharing variables, processes exchange information explicitly.
Two Main Operations:
1. send(destination, message) — Transmit a message to another process.
2. receive(source, &message) — Wait and retrieve a message from another process.
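A minimal POSIX sketch of these two operations using a pipe between a parent
and a child process (the message text is an illustrative assumption; write()
plays the role of send() and read() the role of receive()):

#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                        // fd[0]: read end, fd[1]: write end
    if (fork() == 0) {               // child: the receiver
        char msg[64];
        close(fd[1]);
        ssize_t n = read(fd[0], msg, sizeof msg - 1);  // receive(source, &message)
        msg[n] = '\0';
        printf("child received: %s\n", msg);
        return 0;
    }
    close(fd[0]);                    // parent: the sender
    const char *msg = "hello";
    write(fd[1], msg, strlen(msg));  // send(destination, message)
    close(fd[1]);
    wait(NULL);
    return 0;
}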
Types of Message Passing

Type                        | Description
Synchronous (Blocking)      | Sender waits until the receiver gets the message; the receiver waits for a message.
Asynchronous (Non-blocking) | Sender sends and continues; the receiver may get the message later (buffered).
Direct Communication        | Sender and receiver explicitly name each other.
Indirect Communication      | Messages are sent via a mailbox or queue.
Advantages of Message Passing
Clear data encapsulation
No race conditions due to no shared variables
Suitable for distributed systems
Built-in synchronization
Disadvantages
Can have latency and overhead
More complex protocols for ordering and delivery
Buffer management may be tricky