Inter-Process Communication Explained

The document discusses Inter-Process Communication (IPC), which allows processes to exchange data and coordinate actions, highlighting its importance for information sharing, computational speedup, and modularity. It describes two IPC models: Shared Memory and Message Passing, along with synchronization mechanisms to prevent issues like race conditions and deadlocks. Additionally, it covers various synchronization problems and solutions, including the Dining Philosopher Problem, emphasizing the need for effective resource management in concurrent programming.

Uploaded by

Khelan Mehta

Inter-Process Communication

 A process is independent if it cannot affect or be affected by other processes executing in the system.
 A process is cooperating if it can affect or be affected by other processes executing in the system.
 Inter-Process Communication (IPC) is the mechanism that allows different processes to exchange data, signals, or messages so they can coordinate and work together.
 Processes can be on the same system or on different systems connected via a network.
Why IPC

1. Information Sharing: Allows processes to exchange data so multiple programs can access and use common information.
2. Computational Speedup: Enables tasks to be split among multiple processes running in parallel to finish faster.
3. Modularity: Supports breaking a large system into smaller cooperating processes for easier development and maintenance.
4. Convenience: Lets users run multiple programs at the same time to perform related activities more efficiently.
Inter-Process Communication Models
1. Shared Memory Model
2. Message Passing Model
Shared Memory Model

 A region of memory is shared between cooperating processes. Each process can read from and write to this memory segment.
 How it works:
• The OS creates a shared memory segment.
• Processes map this segment into their address spaces.
• Once mapped, processes can access it directly without involving the kernel for each access.

[Diagram: Process A and Process B both map the shared region; the kernel is involved only when the segment is set up.]
Message Passing Model

 Processes communicate by sending and receiving messages, with the OS providing the communication channel.
 How it works:
• The OS manages message queues or channels.
• A process uses system calls like send() and receive() to exchange messages.
• Communication can be direct (send to a specific process) or indirect (send via a mailbox or queue).

[Diagram: Process A passes message M to Process B through the kernel.]
Critical Section

 The part of a program where shared resources are accessed.
 In shared memory, multiple processes can access the same data at the same time.
 The critical section is the part of a process's code where that shared data is accessed or modified.
 To avoid race conditions, we need synchronization so that only one process is in its critical section at a time.
Critical Section Problem

 Consider a system consisting of n processes, each of which has a code segment called the critical section.
 Race Condition: The result produced depends on the order in which the instructions are executed.
 So we need to ensure that no two processes are executing in their critical sections at the same time.
 Therefore, we must design a protocol that the processes use to cooperate with each other.
Synchronization Mechanism

Each process must request permission to enter its critical section.

do
{
    Entry Section

    Critical Section

    Exit Section

    Remainder Section

} while (TRUE);
Requirement for
Synchronization Mechanism
 A Synchronization Mechanism must satisfy the
following three (Fundamental) requirements:
1. Mutual Exclusion: Only one process can be in the
critical section at a time to prevent race conditions.
2. Progress: If no process is in the critical section, a
waiting process should be allowed to enter without
unnecessary delay.
3. Bounded Waiting: A limit exists on how many
times other processes can enter the critical section
before a requesting process is allowed in,
preventing starvation.
Requirement for
Synchronization Mechanism
 Secondary requirements:
1. Minimal CPU Wastage: Use blocking or sleep
mechanisms instead of busy waiting where possible.
2. Support for Priority Inheritance: Temporarily
boost the priority of a lock holder to avoid priority
inversion.
3. Portability / Architectural Neutrality: The
mechanism should work correctly across different
hardware architectures and operating systems
without redesign.
Problems with Synchronization
Mechanism
1. CPU Wastage: CPU cycles are spent executing the extra instructions that synchronization requires.
2. Deadlock: Two or more processes wait indefinitely for
resources held by each other, forming a circular wait.
3. Starvation (Indefinite Blocking): A process never gets
access to the critical section because others keep getting
preference.
4. Priority Inversion: A high-priority process waits for a lock
held by a low-priority process, delayed further by medium-
priority processes.
5. Busy Waiting (CPU Wastage): A process repeatedly checks
for a lock to be free, consuming CPU cycles without doing
useful work.
Types of Synchronization
Mechanism
 Synchronization mechanisms can be broadly classified
into two types based on whether they cause the
process to actively wait (consume CPU cycles) or not.
1. Busy Waiting Synchronization: The process keeps
actively checking for a condition (e.g., whether a lock is
available) while remaining in the ready/running state.
2. Without Busy Waiting Synchronization: The
waiting process is blocked (put into a waiting queue)
until the resource becomes available, freeing the CPU
for other work.
Busy Waiting Synchronization:
Simple Lock Variable Mechanism
 Software Mechanism implemented in user mode.
 Busy waiting mechanism.
 Can be used for more than two processes.
 A lock is a shared variable (often a Boolean flag)
used to indicate whether a resource (or critical
section) is available or in use.
 lock = 0 → Resource is free
 lock = 1 → Resource is in use
Simple Lock Variable Mechanism

int lock = 0;          // shared; 0 = free, 1 = in use

while (lock == 1) ;    // entry section: busy wait until free
lock = 1;              // acquire the lock

/* critical section */

lock = 0;              // exit section: release the lock
Simple Lock Variable Mechanism

The entry section compiles to several instructions, so a process can be preempted between them:

1. Load lock, R      ; read lock into register R
2. CMP R, #0         ; is the lock free?
3. JNZ step 1        ; if not, keep spinning
4. Store #1, lock    ; acquire: set lock = 1
Problem with Simple Lock Variable
Mechanism
 No Mutual Exclusion: a process can be preempted after the check (step 3) but before the store (step 4); another process then also sees lock = 0, and both enter the critical section.
Another chance with Lock Variable
Mechanism
1. Load lock, R      ; read the old value
2. Store #1, lock    ; set lock = 1 immediately
3. CMP R, #0         ; was it free?
4. JNZ step 1        ; if not, retry
Problem with Previous Approach

 No Mutual Exclusion: preemption between the load (step 1) and the store (step 2) still lets two processes read lock = 0 and enter together.
Test Set Lock

The Test-and-Set-Lock (TSL) instruction replaces the separate load and store with a single instruction that the hardware executes atomically:

1. TSL R, lock       ; atomically: R = lock; lock = 1 (replaces Load + Store)
2. CMP R, #0         ; was the lock free?
3. JNZ step 1        ; if not, keep spinning
Analysis of Test Set Lock

 Mutual Exclusion: Yes


 Progress: Yes
 Bounded Waiting: Not Guaranteed
 Architectural Neutrality: No
 Priority Inversion: Yes
 Spin Lock: busy waiting wastes CPU, and under strict priority scheduling a spinning high-priority process can starve the low-priority lock holder, effectively a deadlock.
Disabling Interrupt

 Mutual Exclusion: Yes


 Progress: Yes
 Bounded Waiting: Not Guaranteed
 Architectural Neutrality: No (requires kernel mode, and disabling interrupts on one CPU does not stop the other CPUs on a multiprocessor)
Test Set Lock (Example)

void enter_CS(x)
{
    while (TSL(x));   // spin until TSL returns the old value 0
}

void leave_CS(x)
{
    x = 0;            // release the lock
}

where 'x' is a shared lock variable initialized to 0.
Test Set Lock (Example)

1. Is the given solution to the CS problem deadlock free?
2. Is the solution starvation free?
3. Do processes enter the CS in FIFO order?
4. Can more than one process enter at the same time?
Peterson Solution
 Peterson gave a software-based solution
 For two processes only
 It uses two shared variables:
1. boolean interested[2] → expresses interest.
 interested[i] = true → Pi wants to enter the CS.
 interested[i] = false → Pi is not interested.
2. int turn → indicates whose turn it is.
 If both want to enter at the same time, turn decides who gets priority.
Peterson Solution
#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void Entry_Section(int process)
{
    int other = 1 - process;        // index of the other process
    interested[process] = TRUE;     // declare interest
    turn = process;                 // the last process to set turn yields
    while (interested[other] == TRUE && turn == process)
        ;                           // busy wait
}

void Exit_Section(int process)
{
    interested[process] = FALSE;    // no longer interested
}
Peterson’s solution Analysis

1. Mutual Exclusion → At most one process can enter


critical section.
2. Progress → If no one is in CS, the choice of who enters
next cannot be postponed indefinitely.
3. Bounded Waiting → Each process gets a fair chance
(turn variable ensures fairness).
Limitation with Peterson’s
Solution/Busy waiting solutions
1. Busy Waiting (Spinlock problem): While one process is in the critical section, the other keeps looping in the while condition. This wastes CPU cycles → bad for multiprogramming environments.
2. Priority Inversion: A spinning high-priority process can keep the low-priority lock holder from ever running.
Without Busy Waiting Synchronization: Producer-Consumer Problem

#define N 100        // slots in the buffer
int count = 0;       // items currently in the buffer

void Producer(void)
{
    int item;
    while (TRUE)
    {
        item = produce_item();
        if (count == N) sleep();              // buffer full: block
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);     // buffer was empty
    }
}

void Consumer(void)
{
    int item;
    while (TRUE)
    {
        if (count == 0) sleep();              // buffer empty: block
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer); // buffer was full
        consume_item(item);
    }
}
Semaphore

 A semaphore is a variable on which read, modify, and update operations happen atomically in kernel mode (no preemption).
 There are two types of semaphore:
 Counting Semaphore
 Binary Semaphore (Mutex)
Counting Semaphore
struct semaphore
{
    int value;
    Queue L;        // queue of blocked processes (PCBs)
};

Down(semaphore S)       // also called P or Wait
{
    S.value = S.value - 1;
    if (S.value < 0)
    {
        put the process (PCB) in L;
        sleep();
    }
    else
        return;
}

Up(semaphore S)         // also called V or Signal
{
    S.value = S.value + 1;
    if (S.value <= 0)
    {
        select a process from L;
        wakeup();
    }
}

Decrement: Down / P / Wait
Increment: Up / V / Signal
Counting Semaphore

 A counting semaphore is initialized to 10, and then 6 P and 4 V operations are performed on it. What is the result? (Each P decrements and each V increments: 10 − 6 + 4 = 8.)
 S = 7, then 20 P and 15 V operations are performed. What is the final value of S? (7 − 20 + 15 = 2.)
Binary Semaphore
struct Bsemaphore
{
    enum value {0, 1};
    Queue L;        // PCBs of processes blocked on this semaphore
};

Down(Bsemaphore S)
{
    if (S.value == 1)
    {
        S.value = 0;
    }
    else
    {
        put the process (PCB) in S.L;
        sleep();
    }
}

Up(Bsemaphore S)
{
    if (S.L is empty)
    {
        S.value = 1;
    }
    else
    {
        select a process from S.L;
        wakeup();
    }
}

where L contains the PCBs of all processes that blocked while performing Down.
Question
mutex = 1;

Processes Pi, i = 1, 2, ..., 9:
while (1)
{
    down(mutex);
    <CS>
    up(mutex);
}

Process P10:
while (1)
{
    up(mutex);
    <CS>
    up(mutex);
}
Question
Mutexes a = 1, b = 0;

P0:
while (true)
{
    P(a);
    print("1");
    V(b);
}

P1:
while (true)
{
    P(b);
    print("0");
    V(a);
}
Question: check for deadlock
Mutexes a = 1, b = 1;

P0:
while (true)
{
    P(a);
    P(b);
    <CS>
    V(a);
    V(b);
}

P1:
while (true)
{
    P(a);
    P(b);
    <CS>
    V(a);
    V(b);
}
Question: check for deadlock
Mutexes a = 1, b = 1;

P0:
while (true)
{
    P(a);
    P(b);
    <CS>
    V(a);
    V(b);
}

P1:
while (true)
{
    P(b);
    P(a);
    <CS>
    V(a);
    V(b);
}
Dining Philosopher Problem

 A classic problem in operating systems and synchronization.
 Proposed by Edsger Dijkstra (1965).
 Demonstrates the challenges of concurrent resource sharing.
Dining Philosopher Problem

 Five philosophers sit around a table.
 Each philosopher alternates between thinking and eating: Think → Pick up chopsticks → Eat → Put down chopsticks → Think.
 To eat, a philosopher needs two chopsticks (left and right).
 Chopsticks are shared resources placed between philosophers.
Problem

 How can the philosophers eat without causing:
1. Deadlock (no one eats)
2. Starvation (some never eat)
3. Race conditions (two grab the same chopstick at the same time)
 Solution: We need a synchronization mechanism that ensures safe access to the chopsticks.
Challenges

 Deadlock: If all pick up their left chopstick at the same time → nobody gets to eat.
 Starvation: A philosopher may keep waiting if its neighbors eat frequently.
 Concurrency: Multiple philosophers may attempt to pick up chopsticks simultaneously.
Problem
 Four philosophers Pi and four chopstick mutexes Mi, i = 0, 1, 2, 3.
 Each Pi executes:
P(Mi); P(M(i+1) mod 4);
<CS>
V(Mi); V(M(i+1) mod 4);
 Explanation:
P0: P(M0) P(M1)
P1: P(M1) P(M2)
P2: P(M2) P(M3)
P3: P(M3) P(M0)
 One possible schedule: every Pi acquires Mi and then blocks waiting for M(i+1) mod 4, producing a circular wait (deadlock).
Possible Solutions
Resource Hierarchy Solution
 Number chopsticks.
 Each philosopher picks the lower-numbered first, then the
higher.
 Breaks circular wait → avoids deadlock.
Arbitrator/Waiter Solution
 A waiter (monitor/process) grants permission before eating.
 Ensures no deadlock and fair scheduling.
Chandy/Misra Solution (1984)
 Philosophers communicate via messages to request/release
chopsticks.
 Eliminates starvation.
