
UNIT – 2

Concurrent Processes in Operating System

Concurrent processing is a computing model in which multiple processors execute
instructions simultaneously for better performance. Concurrent means occurring at the same
time as something else. Tasks are broken into subtasks, which are then assigned to different
processors to be performed simultaneously rather than sequentially, as they would have to be
if performed by one processor. Concurrent processing is sometimes treated as synonymous
with parallel processing.
Real and virtual concurrency arise in the following processing environments:
1. Multiprogramming Environment:
In a multiprogramming environment, multiple tasks are shared by one processor. The
operating system achieves virtual concurrency by allocating the processor to each task in
turn, so that each task appears to have a dedicated processor. The multiprogramming
environment is shown in the figure.

2. Multiprocessing Environment:

In a multiprocessing environment, two or more processors are used with shared memory. Only
one virtual address space is used, which is common to all processors. All tasks reside in
shared memory. In this environment, concurrency is supported in the form of concurrently
executing processes. The tasks executed on different processors communicate with each
other through shared memory. The multiprocessing environment is shown in the figure.
3. Distributed Processing Environment:

In a distributed processing environment, two or more computers are connected to each other
by a communication network or high-speed bus. There is no shared memory between the
processors, and each computer has its own local memory. Hence a distributed application
consists of concurrent tasks distributed over the network, which communicate via
messages. The distributed processing environment is shown in the figure.

Process
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented
in the system.

To put it in simple terms, we write our computer programs in a text file, and when we execute
the program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into
four sections: stack, heap, text and data. The following image shows a simplified layout of a
process inside main memory.

S.N. Component & Description

1 Stack
The process Stack contains the temporary data such as method/function parameters,
return address and local variables.
2 Heap
This is dynamically allocated memory to a process during its run time.

3 Text
This includes the compiled program code, together with the current activity represented by
the value of the Program Counter and the contents of the processor's registers.

4 Data
This section contains the global and static variables.

Program
A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language. For
example, here is a simple program written in C programming language −
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can conclude that a
process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries and related data are referred to as a software.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to
have the processor allocated to them by the operating system so that they can run. A
process may come into this state after the Start state, or while running, when it is
interrupted by the scheduler so that the CPU can be assigned to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.

Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the
information needed to keep track of a process as listed below in the table −

S.N. Information & Description

1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each process in the operating system.

4 Pointer
A pointer to the parent process.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for
this process.

6 CPU registers
The various CPU registers whose contents must be saved when the process leaves the
running state, so that execution can resume correctly later.

7 CPU Scheduling Information
Process priority and other scheduling information required to schedule the process.

8 Memory management information
This includes information such as the page table, memory limits, and segment table,
depending on the memory-management scheme used by the operating system.

9 Accounting information
This includes the amount of CPU time used for process execution, time limits, execution
ID, etc.

10 IO status information
This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
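
As an illustration only, a highly simplified PCB might be declared in C as below. Every field
name here is an assumption made for this sketch; real operating systems (for example,
Linux's task_struct) contain far more information:

enum proc_state { START, READY, RUNNING, WAITING, TERMINATED };

/* A simplified, hypothetical PCB layout mirroring the table above. */
struct pcb {
    int             pid;              /* 3: unique process ID                */
    enum proc_state state;            /* 1: current process state            */
    int             privileges;      /* 2: process privileges               */
    struct pcb     *parent;           /* 4: pointer to the parent process    */
    unsigned long   program_counter;  /* 5: address of the next instruction  */
    unsigned long   registers[16];    /* 6: saved CPU registers              */
    int             priority;         /* 7: CPU scheduling information       */
    void           *page_table;       /* 8: memory-management information    */
    unsigned long   cpu_time_used;    /* 9: accounting information           */
    void           *io_devices;       /* 10: list of allocated I/O devices   */
};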

What is Concurrency?

Concurrency refers to the execution of multiple instruction sequences at the same time. It
occurs in an operating system when multiple process threads are executing concurrently.
These threads can interact with one another via shared memory or message passing.
Concurrency involves resource sharing, which causes issues like deadlocks and resource
scarcity, and it calls for techniques such as process coordination, memory allocation, and
execution scheduling to maximize throughput.

Principles of Concurrency

Today's technology, like multi-core processors and parallel processing, allows multiple
processes and threads to be executed simultaneously. Multiple processes and threads can
access the same memory space, the same declared variable in code, or even read or write to
the same file.

The amount of time a process takes to execute cannot be simply estimated, and you cannot
predict which process will complete first, so you must build techniques to deal with the
problems that concurrency creates.

Interleaved and overlapping processes are two types of concurrent processes that share the
same problems. It is impossible to predict the relative speed of execution, which is
determined by the following factors:

1. The way the operating system handles interrupts
2. The activities of other processes
3. The operating system's scheduling policies

Problems in Concurrency

There are various problems in concurrency. Some of them are as follows:

1. Locating programming errors

It is difficult to spot a programming error because failures are usually not reproducible,
since the states of the shared components differ each time the code is executed.

2. Sharing Global Resources

Sharing global resources safely is difficult. If two processes both use and alter a global
variable, the order in which the reads and writes are executed is critical.

3. Locking the channel

It could be inefficient for the OS to lock the resource and prevent other processes from using
it.

4. Optimal Allocation of Resources

It is challenging for the OS to handle resource allocation properly.

Issues of Concurrency

Various issues of concurrency are as follows:

1. Non-atomic

Operations that are non-atomic but interruptible by multiple processes can cause problems.
An atomic operation runs independently of other processes, whereas a non-atomic operation
can be affected by them.
2. Deadlock

In concurrent computing, a deadlock occurs when one group member waits for another member,
including itself, to send a message or release a lock. Software and hardware locks are
commonly used to arbitrate shared resources and implement process synchronization in
parallel computing, distributed systems, and multiprocessing.

3. Blocking

A blocked process is waiting for some event, such as the availability of a resource or the
completion of an I/O operation. Processes may block waiting for resources, and a process may
be blocked for a long time waiting for terminal input. If the process needs to update some
data periodically, such blocking is very undesirable.

4. Race Conditions

A race condition occurs when the output of a software application depends on the timing or
sequencing of uncontrollable events. Race conditions commonly arise in software that is
multithreaded, runs in a distributed environment, or depends on shared resources.
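
To make this concrete, here is a minimal C sketch (the use of POSIX threads and the
iteration count are illustrative assumptions) in which two threads increment a shared
counter with no synchronization. Because counter++ is a non-atomic read-modify-write
sequence, updates can be lost and the printed total usually falls short of the expected
value:

#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

long counter = 0;                 /* shared variable, no synchronization */

void *increment(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                /* non-atomic read-modify-write */
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; the actual value varies from run to run. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}

Compiling with gcc -pthread and running it several times typically prints a different,
smaller total each time.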

5. Starvation

Starvation is a problem in concurrent computing where a process is continuously denied the
resources it needs to complete its work. It can be caused by errors in the scheduling or
mutual exclusion algorithm, but resource leaks may also cause it.

Concurrent system design frequently requires developing dependable strategies for
coordinating process execution, data interchange, memory allocation, and execution
scheduling to decrease response time and maximize throughput.

Advantages and Disadvantages of Concurrency in Operating System

Various advantages and disadvantages of Concurrency in Operating systems are as follows:

Advantages

1. Better Performance

Concurrency improves the operating system's performance. When one application only utilizes
the processor and another only uses the disk drive, the time it takes to run both
applications concurrently is less than the time it takes to run them sequentially.

2. Better Resource Utilization

It enables resources that are not being used by one application to be used by another.

3. Running Multiple Applications

It enables you to execute multiple applications simultaneously.


Disadvantages
1. It is necessary to protect multiple applications from each other.
2. It is necessary to use extra techniques to coordinate several applications.
3. Additional performance overheads and complexities in OS are needed for switching
between applications.

What is the Producer-Consumer problem?

Background and introduction

The Producer-Consumer problem is a classic synchronization problem in operating systems.

The problem is defined as follows: there is a fixed-size buffer, a Producer process, and a
Consumer process.

The Producer process creates an item and adds it to the shared buffer.
The Consumer process takes items out of the shared buffer and “consumes” them.

The tricky part

Certain conditions must be met by the Producer and the Consumer processes to have
consistent data synchronization:

1. The Producer process must not produce an item if the shared buffer is full.
2. The Consumer process must not consume an item if the shared buffer is empty.
3. Access to the shared buffer must be mutually exclusive; this means that at any given
instance, only one process should be able to access the shared buffer and make
changes to it.

Solution

The solution to the Producer-Consumer problem involves three semaphore variables.

 semaphore Full: Tracks the space filled by the Producer process. It is initialized with
a value of 0, as the buffer will have 0 filled spaces at the beginning.
 semaphore Empty: Tracks the empty space in the buffer. It is initially set
to buffer_size as the whole buffer is empty at the beginning.
 semaphore mutex: Used for mutual exclusion so that only one process can access the
shared buffer at a time.

Using the signal() and wait() operations on these semaphores, we can arrive at a solution.

Let’s look at the code for the Producer and Consumer processes.

void Producer(){
    while(true){
        // Produce an item
        wait(Empty);
        wait(mutex);
        add();
        signal(mutex);
        signal(Full);
    }
}

In the code above, the Producer process waits for the Empty semaphore. This means that the
Producer process is kept in busy-waiting if the Empty semaphore value is 0, indicating that
there are 0 empty spaces available. The Producer will have to wait for the Consumer to
consume some items from the buffer and make some space available for it.

The Producer then waits for the mutex semaphore, which merely ensures that once a thread
has entered the critical section of the code, the rest of the threads cannot access it and cause
race conditions.

The add() function appends the item to the shared buffer. Once a Producer process reaches
this point in the code, it is guaranteed that no other process is accessing the shared buffer
concurrently, preventing data inconsistency.

After the Producer process adds the item to the shared buffer, it uses the signal() operation to
increase the value of the mutex semaphore by one, thereby allowing any other threads which
were busy-waiting in the mutex semaphore to access the critical section.

Lastly, the Producer process uses the signal() operation on the Full semaphore, increasing
its value by 1, indicating that an item has been added to the shared buffer and the count of
filled spaces has increased by one.

The code for the Consumer process is as follows.

void Consumer(){
    while(true){
        wait(Full);
        wait(mutex);
        consume();
        signal(mutex);
        signal(Empty);
    }
}

The Consumer waits for the Full semaphore. If the Full semaphore value is 0, it indicates that
there are no items to consume, and it must wait for the Producer process to produce an item
and add it to the shared buffer for consumption.

As previously mentioned, the mutex semaphore ensures mutually exclusive access to the
critical section of the code so that the shared buffer is only accessed by one thread at a time
for data synchronization.

Once the Consumer process reaches the critical section of the code, i.e.,
the consume() function, it executes the function and takes one item from the shared buffer.

After taking an item from the buffer, the Consumer process first uses signal(mutex) to release
the mutex semaphore, allowing other threads that may have been busy-waiting in
the mutex to access the critical section.

Lastly, the Consumer uses signal(Empty) to increase the value of the Empty semaphore by
one, indicating that a free slot has been made in the shared buffer. Any Producer processes
that may have been waiting in the Empty semaphore are now allowed to add an item to the
shared buffer.
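
Putting both halves together, the following is one possible runnable translation of the
pseudocode above into C using POSIX threads and semaphores. The buffer size, the item
values, and the use of a pthread mutex to play the role of the mutex semaphore are
assumptions made for this sketch:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;          /* next slot to fill / next slot to empty */

sem_t Empty, Full;            /* the two counting semaphores from the text */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of mutex */

void *Producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&Empty);               /* wait(Empty)   */
        pthread_mutex_lock(&mutex);     /* wait(mutex)   */
        buffer[in] = item;              /* add()         */
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);   /* signal(mutex) */
        sem_post(&Full);                /* signal(Full)  */
    }
    return NULL;
}

void *Consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&Full);                /* wait(Full)    */
        pthread_mutex_lock(&mutex);     /* wait(mutex)   */
        int item = buffer[out];         /* consume()     */
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);   /* signal(mutex) */
        sem_post(&Empty);               /* signal(Empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main() {
    sem_init(&Empty, 0, BUFFER_SIZE);   /* all slots empty at the start */
    sem_init(&Full, 0, 0);              /* no slots filled at the start */
    pthread_t p, c;
    pthread_create(&p, NULL, Producer, NULL);
    pthread_create(&c, NULL, Consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}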

Mutual Exclusion in Synchronization

During the concurrent execution of processes, processes sometimes need to enter the critical
section (the section of the program shared across processes). Because multiple processes may
execute in it at once, the values stored in the critical section can become inconsistent; in
other words, the values depend on the sequence of execution of the instructions, which is
known as a race condition. The primary task of process synchronization is to get rid of race
conditions while executing the critical section.

This is primarily achieved through mutual exclusion.

Mutual exclusion is a property of process synchronization which states that “no two
processes can exist in the critical section at any given point of time”. The term was first
coined by Dijkstra. Any process synchronization technique being used must satisfy the
property of mutual exclusion, without which it would not be possible to get rid of a race
condition.
To understand mutual exclusion, let’s take an example.
Example:
In the clothes section of a supermarket, two people are shopping for clothes.

Boy A decides upon some clothes to buy and heads to the changing room to try them out.
Now, while boy A is inside the changing room, there is an ‘occupied’ sign on it – indicating
that no one else can come in. Girl B has to use the changing room too, so she has to wait till
boy A is done using the changing room.
Once boy A comes out of the changing room, the sign on it changes from ‘occupied’ to
‘vacant’ – indicating that another person can use it. Hence, girl B proceeds to use the
changing room, while the sign displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and girl B are two different
processes, while the sign outside the changing room indicates the process synchronization
mechanism being used.

The Critical Section Problem

The critical section is the part of a program which accesses shared resources. The resource
may be any resource in the computer, such as a memory location, a data structure, the CPU,
or any I/O device.

The critical section cannot be executed by more than one process at the same time; the
operating system faces difficulty in deciding which processes to allow into the critical
section and which to keep out.

The critical section problem is to design a set of protocols which ensure that a race
condition among the processes never arises.

In order to synchronize the cooperative processes, our main task is to solve the critical section
problem. We need to provide a solution in such a way that the following conditions can be
satisfied.

Requirements of Synchronization mechanisms

Primary

1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean that if one
process is executing inside the critical section, then no other process may enter the
critical section.
2. Progress

Progress means that if one process does not need to enter the critical section, it
should not prevent other processes from entering the critical section.
Secondary
1. Bounded Waiting

The waiting time for every process to enter the critical section must be bounded. No
process should wait endlessly to get into the critical section.

2. Architectural Neutrality

Our mechanism must be architecturally neutral. This means that if our solution works on
one architecture, it should run on other architectures as well.

Dekker’s algorithm in Process Synchronization

To obtain mutual exclusion, bounded waiting, and progress, several algorithms have been
implemented, one of which is Dekker's Algorithm. To understand the algorithm, let's first
understand the general structure of a solution to the critical section problem.
A process is generally represented as:

do {
    // entry section
    critical section
    // exit section
    remainder section
} while (TRUE);
The solution to the critical section problem must ensure the following three conditions:

1. Mutual Exclusion
2. Progress
3. Bounded Waiting

Another one is Dekker's Solution. Dekker's algorithm was the first provably correct
solution to the critical section problem. It allows two threads to share a single-use
resource without conflict, using only shared memory for communication. It avoids the strict
alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion
algorithms to be invented.
Although there are many versions of Dekker’s Solution, the final or 5th version is the one
that satisfies all of the above conditions and is the most efficient of them all.

Note – Dekker’s Solution, mentioned here, ensures mutual exclusion between two processes
only, it could be extended to more than two processes with the proper use of arrays and
variables.
First Version of Dekker's Solution – The idea is to use a common or shared turn number
between the processes and to stop a process from entering its critical section if the
shared number indicates that the other one is already running.
Second Version of Dekker’s Solution – To remove lockstep synchronization, it uses two
flags to indicate its current status and updates them accordingly at the entry and exit
section.
Third Version of Dekker’s Solution – To re-ensure mutual exclusion, it sets the flags
before the entry section itself.
The problem with this version is a deadlock possibility. Both threads could set their flag as
true simultaneously and both will wait infinitely later on.

Fourth Version of Dekker’s Solution – Uses small time interval to recheck the condition,
eliminates deadlock, and ensures mutual exclusion as well.
The problem with this version is the indefinite postponement. Also, a random amount of
time is erratic depending upon the situation in which the algorithm is being implemented,
hence not an acceptable solution in business critical systems.

Dekker's Algorithm: Final and Completed Solution – The idea is to use a favoured-thread
notion to determine entry to the critical section. The favoured thread alternates between
the two threads, providing mutual exclusion while avoiding deadlock, indefinite
postponement, and lockstep synchronization.
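
A minimal C sketch of this final version, for two threads with IDs 0 and 1, is given below.
The names wants_to_enter and favoured are illustrative assumptions, and on modern hardware a
faithful implementation would need atomic operations or memory barriers rather than plain
volatile variables:

volatile int wants_to_enter[2] = {0, 0};  /* the per-thread flags         */
volatile int favoured = 0;                /* the favoured-thread variable */

void dekker_lock(int self) {
    int other = 1 - self;
    wants_to_enter[self] = 1;
    while (wants_to_enter[other]) {       /* contention: the other thread is interested */
        if (favoured == other) {          /* the other thread is favoured               */
            wants_to_enter[self] = 0;     /* back off ...                               */
            while (favoured == other)
                ;                         /* ... until we become favoured               */
            wants_to_enter[self] = 1;     /* and try again                              */
        }
    }
}                                         /* critical section follows                   */

void dekker_unlock(int self) {
    favoured = 1 - self;                  /* hand the favour to the other thread */
    wants_to_enter[self] = 0;
}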
Peterson’s Solution

Peterson's Solution is a classical software-based solution to the critical section problem.
In Peterson's solution, we have two shared variables:
 boolean flag[i] : Initialized to FALSE; initially no one is interested in entering the
critical section.
 int turn : Indicates whose turn it is to enter the critical section.
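
Using these two shared variables, the entry and exit protocol can be sketched in C as shown
below for two processes with IDs 0 and 1 (the function names are illustrative, and, as with
Dekker's algorithm, real hardware would require atomics or memory fences):

volatile int flag[2] = {0, 0};  /* flag[i]: process i wants to enter */
volatile int turn = 0;          /* whose turn it is to enter         */

void enter_region(int i) {
    int j = 1 - i;              /* the other process     */
    flag[i] = 1;                /* declare interest      */
    turn = j;                   /* politely yield the turn */
    while (flag[j] && turn == j)
        ;                       /* busy-wait while the other process
                                   is interested and it is its turn */
}

void leave_region(int i) {
    flag[i] = 0;                /* no longer interested  */
}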

Peterson’s Solution preserves all three conditions :


 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


 It involves Busy waiting
 It is limited to 2 processes.

TestAndSet

TestAndSet is a hardware solution to the synchronization problem. In TestAndSet, we
have a shared lock variable which can take either of two values:

0 – Unlock
1 – Lock

Before entering the critical section, a process inquires about the lock. If it is locked,
it keeps waiting until the lock becomes free; if it is not locked, it takes the lock and
executes its critical section.
In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting cannot be
preserved.
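
The behaviour described above can be sketched in C as follows. Hardware provides TestAndSet
as a single atomic instruction; here C11's atomic_exchange is used as a portable stand-in
for it (an assumption of this sketch):

#include <stdatomic.h>

atomic_int lock = 0;            /* 0 = Unlock, 1 = Lock */

/* Atomically set the lock to 1 and return its previous value. */
int test_and_set(atomic_int *l) {
    return atomic_exchange(l, 1);
}

void acquire(void) {
    while (test_and_set(&lock) == 1)
        ;                       /* old value was 1: lock held, keep spinning */
}

void release(void) {
    atomic_store(&lock, 0);     /* free the lock */
}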

Semaphores

A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be
signaled by another thread. This is different from a mutex, which can be signaled only by
the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal, for process synchronization.
A semaphore is an integer variable which can be accessed only through the two
operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores
 Binary Semaphores: They can only be either 0 or 1. They are also known as
mutex locks, as they can provide mutual exclusion. All the processes can share the
same mutex semaphore, which is initialized to 1. A process then waits until the
semaphore's value becomes 1, sets it to 0, and starts its critical section. When it
completes its critical section, it resets the value of the mutex semaphore to 1 so
that some other process can enter its critical section.
 Counting Semaphores: They can have any value and are not restricted over a
certain domain. They can be used to control access to a resource that has a
limitation on the number of simultaneous accesses. The semaphore can be
initialized to the number of instances of the resource. Whenever a process wants
to use that resource, it checks if the number of remaining instances is more than
zero, i.e., the process has an instance available. Then, the process can enter its
critical section thereby decreasing the value of the counting semaphore by 1.
After the process is over with the use of the instance of the resource, it can leave
the critical section thereby adding 1 to the number of available instances of the
resource.
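
Conceptually, the wait() and signal() operations behave like the C sketch below. This is
purely illustrative: in a real semaphore each operation is executed atomically, and a
waiting process is blocked and queued rather than busy-waiting as shown here:

typedef struct {
    int value;                  /* number of available resource instances */
} semaphore;

void wait_op(semaphore *S) {    /* also known as P() or down() */
    while (S->value <= 0)
        ;                       /* no instance free: wait      */
    S->value--;                 /* claim one instance          */
}

void signal_op(semaphore *S) {  /* also known as V() or up()   */
    S->value++;                 /* release one instance        */
}

Initializing value to 1 gives a binary semaphore; initializing it to the number of resource
instances gives a counting semaphore.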

Classical Problems of Synchronization

Classical problems of Synchronization with Semaphore Solution

We will see a number of classical problems of synchronization as examples of a large class
of concurrency-control problems. In our solutions to the problems, we use semaphores for
synchronization, since that is the traditional way to present such solutions. However,
actual implementations of these solutions could use mutex locks in place of binary
semaphores. The classical problems of synchronization are as follows:

1. Bounded-Buffer problem
2. Sleeping barber problem
3. Dining Philosophers problem
4. Readers and writers problem

Bounded-Buffer problem

Also known as the Producer-Consumer problem. In this problem, there is a buffer of
n slots, and each slot is capable of storing one unit of data. Two processes operate
on the buffer: the Producer and the Consumer. The Producer tries to insert data and
the Consumer tries to remove data.
If the processes run simultaneously without synchronization, they will not yield the
expected output.
The solution to this problem is to create two semaphores, one full and the other empty,
to keep track of the concurrent processes.

Sleeping Barber Problem

This problem is based on a hypothetical barbershop with one barber.

When there are no customers, the barber sleeps in his chair. When a customer enters, he
wakes the barber and sits in the customer chair. If no chairs are empty, customers wait
in the waiting queue.

Dining Philosopher’s problem

This problem states that K philosophers sit around a circular table with one chopstick
placed between each pair of philosophers. A philosopher is able to eat if he can pick up
the two chopsticks adjacent to him.
This problem deals with the allocation of limited resources.
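
A naïve semaphore-based sketch in C is shown below (K = 5 and the pthread wrapper are
illustrative assumptions). Note that this simple version can deadlock if every philosopher
picks up their left chopstick at the same time; complete solutions break this symmetry:

#include <pthread.h>
#include <semaphore.h>

#define K 5

sem_t chopstick[K];   /* one binary semaphore per chopstick, each initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;                    /* philosopher number 0..K-1  */
    while (1) {
        /* think */
        sem_wait(&chopstick[i]);            /* pick up left chopstick     */
        sem_wait(&chopstick[(i + 1) % K]);  /* pick up right chopstick    */
        /* eat */
        sem_post(&chopstick[(i + 1) % K]);  /* put down right chopstick   */
        sem_post(&chopstick[i]);            /* put down left chopstick    */
    }
    return NULL;
}

int main() {
    int ids[K];
    pthread_t t[K];
    for (int i = 0; i < K; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < K; i++) { ids[i] = i; pthread_create(&t[i], NULL, philosopher, &ids[i]); }
    for (int i = 0; i < K; i++) pthread_join(t[i], NULL);
    return 0;
}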

Readers and Writers Problem

This problem occurs when many threads of execution try to access the same shared
resources at a time. Some threads may read, and some may write. In this scenario, we
may get faulty outputs.

Different Models of Interprocess Communication


Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other. This communication could involve a process
letting another process know that some event has occurred or transferring of data from one
process to another.
The models of interprocess communication are as follows −
Shared Memory Model
Shared memory is the memory that can be simultaneously accessed by multiple processes.
This is done so that the processes can communicate with each other. All POSIX systems, as
well as Windows operating systems use shared memory.
Advantage of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the message
passing model on the same machine.
Disadvantages of Shared Memory Model
Some of the disadvantages of shared memory model are as follows −

 All the processes that use the shared memory model need to make sure that they are
not writing to the same memory location.
 Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.

Message Passing Model


Multiple processes can read and write data to the message queue without being connected to
each other. Messages are stored on the queue until their recipient retrieves them. Message
queues are quite useful for interprocess communication and are used by most operating
systems.
Advantage of Message Passing Model
The message passing model is much easier to implement than the shared memory model.
Disadvantage of Message Passing Model
The message passing model has slower communication than the shared memory model
because the connection setup takes time.
A diagram that demonstrates the shared memory model and message passing model is given
as follows –
Inter Process Communication (IPC)

A process can be of two types:


 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes, while a
co-operating process can be affected by other executing processes. Though one might think
that processes running independently will execute very efficiently, in reality there are
many situations where the co-operative nature of processes can be utilized to increase
computational speed, convenience, and modularity. Inter-process communication (IPC) is a
mechanism that allows processes to communicate with each other and synchronize their
actions. The communication between these processes can be seen as a method of co-operation
between them. Processes can communicate with each other through both:

1. Shared Memory
2. Message passing

An operating system can implement both methods of communication. First, we will discuss
the shared memory methods of communication and then message passing. Communication
between processes using shared memory requires processes to share some variable, and it
completely depends on how the programmer will implement it. One way of communication
using shared memory can be imagined like this: Suppose process1 and process2 are executing
simultaneously, and they share some resources or use some information from another
process. Process1 generates information about certain computations or resources being used
and keeps it as a record in shared memory. When process2 needs to use the shared
information, it will check in the record stored in shared memory and take note of the
information generated by process1 and act accordingly. Processes can use shared memory for
extracting information as a record from another process as well as for delivering any specific
information to other processes.

Let’s discuss an example of communication between processes using the shared memory
method.

i) Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the
Consumer consumes that item. The two processes share a common space or memory location
known as a buffer where the item produced by the Producer is stored and from which the
Consumer consumes the item if needed. There are two versions of this problem: the first is
known as the unbounded buffer problem, in which the Producer can keep producing items with
no limit on the buffer size; the second is known as the bounded buffer problem, in which the
Producer can produce up to a certain number of items before it starts waiting for the
Consumer to consume them. We will discuss the bounded buffer problem.
First, the Producer and the Consumer will share some common memory, then the producer
will start producing items. If the total produced item is equal to the size of the buffer, the
producer will wait to get it consumed by the Consumer. Similarly, the consumer will first
check for the availability of the item. If no item is available, the Consumer will wait for the
Producer to produce it. If there are items available, Consumer will consume them.

Shared Data between the two Processes
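
As an illustration, the shared data and the produce/consume logic described above could be
sketched in C as below. The struct layout, index names, and busy-waiting are assumptions
made for this sketch; note that with this scheme the buffer holds at most BUF_SIZE - 1
items, since in == out is reserved to mean "empty":

#define BUF_SIZE 10

/* Data placed in memory shared by the Producer and the Consumer
   (obtained, for example, with shmget/shmat or shm_open/mmap). */
struct shared_data {
    int buffer[BUF_SIZE];
    int in;    /* next free slot, advanced by the Producer */
    int out;   /* next full slot, advanced by the Consumer */
};

void produce(struct shared_data *s, int item) {
    while ((s->in + 1) % BUF_SIZE == s->out)
        ;                                   /* buffer full: wait for the Consumer  */
    s->buffer[s->in] = item;
    s->in = (s->in + 1) % BUF_SIZE;
}

int consume(struct shared_data *s) {
    while (s->in == s->out)
        ;                                   /* buffer empty: wait for the Producer */
    int item = s->buffer[s->out];
    s->out = (s->out + 1) % BUF_SIZE;
    return item;
}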

ii) Message Passing Method

Now we will start our discussion of communication between processes via message passing. In
this method, processes communicate with each other without using any kind of shared memory.
If two processes p1 and p2 want to communicate with each other, they proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable. A fixed size is easy for the OS designer but
complicated for the programmer, while a variable size is easy for the programmer but
complicated for the OS designer. A standard message has two parts: a header and a body.
The header is used for storing the message type, destination ID, source ID, message length,
and control information. The control information covers things like what to do if the
process runs out of buffer space, the sequence number, and the priority. Generally, messages
are sent in FIFO style.
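
As one concrete realization of these primitives, the sketch below uses POSIX message queues,
where mq_send and mq_receive play the roles of send(message) and receive(message). The queue
name and message size are arbitrary choices for this sketch, and on Linux the program is
linked with -lrt:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_NAME "/demo_queue"
#define MSG_SIZE   64

int main() {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = MSG_SIZE };

    /* Establish the communication link (create the queue if needed). */
    mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(message): the OS queues messages in FIFO order per priority. */
    const char *msg = "hello from p1";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(message): blocks until a message is available. */
    char buf[MSG_SIZE];
    mq_receive(mq, buf, MSG_SIZE, NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink(QUEUE_NAME);
    return 0;
}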

Process Scheduling in Operating System


Process scheduling is an important part of multiprogramming operating systems. It is the
process of removing the running task from the processor and selecting another task for
processing. It schedules a process into different states like ready, waiting, and running.

Categories of Scheduling in OS

There are two categories of scheduling:

1. Non-preemptive: In non-preemptive scheduling, a resource can't be taken from a
process until the process completes execution. Resources are switched only when the
running process terminates or moves to a waiting state.

2. Preemptive: In preemptive scheduling, the OS allocates the resources to a
process for a fixed amount of time. During execution, a process may be switched from
the running state to the ready state or from the waiting state to the ready state.
This switching occurs because the CPU may give priority to other processes and
replace the currently running process with a higher-priority one.

Process Scheduling Queues


There are multiple states a process has to go through during execution. The OS maintains a
separate queue for each state along with the process control blocks (PCB) of all processes.
The PCB moves to a new state queue, after being unlinked from its current queue, when the
state of a process changes.

These process scheduling queues are:

1. Job queue: This stores all the processes in the system.

2. Ready queue: This stores the set of all processes residing in main memory, ready and
waiting to execute. Any new process is placed in the ready queue.
3. Device queue: This queue consists of the processes blocked due to the unavailability of
an I/O device.
The OS uses different policies to manage each queue, and the OS scheduler decides how to
move processes between the ready queue and the run queue, which allows only one entry per
processor core on the system.

 A new process first goes in the Ready queue, where it waits for execution or to be
dispatched.
 The CPU gets allocated to one of the processes for execution.
 The process issues an I/O request, after which an OS places it in the I/O queue.
 The process then creates a new subprocess and waits for its termination.
 If the process is removed forcefully, it creates an interrupt. Once this interrupt is
handled, the process goes back to the ready queue.

Objectives of Process Scheduling in OS

Following are the objectives of process scheduling:

1. It maximizes the number of interactive users within acceptable response times.
2. It achieves a balance between response time and utilization.
3. It ensures that no process is postponed indefinitely and enforces priorities.
4. It gives preference to processes holding key resources.
