Unit 3 Process Management
Process:
Definition of Process:
A process is a program in execution. It is more than the program code (known as the text section); this concept applies across all operating systems, because every task the operating system performs needs a process to carry it out.
A process changes state as it executes. The state of a process is defined by the current activity of the process.
Each process may be in any one of the following states −
New − The process is being created.
Running − In this state the instructions are being executed.
Waiting − The process is in waiting state until an event occurs like I/O operation completion or
receiving a signal.
Ready − The process is waiting to be assigned to a processor.
Terminated − The process has finished execution.
Explanation:
Step 1 − Whenever a new process is created, it is admitted into the ready state.
Step 2 − The dispatcher selects one of the ready processes and moves it to the running state.
Step 3 − A running process moves to the waiting state when it requests I/O or waits for an event; if a higher-priority process becomes ready or its time slice expires, the uncompleted process is instead preempted and returned to the ready state.
Step 4 − Whenever the I/O or event completes, an interrupt moves the process from the waiting state back to the ready state.
Step 5 − Whenever a process finishes execution in the running state, it exits to the terminated state, which is the completion of the process.
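The five states and the transitions in the steps above can be sketched as a small state machine. A minimal Python sketch; the event names such as "admit" and "dispatch" are illustrative labels, not taken from any real kernel:

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions in the five-state model; event names are illustrative
TRANSITIONS = {
    (State.NEW, "admit"): State.READY,          # Step 1: new process admitted
    (State.READY, "dispatch"): State.RUNNING,   # Step 2: dispatcher selects it
    (State.RUNNING, "preempt"): State.READY,    # Step 3: preempted by scheduler
    (State.RUNNING, "io_wait"): State.WAITING,  # Step 3: waits for I/O or event
    (State.WAITING, "io_done"): State.READY,    # Step 4: I/O completion interrupt
    (State.RUNNING, "exit"): State.TERMINATED,  # Step 5: finished execution
}

def step(state, event):
    """Return the next state, or raise ValueError on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} from {state.name}")
```

Note that there is no direct transition from waiting to running: a process that finishes its I/O must pass through the ready state and be dispatched again.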
Process Control Block:
A Process Control Block (PCB) is a data structure that contains the information related to a process. The process control block is also known as a task control block, an entry of the process table, etc.
It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also records the current state of each process for the operating system.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process management.
The following are the data items −
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This is the unique identification number (process ID) of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the process.
Registers
This specifies the registers that are used by the process. They may include accumulators, index
registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process.
CPU Scheduling Information
The CPU scheduling information contained in the PCB includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment tables depending
on the memory system used. It also contains the value of the base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.
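The PCB fields listed above can be summarized as a simple record. A hypothetical Python sketch; the field names and types are illustrative, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical PCB layout mirroring the fields described above."""
    pid: int                                          # process number
    state: str = "new"                                # process state
    program_counter: int = 0                          # address of next instruction
    registers: dict = field(default_factory=dict)     # saved register contents
    open_files: list = field(default_factory=list)    # list of open files
    priority: int = 0                                 # CPU scheduling information
    base_register: int = 0                            # memory management information
    limit_register: int = 0
    io_devices: list = field(default_factory=list)    # I/O status information
    cpu_time_used: float = 0.0                        # accounting information

# A new process starts in the "new" state with an empty context
p = PCB(pid=42)
```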
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the normal user access.
This is done because it contains important process information. Some of the operating systems
place the PCB at the beginning of the kernel stack for the process as it is a safe location.
Context Switching
Context Switching involves storing the context or state of a process so that it can be reloaded
when required and execution can be resumed from the same point as earlier. This is a feature of a
multitasking operating system and allows a single CPU to be shared by multiple processes.
Context switching works as follows − initially Process 1 is running. Process 1 is switched out and Process 2 is switched in because of an interrupt or a system call. Context switching involves saving the state of Process 1 into PCB1 and loading the state of Process 2 from PCB2. After some time a context switch occurs again: Process 2 is switched out and Process 1 is switched in, which involves saving the state of Process 2 into PCB2 and loading the state of Process 1 from PCB1.
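The save/restore sequence can be simulated with plain dictionaries standing in for the CPU and the two PCBs. This is a toy sketch of the bookkeeping only, not how a real kernel performs a context switch:

```python
def context_switch(cpu, old_pcb, new_pcb):
    """Save the running process's context into its PCB, then load the next one."""
    # Save the outgoing process's state (e.g. Process 1 into PCB1)
    old_pcb["pc"] = cpu["pc"]
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["state"] = "ready"
    # Load the incoming process's saved state (e.g. Process 2 from PCB2)
    cpu["pc"] = new_pcb["pc"]
    cpu["registers"] = dict(new_pcb["registers"])
    new_pcb["state"] = "running"

# Process 1 is on the CPU; Process 2 waits with a previously saved context
cpu = {"pc": 100, "registers": {"r0": 1}}
pcb1 = {"pc": 100, "registers": {"r0": 1}, "state": "running"}
pcb2 = {"pc": 200, "registers": {"r0": 9}, "state": "ready"}

context_switch(cpu, pcb1, pcb2)   # switch out Process 1, switch in Process 2
```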
Context Switching Triggers:
There are three major triggers for context switching. These are given as follows −
Multitasking: In a multitasking environment, a process is switched out of the CPU so another
process can be run. The state of the old process is saved and the state of the new process is
loaded. On a pre-emptive system, processes may be switched out by the scheduler.
Interrupt Handling: The hardware switches a part of the context when an interrupt occurs. This
happens automatically. Only some of the context is changed to minimize the time required to
handle the interrupt.
User and Kernel Mode Switching: A context switch may take place when a transition between
the user mode and kernel mode is required in the operating system.
Inter process Communication (IPC) :
Inter process Communication (IPC) is a mechanism which allows the exchange of data between
processes. It enables resource and data sharing between the processes without interference.
Processes that execute concurrently in the operating system may be either independent processes
or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the system. Any process that shares data with another process is a cooperating process.
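On POSIX systems, a new process can be created with fork(). A minimal Python sketch (Linux/macOS only) in which parent and child run as largely independent processes, cooperating only through the exit status the parent collects:

```python
import os

def fork_demo():
    pid = os.fork()
    if pid == 0:
        # Child: an independent process with its own copy of the address space;
        # changes it makes are not visible to the parent.
        os._exit(7)                      # the exit status is all it passes back
    # Parent: waits for the child and collects its exit status
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```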
Reasons for Process Cooperation:
There are several reasons for process cooperation, which are as follows −
Information sharing − Several users are interested in the same piece of information. We must
provide an environment to allow concurrent access to such information.
Computation speedup − If we want a particular task to run faster, we must break it into subtasks, each of which can execute in parallel with the others. The speedup can be achieved only if the computer has multiple processing elements.
Modularity − A system can be constructed in a modular fashion dividing the system functions
into separate processes or threads.
Convenience − An individual user may work on many tasks at the same time. For example, a
user may be editing, compiling, and printing in parallel.
Cooperating processes require an IPC mechanism that allows them to exchange data and information.
IPC Models:
There are two fundamental models of IPC which are as follows −
Shared memory
A region of memory that is shared by cooperating processes is established. Processes can then
exchange information by reading and writing data to the shared region.
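A small sketch of the shared-memory model using Python's multiprocessing.shared_memory together with fork() (Linux/macOS only). The parent establishes the shared region, and the child writes into it through the inherited mapping:

```python
import os
from multiprocessing import shared_memory

def shared_memory_demo():
    # Establish a region of memory shared by the cooperating processes
    shm = shared_memory.SharedMemory(create=True, size=4)
    try:
        shm.buf[0] = 0
        pid = os.fork()
        if pid == 0:
            shm.buf[0] = 42        # child writes through the inherited mapping
            os._exit(0)
        os.waitpid(pid, 0)         # parent waits, then reads the child's write
        return shm.buf[0]
    finally:
        shm.close()
        shm.unlink()               # remove the shared region when done
```

Note that no system call is needed for the actual data exchange: once the region is mapped, reads and writes are ordinary memory accesses, which is why shared memory is the faster model for large volumes of data.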
Message passing
Communication takes place by means of messages exchanged between the cooperating
processes. Message passing is useful for exchanging smaller amounts of data, because no conflicts over shared data need to be avoided. It is easier to implement than shared memory, but since every exchange is made through system calls, it involves more time-consuming intervention by the kernel.
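A minimal sketch of message passing using a pipe between a parent and a forked child (POSIX only). Every message crosses the kernel, which copies it between the two address spaces:

```python
import os

def pipe_demo():
    r, w = os.pipe()               # kernel-managed channel between processes
    pid = os.fork()
    if pid == 0:                   # child: the sending process
        os.close(r)
        os.write(w, b"hello from child")   # send() via a system call
        os.close(w)
        os._exit(0)
    os.close(w)                    # parent: the receiving process
    msg = os.read(r, 1024)         # receive() via a system call
    os.close(r)
    os.waitpid(pid, 0)
    return msg
```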
What is Thread?
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares some information with its peer threads, such as the code segment, the data
segment, and open files. When one thread alters a data segment memory item, all other
threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. They represent a software approach to
improving operating system performance by reducing the overhead of creating and
switching between full processes.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared-memory multiprocessors.
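Because peer threads share the data segment, two threads can update the same variable. A minimal Python sketch using the standard threading module; the lock serializes the shared read-modify-write so no updates are lost:

```python
import threading

counter = 0                      # lives in the data segment shared by all threads
lock = threading.Lock()

def work(increments):
    """Each thread increments the same shared counter."""
    global counter
    for _ in range(increments):
        with lock:               # the lock serializes the read-modify-write
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for every flow of control to finish
```

Without the lock, interleaved updates could be lost, which is exactly why cooperating threads need synchronization over shared data.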
Types of Thread
Threads are implemented in following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating-system-managed threads implemented in the kernel, the
core of the operating system.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The
thread library contains code for creating and destroying threads, for passing message and
data between threads, for scheduling thread execution and for saving and restoring thread
contexts. The application starts with a single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking, so when one user-level thread blocks, the entire process blocks.
A multithreaded application cannot take advantage of multiprocessing, because the kernel schedules the process as a single unit.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating system.
Any application can be programmed to be multithreaded. All of the threads within an
application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.
Advantages
Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors, and a
blocking system call need not block the entire process. There are three types of
multithreading models −
Many to many relationship.
Many to one relationship.
One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.
In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor machine. This model
provides the best level of concurrency: when a thread performs a blocking system call, the
kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking
system call, the entire process is blocked. Only one thread can access the kernel at a
time, so multiple threads are unable to run in parallel on multiprocessors.
When the operating system does not support kernel threads, user-level thread libraries use
the many-to-one model.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This
model provides more concurrency than the many-to-one model. It also allows another thread
to run when a thread makes a blocking system call, and it supports multiple threads executing
in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
Difference between User-Level & Kernel-Level Thread
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; kernel threads are created and supported directly by the operating system.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. Multi-threaded applications using user-level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.