OS Module 2.1 Process Management

The document provides an overview of process management in operating systems, detailing the concept of processes, their states, and the role of the Process Control Block (PCB). It discusses process scheduling, including types of schedulers, and the operations that can be performed on processes such as creation, preemption, blocking, and termination. Additionally, it covers interprocess communication, highlighting the importance of cooperation among processes and the mechanisms for communication.

 Process Concept

 Process Scheduling

 Operations on Processes

 Interprocess Communication
 To introduce the notion of a process -- a program in
execution, which forms the basis of all computation.

 To describe the various features of processes,


including scheduling, creation and termination, and
communication.
 Process management deals with several issues:

◦ What are the units of execution?

◦ How are those units of execution represented in the OS?

◦ How is work scheduled on the CPU?

◦ What are the possible execution states, and how does the
system move from one to another?
Process concept

➢ An operating system executes a variety of programs:


◦ Batch system – jobs

◦ Time-shared systems – user programs or tasks

Process
o A process is a program in execution.
o Process execution must progress in a sequential fashion.

o Its current activity is indicated by the PC (Program Counter)
and the contents of the processor's registers.
Process concept

➢ A program is a passive entity; a process is an active entity.

➢ A program becomes a process when its executable file is loaded
into memory.

➢ Execution of a program is started via GUI mouse clicks,
command-line entry of its name, etc.

➢ One program can be several processes.


➢ Consider multiple users executing the same program.
Process memory is divided into four sections as shown in the
figure below:

• The stack is used to store local variables, function parameters,
function return values, return addresses, etc.

• The heap is used for dynamic memory allocation.

• The data section stores global and static variables.

• The text section comprises the compiled program code.

• Note that there is free space between the stack and the heap.
The two regions grow toward each other into this free space:
the stack grows downward and the heap grows upward.
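As an illustrative sketch (variable and function names here are arbitrary), the short C program below marks which section each object lives in:

#include <stdio.h>
#include <stdlib.h>

int global_var = 10;        /* data section (initialized data)        */
static int call_count;      /* data section (uninitialized/BSS)       */

int add(int a, int b)       /* the compiled code of add() and main()  */
{                           /* lives in the text section              */
    int sum = a + b;        /* local variable: allocated on the stack */
    return sum;
}

int main(void)
{
    int local_var = 5;                    /* stack                    */
    int *heap_var = malloc(sizeof(int));  /* the block pointed to is  */
    if (heap_var == NULL)                 /* allocated on the heap    */
        return 1;
    *heap_var = add(local_var, global_var);
    call_count++;
    printf("result = %d, calls = %d\n", *heap_var, call_count);
    free(heap_var);
    return 0;
}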
 As a process executes, it changes state.

 A Process has 5 states. Each process may be in one of the


following states –

◦ New: The process is being created.

◦ Running: Instructions are being executed.

◦ Waiting: The process is waiting for some event to occur.

◦ Ready: The process is waiting to be assigned to a processor.

◦ Terminated: The process has finished execution.


Processes move from state to state as a result of actions they
perform (e.g., system calls), OS actions (rescheduling), and
external actions (interrupts).
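The five states can be modeled directly in code. The small C sketch below is only illustrative (the enum and the sample life cycle are not part of any real OS API); it simply prints one typical sequence of transitions:

#include <stdio.h>

enum process_state { NEW, READY, RUNNING, WAITING, TERMINATED };

static const char *state_name(enum process_state s)
{
    static const char *names[] = { "new", "ready", "running",
                                   "waiting", "terminated" };
    return names[s];
}

int main(void)
{
    /* A typical life cycle: admitted, dispatched, blocks on I/O,
       becomes ready again, runs once more, and finally exits.    */
    enum process_state path[] = { NEW, READY, RUNNING, WAITING,
                                  READY, RUNNING, TERMINATED };
    for (unsigned i = 0; i < sizeof path / sizeof path[0]; i++)
        printf("-> %s\n", state_name(path[i]));
    return 0;
}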
 At any time, there are many processes in the system, each in
its particular state.

 The OS must have a data structure representing each process.
This data structure is called the Process Control Block (PCB).

 The PCB contains all of the info about a process.

 The PCB is where the OS keeps all of a process’ hardware


execution state (PC, SP, registers) when the process is not
running.
For each process there is a Process
Control Block (PCB), which stores
the process-specific information as
shown below –
 Process state

 Program counter

 CPU registers

 CPU scheduling information

 Memory-management information

 Accounting information

 I/O status information


Process State – The state of the process may be new, ready,
running, waiting, and so on.

Program counter – The counter indicates the address of the


next instruction to be executed for this process.

CPU registers - The registers vary in number and type,


depending on the computer architecture. They include
accumulators, index registers, stack pointers, and general-
purpose registers.

Along with the program counter, this state information must be


saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
CPU scheduling information- This information includes a
process priority, pointers to scheduling queues, and any other
scheduling parameters.

Memory-management information – This include information


such as the value of the base and limit registers, the page
tables, or the segment tables.

Accounting information – This information includes the


amount of CPU and real time used, time limits, account
numbers, job or process numbers, and so on.

I/O status information – This information includes the list of


I/O devices allocated to the process, a list of open files, and so
on.

The PCB simply serves as the repository for any information


that may vary from process to process.
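A PCB can be pictured as a C structure with one field per category listed above. The sketch below is purely illustrative; real kernels (for example, Linux's task_struct) use different field names and hold far more information:

enum pcb_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier                 */
    enum pcb_state  state;            /* process state                      */
    unsigned long   program_counter;  /* address of the next instruction    */
    unsigned long   registers[16];    /* saved CPU registers                */
    int             priority;         /* CPU-scheduling information         */
    unsigned long   base, limit;      /* memory-management information      */
    unsigned long   cpu_time_used;    /* accounting information             */
    int             open_files[16];   /* I/O status: open file descriptors  */
    struct pcb     *next;             /* link used by the scheduling queues */
};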
 When a process is running, its program counter, stack pointer,
registers, etc., are loaded on the CPU (i.e., the processor
hardware registers contain the current values).

 When the OS stops running a process, it saves the current


values of those registers into the PCB for that process.

 When the OS is ready to start executing a new process, it


loads the hardware registers from the values stored in that
process’ PCB.

 The process of switching the CPU from one process to


another is called a context switch. Timesharing systems
may do 100s or 1000s of context switches a second!
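The effect of a context switch can be sketched as copying register state between the CPU and the two PCBs involved. The code below is only a user-level simulation with hypothetical types; real kernels perform the save/restore in a few lines of architecture-specific assembly:

#include <string.h>

struct context { unsigned long pc, sp, regs[16]; }; /* hardware execution state */
struct pcb     { int pid; struct context ctx; };    /* simplified PCB           */

static struct context cpu;  /* stands in for the real CPU registers here */

/* Save the running process's registers into its PCB, then load the
   next process's saved registers "onto the CPU".                    */
void context_switch(struct pcb *old, struct pcb *next)
{
    memcpy(&old->ctx, &cpu, sizeof cpu);   /* save state of the old process */
    memcpy(&cpu, &next->ctx, sizeof cpu);  /* restore state of the new one  */
}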
Here are the important objectives of process scheduling:

• Maximize the number of interactive users within acceptable


response times.

• Achieve a balance between response and utilization.

• Avoid indefinite postponement and enforce priorities.

• It should also give preference to the processes holding the

key resources.
• Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and the
selection of another process on the basis of a particular
strategy.

• Process scheduling is an essential part of multiprogramming
operating systems.

• Maximize CPU use, quickly switch processes onto CPU for time
sharing.

• The OS maintains all PCBs in Process Scheduling Queues.

• The OS maintains a separate queue for each of the process states


and PCBs of all processes in the same execution state are placed in
the same queue. When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its new state queue.
 The Operating System maintains the following important process
scheduling queues:

◦ Job queue – set of all processes in the system.

◦ Ready queue – set of all processes residing in main memory, ready


and waiting to execute. A new process is always put in this queue.

◦ Device queues – set of processes waiting for an I/O device.

◦ Processes migrate among the various queues.


Ready Queue And Various I/O Device Queues
 PCBs are data structures, dynamically allocated in OS
memory.

 When a process is created, a PCB is allocated to it,


initialized, and placed on the correct queue.

 As the process computes, its PCB moves from queue to


queue.

 When the process is terminated, its PCB is deallocated.
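A scheduling queue is typically just a linked list of PCBs. The following minimal sketch (with a deliberately simplified PCB) shows PCBs being enqueued on a ready queue and then dequeued in FIFO order, mirroring the lifecycle described above:

#include <stdio.h>
#include <stdlib.h>

struct pcb {
    int pid;
    struct pcb *next;   /* link used by whichever queue holds this PCB */
};

struct queue { struct pcb *head, *tail; };

/* Append a PCB to the tail of a queue (e.g., the ready queue). */
void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);        /* new process enters the ready queue */
    }
    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {  /* "dispatch" in FIFO order */
        printf("dispatching pid %d\n", p->pid);
        free(p);                   /* PCB deallocated at termination */
    }
    return 0;
}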


In the diagram,

• a rectangle represents a queue,

• a circle denotes a resource, and

• an arrow indicates the flow of the process.
1. Every new process is first put in the ready queue. It waits in the ready
queue until it is selected for execution, or dispatched.

2. One of the processes is allocated the CPU and is executing.

3. The process may issue an I/O request.

4. It is then placed in an I/O queue.

5. The process may create a new sub-process.

6. The process may wait for the sub-process's termination.

7. The process may be removed forcibly from the CPU as a result of an
interrupt. Once the interrupt is handled, it is sent back to the ready queue.
 Schedulers are special system software which handle
process scheduling in various ways.

 Their main task is to select the jobs to be submitted into


the system and to decide which process to run.

 Schedulers are of three types −


Long-Term Scheduler

Short-Term Scheduler

Medium-Term Scheduler
 Long-term scheduler (or job scheduler) – selects jobs from
the job pool (on secondary storage, i.e., disk) and loads them
into memory.

Selects which processes should be brought into the ready


queue.

The primary objective of the job scheduler is to provide a


balanced mix of jobs, such as I/O bound and CPU bound.

It also controls the degree of multiprogramming.


Degree of multiprogramming: the number of processes present in
memory, i.e., how many processes the system can accommodate
efficiently.

If the degree of multiprogramming is stable, then the


average rate of process creation must be equal to the
average departure rate of processes leaving the system.

 On some systems, the long-term scheduler may be absent or
minimal.

 Time-sharing operating systems have no long-term
scheduler. When a process changes state from new to ready,
it is the long-term scheduler that performs this transition.
 Short-term scheduler(or CPU scheduler) – selects which
process should be executed next and allocates CPU.

 Its main objective is to increase system performance in


accordance with the chosen set of criteria.

 CPU scheduler selects a process among the processes that


are ready to execute and allocates CPU to one of them.

The short-term scheduler, also known as the dispatcher, is

responsible for deciding which process to execute next
(ready to running state).
 Context switching is done by the dispatcher only.

A dispatcher does the following:


Switching context.
Switching to user mode.
Jumping to the proper location in the newly
loaded program.

 Short-term schedulers are faster than long-term


schedulers.
 The medium-term scheduler – removes a process from memory and
later reintroduces it into memory so that execution can continue.

 Medium-term scheduling is a part of swapping. It swaps a
process out of the ready queue and later swaps it back into
the ready queue.

 When system loads get high, this scheduler will swap one or
more processes out of the ready queue for a few seconds, in
order to allow smaller faster jobs to finish up quickly and clear
the system.
Advantages of medium-term scheduler –

To remove processes from memory and thus reduce the
degree of multiprogramming (number of processes in
memory).

To make a proper mix of processes (CPU-bound and
I/O-bound).

➢ Short-term scheduler is invoked very frequently
(milliseconds), so it must be fast.

➢ Long-term scheduler is invoked very infrequently (seconds,
minutes), so it may be slow.
➢ Processes can be described as either:

➢ I/O-bound process – spends more time doing I/O than


computations, many short CPU bursts.

➢ CPU-bound process – spends more time doing


computations; few very long CPU bursts.

An efficient scheduling system will select a good mix of CPU-


bound processes and I/O bound processes.

• If the scheduler selects more I/O-bound processes, then the
I/O queue will be full and the ready queue will be empty.

• If the scheduler selects more CPU-bound processes, then the
ready queue will be full and the I/O queue will be empty.

Time sharing systems employ a medium-term scheduler.


Comparison among Schedulers

S.N.  Long-Term Scheduler             Short-Term Scheduler            Medium-Term Scheduler

1     It is a job scheduler.          It is a CPU scheduler.          It is a process swapping
                                                                      scheduler.

2     Speed is lesser than short      Speed is fastest among the      Speed is in between both short
      term scheduler.                 other two.                      and long term scheduler.

3     It controls the degree of       It provides lesser control      It reduces the degree of
      multiprogramming.               over degree of                  multiprogramming.
                                      multiprogramming.

4     It is almost absent or          It is also minimal in time      It is a part of time sharing
      minimal in time sharing         sharing system.                 systems.
      system.

5     It selects processes from       It selects those processes      It can re-introduce the
      pool and loads them into        which are ready to execute.     process into memory and
      memory for execution.                                           execution can be continued.
There are many operations that can be performed on
processes. Some of these are

• process creation.

• process preemption.

• process blocking.

• process termination.
 A parent process creates child processes, which, in turn, create
other processes, forming a tree of processes.

 Processes need to be created in the system for different


operations. This can be done by the following events −

• User request for process creation.

• System initialization.

• Execution of a process creation system call by a running


process.

• Batch job initialization.


 Generally, a process is identified and managed via a process
identifier (pid).

 Resource sharing
◦ Parent and children share all resources.
◦ Children share subset of parent’s resources.
◦ Parent and child share no resources.

 Execution
◦ Parent and children execute concurrently.
◦ Parent waits until children terminate.
 Address space
◦ Child duplicate of parent.
◦ Child has a new program loaded into it.

 UNIX examples
◦ fork system call creates new process.
◦ exec system call used after a fork to replace the process’
memory space with a new program.
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {            /* error occurred */
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    else if (pid == 0) {      /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                    /* parent process */
        /* parent will wait for the child */
        wait(NULL);
        printf("Child Complete");
    }
    return 0;
}
Process Preemption
An interrupt mechanism is used in preemption: it suspends the
currently executing process, and the next process to execute is
determined by the short-term scheduler.

Preemption makes sure that all processes get some CPU time for
execution.

A diagram that demonstrates process preemption is as follows −


Process Blocking

The process is blocked if it is waiting for some event to occur.

This event is often I/O, since I/O operations are carried out by
devices and do not require the processor. After the event is
complete, the process again goes to the ready state.

A diagram that demonstrates process blocking is as follows −


Process Termination
• A process terminates when it finishes executing its last
statement and asks the operating system to delete it, by
using the exit( ) system call.

• All of the resources assigned to the process like


memory, open files, and I/O buffers, are deallocated by
the operating system.

• A process can cause the termination of another process by
using an appropriate system call.

• The parent process can terminate its child processes if it
knows their PIDs, as sketched below.
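A minimal sketch of this on a POSIX system: the parent forks a child, uses the kill() system call with the child's PID to request termination, and then reaps it with waitpid(). (The one-second sleep is just for illustration.)

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {           /* child: wait here until terminated */
        while (1)
            pause();
    }
    sleep(1);                 /* parent: knows the child's PID...  */
    kill(pid, SIGTERM);       /* ...so it can request termination  */
    waitpid(pid, NULL, 0);    /* reap the terminated child         */
    printf("child %d terminated by parent\n", (int)pid);
    return 0;
}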
Process Termination

A parent may terminate the execution of children for a variety


of reasons, such as:

• The child has exceeded its usage of the resources it

has been allocated.

• The task assigned to the child is no longer required.

• The parent is exiting, and the operating system


terminates all the children. This is called cascading
termination.
 Interprocess Communication – Processes executing in the system
may be either cooperating or independent processes.

• Independent Processes – processes that cannot affect


other processes or be affected by other processes executing in
the system.

• Cooperating Processes – processes that can affect other


processes or be affected by other processes executing in the
system.
Cooperation among processes is allowed for the following
reasons:

•Information Sharing - There may be several processes which


need to access the same file. So the information must be
accessible at the same time to all users.

•Computation speedup - Often a problem can be solved faster if it
can be broken down into sub-tasks, which are solved
simultaneously (particularly when multiple processors are
involved).
•Modularity - A system can be divided into cooperating modules
and executed by sending information among one another.

•Convenience - Even a single user can work on multiple tasks by

information sharing.

 Two models of IPC


• Shared memory

• Message passing
 Mechanism for processes to communicate and to synchronize their
actions.

 Message system – processes communicate with each other without


resorting to shared variables.

 IPC facility provides two operations:


◦ send(message) – message size fixed or variable.
◦ receive(message)

 If P and Q wish to communicate, they need to:


◦ establish a communication link between them
◦ exchange messages via send/receive

 Implementation of communication link


◦ physical (e.g., shared memory, hardware bus)
◦ logical (e.g., logical properties)
 Processes must name each other explicitly:
◦ send (P, message) – send a message to process P

◦ receive(Q, message) – receive a message from process Q

 Properties of communication link


◦ Links are established automatically.

◦ A link is associated with exactly one pair of communicating


processes.

◦ Between each pair there exists exactly one link.

◦ The link may be unidirectional, but is usually bi-directional.


 Messages are directed to and received from mailboxes (also
referred to as ports)
◦ Each mailbox has a unique id.

◦ Processes can communicate only if they share a mailbox.

 Properties of communication link


◦ Link established only if processes share a common mailbox.

◦ A link may be associated with many processes.

◦ Each pair of processes may share several communication


links.
◦ Link may be unidirectional or bi-directional.
Operations
create a new mailbox.
send and receive messages through mailbox.
destroy a mailbox.
Primitives are defined as:
send(A, message) – send a message to mailbox A.
receive(A, message) – receive a message from mailbox A.
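On POSIX systems, message queues (mq_open, mq_send, mq_receive, mq_unlink) behave very much like the mailboxes and primitives listed above. The sketch below is illustrative only: the mailbox name is made up, error handling is minimal, and on Linux it is linked with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

#define MAILBOX "/demo_mailbox"   /* hypothetical mailbox (queue) name */

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    char buf[128];

    /* create a new mailbox */
    mqd_t mq = mq_open(MAILBOX, O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send and receive messages through the mailbox */
    mq_send(mq, "hello", strlen("hello") + 1, 0);
    mq_receive(mq, buf, sizeof buf, NULL);
    printf("received: %s\n", buf);

    /* destroy the mailbox */
    mq_close(mq);
    mq_unlink(MAILBOX);
    return 0;
}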
Mailbox sharing
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes.
Allow only one process at a time to execute a receive
operation.
Allow the system to select arbitrarily the receiver. Sender is
notified who the receiver was.
 Message passing may be either blocking or non-blocking

 Blocking is considered synchronous


◦ Blocking send has the sender block until the message
is received.

◦ Blocking receive has the receiver block until a


message is available.

 Non-blocking is considered asynchronous


◦ Non-blocking send has the sender send the message
and continue.

◦ Non-blocking receive has the receiver receive a valid


message or null.
Buffering
 Queue of messages attached to the link; implemented in one
of three ways

1. Zero capacity – 0 messages


Sender must wait for receiver (rendezvous)

2. Bounded capacity – finite length of n messages


Sender must wait if link full.

3. Unbounded capacity – infinite length


Sender never waits.
Lab 4 Develop a C program which demonstrates
interprocess communication between a reader process
and a writer process. Use mkfifo, open, read, write and
close APIs in your program.
1. Writer Process (writerProcess()):

• Create a named pipe (FIFO) using mkfifo with a specified


name (FIFO_NAME).

• Open the FIFO in write-only mode (O_WRONLY) using


open.

• Write a message or data into the FIFO using write.

• Close the FIFO using close after writing the data.


2. Reader Process (readerProcess()):

• Open the same named pipe created by the writer in read


only mode (O_RDONLY) using open.

• Read data from the FIFO using read.

• Print the received data or message.

• Close the FIFO using close after reading the data.


3. Main Function (main()):
• Fork a child process using fork().

• In the parent process (writer), call the writerProcess()


function to write data to the named pipe.

• In the child process (reader), call the readerProcess()


to read data from the named pipe.

• Ensure proper handling of errors such as failed forks,


pipe creation, and data reading/writing failures.
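One possible implementation of the lab, following the steps above (the FIFO path and the message text are arbitrary choices):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define FIFO_NAME "/tmp/demo_fifo"   /* name of the named pipe (FIFO) */

void writerProcess(void)
{
    const char *msg = "Hello from the writer process";
    int fd = open(FIFO_NAME, O_WRONLY);   /* blocks until a reader opens  */
    if (fd < 0) { perror("open (writer)"); exit(1); }
    write(fd, msg, strlen(msg) + 1);      /* write the message            */
    close(fd);                            /* close after writing          */
}

void readerProcess(void)
{
    char buf[128];
    int fd = open(FIFO_NAME, O_RDONLY);   /* open the same FIFO read-only */
    if (fd < 0) { perror("open (reader)"); exit(1); }
    if (read(fd, buf, sizeof buf) > 0)    /* read and print the data      */
        printf("Reader received: %s\n", buf);
    close(fd);                            /* close after reading          */
}

int main(void)
{
    /* create the named pipe; ignore the error if it already exists */
    if (mkfifo(FIFO_NAME, 0666) < 0 && errno != EEXIST) {
        perror("mkfifo");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {          /* child process acts as the reader  */
        readerProcess();
        exit(0);
    }
    writerProcess();         /* parent process acts as the writer */
    wait(NULL);              /* wait for the reader to finish     */
    unlink(FIFO_NAME);       /* remove the FIFO                   */
    return 0;
}

Note that the parent's open() blocks until the child opens the FIFO for reading, so the two processes rendezvous on the pipe before any data is transferred.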
