Operating System 2 Unit Notes
Course Objectives:
Provide an introduction to operating system concepts (i.e., processes, threads,
scheduling, synchronization, deadlocks, memory management, file and I/O
subsystems, and protection)
Introduce the issues to be considered in the design and development of
operating systems
Introduce basic Unix commands, system call interface for process management,
interprocess communication and I/O in Unix
Course Outcomes:
Will be able to control access to a computer and the files that may be shared
Demonstrate knowledge of the components of a computer and their respective
roles in computing.
Ability to recognize and resolve user problems with standard operating
environments.
Gain practical knowledge of how programming languages, operating systems, and
architectures interact and how to use each effectively.
UNIT - I
Operating System - Introduction, Structures - Simple Batch, Multiprogrammed,
Time-shared, Personal Computer, Parallel, Distributed Systems, Real-Time
Systems, System components, Operating System services, System Calls
UNIT - II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management: fork, exit, wait, waitpid, exec
UNIT - III
Deadlocks - System Model, Deadlock Characterization, Methods for Handling
Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, and
Recovery from Deadlock
Process Management and Synchronization - The Critical Section Problem,
Synchronization Hardware, Semaphores, and Classical Problems of
Synchronization, Critical Regions, Monitors
Interprocess Communication Mechanisms: IPC between processes on a single
computer system, IPC between processes on different systems, using pipes,
FIFOs, message queues, shared memory.
UNIT - IV
Memory Management and Virtual Memory - Logical versus Physical Address
Space, Swapping, Contiguous Allocation, Paging, Segmentation, Segmentation
with Paging, Demand Paging, Page Replacement, Page Replacement Algorithms.
UNIT - V
File System Interface and Operations - Access methods, Directory Structure,
Protection, File System Structure, Allocation methods, Free-space Management.
Usage of open, create, read, write, close, lseek, stat, ioctl system calls.
TEXT BOOKS:
1. Operating System Principles - Abraham Silberschatz, Peter B. Galvin, Greg Gagne,
7th Edition,
John Wiley
2. Advanced programming in the UNIX environment, W.R. Stevens, Pearson
education.
REFERENCE BOOKS:
1. Operating Systems - Internals and Design Principles, William Stallings, Fifth
Edition, 2005, Pearson
Education/PHI
2. Operating System A Design Approach- Crowley, TMH.
3. Modern Operating Systems, Andrew S. Tanenbaum 2nd edition, Pearson/PHI
4. UNIX programming environment, Kernighan and Pike, PHI/ Pearson Education
5. UNIX Internals -The New Frontiers, U. Vahalia, Pearson Education.
UNIT-I
Operating System - Introduction, Structures - Simple Batch, Multiprogrammed,
Time-shared, Personal Computer, Parallel, Distributed Systems, Real-Time
Systems, System components, Operating System services, System Calls
INTRODUCTION
Evolution of Operating Systems
The evolution of operating systems closely tracks the development of computer
systems and the ways users interact with them. Here is a quick timeline of computing
systems over the past several decades.
Early Evolution
• 1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
• 1949: EDSAC and EDVAC
• 1949: BINAC - a successor to the ENIAC
• 1951: UNIVAC by Remington
• 1952: IBM 701
• 1956: The interrupt
• 1954-1957: FORTRAN was developed
Operating Systems - Late 1950s
By the late 1950s, operating systems had improved considerably and supported the
following features:
• Single-stream batch processing.
• Common, standardized input/output routines for device access.
• Program transition capabilities to reduce the overhead of starting a new job.
• Error recovery to clean up after a job terminated abnormally.
• Job control languages that allowed users to specify job definitions and
resource requirements.
Operating Systems - In 1960s
• 1961: The dawn of minicomputers
• 1962: Compatible Time-Sharing System (CTSS) from MIT
• 1963: Burroughs Master Control Program (MCP) for the B5000 system
• 1964: IBM System/360
• 1960s: Disks became mainstream
• 1966: Minicomputers got cheaper, more powerful, and really useful.
• 1967-1968: Mouse was invented.
• 1964 and onward: Multics
• 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.
Supported OS Features by 1970s
• Multi-user and multitasking systems were introduced.
• Dynamic address translation hardware and virtual machines came into the
picture.
• Modular architectures came into existence.
• Personal, interactive systems came into existence.
Accomplishments after 1970
• 1971: Intel announces the microprocessor
• 1972: IBM comes out with VM: the Virtual Machine Operating System
• 1973: UNIX 4th Edition is published
• 1973: Ethernet
• 1974: The Personal Computer Age begins
• 1974: Gates and Allen wrote BASIC for the Altair
• 1976: Apple II
• August 12, 1981: IBM introduces the IBM PC
• 1983: Microsoft begins work on MS-Windows
• 1984: Apple Macintosh comes out
• 1990: Microsoft Windows 3.0 comes out
• 1991: GNU/Linux
• 1992: The first Windows virus comes out
• 1993: Windows NT
• 2007: iOS
• 2008: Android OS
And as the research and development work continues, we are seeing new operating
systems being developed and existing ones getting improved and modified to enhance
the overall user experience, making operating systems fast and efficient like never
before.
Also, with the arrival of new devices like wearables, including smart watches,
smart glasses, VR gear, etc., the demand for unconventional operating systems
is also rising.
An operating system (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system
is a vital component of the system software in a computer system.
An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is software that performs all the basic tasks like file
management, memory management, process management, handling input and output,
and controlling peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating
System, VMS, OS/400, AIX, z/OS, etc.
DEFINITION
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
APPLICATIONS OF OPERATING SYSTEM
Following are some of the important activities that an Operating System performs −
• Security − By means of password and similar other techniques, it prevents
unauthorized access to programs and data.
• Control over system performance − Recording delays between request for a
service and response from the system.
• Job accounting − Keeping track of time and resources used by various jobs and
users.
• Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error detecting aids.
• Coordination between other software and users − Coordination and
assignment of compilers, interpreters, assemblers, and other software to the
various users of the computer systems.
FUNCTIONS OF AN OPERATING SYSTEM:
Following are some of the important functions of an operating system.
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users
Memory Management
Memory management refers to the management of primary memory or main memory.
Main memory is a large array of words or bytes where each word or byte has its own
address. Main memory provides fast storage that can be accessed directly by the CPU.
For a program to be executed, it must be in main memory. An operating system does
the following activities for memory management −
• Keeps track of primary memory, i.e., what parts of it are in use by whom and
what parts are not in use.
• In multiprogramming, the OS decides which process will get memory, when, and
how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been
terminated.
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor,
when, and for how much time. This function is called process scheduling. An operating
system does the following activities for processor management −
• Keeps track of the processor and the status of each process. The program
responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.
Device Management
An operating system manages device communication via the devices' respective
drivers. It does the following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
• Decides which process gets a device, when, and for how much time.
• Allocates devices efficiently.
• De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An operating system does the following activities for file management −
• Keeps track of information, location, uses, status, etc. These collective facilities
are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
OPERATING SYSTEMS-STRUCTURES
Operating systems have existed since the very first generation of computers, and they
keep evolving with time. In this chapter, we will discuss some of the important types of
operating systems which are most commonly used.
Batch operating system
The users of a batch operating system do not interact with the computer directly. Each
user prepares a job on an offline device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched
together and run as a group. The programmers leave their programs with the operator,
who then sorts the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
• Lack of interaction between the user and the job.
• CPU is often idle, because the speed of the mechanical I/O devices is slower
than the CPU.
• Difficult to provide the desired priority.
Time-sharing operating systems
Time-sharing is a technique that enables many people, located at various terminals,
to use a particular computer system at the same time. Time-sharing, or multitasking, is
a logical extension of multiprogramming. Processor time that is shared among multiple
users simultaneously is termed time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing
Systems is that in case of Multiprogrammed batch systems, the objective is to maximize
processor use, whereas in Time-Sharing Systems, the objective is to minimize response
time.
Multiple jobs are executed by the CPU by switching between them, but the switches
occur so frequently that each user receives an immediate response. For example, in
transaction processing, the processor executes each user program in a short burst or
quantum of computation. That is, if n users are present, each user gets a time quantum
in turn. When a user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user
with a small portion of time. Computer systems that were designed primarily as batch
systems have been modified into time-sharing systems.
Advantages of Timesharing operating systems are as follows −
• Provides the advantage of quick response.
• Avoids duplication of software.
• Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
• Problem of reliability.
• Question of security and integrity of user programs and data.
• Problem of data communication.
Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.
The processors communicate with one another through various communication lines
(such as high-speed buses or telephone lines). These are referred to as loosely coupled
systems or distributed systems. Processors in a distributed system may vary in size and
function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
• With the resource-sharing facility, a user at one site may be able to use the
resources available at another.
• Sites can speed up the exchange of data with one another, for example via
electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially
continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
Network operating System
A Network Operating System runs on a server and provides the server the capability to
manage data, users, groups, security, applications, and other networking functions. The
primary purpose of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local area network (LAN), a
private network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated into the
system.
• Remote access to servers is possible from different locations and types of
systems.
The disadvantages of network operating systems are as follows −
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.
Real Time operating System
A real-time system is defined as a data processing system in which the time interval
required to process and respond to inputs is so small that it can control its environment.
The time taken by the system to respond to an input and display the required updated
information is termed the response time. In this method, the response time is much
shorter than in online processing.
Real-time systems are used when there are rigid time requirements on the operation of
a processor or the flow of data, and they can be used as control devices in dedicated
applications. A real-time operating system must have well-defined, fixed time
constraints; otherwise the system will fail. Examples include scientific experiments,
medical imaging systems, industrial control systems, weapon systems, robots, air
traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time
systems, secondary storage is limited or missing and the data is stored in ROM. In these
systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over
other tasks and retains that priority until it completes. Soft real-time systems have more
limited utility than hard real-time systems. Examples include multimedia, virtual reality,
and advanced scientific projects like undersea exploration and planetary rovers.
OPERATING SYSTEM COMPONENTS
Components of Operating Systems
What are OS Components?
An operating system is a large and complex system that can be created only by
partitioning it into small pieces. Each piece should be a well-defined portion of the
system, with carefully defined inputs, outputs, and functions.
Although Mac, Unix, Linux, Windows, and other OS do not have the same structure,
most of the operating systems share similar OS system components like File, Process,
Memory, I/O device management.
• File Management
• Process Management
• I/O Device Management
• Network Management
• Main Memory management
• Secondary-Storage Management
• Security Management
• Other Important Activities
File Management
A file is a set of related information defined by its creator. It commonly represents
programs, in both source and object forms, and data. Data files can be numeric,
alphabetic, or alphanumeric.
Function of file management in OS:
The operating system performs the following important activities in connection with file
management:
• File and directory creation and deletion.
• Primitives for manipulating files and directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.
Process Management
The process management component is a procedure for managing the many processes
that run simultaneously on the operating system. Every software application program
has one or more processes associated with it when it is running.
For example, when you use a browser like Google Chrome, there is a process running
for that browser program. The OS also has many processes running, which perform
various functions.
All these processes are handled by process management, which keeps them running
efficiently, manages the memory allocated to them, and shuts them down when needed.
The execution of a process must be sequential: at any instant, at most one instruction
is executed on behalf of the process.
Functions of process management in OS:
The following are functions of process management.
• Process creation and deletion.
• Process suspension and resumption.
• Process synchronization.
• Process communication.
I/O Device Management
One important role of an operating system is to hide the variations of specific hardware
devices from the user.
Functions of I/O management in OS:
• It offers a buffer-caching system.
• It provides general device-driver code.
• It provides drivers for particular hardware devices.
• It knows the peculiarities of each specific device.
Network Management
Network management is the process of administering and managing computer
networks. It includes performance management, fault analysis, provisioning of networks,
and maintaining the quality of service.
A distributed system is a collection of computers/processors that do not share memory
or a clock. In this type of system, all the processors have their own local memory, and
the processors communicate with each other using different communication lines,
like fiber optics or telephone lines.
The computers in the network are connected through a communication network, which
can be configured in a number of different ways. With the help of network management,
the network can be fully or partially connected, which helps users to design routing and
connection strategies that overcome connection and security issues.
Functions of Network management:
• Distributed systems let you combine computing resources that vary in size and
function, including microprocessors, minicomputers, and many general-purpose
computer systems.
• A distributed system also offers the user access to the various resources the
network shares.
• Access to shared resources helps speed up computation and offers data
availability and reliability.
Main Memory management
Main memory is a large array of words or bytes, each with its own address. The memory
management process is conducted using a sequence of reads or writes of specific
memory addresses.
In order to execute a program, it must be mapped to absolute addresses and loaded
into memory. The selection of a memory management method depends on several
factors, but it is mainly based on the hardware design of the system; each algorithm
requires corresponding hardware support.
Main memory offers fast storage that can be accessed directly by the CPU. It is costly
and hence has a lower storage capacity. However, for a program to be executed, it
must be in main memory.
Functions of Memory management in OS:
An Operating System performs the following functions for Memory Management:
• It keeps track of primary memory.
• It determines what parts are in use by whom and what parts are not in use.
• In a multiprogramming system, the OS decides which process will get memory
and how much.
• It allocates memory when a process requests it.
• It de-allocates memory when a process no longer requires it or has been
terminated.
Secondary-Storage Management
The most important task of a computer system is to execute programs. These
programs, along with the data they access, must be in main memory during execution.
Main memory is too small to store all data and programs permanently, so the computer
system offers secondary storage to back up main memory. Today, modern computers
use hard drives/SSDs as the primary storage for both programs and data. However,
secondary-storage management also works with other storage devices, like USB flash
drives and CD/DVD drives.
Programs such as assemblers and compilers are stored on disk until they are loaded
into memory, and the disk is then used as both the source and the destination of
processing.
Functions of Secondary storage management in OS:
Here, are major functions of secondary storage management in OS:
• Storage allocation
• Free space management
• Disk scheduling
Security Management
The various processes in an operating system need to be protected from each other's
activities. For that purpose, various mechanisms ensure that processes wanting to
operate on files, memory, the CPU, and other hardware resources have proper
authorization from the operating system.
For example, memory-addressing hardware ensures that a process can execute only
within its own address space. A timer ensures that no process can retain control of the
CPU without eventually relinquishing it.
Lastly, no process is allowed to perform its own device I/O directly, which protects the
integrity of the various peripheral devices.
Other Important Activities
Here, are some other important activities of OS:
• A user program cannot execute I/O operations directly; the operating system
must provide a medium to perform them.
• The OS checks a program's capability to read, write, create, and delete files.
• The OS facilitates the exchange of information between processes executing on
the same or different systems.
• OS components help ensure correct computation by detecting errors in the
CPU and memory hardware.
SYSTEM CALLS
The interface between a process and the operating system is provided by system calls.
In general, system calls are available as assembly language instructions, and they are
listed in the manuals used by assembly-level programmers. A system call is usually
made when a process in user mode requires access to a resource; it then requests the
kernel to provide the resource via the system call.
A process executes normally in user mode until a system call interrupts it. The system
call is then executed in kernel mode on a priority basis. After the system call completes,
control returns to user mode and execution of the user process resumes.
In general, system calls are required in the following situations:
• Creating or deleting files; reading from and writing to files also require system
calls.
• Creating and managing new processes.
• Network connections, including sending and receiving packets.
• Access to hardware devices such as printers and scanners.
Types of System Calls
There are mainly five types of system calls. These are explained in detail as follows:
Process Control
These system calls deal with processes such as process creation, process termination
etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a
file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and
the user program.
Communication
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix are
given as follows:
Types of System Calls     Windows                  Linux

Process Control           CreateProcess()          fork()
                          ExitProcess()            exit()
                          WaitForSingleObject()    wait()

File Management           CreateFile()             open()
                          ReadFile()               read()
                          WriteFile()              write()
                          CloseHandle()            close()

Device Management         SetConsoleMode()         ioctl()
                          ReadConsole()            read()
                          WriteConsole()           write()

Information Maintenance   GetCurrentProcessID()    getpid()
                          SetTimer()               alarm()
                          Sleep()                  sleep()

Communication             CreatePipe()             pipe()
                          CreateFileMapping()      shmget()
                          MapViewOfFile()          mmap()
There are many different system calls as shown above. Details of some of those system
calls are as follows:
open()
The open() system call is used to provide access to a file in a file system. This system
call allocates resources to the file and provides a handle that the process uses to refer to
the file. A file can be opened by multiple processes at the same time or be restricted to
one process. It all depends on the file organisation and file system.
read()
The read() system call is used to access data from a file stored in the file system. The
file to read is identified by its file descriptor, and it must have been opened using open()
before it can be read. In general, the read() system call takes three arguments: the file
descriptor, a buffer to store the data read, and the number of bytes to read from the file.
write()
The write() system call writes data from a user buffer to a device or file. This system
call is one of the ways a program outputs data. In general, the write() system call takes
three arguments: the file descriptor, a pointer to the buffer where the data is stored,
and the number of bytes to write from the buffer.
close()
The close() system call terminates access to a file. It tells the system that the file is no
longer required by the program, so its buffers are flushed, the file metadata is updated,
and the file's resources are de-allocated.
UNIT - II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management: fork, exit, wait, waitpid, exec
Process
• A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
• A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
• To put it in simple terms, we write our computer programs in a text file and when
we execute this program, it becomes a process which performs all the tasks
mentioned in the program.
• When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text, and data, described below −
S.N. Component & Description
1 Stack − The process stack contains temporary data such as method/function
parameters, return addresses, and local variables.
2 Heap − Memory dynamically allocated to the process during its run time.
3 Text − The compiled program code; the current activity is represented by the value
of the Program Counter and the contents of the processor's registers.
4 Data − This section contains the global and static variables.
• Program
• A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in the C programming
language −

#include <stdio.h>

int main() {
    printf("Hello, World! \n");
    return 0;
}
• A computer program is a collection of instructions that performs a specific task
when executed by a computer. When we compare a program with a process, we
can conclude that a process is a dynamic instance of a computer program.
• A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries, and related data is
referred to as software.
• Process Life Cycle
• When a process executes, it passes through different states. These stages may
differ in different operating systems, and the names of these states are also not
standardized.
• In general, a process can have one of the following five states at a time.
S.N. State & Description
1 Start − This is the initial state when a process is first started/created.
2 Ready − The process is waiting to be assigned to a processor. Ready processes are
waiting for the operating system to allocate the processor to them so that they can run.
A process may come into this state after the Start state, or while running, if the
scheduler interrupts it to assign the CPU to some other process.
3 Running − Once the process has been assigned to a processor by the OS scheduler,
the process state is set to running and the processor executes its instructions.
4 Waiting − The process moves into the waiting state if it needs to wait for a resource,
such as user input, or for a file to become available.
5 Terminated or Exit − Once the process finishes its execution, or is terminated by the
operating system, it is moved to the terminated state, where it waits to be removed
from main memory.
• Process Control Block (PCB)
• A Process Control Block is a data structure maintained by the Operating System
for every process. The PCB is identified by an integer process ID (PID). A PCB
keeps all the information needed to keep track of a process as listed below in the
table −
S.N. Information & Description
1 Process State − The current state of the process, i.e., whether it is ready, running,
waiting, etc.
2 Process Privileges − Required to allow/disallow access to system resources.
3 Process ID − Unique identification for each process in the operating system.
4 Pointer − A pointer to the parent process.
5 Program Counter − A pointer to the address of the next instruction to be executed
for this process.
6 CPU Registers − The various CPU registers whose contents must be saved when
the process leaves the running state, so that it can resume execution later.
7 Accounting Information − The amount of CPU time used for process execution,
time limits, execution ID, etc.
8 IO Status Information − A list of I/O devices allocated to the process.
• The architecture of a PCB is completely dependent on Operating System and may
contain different information in different operating systems. Here is a simplified
diagram of a PCB −
• The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
Definition
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at
a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues which can only have one entry per processor core on the system; in the above
diagram, it has been merged with the CPU.
Two-State Process Model
Two-state process model refers to running and non-running states which are described
below −
S.N. State & Description
1 Running
When a new process is created, it enters the system in the running state.
2 Not Running
Processes that are not running are kept in queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process. The queue is implemented
using a linked list. The dispatcher works as follows: when a process is interrupted, it is
transferred to the waiting queue; if the process has completed or aborted, it is
discarded. In either case, the dispatcher then selects a process from the queue to
execute.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads
them into memory for execution, where they become available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems typically have no long-term scheduler. The long-term scheduler
comes into play when a process changes state from new to ready.
Short Term Scheduler
It is also called as CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria. It is the change of ready state to running
state of the process. CPU scheduler selects a process among the processes that are
ready to execute and allocates CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes the processes from the
memory. It reduces the degree of multiprogramming. The medium-term scheduler is in
charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process
is moved to the secondary storage. This process is called swapping, and the process
is said to be swapped out or rolled out. Swapping may be necessary to improve the
process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
Process Preemption
An interrupt mechanism is used in preemption that suspends the process executing
currently and the next process to execute is determined by the short-term scheduler.
Preemption makes sure that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows:
Process Blocking
The process is blocked if it is waiting for some event to occur. This event may be an I/O
operation, since I/O is handled by devices and does not require the processor. After the
event completes, the process goes back to the ready state.
A diagram that demonstrates process blocking is as follows:
Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer required.
The child process sends its status information to the parent process before it terminates.
In many systems, when a parent process is terminated, its child processes are
terminated as well, since the child processes cannot run without the parent.
Cooperating processes
Cooperating processes are those that can affect or are affected by other processes
running on the system. Cooperating processes may share data with each other.
Reasons for needing cooperating processes
There may be many reasons for the requirement of cooperating processes. Some of
these are given as follows:
1. Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes. This leads to faster
and more efficient completion of the required tasks.
2. Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism
is required so that the processes can access the files in parallel to each other.
3. Convenience
There are many tasks that a user needs to do such as compiling, printing, editing
etc. It is convenient if these tasks can be managed by cooperating processes.
4. Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating processes.
This increases the computation speedup as the task can be executed faster.
However, this is only possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages.
Details about these are given as follows:
1. Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such
as memory, variables, files, databases etc. Critical section is used to provide data
integrity and writing is mutually exclusive to prevent inconsistent data.
A diagram that demonstrates cooperation by sharing is given as follows:
In the above diagram, Process P1 and P2 can cooperate with each other using
shared data such as memory, variables, files, databases etc.
2. Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This
may lead to deadlock if each process is waiting for a message from the other in
order to perform an operation. Starvation is also possible if a process never
receives a message.
A diagram that demonstrates cooperation by communication is given as follows:
In the above diagram, Process P1 and P2 can cooperate with each other using
messages to communicate.
What is Thread?
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares information such as the code segment, data segment and open files
with its peer threads. When one thread alters a code segment memory item, all other
threads see that change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. Threads represent a software approach to
improving operating system performance by reducing the overhead; a thread is
equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used
in implementing network servers and web server. They also provide a suitable foundation
for parallel execution of applications on shared memory multiprocessors. The following
figure shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on kernel,
an operating system core.
User Level Threads
In this case, the kernel is not aware of the existence of threads; thread management is
done entirely in user space. The thread library contains code for creating and destroying
threads, for passing messages and data between threads, for scheduling thread
execution and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
• Thread switching does not require Kernel mode privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking, so when one user-
level thread makes a blocking call, the entire process is blocked.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the threads within
an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and management in Kernel space. Kernel
threads are generally slower to create and manage than the user threads.
Advantages
• Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of
the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread
facility. Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors
and a blocking system call need not block the entire process. There are three
multithreading models −
• Many to many relationship.
• Many to one relationship.
• One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
The following diagram shows the many-to-many threading model where 6 user level
threads are multiplexing with 6 kernel level threads. In this model, developers can create
as many user threads as necessary and the corresponding Kernel threads can run in
parallel on a multiprocessor machine. This model provides the best accuracy on
concurrency and when a thread performs a blocking system call, the kernel can schedule
another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space by the thread library. When a thread makes
a blocking system call, the entire process is blocked. Only one thread can access the
Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If the user-level thread library is implemented on an operating system whose kernel
does not support threads, then the library must use the many-to-one model.
One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level
thread. This model provides more concurrency than the many-to-one model. It also
allows another thread to run when a thread makes a blocking system call. It supports
multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one
relationship model.
Difference between User-Level and Kernel-Level Threads
S.N. | User-Level Threads | Kernel-Level Threads
1 | User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 | Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 | User-level thread is generic and can run on any operating system. | Kernel-level thread is specific to the operating system.
Interprocess communications
Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.
A diagram that illustrates interprocess communication is as follows:
Process Arrival Time Execution Time Service Time
P0 0 5 0
P1 1 3 5
P2 2 8 14
P3 3 6 8
Waiting time of each process (service time - arrival time) is as follows −
P0 0 - 0 = 0
P1 5 - 1 = 4
P2 14 - 2 = 12
P3 8 - 3 = 5
Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
Priority Based Scheduling
• Priority scheduling is, in its basic form, a non-preemptive algorithm and one of the
most common scheduling algorithms in batch systems.
• Each process is assigned a priority. Process with highest priority is to be executed
first and so on.
• Processes with same priority are executed on first come first served basis.
• Priority can be decided based on memory requirements, time requirements or any
other resource requirement.
Given: a table of processes with their arrival time, execution time, and priority. Here we
consider 1 to be the lowest priority.
Process Arrival Time Execution Time Priority Service Time
P0 0 5 1 0
P1 1 3 2 11
P2 2 8 1 14
P3 3 6 3 5
Waiting time of each process (service time - arrival time) is as follows −
P0 0 - 0 = 0
P1 11 - 1 = 10
P2 14 - 2 = 12
P3 5 - 3 = 2
Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
Shortest Remaining Time
• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion but it can be preempted
by a newer ready job with shorter time to completion.
• Impossible to implement in interactive systems where required CPU time is not
known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time slice to execute, called a quantum.
• Once a process is executed for a given time period, it is preempted and other
process executes for a given time period.
• Context switching is used to save states of preempted processes.
P0 (0 - 0) + (12 - 3) = 9
P1 (3 - 1) = 2
P2 (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 (9 - 3) + (17 - 12) = 11
Average Wait Time: (9 + 2 + 12 + 11) / 4 = 34 / 4 = 8.5
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue
and assigns them to the CPU based on the algorithm assigned to the queue.
More examples:
Shortest Job First(SJF) Scheduling
Shortest Job First scheduling works on the process with the shortest burst
time or duration first.
• This is the best approach to minimize waiting time.
• This is used in Batch Systems.
• It is of two types:
1. Non Pre-emptive
2. Pre-emptive
• To successfully implement it, the burst time/duration time of the processes
should be known to the processor in advance, which is practically not feasible all
the time.
• This scheduling algorithm is optimal if all the jobs/processes are available at the
same time. (either Arrival time is 0 for all, or Arrival time is same for all)
Non Pre-emptive Shortest Job First
Consider the below processes available in the ready queue for execution, with arrival
time as 0 for all and given burst times.
As you can see in the GANTT chart above, the process P4 will be picked up first as it
has the shortest burst time, then P2, followed by P3 and at last P1.
We scheduled the same set of processes using the First Come First Serve algorithm in
the previous tutorial and got an average waiting time of 18.75 ms, whereas with SJF the
average waiting time comes out to be 4.5 ms.
Problem with Non Pre-emptive SJF
If the arrival times of the processes are different, meaning all the processes are not
available in the ready queue at time 0 and some jobs arrive after some time, then
sometimes a process with a short burst time has to wait for the current process to finish,
because in Non Pre-emptive SJF the arrival of a short job does not halt the execution of
the job that is already running.
This can lead to the problem of Starvation, where a long process may have to wait for a
very long time if shorter jobs keep arriving and are always scheduled ahead of it. This
can be solved using the concept of aging.
Pre-emptive Shortest Job First
In Preemptive Shortest Job First Scheduling, jobs are put into ready queue as they
arrive, but as a process with short burst time arrives, the existing process is
preempted or removed from execution, and the shorter job is executed first.
As you can see in the GANTT chart above, as P1 arrives first, its execution starts
immediately, but just after 1 ms, process P2 arrives with a burst time of 3 ms, which is
less than the burst time of P1; hence the process P1 (1 ms done, 20 ms left) is
preempted and process P2 is executed.
As P2 is getting executed, after 1 ms, P3 arrives, but it has a burst time greater than
that of P2, hence execution of P2 continues. But after another millisecond, P4 arrives
with a burst time of 2 ms; as a result P2 (2 ms done, 1 ms left) is preempted and P4 is
executed.
After the completion of P4, process P2 (1 ms left) is picked up and completes, then P3
is executed, and at last P1.
The Pre-emptive SJF is also known as Shortest Remaining Time First, because at
any given point of time, the job with the shortest remaining time is executed first.
Priority CPU Scheduling
In this tutorial we will understand the priority scheduling algorithm, how it works and its
advantages and disadvantages.
In the Shortest Job First scheduling algorithm, the priority of a process is generally the
inverse of the CPU burst time, i.e. the larger the burst time the lower is the priority of
that process.
In case of priority scheduling, the priority is not always set as the inverse of the CPU
burst time; rather, it can be set internally or externally. The scheduling is done on the
basis of the priority of the process: the most urgent process is processed first, followed
by the ones with lower priority, in order.
Processes with same priority are executed in FCFS manner.
The priority of a process, when internally defined, can be decided based on memory
requirements, time limits, number of open files, the ratio of I/O burst to CPU burst, etc.
External priorities, on the other hand, are set based on criteria outside the operating
system, such as the importance of the process, funds paid for computer resource use,
market factors, etc.
Types of Priority Scheduling Algorithm
Priority scheduling can be of two types:
1. Preemptive Priority Scheduling: If the new process arriving at the ready queue
has a higher priority than the currently running process, the CPU is preempted,
which means the processing of the current process is stopped and the incoming
new process with higher priority gets the CPU for its execution.
2. Non-Preemptive Priority Scheduling: In case of non-preemptive priority
scheduling algorithm if a new process arrives with a higher priority than the
current running process, the incoming process is put at the head of the ready
queue, which means it will be processed right after the execution of the current
process.
Example of Priority Scheduling Algorithm
Consider the below table of processes with their respective CPU burst times and
priorities.
As you can see in the GANTT chart that the processes are given CPU time just on the
basis of the priorities.
Problem with Priority Scheduling Algorithm
In the priority scheduling algorithm, there are chances of indefinite blocking, also called
starvation.
A process is considered blocked when it is ready to run but has to wait for the CPU as
some other process is running currently.
But in case of priority scheduling, if new higher-priority processes keep coming into the
ready queue, then the lower-priority processes waiting in the ready queue may have to
wait for long durations before getting the CPU for execution.
In 1973, when the IBM 7094 at MIT was shut down, a low-priority process was found
which had been submitted in 1967 and had not yet been run.
Using Aging Technique with Priority Scheduling
To prevent starvation of any process, we can use the concept of aging, where we keep
increasing the priority of a low-priority process based on its waiting time.
For example, if we decide the aging factor to be 0.5 for each day of waiting, and a
process with priority 20 (which is comparatively low priority) comes into the ready
queue, then after one day of waiting its priority is increased to 19.5, and so on.
Doing so, we can ensure that no process will have to wait for indefinite time for getting
CPU time for processing.
Exit()
On many computer operating systems, a computer process terminates its execution by
making an exit system call. More generally, an exit in a multithreading environment
means that a thread of execution has stopped running. For resource management,
the operating system reclaims resources (memory, files, etc.) that were used by the
process. The process is said to be a dead process after it terminates.
C:
#include <stdlib.h>
int main(void)
{
exit(EXIT_SUCCESS); /* or return EXIT_SUCCESS */
}
UNIX:
exit 0
wait():
A call to wait() blocks the calling process until one of its child processes exits or a signal
is received. After the child process terminates, the parent continues its execution from
the instruction after the wait() system call.
Child process may terminate due to any of these:
• It calls exit();
• It returns (an int) from main
• It receives a signal (from the OS or another process) whose default action is to
terminate.
Example in C:
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
int main()
{
    pid_t cpid;
    if (fork() == 0)
        exit(0);            /* terminate child */
    else
        cpid = wait(NULL);  /* reaping parent */
    printf("Parent pid = %d\n", getpid());
    printf("Child pid = %d\n", cpid);
    return 0;
}
Output (the actual PID values vary from run to run):
Parent pid = 12345678
Child pid = 89546848
waitpid():
We know that if more than one child process has terminated, wait() reaps an arbitrary
child process. If we want to reap a specific child process, we use the waitpid()
function instead.
Syntax in c language:
pid_t waitpid (child_pid, &status, options);
Options Parameter
• If 0, no option is set: the parent blocks until the specified child terminates.
• If WNOHANG, the parent does not block if the child has not yet terminated;
waitpid() just checks and returns immediately (it does not block the parent).
• If child_pid is -1, it means any arbitrary child; here waitpid() works the same
as wait().
Return value of waitpid()
• pid of the child, if the child has exited
• 0, if using WNOHANG and the child hasn't exited
• -1, on error
// C program to demonstrate waitpid()
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
void waitexample()
{
    int i, stat;
    pid_t pid[5];
    for (i = 0; i < 5; i++)
    {
        if ((pid[i] = fork()) == 0)
        {
            sleep(1);
            exit(100 + i);
        }
    }
    // Parent reaps each specific child, in order
    for (i = 0; i < 5; i++)
    {
        pid_t cpid = waitpid(pid[i], &stat, 0);
        if (WIFEXITED(stat))
            printf("Child %d terminated with status: %d\n",
                   cpid, WEXITSTATUS(stat));
    }
}
// Driver code
int main()
{
    waitexample();
    return 0;
}
Output:
Child 50 terminated with status: 100
Child 51 terminated with status: 101
Child 52 terminated with status: 102
Child 53 terminated with status: 103
Child 54 terminated with status: 104
Here, the child PIDs depend on the system, but the statuses are printed in order
because waitpid() reaps each specific child in turn.