
OPERATING SYSTEM NOTES

Course Objectives:
Provide an introduction to operating system concepts (i.e., processes, threads,
scheduling, synchronization, deadlocks, memory management, file and I/O
subsystems, and protection)
Introduce the issues to be considered in the design and development of
operating systems
Introduce basic Unix commands, the system call interface for process management,
interprocess communication, and I/O in Unix

Course Outcomes:
Will be able to control access to a computer and the files that may be shared
Demonstrate knowledge of the components of a computer and their respective
roles in computing.
Ability to recognize and resolve user problems with standard operating
environments.
Gain practical knowledge of how programming languages, operating systems, and
architectures interact and how to use each effectively.

UNIT - I
Operating System - Introduction, Structures - Simple Batch, Multiprogrammed,
Time-shared, Personal Computer, Parallel, Distributed Systems, Real-Time
Systems, System components, Operating System services, System Calls

UNIT - II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management - fork, exit, wait, waitpid, exec
UNIT - III
Deadlocks - System Model, Deadlock Characterization, Methods for Handling
Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, and
Recovery from Deadlock
Process Management and Synchronization - The Critical Section Problem,
Synchronization Hardware, Semaphores, and Classical Problems of
Synchronization, Critical Regions, Monitors
Interprocess Communication Mechanisms: IPC between processes on a single
computer system, IPC between processes on different systems, using pipes, FIFOs,
message queues, shared memory.
UNIT - IV
Memory Management and Virtual Memory - Logical versus Physical Address
Space, Swapping, Contiguous Allocation, Paging, Segmentation, Segmentation
with Paging, Demand Paging, Page Replacement, Page Replacement Algorithms.
UNIT - V
File System Interface and Operations - Access methods, Directory Structure,
Protection, File System Structure, Allocation methods, Free-space Management.
Usage of open, create, read, write, close, lseek, stat, ioctl system calls.
TEXT BOOKS:
1. Operating System Principles, Abraham Silberschatz, Peter B. Galvin, Greg Gagne,
7th Edition, John Wiley
2. Advanced programming in the UNIX environment, W.R. Stevens, Pearson
education.
REFERENCE BOOKS:
1. Operating Systems – Internals and Design Principles, William Stallings, Fifth
Edition, 2005, Pearson Education/PHI
2. Operating Systems: A Design-Oriented Approach, Crowley, TMH.
3. Modern Operating Systems, Andrew S. Tanenbaum, 2nd Edition, Pearson/PHI
4. The UNIX Programming Environment, Kernighan and Pike, PHI/Pearson Education
5. UNIX Internals – The New Frontiers, U. Vahalia, Pearson Education.

UNIT-I
Operating System - Introduction, Structures - Simple Batch, Multiprogrammed,
Time-shared, Personal Computer, Parallel, Distributed Systems, Real-Time
Systems, System components, Operating System services, System Calls

INTRODUCTION
Evolution of Operating Systems
The evolution of operating systems is directly dependent on the development of
computer systems and how users use them. Here is a quick tour through roughly
fifty years of computing systems.
Early Evolution
• 1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
• 1949: EDSAC and EDVAC
• 1949: BINAC - a successor to the ENIAC
• 1951: UNIVAC by Remington
• 1952: IBM 701
• 1956: The interrupt
• 1954-1957: FORTRAN was developed
Operating Systems - Late 1950s
By the late 1950s, operating systems had improved considerably and supported the
following features:
• Single-stream batch processing.
• Common, standardized input/output routines for device access.
• Program transition capabilities to reduce the overhead of starting a new job.
• Error recovery to clean up after a job terminated abnormally.
• Job control languages that allowed users to specify the job definition and
resource requirements.
Operating Systems - In 1960s
• 1961: The dawn of minicomputers
• 1962: Compatible Time-Sharing System (CTSS) from MIT
• 1963: Burroughs Master Control Program (MCP) for the B5000 system
• 1964: IBM System/360
• 1960s: Disks became mainstream
• 1966: Minicomputers got cheaper, more powerful, and really useful.
• 1967-1968: Mouse was invented.
• 1964 and onward: Multics
• 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.
Supported OS Features by 1970s
• Multi-user and multitasking were introduced.
• Dynamic address translation hardware and virtual machines came into the
picture.
• Modular architectures came into existence.
• Personal, interactive systems came into existence.
Accomplishments after 1970
• 1971: Intel announces the microprocessor
• 1972: IBM comes out with VM: the Virtual Machine Operating System
• 1973: UNIX 4th Edition is published
• 1973: Ethernet
• 1974 The Personal Computer Age begins
• 1974: Gates and Allen wrote BASIC for the Altair
• 1976: Apple II
• August 12, 1981: IBM introduces the IBM PC
• 1983 Microsoft begins work on MS-Windows
• 1984 Apple Macintosh comes out
• 1990 Microsoft Windows 3.0 comes out
• 1991 GNU/Linux
• 1992 The first Windows virus comes out
• 1993 Windows NT
• 2007: iOS
• 2008: Android OS
As research and development work continues, new operating systems are being
developed and existing ones are being improved and modified to enhance the overall
user experience, making operating systems faster and more efficient than ever before.
Also, with the onset of new devices like wearables (smartwatches, smart
glasses, VR gear, etc.), the demand for unconventional operating systems is
also rising.
An operating system (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system
is a vital component of the system software in a computer system.
An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is software that performs all the basic tasks, such as file
management, memory management, process management, handling input and output,
and controlling peripheral devices such as disk drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating
System, VMS, OS/400, AIX, z/OS, etc.
DEFINITION
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
FUNCTIONS OF AN OPERATING SYSTEM:
Following are some of the important functions of an operating system.
• Memory Management
• Processor Management
• Device Management
• File Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users
Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in main memory. An Operating System does the
following activities for memory management −
• Keeps track of primary memory, i.e., which parts of it are in use and by whom,
and which parts are not in use.
• In multiprogramming, the OS decides which process will get memory, when, and
how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been
terminated.
Processor Management
In multiprogramming environment, the OS decides which process gets the processor
when and for how much time. This function is called process scheduling. An Operating
System does the following activities for processor management −
• Keeps track of the processor and the status of processes. The program
responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.
Device Management
An Operating System manages device communication via their respective drivers. It
does the following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
An Operating System does the following activities for file management −
• Keeps track of information: location, usage, status, etc. These collective facilities
are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs −
• Security − By means of passwords and similar techniques, it prevents
unauthorized access to programs and data.
• Control over system performance − Recording delays between a request for a
service and the response from the system.
• Job accounting − Keeping track of time and resources used by various jobs and
users.
• Error detecting aids − Production of dumps, traces, error messages, and other
debugging and error-detecting aids.
• Coordination between other software and users − Coordination and
assignment of compilers, interpreters, assemblers, and other software to the
various users of the computer system.
OPERATING SYSTEMS-STRUCTURES
Operating systems have existed since the very first computer generation and they keep
evolving with time. In this chapter, we will discuss some of the important types of
operating systems which are most commonly used.
Batch operating system
The users of a batch operating system do not interact with the computer directly. Each
user prepares a job on an off-line device, such as punched cards, and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched
together and run as a group. The programmers leave their programs with the operator
and the operator then sorts the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
• Lack of interaction between the user and the job.
• The CPU is often idle, because mechanical I/O devices are much slower than
the CPU.
• Difficult to provide the desired priority.
Time-sharing operating systems
Time-sharing is a technique which enables many people, located at various terminals,
to use a particular computer system at the same time. Time-sharing or multitasking is a
logical extension of multiprogramming. Processor time that is shared among multiple
users simultaneously is termed time-sharing.
The main difference between Multiprogrammed Batch Systems and Time-Sharing
Systems is that in case of Multiprogrammed batch systems, the objective is to maximize
processor use, whereas in Time-Sharing Systems, the objective is to minimize response
time.
Multiple jobs are executed by the CPU by switching between them, but the switches
occur so frequently that each user receives an immediate response. For example, in
transaction processing, the processor executes each user program in a short burst, or
quantum, of computation: if n users are present, each user gets a time quantum in turn.
When a user submits a command, the response time is at most a few seconds.
The operating system uses CPU scheduling and multiprogramming to provide each user
with a small portion of its time. Computer systems that were designed primarily as batch
systems have been modified to time-sharing systems.
Advantages of Timesharing operating systems are as follows −
• Provides the advantage of quick response.
• Avoids duplication of software.
• Reduces CPU idle time.
Disadvantages of Time-sharing operating systems are as follows −
• Problem of reliability.
• Question of security and integrity of user programs and data.
• Problem of data communication.
Distributed operating System
Distributed systems use multiple central processors to serve multiple real-time
applications and multiple users. Data processing jobs are distributed among the
processors accordingly.
The processors communicate with one another through various communication lines
(such as high-speed buses or telephone lines). These are referred to as loosely coupled
systems or distributed systems. Processors in a distributed system may vary in size and
function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems are as follows −
• With resource sharing facility, a user at one site may be able to use the
resources available at another.
• Speeds up the exchange of data, for example via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially
continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
Network operating System
A Network Operating System runs on a server and provides the server the capability to
manage data, users, groups, security, applications, and other networking functions. The
primary purpose of the network operating system is to allow shared file and printer
access among multiple computers in a network, typically a local area network (LAN)
or a private network, and sometimes to other networks.
Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
The advantages of network operating systems are as follows −
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated into the
system.
• Remote access to servers is possible from different locations and types of
systems.
The disadvantages of network operating systems are as follows −
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.
Real Time operating System
A real-time system is defined as a data processing system in which the time interval
required to process and respond to inputs is so small that it controls the environment.
The time taken by the system to respond to an input and display the required updated
information is termed the response time. The response time here is much lower than in
online processing.
Real-time systems are used when there are rigid time requirements on the operation of
a processor or the flow of data and real-time systems can be used as a control device in
a dedicated application. A real-time operating system must have well-defined, fixed time
constraints, otherwise the system will fail. Examples include scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic control
systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time
systems, secondary storage is limited or missing and the data is stored in ROM. In these
systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other
tasks and retains the priority until it completes. Soft real-time systems have more limited
utility than hard real-time systems. Examples include multimedia, virtual reality, and
advanced scientific projects such as undersea exploration and planetary rovers.
OPERATING SYSTEM COMPONENTS
Components of Operating Systems
What are OS Components?
An operating system is a large and complex system that can only be created by
partitioning it into small pieces. Each piece should be a well-defined portion of the
system, with carefully defined inputs, outputs, and functions.
Although Mac, Unix, Linux, Windows, and other OS do not have the same structure,
most of the operating systems share similar OS system components like File, Process,
Memory, I/O device management.
• What are OS Components ?
• File Management
• Process Management
• I/O Device Management
• Network Management
• Main Memory management
• Secondary-Storage Management
• Security Management
• Other Important Activities
File Management
A file is a set of related information defined by its creator. It commonly
represents programs, both source and object forms, and data. Data files can be
numeric, alphabetic, or alphanumeric.
Function of file management in OS:
The operating system performs the following important activities in connection with file
management:
• File and directory creation and deletion.
• Manipulation of files and directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.
Process Management
The process management component is a procedure for managing the many processes
that are running simultaneously on the operating system. Every software application
program has one or more processes associated with it when it is running.
For example, when you use a browser like Google Chrome, there is a process running
for that browser program. The OS also has many processes running, which perform
various functions.
All these processes are managed by process management, which keeps processes
running efficiently, manages the memory allocated to them, and shuts them down when
needed.
The execution of a process must be sequential: at any instant, at most one instruction is
executed on behalf of the process.
Functions of process management in OS:
The following are functions of process management.
• Process creation and deletion.
• Suspension and resumption.
• Synchronization process
• Communication process
I/O Device Management
One important role of an operating system is to hide the variations of specific hardware
devices from the user.
Functions of I/O management in OS:
• It offers a buffer-caching system.
• It provides general device-driver code.
• It provides drivers for particular hardware devices.
• Only the I/O subsystem knows the peculiarities of each specific device.
Network Management
Network management is the process of administering and managing computer
networks. It includes performance management, fault analysis, provisioning of networks,
and maintaining the quality of service.
A distributed system is a collection of computers/processors that do not share memory
or a clock. In this type of system, each processor has its own local memory, and the
processors communicate with each other over various communication lines, such as
fiber-optic cables or telephone lines.
The computers in the network are connected through a communication network, which
can be configured in a number of different ways. With the help of network management,
the network can be fully or partially connected, which helps users to design routing and
connection strategies that overcome connection and security issues.
Functions of Network management:
• Distributed systems let computing resources vary in size and function; they may
involve microprocessors, minicomputers, and many general-purpose computer
systems.
• A distributed system also offers the user access to the various resources the
network shares.
• Access to shared resources speeds up computation and improves data
availability and reliability.
Main Memory management
Main memory is a large array of words or bytes, each with its own address. Memory
management is conducted through a sequence of reads and writes of specific
memory addresses.
In order to execute a program, it must be mapped to absolute addresses and loaded
into memory. The selection of a memory management method depends on
several factors.
However, it is mainly based on the hardware design of the system. Each algorithm
requires corresponding hardware support. Main memory offers fast storage that can be
accessed directly by the CPU. It is costly and hence has a lower storage capacity.
However, for a program to be executed, it must be in main memory.
Functions of Memory management in OS:
An Operating System performs the following functions for Memory Management:
• It keeps track of primary memory.
• It determines which parts of memory are in use and by whom, and which parts
are not in use.
• In a multiprogramming system, the OS decides which process will get memory
and how much.
• It allocates memory when a process requests it.
• It de-allocates memory when a process no longer requires it or has been
terminated.
Secondary-Storage Management
The most important task of a computer system is to execute programs. These
programs, along with the data they access, must be in main memory during
execution.
Main memory is too small to store all data and programs permanently, so the
computer system offers secondary storage to back up main memory. Today,
modern computers use hard drives/SSDs as the primary storage of both programs and
data. However, secondary storage management also works with storage devices
like USB flash drives and CD/DVD drives.
Programs such as assemblers and compilers are stored on disk until loaded into
memory, and then use the disk as both source and destination for processing.
Functions of Secondary storage management in OS:
Here, are major functions of secondary storage management in OS:
• Storage allocation
• Free space management
• Disk scheduling
Security Management
The various processes in an operating system need to be secured from each other's
activities. For that purpose, various mechanisms can be used to ensure that the
processes that want to operate on files, memory, the CPU, and other hardware
resources have proper authorization from the operating system.
For example, memory-addressing hardware ensures that a process executes only
within its own address space. The timer ensures that no process can retain control of
the CPU without relinquishing it.
Lastly, no process is allowed to perform its own I/O directly, which protects the
integrity of the various peripheral devices.
Other Important Activities
Here are some other important activities of the OS:
• User programs cannot execute I/O operations directly; the operating system
must provide some means to perform them.
• The OS checks the capability of a program to read, write, create, and delete files.
• The OS facilitates the exchange of information between processes executing on
the same or different systems.
• OS components help ensure correct computing by detecting errors in the CPU
and memory hardware.

OPERATING SYSTEM SERVICES


An Operating System provides services to both the users and to the programs.
• It provides programs an environment to execute.
• It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system −


• Program execution
• I/O operations
• File System manipulation
• Communication
• Error Detection
• Resource Allocation
• Protection
Program execution
Operating systems handle many kinds of activities, from user programs to system
programs like the printer spooler, name servers, file servers, etc. Each of these activities
is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system
with respect to program management −
• Loads a program into memory.
• Executes the program.
• Handles program's execution.
• Provides a mechanism for process synchronization.
• Provides a mechanism for process communication.
• Provides a mechanism for deadlock handling.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software.
Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
• I/O operation means read or write operation with any file or any specific I/O
device.
• Operating system provides the access to the required I/O device when required.
File system manipulation
A file represents a collection of related information. Computers can store files on disk
(secondary storage) for long-term storage purposes. Examples of storage media include
magnetic tape, magnetic disk, and optical disk drives like CD and DVD. Each of these media
has its own properties like speed, capacity, data transfer rate, and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −
• A program needs to read a file or write a file.
• The operating system gives the program permission for the operation on the file.
• Permissions vary: read-only, read-write, denied, and so on.
• Operating System provides an interface to the user to create/delete files.
• Operating System provides an interface to the user to create/delete directories.
• Operating System provides an interface to create the backup of file system.
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
• Two processes often require data to be transferred between them
• Both the processes can be on one computer or on different computers, but are
connected through a computer network.
• Communication may be implemented by two methods, either by Shared Memory
or by Message Passing.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or
in the memory hardware. Following are the major activities of an operating system with
respect to error handling −
• The OS constantly checks for possible errors.
• The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory,
CPU cycles and files storage are to be allocated to each user or job. Following are the
major activities of an operating system with respect to resource management −
• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of CPU.
Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes,
or users to the resources defined by a computer system. Following are the major
activities of an operating system with respect to protection −
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid access
attempts.
• The OS provides authentication features for each user by means of passwords.

The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly language instructions. They are also
included in the manuals used by the assembly level programmers. System calls are
usually made when a process in user mode requires access to a resource. Then it
requests the kernel to provide the resource via a system call.
When a system call is made, the following happens: the process executes normally in
user mode until the system call interrupts this. The system call is then executed on a
priority basis in kernel mode. After the execution of the system call, control returns to
user mode and execution of the user process is resumed.
In general, system calls are required in the following situations:
• When a file system requires the creation or deletion of files; reading from and
writing to files also require a system call.
• Creation and management of new processes.
• Network connections also require system calls. This includes sending and
receiving packets.
• Access to hardware devices such as a printer or scanner requires a system
call.
Types of System Calls
There are mainly five types of system calls. These are explained in detail as follows:
Process Control
These system calls deal with processes such as process creation, process termination
etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a
file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and
the user program.
Communication
These system calls are useful for interprocess communication. They also deal with
creating and deleting a communication connection.
Some of the examples of all the above types of system calls in Windows and Unix are
given as follows:
Type of System Call        Windows                  Linux
Process Control            CreateProcess()          fork()
                           ExitProcess()            exit()
                           WaitForSingleObject()    wait()
File Management            CreateFile()             open()
                           ReadFile()               read()
                           WriteFile()              write()
                           CloseHandle()            close()
Device Management          SetConsoleMode()         ioctl()
                           ReadConsole()            read()
                           WriteConsole()           write()
Information Maintenance    GetCurrentProcessID()    getpid()
                           SetTimer()               alarm()
                           Sleep()                  sleep()
Communication              CreatePipe()             pipe()
                           CreateFileMapping()      shmget()
                           MapViewOfFile()          mmap()
There are many different system calls as shown above. Details of some of those system
calls are as follows:
open()
The open() system call is used to provide access to a file in a file system. This system
call allocates resources to the file and provides a handle that the process uses to refer to
the file. A file can be opened by multiple processes at the same time or be restricted to
one process. It all depends on the file organisation and file system.
read()
The read() system call is used to access data from a file that is stored in the file system.
The file to read can be identified by its file descriptor and it should be opened using open()
before it can be read. In general, the read() system calls takes three arguments i.e. the
file descriptor, buffer which stores read data and number of bytes to be read from the file.
write()
The write() system call writes data from a user buffer into a device such as a file.
This system call is one of the ways to output data from a program. In general, the write()
system call takes three arguments: the file descriptor, a pointer to the buffer where the
data is stored, and the number of bytes to write from the buffer.
close()
The close() system call is used to terminate access to a file. Using this system
call means that the file is no longer required by the program, so the buffers are flushed,
the file metadata is updated, and the file resources are de-allocated.
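
As a minimal sketch of how these four calls fit together, the following C program copies
the contents of a file to standard output. The file name input.txt and the 512-byte buffer
size are illustrative choices for this example, not requirements of the calls themselves.

/* copy.c - a minimal sketch of open(), read(), write() and close().
   "input.txt" and the 512-byte buffer are illustrative choices. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[512];
    ssize_t n;

    int fd = open("input.txt", O_RDONLY);     /* obtain a file descriptor */
    if (fd == -1) {
        perror("open");                       /* open() returns -1 on failure */
        return 1;
    }

    /* read() fills buf and returns the byte count; 0 means end-of-file */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* write those bytes to stdout */

    close(fd);                                /* release the descriptor */
    return 0;
}

Each call returns -1 on failure, which is why the return values are checked.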

UNIT - II
Process and CPU Scheduling - Process concepts and scheduling, Operations on
processes, Cooperating Processes, Threads, and Interprocess Communication,
Scheduling Criteria, Scheduling Algorithms, Multiple-Processor Scheduling.
System call interface for process management - fork, exit, wait, waitpid, exec

UNIT-2
Process
• A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
• A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
• To put it in simple terms, we write our computer programs in a text file and when
we execute this program, it becomes a process which performs all the tasks
mentioned in the program.
• When a program is loaded into the memory and it becomes a process, it can be
divided into four sections ─ stack, heap, text and data. These four sections are
described in the table below −


S.N. Component & Description

1 Stack
The process Stack contains the temporary data such as method/function parameters,
return address and local variables.

2 Heap
This is dynamically allocated memory to a process during its run time.

3 Text
This section contains the compiled program code. The current activity is represented
by the value of the Program Counter and the contents of the processor's registers.

4 Data
This section contains the global and static variables.
Program
A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in the C programming
language −

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

A computer program is a collection of instructions that performs a specific task
when executed by a computer. When we compare a program with a process, we
can conclude that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries, and related data is
referred to as software.
Process Life Cycle
When a process executes, it passes through different states. These stages may
differ in different operating systems, and the names of these states are also not
standardized.
In general, a process can have one of the following five states at a time.
S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting
to have the processor allocated to them by the operating system so that they can run.
A process may come into this state after the Start state, or while running, if it is
interrupted by the scheduler to assign the CPU to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it
is moved to the terminated state where it waits to be removed from main memory.


Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System
for every process. The PCB is identified by an integer process ID (PID). A PCB
keeps all the information needed to keep track of a process, as listed below in the
table −
S.N. Information & Description

1 Process State
The current state of the process i.e., whether it is ready, running, waiting, or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each of the process in the operating system.

4 Pointer
A pointer to parent process.

5 Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for
this process.

6 CPU registers
The various CPU registers whose contents must be saved when the process leaves
the running state, so that execution can be resumed later.

7 CPU Scheduling Information


Process priority and other scheduling information which is required to schedule the
process.

8 Memory management information


This includes the information of page table, memory limits, Segment table depending
on memory used by the operating system.

9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution
ID etc.

10 IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may
contain different information in different operating systems.
The PCB is maintained for a process throughout its lifetime, and is deleted once
the process terminates.
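
As an illustration only, the information listed in the table above can be pictured as a C
structure. The field names and sizes below are hypothetical, invented for this sketch; a
real kernel's PCB (for example, Linux's struct task_struct) is far larger and more detailed.

/* Hypothetical PCB layout; field names and sizes are invented for
   illustration, not taken from any real kernel. */
struct pcb {
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state; /* process state */
    int            pid;             /* unique process ID                      */
    int            ppid;            /* ID of (pointer to) the parent process  */
    unsigned long  program_counter; /* address of the next instruction        */
    unsigned long  registers[16];   /* saved CPU registers                    */
    int            priority;        /* CPU scheduling information             */
    void          *page_table;      /* memory management information          */
    long           cpu_time_used;   /* accounting information                 */
    int            open_fds[16];    /* I/O status: files/devices allocated    */
};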
Definition
Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues; the run queue can have only one entry per processor core on the system.
Two-State Process Model
Two-state process model refers to running and non-running states which are described
below −
S.N. State & Description

1 Running
The process that is currently being executed on the CPU.

2 Not Running
When a new process is created, it enters the system in the not-running state.
Processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a running
process is interrupted, it is transferred to the waiting queue; if the process has
completed or aborted, it is discarded. In either case, the dispatcher then selects a
process from the queue to execute.
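
A minimal C sketch of such a linked-list queue, with a dispatch operation that removes
the next process, is given below. The names enqueue and dispatch are invented for this
sketch and do not come from any particular kernel.

/* Sketch: a FIFO queue of process IDs as a singly linked list, as used
   by a dispatcher in the two-state model. All names are illustrative. */
#include <stdlib.h>

struct node { int pid; struct node *next; };
static struct node *head = NULL, *tail = NULL;

void enqueue(int pid) {              /* a process enters the Not Running queue */
    struct node *n = malloc(sizeof *n);
    n->pid = pid;
    n->next = NULL;
    if (tail) tail->next = n; else head = n;
    tail = n;
}

int dispatch(void) {                 /* dispatcher selects the next process */
    if (!head) return -1;            /* queue is empty */
    struct node *n = head;
    int pid = n->pid;
    head = n->next;
    if (!head) tail = NULL;
    free(n);
    return pid;                      /* this process now moves to Running */
}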
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads
them into memory for execution and CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If
the degree of multiprogramming is stable, then the average rate of process creation must
be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems have no long-term scheduler. The long-term scheduler comes into
play when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria. It effects the transition of a process from
the ready state to the running state. The CPU scheduler selects a process among the
processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory
and thereby reduces the degree of multiprogramming. The medium-term scheduler is
in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process
is moved to secondary storage. This procedure is called swapping, and the process
is said to be swapped out or rolled out. Swapping may be necessary to improve the
process mix.
Comparison among Schedulers
1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU
scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term
scheduler is the fastest of the three; the medium-term scheduler's speed lies
between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term
scheduler provides less control over the degree of multiprogramming; the
medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the
short-term scheduler is also minimal in time-sharing systems; the medium-term
scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into
memory for execution; the short-term scheduler selects among the processes
that are ready to execute; the medium-term scheduler can re-introduce a process
into memory so that its execution can be continued.
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point
at a later time. Using this technique, a context switcher enables multiple processes to
share a single CPU. Context switching is an essential part of a multitasking operating
system features.
When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block. After
this, the state for the process to run next is loaded from its own PCB and used to set the
PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive since register and memory state must be
saved and restored. To avoid the amount of context switching time, some hardware
systems employ two or more sets of processor registers. When the process is switched,
the following information is stored for later use.
• Program Counter
• Scheduling information
• Base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Operations on Process
There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows:
Process Creation
Processes need to be created in the system for different operations. This can be done by
the following events:
• User request for process creation
• System initialization
• Execution of a process creation system call by a running process
• Batch job initialization
A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,
they have distinct address spaces.
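A minimal C sketch of process creation using fork() is given below; the parent waits for
its child with waitpid(), one of the calls listed in this unit's syllabus.

/* fork.c - a minimal sketch of process creation with fork() and waitpid() */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* duplicate the calling process           */

    if (pid < 0) {                    /* fork() failed                           */
        perror("fork");
        return 1;
    } else if (pid == 0) {            /* child: fork() returned 0                */
        printf("child: pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);                     /* child terminates                        */
    } else {                          /* parent: fork() returned the child's pid */
        int status;
        waitpid(pid, &status, 0);     /* wait for the child to terminate         */
        printf("parent: child %d has exited\n", pid);
    }
    return 0;
}

Both processes continue from the point of the fork() call with the same memory image
but distinct address spaces, which is why the return value is the only way to tell parent
and child apart.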

Process Preemption
Preemption uses an interrupt mechanism to suspend the currently executing process;
the short-term scheduler then selects the next process to execute.
Preemption makes sure that all processes get some CPU time for execution.
Process Blocking
A process is blocked if it is waiting for some event to occur. This event may be I/O,
since I/O operations are carried out by devices and do not require the processor. After
the event is complete, the process goes back to the ready state.

Process Termination
After a process has completed the execution of its last instruction, it is terminated. The
resources held by a process are released after it is terminated.
A child process can be terminated by its parent process if its task is no longer needed.
The child process sends its status information to the parent process before it terminates.
Also, when a parent process is terminated, its child processes are terminated as well,
since a child process cannot run once its parent has been terminated.

Cooperating processes
Cooperating processes are those that can affect or are affected by other processes
running on the system. Cooperating processes may share data with each other.
Reasons for needing cooperating processes
There may be many reasons for the requirement of cooperating processes. Some of
these are given as follows:
1. Modularity
Modularity involves dividing complicated tasks into smaller subtasks. These
subtasks can be completed by different cooperating processes, leading to faster
and more efficient completion of the required tasks.
2. Information Sharing
Sharing of information between multiple processes can be accomplished using
cooperating processes. This may include access to the same files. A mechanism
is required so that the processes can access the files in parallel to each other.
3. Convenience
There are many tasks that a user needs to do such as compiling, printing, editing
etc. It is convenient if these tasks can be managed by cooperating processes.
4. Computation Speedup
Subtasks of a single task can be performed in parallel using cooperating
processes, so the overall task can be executed faster. However, this is only
possible if the system has multiple processing elements.
Methods of Cooperation
Cooperating processes can coordinate with each other using shared data or messages.
Details about these are given as follows:
1. Cooperation by Sharing
The cooperating processes can cooperate with each other using shared data such
as memory, variables, files, databases etc. Critical section is used to provide data
integrity, and writing is made mutually exclusive to prevent inconsistent data.
For example, two processes P1 and P2 can cooperate with each other through
shared memory, variables, files, or databases.
2. Cooperation by Communication
The cooperating processes can cooperate with each other using messages. This
may lead to deadlock if each process is waiting for a message from the other to
perform an operation. Starvation is also possible if a process never receives a
message. For example, processes P1 and P2 can cooperate by exchanging
messages with each other.
What is Thread?
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, the data
segment, and open files. When one thread alters a data item in a shared segment, all
other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve
application performance through parallelism. They represent a software approach to
improving operating system performance by reducing overhead; in this sense, a thread
is a lightweight equivalent of a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process.
Each thread represents a separate flow of control. Threads have been successfully used
in implementing network servers and web servers. They also provide a suitable
foundation for parallel execution of applications on shared-memory multiprocessors.
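
As a minimal sketch of these ideas, the following POSIX-threads program (compile with
-pthread) runs two threads through the same worker code with a shared global counter
living in the data segment. The mutex is needed because both threads update the
shared data.

/* threads.c - two POSIX threads sharing the process's data segment.
   Compile with: cc threads.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                     /* shared: lives in the data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);           /* one thread updates at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL); /* both threads run the same code */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                  /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);      /* 200000: updates visible to both */
    return 0;
}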
Difference between Process and Thread
1. Process: heavyweight, resource intensive.
   Thread: lightweight, taking fewer resources than a process.
2. Process: process switching needs interaction with the operating system.
   Thread: thread switching does not need to interact with the operating system.
3. Process: in multiple processing environments, each process executes the same
   code but has its own memory and file resources.
   Thread: all threads can share the same set of open files and child processes.
4. Process: if one process is blocked, then no other process can execute until the
   first process is unblocked.
   Thread: while one thread is blocked and waiting, a second thread in the same
   task can run.
5. Process: multiple processes without using threads use more resources.
   Thread: multithreaded processes use fewer resources.
6. Process: each process operates independently of the others.
   Thread: one thread can read, write, or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on kernel,
an operating system core.
User Level Threads
In this case, the kernel is not aware of the existence of threads; thread management is
done in user space. The thread library contains code for creating and destroying threads,
for passing messages and data between threads, for scheduling thread execution, and
for saving and restoring thread contexts. The application starts with a single thread.

Advantages
• Thread switching does not require Kernel mode privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking, so a blocking call
by one thread blocks the whole process.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management
code in the application area. Kernel threads are supported directly by the operating
system. Any application can be programmed to be multithreaded. All of the threads within
an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The
Kernel performs thread creation, scheduling and management in Kernel space. Kernel
threads are generally slower to create and manage than the user threads.
Advantages
• The Kernel can simultaneously schedule multiple threads from the same process
on multiple processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of
the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined system,
multiple threads within the same application can run in parallel on multiple processors,
and a blocking system call need not block the entire process. There are three
multithreading models:
• Many to many relationship.
• Many to one relationship.
• One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor machine. This
model provides the best level of concurrency: when a thread performs a blocking
system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space by the thread library. When a thread makes
a blocking system call, the entire process is blocked. Only one thread can access the
kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If user-level thread libraries are implemented on an operating system whose kernel
does not support threads, the many-to-one model is used.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads.
This model provides more concurrency than the many-to-one model. It also allows
another thread to run when a thread makes a blocking system call, and it allows multiple
threads to execute in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the
corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one
model.

Difference between User-Level & Kernel-Level Thread


1. User-level threads are faster to create and manage.
   Kernel-level threads are slower to create and manage.
2. Implementation is by a thread library at the user level.
   The operating system supports creation of kernel threads.
3. A user-level thread is generic and can run on any operating system.
   A kernel-level thread is specific to the operating system.
4. Multithreaded applications cannot take advantage of multiprocessing.
   Kernel routines themselves can be multithreaded.

Interprocess communications
Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of
data from one process to another.
Synchronization in Interprocess Communication


Synchronization is a necessary part of interprocess communication. It is either provided
by the interprocess control mechanism or handled by the communicating processes.
Some of the methods to provide synchronization are as follows (a semaphore sketch in C follows this list):
• Semaphore
A semaphore is a variable that controls the access to a common resource by
multiple processes. The two types of semaphores are binary semaphores and
counting semaphores.
• Mutual Exclusion
Mutual exclusion requires that only one process or thread can enter the critical
section at a time. This is useful for synchronization and also prevents race
conditions.
• Barrier
A barrier does not allow individual processes to proceed until all the processes
reach it. Many parallel languages and collective routines impose barriers.
• Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.
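To make the semaphore method above concrete, here is a minimal sketch (assuming a POSIX system with pthreads; compile with -pthread) in which a binary semaphore protects a counter shared by two threads. The names worker and shared_counter are illustrative, not from the original text.

// Minimal sketch: a binary semaphore guarding a shared counter (POSIX).
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t sem;                  /* binary semaphore, initial value 1 */
int shared_counter = 0;     /* the shared resource */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);     /* enter the critical section */
        shared_counter++;   /* protected access */
        sem_post(&sem);     /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);   /* 0 = shared between threads of this process */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Counter = %d\n", shared_counter);  /* 200000 with the semaphore */
    sem_destroy(&sem);
    return 0;
}

Without the sem_wait/sem_post pair, the two threads would race on shared_counter and the final value would usually be less than 200000.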
Approaches to Interprocess Communication
The different approaches to implement interprocess communication are given as follows:
• Pipe
A pipe is a unidirectional data channel; two pipes can be used to create a
two-way data channel between two processes. Pipes use standard input and
output methods and are available in all POSIX systems as well as Windows
operating systems (see the sketch after this list).
• Socket
The socket is the endpoint for sending or receiving data in a network. This is true
for data sent between processes on the same computer or data sent between
different computers on the same network. Most of the operating systems use
sockets for interprocess communication.
• File
A file is a data record that may be stored on a disk or acquired on demand by a file
server. Multiple processes can access a file as required. All operating systems use
files for data storage.
• Signal
Signals are useful in interprocess communication in a limited way. They are system
messages sent from one process to another. Normally, signals are not used to
transfer data but serve as asynchronous notifications or simple commands
between processes.
• Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other.
All POSIX systems, as well as Windows operating systems use shared memory.
• Message Queue
Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems.
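As a concrete illustration of the pipe approach listed above, here is a minimal sketch (assuming a POSIX system) in which a parent process writes a short message into a pipe and its child reads it; the message text is arbitrary.

// Minimal sketch: one-way parent-to-child communication over a pipe (POSIX).
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          // fd[0] = read end, fd[1] = write end
    char buf[64];

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {                  // child: reads from the pipe
        close(fd[1]);                   // close the unused write end
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("Child received: %s\n", buf);
        close(fd[0]);
    } else {                            // parent: writes into the pipe
        close(fd[0]);                   // close the unused read end
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                     // reap the child
    }
    return 0;
}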
SCHEDULING CRITERIA
Different CPU scheduling algorithms have different properties, and the choice of a
particular algorithm depends on those properties.
Many criteria need to be considered when comparing CPU scheduling algorithms.
These characteristics are used to determine the best algorithm. The criteria are as follows:
1. CPU Utilization
▪ We want to keep the CPU as busy as possible. Utilization may range from 0 to 100%. In a
real system, it should range from about 40% (for a lightly loaded system) to 90% (for a
heavily loaded system).
2. Throughput
▪ If the CPU is busy executing processes, then work is being done. One measure of work
is the number of processes completed per time unit, called throughput. For long
processes, this may be one process per minute; for shorter transactions, throughput
might be 10 processes per minute.
3. Turnaround Time
▪ The interval from the submission time of a process to its completion time is the
turnaround time. It is the sum of the periods spent waiting to get into memory, waiting
in the ready queue, executing on the CPU, and doing input/output operations.
4. Waiting Time
▪ The CPU scheduling algorithm does not affect the amount of time during which a
process executes or does input/output; it affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting
in the ready queue.
5. Response Time
▪ In an interactive system, a process can produce some output early and continue
computing new results while previous results are being displayed to the user. Thus
another measure is the time from the submission of a request until the first response is
produced; this is called response time. The response time is generally limited by the
speed of the output device.
Scheduling Algorithms, Multiple -Processor Scheduling.
A Process Scheduler schedules different processes to be assigned to the CPU based
on particular scheduling algorithms. There are six popular process scheduling algorithms
which we are going to discuss in this chapter −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive
algorithms are designed so that once a process enters the running state, it cannot be
preempted until it completes its allotted time, whereas preemptive scheduling is
priority based: the scheduler may preempt a low-priority running process at any time
when a high-priority process enters the ready state.

First Come First Serve Scheduling


In the "First come first serve" scheduling algorithm, as the name suggests, the process
which arrives first, gets executed first, or we can say that the process which requests
the CPU first, gets the CPU allocated first.
• First Come First Serve, is just like FIFO(First in First out) Queue data structure,
where the data element which is added to the queue first, is the one who leaves
the queue first.
• This is used in Batch Systems.
• It's easy to understand and implement programmatically, using a Queue data
structure, where a new process enters through the tail of the queue, and the
scheduler selects process from the head of the queue.
• A perfect real-life example of FCFS scheduling is buying tickets at a ticket
counter.
• Completion Time: Time taken for the execution to complete, starting from arrival
time.
• Turn Around Time: Time taken to complete after arrival. In simple words, it is
the difference between the Completion time and the Arrival time.
• Waiting Time: Total time the process has to wait before its execution begins. It
is the difference between the Turn Around time and the Burst time of the
process.
Calculating Average Waiting Time
For every scheduling algorithm, average waiting time is a crucial parameter to judge
its performance.
AWT, or average waiting time, is the average of the waiting times of the processes in the
queue, waiting for the scheduler to pick them for execution.
The lower the average waiting time, the better the scheduling algorithm.
Consider the processes P1, P2, P3, P4 given in the below table, arriving for execution
in the same order, each with arrival time 0 and the given burst time; let's find the
average waiting time using the FCFS scheduling algorithm.

Process   Arrival Time   Burst Time
P1        0 ms           21 ms
P2        0 ms           3 ms
P3        0 ms           6 ms
P4        0 ms           2 ms

For the above processes, first P1 will be provided with the CPU resources:
• Hence, waiting time for P1 will be 0.
• P1 requires 21 ms for completion, hence waiting time for P2 will be 21 ms.
• Similarly, waiting time for process P3 will be the execution time of P1 + the execution
time of P2, which will be (21 + 3) ms = 24 ms.
• For process P4 it will be the sum of the execution times of P1, P2 and P3, i.e.
(21 + 3 + 6) ms = 30 ms.
The average waiting time is therefore (0 + 21 + 24 + 30) / 4 = 18.75 ms.
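This arithmetic is easy to automate. Below is a minimal sketch (an illustration, in the same C used by the examples later in these notes) that computes the FCFS waiting times for the four burst times above, all arriving at time 0; it prints the same 18.75 ms average.

// Minimal sketch: FCFS waiting times when all processes arrive at time 0.
#include <stdio.h>

int main(void)
{
    int burst[] = {21, 3, 6, 2};        // burst times of P1..P4, in ms
    int n = 4, elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, elapsed);
        total_wait += elapsed;          // waiting time = CPU time already consumed
        elapsed += burst[i];            // the process runs to completion
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);  // 18.75
    return 0;
}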

Problems with FCFS Scheduling


Below are a few shortcomings or problems with the FCFS scheduling algorithm:
1. It is a non-pre-emptive algorithm, which means the process priority doesn't
matter.
If a process with very low priority is being executed, such as a daily routine
backup process which takes a long time, and all of a sudden some high-priority
process arrives, like an interrupt needed to avoid a system crash, the high-priority
process will have to wait; in such a case the system may even crash, just
because of improper process scheduling.
2. The average waiting time is not optimal.
3. Parallel utilization of resources is not possible, which leads to the Convoy Effect
and hence poor resource (CPU, I/O etc.) utilization.

What is Convoy Effect?


Convoy Effect is a situation where many processes that need to use a resource for a
short time are blocked by one process holding that resource for a long time.
This essentially leads to poor utilization of resources and hence poor performance.

Shortest Job Next (SJN)


• This is also known as shortest job first, or SJF.
• It has both non-preemptive and pre-emptive variants.
• It is the best approach to minimize waiting time.
• It is easy to implement in Batch systems where the required CPU time is known in
advance.
• It is impossible to implement in interactive systems where the required CPU time is
not known.
• The processor should know in advance how much time a process will take.
Given: a table of processes with their arrival times and execution times.

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8
Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
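The schedule above can be reproduced with a small simulation. The sketch below (an illustration, not part of the original notes) implements non-preemptive SJN for the table's processes: at every completion it picks, among the processes that have already arrived, the one with the shortest execution time. It assumes, as here, that some process is always available when the CPU goes idle.

// Minimal sketch: non-preemptive Shortest Job Next with arrival times.
#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};       // P0..P3
    int burst[]   = {5, 3, 8, 6};
    int done[4]   = {0};
    int n = 4, time = 0, total_wait = 0;

    for (int completed = 0; completed < n; completed++) {
        int pick = -1;
        // choose the shortest job among those that have already arrived
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;

        int wait = time - arrival[pick];    // service time - arrival time
        printf("P%d waits %d\n", pick, wait);
        total_wait += wait;
        time += burst[pick];
        done[pick] = 1;
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  // 5.25
    return 0;
}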
Priority Based Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
• Each process is assigned a priority. Process with highest priority is to be executed
first and so on.
• Processes with same priority are executed on first come first served basis.
• Priority can be decided based on memory requirements, time requirements or any
other resource requirement.
Given: a table of processes with their arrival times, execution times, and priorities.
Here we consider 1 to be the lowest priority.

Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
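The same simulation idea carries over to non-preemptive priority scheduling. The sketch below replays the table above, picking at every completion the arrived process with the largest priority number (1 being the lowest priority here); ties fall back to arrival order.

// Minimal sketch: non-preemptive priority scheduling
// (larger number = higher priority).
#include <stdio.h>

int main(void)
{
    int arrival[]  = {0, 1, 2, 3};      // P0..P3
    int burst[]    = {5, 3, 8, 6};
    int priority[] = {1, 2, 1, 3};      // 1 is the lowest priority
    int done[4]    = {0};
    int n = 4, time = 0, total_wait = 0;

    for (int completed = 0; completed < n; completed++) {
        int pick = -1;
        // among the arrived processes, choose the highest-priority one
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || priority[i] > priority[pick]))
                pick = i;

        int wait = time - arrival[pick];
        printf("P%d waits %d\n", pick, wait);
        total_wait += wait;
        time += burst[pick];
        done[pick] = 1;
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  // 6.00
    return 0;
}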
Shortest Remaining Time
• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion but it can be preempted
by a newer ready job with shorter time to completion.
• Impossible to implement in interactive systems where required CPU time is not
known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and another
process executes for its time period.
• Context switching is used to save the states of preempted processes.

Using the same four processes as in the SJN example above (arrival times 0, 1, 2, 3
and execution times 5, 3, 8, 6) with a quantum of 3 ms, the wait time of each process
is as follows −

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11
Average Wait Time: (9+2+12+11) / 4 = 8.5
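A round robin schedule is just as easy to simulate with a FIFO ready queue. The sketch below (illustrative only) uses the same four processes and a 3 ms quantum, re-queuing a preempted process behind any newly arrived ones, and reproduces the 8.5 average computed above.

// Minimal sketch: Round Robin with a 3 ms quantum for P0..P3 above.
#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {5, 3, 8, 6};
    int left[]    = {5, 3, 8, 6};       // remaining execution time
    int n = 4, quantum = 3, time = 0, finished = 0, total_wait = 0;
    int queue[64], head = 0, tail = 0, next = 1;

    queue[tail++] = 0;                  // P0 is ready at t = 0
    while (finished < n) {
        int p = queue[head++];          // dequeue the front process
        int run = left[p] < quantum ? left[p] : quantum;
        time += run;
        left[p] -= run;
        // enqueue processes that arrived while p was running
        while (next < n && arrival[next] <= time)
            queue[tail++] = next++;
        if (left[p] > 0) {
            queue[tail++] = p;          // quantum expired: back of the queue
        } else {
            finished++;                 // wait = completion - arrival - burst
            total_wait += time - arrival[p] - burst[p];
        }
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  // 8.50
    return 0;
}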
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue
and assigns them to the CPU based on the algorithm assigned to the queue.
More examples:
Shortest Job First(SJF) Scheduling
Shortest Job First scheduling works on the process with the shortest burst
time or duration first.
• This is the best approach to minimize waiting time.
• This is used in Batch Systems.
• It is of two types:
1. Non Pre-emptive
2. Pre-emptive
• To successfully implement it, the burst time/duration time of the processes
should be known to the processor in advance, which is practically not feasible all
the time.
• This scheduling algorithm is optimal if all the jobs/processes are available at the
same time. (either Arrival time is 0 for all, or Arrival time is same for all)
Non Pre-emptive Shortest Job First
Consider the below processes available in the ready queue for execution, with arrival
time 0 for all and the same burst times as in the FCFS example above (P1 = 21 ms,
P2 = 3 ms, P3 = 6 ms, P4 = 2 ms).

Process P4 will be picked up first as it has the shortest burst time, then P2, followed
by P3 and at last P1.
We scheduled the same set of processes using the First come first serve algorithm
earlier and got an average waiting time of 18.75 ms, whereas with SJF the average
waiting time comes out to (0 + 2 + 5 + 11) / 4 = 4.5 ms.
Problem with Non Pre-emptive SJF
If the arrival times of the processes are different, meaning all the processes are not
available in the ready queue at time 0 and some jobs arrive after some time, then a
process with a short burst time may have to wait for the current process's execution
to finish, because in non-pre-emptive SJF, on arrival of a process with a short
duration, the existing job/process's execution is not halted/stopped to execute the
short job first.
This can also lead to the problem of starvation, where a longer process has to wait
for a very long time if shorter jobs keep arriving; this can be solved using the concept
of aging.
Pre-emptive Shortest Job First
In Preemptive Shortest Job First scheduling, jobs are put into the ready queue as they
arrive, but when a process with a shorter burst time arrives, the existing process is
preempted (removed from execution), and the shorter job is executed first.

In this example (the same burst times as above, with P1 arriving at t = 0, P2 at t = 1,
P3 at t = 2 and P4 at t = 3), P1 arrives first, so its execution starts immediately. But
just after 1 ms, process P2 arrives with a burst time of 3 ms, which is less than P1's
remaining time (1 ms done, 20 ms left), so P1 is preempted and P2 is executed.
As P2 is being executed, after 1 ms, P3 arrives, but its burst time is greater than P2's
remaining time, so execution of P2 continues. After another millisecond, P4 arrives
with a burst time of 2 ms, but P2 (2 ms done, 1 ms left) now has less remaining time
than P4, so P2 keeps the CPU and finishes first.
After the completion of P2, the shortest remaining job, P4, is picked up and finishes,
then P3 is executed, and at last P1.
The Pre-emptive SJF is also known as Shortest Remaining Time First, because at
any given point of time, the job with the shortest remaining time is executed first.
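The behaviour described above can be checked with a millisecond-by-millisecond simulation. The sketch below uses the burst times recoverable from this example (P1 = 21 ms, P2 = 3 ms, P3 = 6 ms, P4 = 2 ms, arriving at t = 0, 1, 2, 3) and at every tick runs the arrived process with the shortest remaining time; note that it never preempts P2 in favour of P4.

// Minimal sketch: Shortest Remaining Time First, simulated 1 ms at a time.
#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2, 3};       // P1..P4
    int burst[]   = {21, 3, 6, 2};
    int left[]    = {21, 3, 6, 2};      // remaining time
    int n = 4, time = 0, finished = 0, total_wait = 0;

    while (finished < n) {
        int pick = -1;
        // among arrived, unfinished processes pick the shortest remaining time
        for (int i = 0; i < n; i++)
            if (left[i] > 0 && arrival[i] <= time &&
                (pick == -1 || left[i] < left[pick]))
                pick = i;

        left[pick]--;                   // run the chosen process for 1 ms
        time++;
        if (left[pick] == 0) {
            int wait = time - arrival[pick] - burst[pick];
            printf("P%d finishes at t = %d, waited %d ms\n",
                   pick + 1, time, wait);
            total_wait += wait;
            finished++;
        }
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}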
Priority CPU Scheduling
In this tutorial we will understand the priority scheduling algorithm, how it works and its
advantages and disadvantages.
In the Shortest Job First scheduling algorithm, the priority of a process is generally the
inverse of the CPU burst time, i.e. the larger the burst time the lower is the priority of
that process.
In case of priority scheduling the priority is not always set as the inverse of the CPU
burst time, rather it can be internally or externally set, but yes the scheduling is done on
the basis of priority of the process where the process which is most urgent is processed
first, followed by the ones with lesser priority in order.
Processes with same priority are executed in FCFS manner.
The priority of a process, when internally defined, can be decided based on memory
requirements, time limits, number of open files, the ratio of I/O burst to CPU burst, etc.
External priorities, on the other hand, are set based on criteria outside the operating
system, like the importance of the process, funds being paid for computer resource
use, market factors, etc.
Types of Priority Scheduling Algorithm
Priority scheduling can be of two types:
1. Preemptive Priority Scheduling: If a new process arriving at the ready queue
has a higher priority than the currently running process, the CPU is preempted,
which means the processing of the current process is stopped and the incoming
new process with higher priority gets the CPU for its execution.
2. Non-Preemptive Priority Scheduling: If a new process arrives with a higher
priority than the currently running process, the incoming process is put at the
head of the ready queue, which means it will be processed after the execution
of the current process.
Example of Priority Scheduling Algorithm
Consider a set of processes with their respective CPU burst times and priorities:
drawing the schedule as a GANTT chart shows that the processes are given CPU time
purely on the basis of their priorities.
Problem with Priority Scheduling Algorithm
In the priority scheduling algorithm there is a chance of indefinite blocking, or starvation.
A process is considered blocked when it is ready to run but has to wait for the CPU
because some other process is running currently.
In priority scheduling, if new higher-priority processes keep coming into the ready
queue, then the processes waiting in the ready queue with lower priority may have
to wait for long durations before getting the CPU for execution.
In 1973, when the IBM 7094 machine at MIT was shut down, a low-priority process was
found which had been submitted in 1967 and had not yet been run.
Using the Aging Technique with Priority Scheduling
To prevent starvation of any process, we can use the concept of aging, where we keep
on increasing the priority of a low-priority process based on its waiting time.
For example, suppose we decide on an aging factor of 0.5 for each day of waiting, and
a process with priority value 20 (comparatively low, since a lower value means a higher
priority here) enters the ready queue. After one day of waiting, its priority value is
reduced to 19.5, raising its priority, and so on.
Doing so, we can ensure that no process will have to wait indefinitely for CPU time.
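As a tiny worked sketch of this aging rule (with the 0.5-per-day factor assumed above, and remembering that a lower priority value means a higher priority in this example):

// Minimal sketch: aging reduces the priority value (raises the priority)
// of a waiting process by 0.5 per day of waiting.
#include <stdio.h>

int main(void)
{
    double priority = 20.0;             // a comparatively low priority
    const double aging_factor = 0.5;    // per day of waiting (assumed)

    for (int day = 1; day <= 5; day++) {
        priority -= aging_factor;       // lower value = higher priority
        printf("After day %d: priority value = %.1f\n", day, priority);
    }
    return 0;
}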
Round Robin Scheduling

• A fixed time, called a quantum, is allotted to each process for execution.
• Once a process has executed for the given time period, that process is preempted and
another process executes for its time period.
• Context switching is used to save the states of preempted processes.

Multilevel Queue Scheduling


Another class of scheduling algorithms has been created for situations in which
processes are easily classified into different groups.
For example: A common division is made between foreground(or interactive)
processes and background (or batch) processes. These two types of processes have
different response-time requirements, and so might have different scheduling needs. In
addition, foreground processes may have priority over background processes.
A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues. The processes are permanently assigned to one queue, generally
based on some property of the process, such as memory size, process priority, or
process type. Each queue has its own scheduling algorithm.
For example: separate queues might be used for foreground and background
processes. The foreground queue might be scheduled by Round Robin algorithm, while
the background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly
implemented as fixed-priority preemptive scheduling. For example: The foreground
queue may have absolute priority over the background queue.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty. If an interactive editing
process entered the ready queue while a batch process was running, the batch process
would be preempted.

Multilevel Feedback Queue Scheduling


In a multilevel queue-scheduling algorithm, processes are permanently assigned to a
queue on entry to the system. Processes do not move between queues. This setup has
the advantage of low scheduling overhead, but the disadvantage of being inflexible.
Multilevel feedback queue scheduling, however, allows a process to move between
queues. The idea is to separate processes with different CPU-burst characteristics. If a
process uses too much CPU time, it will be moved to a lower-priority queue. Similarly, a
process that waits too long in a lower-priority queue may be moved to a higher-priority
queue. This form of aging prevents starvation.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues.
• The scheduling algorithm for each queue.
• The method used to determine when to upgrade a process to a higher-priority
queue.
• The method used to determine when to demote a process to a lower-priority
queue.
• The method used to determine which queue a process will enter when that
process needs service.
The definition of a multilevel feedback queue scheduler makes it the most general CPU-
scheduling algorithm. It can be configured to match a specific system under design.
Unfortunately, it also requires some means of selecting values for all the parameters to
define the best scheduler. Although a multilevel feedback queue is the most general
scheme, it is also the most complex.
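The parameter list above maps naturally onto a configuration structure. The sketch below is purely illustrative (the struct and field names are assumptions, not a real scheduler API); the three-level instance shown (round robin with quantum 8, round robin with quantum 16, then FCFS) is a classic textbook configuration.

// Illustrative only: the defining parameters of a multilevel feedback
// queue scheduler, expressed as a C configuration structure.
#include <stdio.h>
#include <stddef.h>

enum queue_policy { POLICY_RR, POLICY_FCFS };

struct mlfq_level {
    enum queue_policy policy;       // the scheduling algorithm for this queue
    int time_quantum_ms;            // quantum for RR levels (ignored for FCFS)
};

struct mlfq_config {
    size_t num_queues;              // the number of queues
    struct mlfq_level *levels;      // per-queue algorithm and quantum
    int promote_after_wait_ms;      // when to upgrade a process (aging)
    int demote_on_full_quantum;     // demote when a quantum is fully used
    size_t entry_queue;             // which queue a new process enters
};

static struct mlfq_level levels[3] = {
    { POLICY_RR,   8 },             // queue 0: RR, 8 ms quantum
    { POLICY_RR,  16 },             // queue 1: RR, 16 ms quantum
    { POLICY_FCFS, 0 },             // queue 2: FCFS
};

static struct mlfq_config config = {
    .num_queues = 3,
    .levels = levels,
    .promote_after_wait_ms = 1000,  // assumed aging threshold
    .demote_on_full_quantum = 1,
    .entry_queue = 0,               // new processes enter the top queue
};

int main(void)
{
    printf("MLFQ with %zu queues, entry queue %zu\n",
           config.num_queues, config.entry_queue);
    return 0;
}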
Comparison of Scheduling Algorithms
By now, you must have understood how the CPU can apply different scheduling
algorithms to schedule processes. Now, let us examine the advantages and
disadvantages of each scheduling algorithm that we have studied so far.
First Come First Serve (FCFS)
Let's start with the Advantages:
• FCFS algorithm doesn't include any complex logic; it just puts the process
requests in a queue and executes them one by one.
• Hence, FCFS is pretty simple and easy to implement.
• Eventually, every process will get a chance to run, so starvation doesn't occur.
It's time for the Disadvantages:
• There is no option for pre-emption of a process. If a process is started, then CPU
executes the process until it ends.
• Because there is no pre-emption, if a process executes for a long time, the
processes in the back of the queue will have to wait for a long time before they
get a chance to be executed.
Shortest Job First (SJF)
Starting with the advantages of the Shortest Job First scheduling algorithm:
• According to the definition, short processes are executed first and then followed
by longer processes.
• The throughput is increased because more processes can be executed in less
amount of time.
And the Disadvantages:
• The time taken by a process must be known to the CPU beforehand, which is
not always possible.
• Longer processes will have more waiting time, eventually they'll suffer starvation.
Note: Preemptive Shortest Job First scheduling will have the same advantages and
disadvantages as those for SJF.
Round Robin (RR)
Here are some advantages of using Round Robin Scheduling:
• Each process is served by the CPU for a fixed time quantum, so all processes
are given the same priority.
• Starvation doesn't occur because for each round robin cycle, every process is
given a fixed time to execute. No process is left behind.
And here come the disadvantages:
• The throughput in RR largely depends on the choice of the length of the time
quantum. If time quantum is longer than needed, it tends to exhibit the same
behavior as FCFS.
• If time quantum is shorter than needed, the number of times that CPU switches
from one process to another process, increases. This leads to decrease in CPU
efficiency.
Priority based Scheduling
Advantages of Priority Scheduling:
• The priority of a process can be selected based on memory requirement, time
requirement or user preference. For example, a high end game will have better
graphics, that means the process which updates the screen in a game will have
higher priority so as to achieve better graphics performance.
Some Disadvantages:
• A second scheduling algorithm is required to schedule the processes which have
same priority.
• In preemptive priority scheduling, a higher priority process can execute ahead of
an already executing lower priority process. If lower priority processes keep
waiting for higher priority processes, starvation occurs.
Usage of Scheduling Algorithms in Different Situations
Every scheduling algorithm has a type of a situation where it is the best choice. Let's
look at different such situations:
Situation 1:
The incoming processes are short and there is no need for the processes to execute in
a specific order.
In this case, FCFS works best when compared to SJF and RR because the processes
are short which means that no process will wait for a longer time. When each process is
executed one by one, every process will be executed eventually.
Situation 2:
The processes are a mix of long and short processes and the task will only be
completed if all the processes are executed successfully in a given time.
Round Robin scheduling works efficiently here because it does not cause starvation and
also gives equal time quantum for each process.
Situation 3:
The processes are a mix of user based and kernel based processes.
Priority based scheduling works efficiently in this case because generally kernel based
processes have higher priority when compared to user based processes.
For example, the scheduler itself is a kernel based process, it should run first so that it
can schedule other processes.
System call interface for process management-fork, exit, wait, waitpid, exec
fork() in C
The fork system call is used for creating a new process, called the child process,
which runs concurrently with the process that makes the fork() call (the parent process).
After a new child process is created, both processes execute the next instruction
following the fork() system call. The child process starts with the same program
counter, CPU registers, and open files as the parent process.
It takes no parameters and returns an integer value. Below are different values returned
by fork().
Negative Value: creation of a child process was unsuccessful.
Zero: Returned to the newly created child process.
Positive value: Returned to parent or caller. The value contains process ID of newly
created child process.
Example:
Please note that the following programs won't compile in a Windows environment.
1. Predict the output of the following program:
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
int main()
{
    // create two processes which run the same
    // program after this instruction
    fork();

    printf("Hello world!\n");
    return 0;
}
Output:
Hello world!
Hello world!
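
The three possible return values described above can be seen by branching on fork()'s result; here is a minimal sketch:

// Sketch: distinguishing the parent and child via fork()'s return value.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();

    if (pid < 0) {                  // negative: child creation failed
        perror("fork");
        return 1;
    } else if (pid == 0) {          // zero: this is the child process
        printf("Child:  pid = %d, parent pid = %d\n", getpid(), getppid());
    } else {                        // positive: parent receives the child's pid
        printf("Parent: pid = %d, child pid = %d\n", getpid(), pid);
    }
    return 0;
}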

Exit()
On many computer operating systems, a computer process terminates its execution by
making an exit system call. More generally, an exit in a multithreading environment
means that a thread of execution has stopped running. For resource management,
the operating system reclaims resources (memory, files, etc.) that were used by the
process. The process is said to be a dead process after it terminates.
C:

#include <stdlib.h>

int main(void)
{
    exit(EXIT_SUCCESS); /* or return EXIT_SUCCESS */
}

UNIX:

exit 0
wait ():
A call to wait() blocks the calling process until one of its child processes exits or a
signal is received. After a child process terminates, the parent continues its execution
after the wait system call instruction.
Child process may terminate due to any of these:
• It calls exit();
• It returns (an int) from main
• It receives a signal (from the OS or another process) whose default action is to
terminate.

Syntax in C:
#include <sys/types.h>
#include <sys/wait.h>

// takes one argument (a pointer to an int for the status) and returns
// the process ID of a terminated child.
pid_t wait(int *stat_loc);
If a process has more than one child process, then after calling wait() the parent
blocks until one of its children terminates.
If exactly one child process has terminated, wait() returns the process ID of that
terminated child.
If more than one child process has terminated, wait() reaps an arbitrary child and
returns that child's process ID.
When wait() returns, it also reports the exit status (which tells us why the process
terminated) via the pointer argument, if status is not NULL.
If the calling process has no child processes, wait() returns -1 immediately.
Examples:
// C program to demonstrate working of wait()
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t cpid;
    if (fork() == 0)
        exit(0);            /* terminate child */
    else
        cpid = wait(NULL);  /* reaping parent */
    printf("Parent pid = %d\n", getpid());
    printf("Child pid = %d\n", cpid);

    return 0;
}
Output (actual PIDs will vary):
Parent pid = 12345678
Child pid = 89546848

Waitpid():
We know that if more than one child process has terminated, wait() reaps an arbitrary
child process; if we want to reap a specific child process, we use the waitpid() function.
Syntax in C:
pid_t waitpid(pid_t child_pid, int *status, int options);
Options Parameter
• If options is 0, the parent waits (blocks) until the specified child terminates.
• If options is WNOHANG, the parent does not block if the child has not terminated;
waitpid() just checks and returns immediately (it does not block the parent process).
• If child_pid is -1, it means any child: waitpid() then works the same as wait().
Return value of waitpid()
• PID of the child, if the child has exited.
• 0, if WNOHANG was used and the child hasn't exited yet.
• -1 on error, for example if the calling process has no children.
// C program to demonstrate waitpid()
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

void waitexample()
{
    int i, stat;
    pid_t pid[5];
    for (i = 0; i < 5; i++)
    {
        if ((pid[i] = fork()) == 0)
        {
            sleep(1);
            exit(100 + i);      // each child exits with a distinct status
        }
    }

    // Using waitpid() and printing the exit status of the children.
    for (i = 0; i < 5; i++)
    {
        pid_t cpid = waitpid(pid[i], &stat, 0);
        if (WIFEXITED(stat))
            printf("Child %d terminated with status: %d\n",
                   cpid, WEXITSTATUS(stat));
    }
}

// Driver code
int main()
{
    waitexample();
    return 0;
}
Output:
Child 50 terminated with status: 100
Child 51 terminated with status: 101
Child 52 terminated with status: 102
Child 53 terminated with status: 103
Child 54 terminated with status: 104
Here, the child PIDs depend on the system, but because the loop waits on pid[i] in turn, each child's information is printed in order.
