
Chapter 1

Introduction to Operating
Systems
Prepared by,
Dr. Swetha P C
Introduction
Any computer system can be divided into 4 broad categories:

• The Hardware

• The Operating System

• The application Programs

• The User
What is software?
• Software is a set of instructions, data or programs used to operate computers and execute specific tasks.
• Software is a generic term used to refer to applications, scripts and programs that run on a device.
• The two main categories of software are application software and system software.
✓An application is software that fulfils a specific need or performs tasks.
✓System software is designed to run a computer's hardware and provides a platform for applications to run on top of.
Introduction to Operating System
• An Operating System (OS) is a program that manages the computer hardware. Examples: Windows, Linux, Ubuntu, Mac OS X, Android.
• It also provides a basis for Application Programs and acts as an intermediary between computer users and computer Hardware.
Static View of System Components

[Figure: layered system view — users (User 1 … User n) at the top; system and application programs (word processors, spreadsheets, compilers, text editors, web browsers) below them; the operating system beneath; and at the bottom the hardware resources such as the CPU, memory, and I/O devices.]
Functions of Operating System
i. User Interface
ii. File Management
iii. Hardware and Peripherals Management
iv. Processor Management
v. Interrupt Handling
vi. Security
vii. Memory Management
viii. Network Communication
What Operating Systems Do?
Depends on the point of view.
➢Users want convenience, ease of use and good performance.
• They don't care about resource utilization.
➢But a shared computer such as a mainframe or minicomputer must keep all users happy.
• A user's perspective on a computer changes depending on the interface in use.
• Many people work on laptops or desktop PCs equipped with a monitor, keyboard, and mouse.
• These setups are intended for individual use, allowing one person to fully utilize the system's resources.
• The primary aim is to enhance the user's productivity or enjoyment.
• In this scenario, the operating system prioritizes ease of use, with some focus on performance and security, but little consideration for resource management.
• From the computer's perspective, the operating system is the program most closely connected to the hardware.
• In this sense, we can think of the operating system as a resource manager.
• A computer system has various resources needed to address different tasks, including CPU time, memory, storage, and input/output devices.
• The operating system oversees these resources.
• When confronted with numerous and potentially conflicting resource requests, it must determine how to allocate them among programs and users to ensure efficient and fair operation of the system.
Defining Operating System
• OS encompasses a wide range of roles and functions.
• Early computing began as an experimental pursuit for military purposes, such as CODE BREAKING and TRAJECTORY PLOTTING, and for government tasks such as CENSUS CALCULATIONS.
• With MOORE'S LAW, computers have increased in functionality and decreased in size, leading to a wide range of users and a wide array of OS.
• Operating Systems are generally designed to address the problem of how to create a usable computing system.
What constitutes an Operating System?
• A more common definition is that the OS consists of the Kernel – the
core program running at all times on the computer.

• Alongside the kernel, there are system programs, which are related to the operating system but are not part of the kernel, and application programs, which are unrelated to system operation.
• With the rise of mobile devices, the definition of Operating System has expanded.
• Mobile operating systems often include not just a core kernel but also middleware – a software framework that provides additional services to application developers.
• To summarize, an operating system includes the continuously running kernel, the middleware frameworks that facilitate application development and provide additional features, and the system programs that assist in system management.
Computer System Organization
• A modern general-purpose computer system comprises one or more CPUs and several device controllers linked through a common bus that provides access to shared memory.
• Each device controller is responsible for a specific type of device, such as a disk drive, audio device, or graphics display.
• Depending on the controller, multiple devices can be connected; for example, a single USB port can link to a USB hub that accommodates several devices.
• Typically, operating systems include a device driver for each device controller. This driver understands the specific controller and provides a consistent interface to the rest of the operating system.
• The CPU and device controllers can operate in parallel, competing for access to memory, and a memory controller is present to coordinate access to the shared memory.
Interrupts
• In operating systems, an interrupt is a signal that temporarily halts
the CPU's current operations to allow it to respond to a particular
event or condition.

• Once the CPU addresses this event, it can return to its previous task.
Interrupts are crucial for efficient multitasking and real-time
processing.
Types of Interrupts
• Hardware Interrupts: Generated by hardware devices (e.g., keyboard, mouse, disk drives) to signal that they require attention.
• Software Interrupts: Triggered by programs, often using system calls to request services from the operating system.
• Timer Interrupts: Generated by the system timer to allow the OS to perform tasks such as process scheduling.
• Exceptions: A special kind of interrupt that occurs due to errors (like division by zero) or specific conditions in a running program.
Interrupt Handling Process
• Interrupt Occurrence: The interrupt signal is sent to the CPU.
• Current Process State Saved: The CPU saves the state of the current
process (context) to allow for later resumption.
• Interrupt Vectoring: The OS uses an interrupt vector table to
determine the appropriate interrupt handler to execute.
• Interrupt Service Routine (ISR): The CPU executes the corresponding
ISR to handle the event.
• Restore Process State: After the ISR completes, the CPU restores the
saved state of the interrupted process.
• Resume Execution: The CPU resumes the execution of the
interrupted process.
Interrupt timeline for a single program doing output.
I/O Structure
• A large portion of operating system code is dedicated to managing I/O, both because of its importance to the reliability and performance of a system and because of the varying nature of the devices.
• A general-purpose computer system consists of multiple devices, all of which exchange data via a common bus.
• The form of interrupt-driven I/O described in the previous section is fine for moving small amounts of data but can produce high overhead when used for bulk data movement.
• To solve this problem, direct memory access (DMA) is used.
• DMA allows the device controller to transfer an entire block of data directly between the device and main memory, with no intervention by the CPU.
• Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is performing these operations, the CPU is available to accomplish other work.
• Some high-end systems use a switch rather than a bus architecture. On these systems, multiple components can talk to other components concurrently, making DMA even more effective.
How a modern computer system works
Virtualization
• Virtualization is a technology that allows us to abstract the hardware of a single computer (the CPU, memory, disk drives, network interface cards, and so forth) into several different execution environments, thereby creating the illusion that each separate environment is running on its own private computer.
• These environments can be viewed as different individual operating systems that may be running at the same time and interacting with each other.
• A user of a virtual machine can switch among the various operating systems in the same way a user can switch among the various processes running concurrently in a single operating system.
A computer running (a) a single operating system and (b) three virtual machines.
• In this setup, Windows served as the host operating system, while the VMware application acted as the virtual machine manager (VMM).
• The VMM is responsible for running guest operating systems, managing their resource usage, and ensuring that each guest remains isolated from the others.
• Despite modern operating systems being fully capable of reliably running multiple applications, the adoption of virtualization continues to rise.
• On laptops and desktops, a VMM allows users to install various operating systems for experimentation or to run applications designed for different systems.
Kernel Data Structures
1. Lists, Stacks, and Queues
• An array is a simple data structure in which each element can be accessed directly.
• After arrays, lists are perhaps the most fundamental data structures in computer science.
• Whereas each item in an array can be accessed directly, the items in a list must be accessed in a particular order.
• That is, a list represents a collection of data values as a sequence.
• The most common method for implementing this structure is a linked list, in which items are linked to one another.
• Linked lists are of several types:
  • In a singly linked list, each item points to its successor.
  • In a doubly linked list, a given item can refer either to its predecessor or to its successor.
  • In a circularly linked list, the last element in the list refers to the first element, rather than to null.
• Linked lists accommodate items of varying sizes and allow easy insertion and deletion of items.
• One potential disadvantage of using a list is that performance for retrieving a specified item in a list of size n is linear—O(n), as it requires potentially traversing all n elements in the worst case.
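As a concrete illustration, a minimal C sketch of a singly linked list, with the O(1) head insertion and the O(n) search noted above (error handling omitted for brevity):

    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;          /* each item points to its successor */
    };

    /* Insert at the head: O(1), no shifting of elements as in an array. */
    struct node *push_front(struct node *head, int value) {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = head;
        return n;                   /* new head of the list */
    }

    /* Linear search: O(n) in the worst case, as noted above. */
    struct node *find(struct node *head, int value) {
        for (struct node *p = head; p != NULL; p = p->next)
            if (p->value == value)
                return p;
        return NULL;
    }

    int main(void) {
        struct node *head = NULL;
        head = push_front(head, 2);
        head = push_front(head, 1);     /* list is now 1 -> 2 */
        return find(head, 2) != NULL ? 0 : 1;
    }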
• A stack is a sequentially ordered data structure that uses the last in,
first out (LIFO) principle for adding and removing items, meaning that
the last item placed onto a stack is the first item removed.

• The operations for inserting and removing items from a stack are
known as push and pop, respectively.

• An operating system often uses a stack when invoking function calls.

• Parameters, local variables, and the return address are pushed onto
the stack when a function is called; returning from the function call
pops those items off the stack.
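A fixed-size stack in C, as a minimal sketch of the push and pop operations just described (the capacity is an assumption for brevity):

    #define STACK_MAX 64

    struct stack {
        int items[STACK_MAX];
        int top;                             /* index of the next free slot */
    };

    /* Push: place an item on top of the stack. */
    int push(struct stack *s, int v) {
        if (s->top == STACK_MAX) return -1;  /* overflow */
        s->items[s->top++] = v;
        return 0;
    }

    /* Pop: remove and return the most recently pushed item (LIFO). */
    int pop(struct stack *s, int *v) {
        if (s->top == 0) return -1;          /* underflow */
        *v = s->items[--s->top];
        return 0;
    }

    int main(void) {
        struct stack s = { .top = 0 };
        push(&s, 1);
        push(&s, 2);
        int v;
        pop(&s, &v);                         /* v == 2: last in, first out */
        return v == 2 ? 0 : 1;
    }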
• A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO) principle: items are removed from a queue in the order in which they were inserted.
• There are many everyday examples of queues, including shoppers waiting in a checkout line at a store and cars waiting in line at a traffic signal.
• Queues are also quite common in operating systems—jobs that are sent to a printer are typically printed in the order in which they were submitted, for example.
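A fixed-capacity circular-buffer queue in C (a minimal sketch; the capacity is again an assumption for brevity):

    #define QUEUE_MAX 64

    struct queue {
        int items[QUEUE_MAX];
        int head, tail, count;
    };

    /* Enqueue at the tail: items leave in the order they arrived. */
    int enqueue(struct queue *q, int v) {
        if (q->count == QUEUE_MAX) return -1;   /* full */
        q->items[q->tail] = v;
        q->tail = (q->tail + 1) % QUEUE_MAX;
        q->count++;
        return 0;
    }

    /* Dequeue from the head: the oldest item is removed first (FIFO). */
    int dequeue(struct queue *q, int *v) {
        if (q->count == 0) return -1;           /* empty */
        *v = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_MAX;
        q->count--;
        return 0;
    }

    int main(void) {
        struct queue q = {0};
        enqueue(&q, 1);
        enqueue(&q, 2);
        int v;
        dequeue(&q, &v);                        /* v == 1: first in, first out */
        return v == 1 ? 0 : 1;
    }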
2. Trees
• A tree is a data structure that can be used to represent data
hierarchically.
• Data values in a tree structure are linked through parent–child
relationships.
• In a general tree, a parent may have an unlimited number of children.
• In a binary tree, a parent may have at most two children, which we
term the left child and the right child.
• A binary search tree additionally requires an ordering between the
parent’s two children in which left child <= right child.
3. Hash Functions and Maps
• A hash function takes data as its input, performs a numeric operation on the data, and returns a numeric value. This numeric value can then be used as an index into a table (typically an array) to quickly retrieve the data.
• Whereas searching for a data item through a list of size n can require up to O(n) comparisons, using a hash function for retrieving data from a table can be as good as O(1), depending on implementation details.
• Because of this performance, hash functions are used extensively in operating systems.
• One use of a hash function is to implement a hash map, which associates (or maps) [key:value] pairs using a hash function.
• Once the mapping is established, we can apply the hash function to the key to obtain the value from the hash map.
• For example, suppose that a user name is mapped to a password. Password authentication then proceeds as follows: a user enters her user name and password. The hash function is applied to the user name to retrieve the stored password, which is then compared with the password the user entered.
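A minimal string-hash sketch in C (a djb2-style function; the table size and its use as a map index are illustrative assumptions, not an OS API):

    #include <stdio.h>

    #define TABLE_SIZE 101

    unsigned long hash(const char *key) {
        unsigned long h = 5381;
        for (; *key; key++)
            h = h * 33 + (unsigned char)*key;  /* numeric operation on the data */
        return h % TABLE_SIZE;                 /* index into the table */
    }

    int main(void) {
        /* The same key always hashes to the same slot, giving O(1) lookup. */
        printf("slot for \"alice\": %lu\n", hash("alice"));
        return 0;
    }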
4. Bitmaps
• A bitmap is a string of n binary digits that can be used to represent the status of n items.
• For example, suppose we have several resources, and the availability of each resource is indicated by the value of a binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice versa).
• The value of the ith position in the bitmap is associated with the ith resource. As an example, consider the bitmap shown below:

001011101

Resources 2, 4, 5, 6, and 8 are unavailable; resources 0, 1, 3, and 7 are available.
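A hedged C sketch of bitmap manipulation, assuming (as in the example above) that 0 means available; the helper names are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t bitmap;                 /* tracks up to 32 resources; bit i = resource i */

    void mark_unavailable(int i) { bitmap |=  (UINT32_C(1) << i); }
    void mark_available(int i)   { bitmap &= ~(UINT32_C(1) << i); }
    bool is_available(int i)     { return (bitmap & (UINT32_C(1) << i)) == 0; }

    int main(void) {
        mark_unavailable(2);                /* resource 2 is now in use */
        return is_available(0) && !is_available(2) ? 0 : 1;
    }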
Computing Environments
1. Traditional Computing
• As computing has evolved, the distinctions between traditional computing environments have blurred.
• In the past, typical office setups comprised PCs connected to a network, with servers handling file and print services. Remote access was cumbersome, and portability relied on laptops.
• Today, advancements in web technologies and increased WAN bandwidth have transformed these environments.
• Companies now use portals to provide web access to internal servers, while network computers (or thin clients) replace traditional workstations for enhanced security and maintenance.
• Mobile devices can sync with PCs and connect to wireless and cellular networks to access company web portals and other resources.
• In the latter half of the 20th century, computing resources were limited, with systems categorized as either batch or interactive.
• Batch systems processed jobs in bulk, while interactive systems awaited user input. To optimize resource usage, time-sharing systems allowed multiple users to share computing resources through scheduling algorithms.
2. Mobile Computing
• Mobile computing involves using handheld devices like smartphones and tablets, characterized by their portability and lightweight design.
• Historically, these devices sacrificed screen size, memory, and overall functionality compared to desktops and laptops in exchange for mobile access to services like email and web browsing.
• However, recent advancements have blurred the lines between the functionality of consumer laptops and tablets, with modern mobile devices offering capabilities that are sometimes impractical or unavailable on traditional computers.
• Today, mobile devices are utilized for various purposes, including playing music and videos, reading e-books, taking photos, and recording high-definition video.
• The growth of applications for these devices is significant, with developers leveraging unique features like GPS, accelerometers, and gyroscopes.
• For instance, GPS enables precise location tracking for navigation apps, while accelerometers allow users to interact with games through tilting and shaking the device.
• Augmented reality applications also benefit from these features, creating experiences difficult to replicate on laptops or desktops.
• Currently, the dominant operating systems in mobile computing are Apple iOS, designed for iPhone and iPad, and Google Android, which powers a wide range of smartphones and tablets.
3. Client–Server Computing
• Contemporary network architecture features arrangements in which server systems satisfy requests generated by client systems. This form of specialized distributed system is called a client–server system.

General structure of a client–server system

• Server systems can be broadly categorized as compute servers and file servers:
• The compute-server system provides an interface to which a client can send a request to perform an action (for example, read data). In response, the server executes the action and sends the results to the client. A server running a database that responds to client requests for data is an example of such a system.
• The file-server system provides a file-system interface where clients can create, update, read, and delete files. An example of such a system is a web server that delivers files to clients running web browsers. The actual contents of the files can vary greatly, ranging from traditional web pages to rich multimedia content such as high-definition video.
4. Peer-to-Peer Computing

• Another structure for a distributed system is the peer-to-peer (P2P) system model.
• In this model, clients and servers are not distinguished from one another.
• Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service.
• Peer-to-peer systems offer an advantage over traditional client–server systems. In a client–server system, the server is a bottleneck; but in a peer-to-peer system, services can be provided by several nodes distributed throughout the network.
• To participate in a peer-to-peer system, a node must first join the network of peers. Once a node has joined the network, it can begin providing services to—and requesting services from—other nodes in the network.
• Determining what services are available is accomplished in one of two general ways:
✓When a node joins a network, it registers its service with a centralized lookup service on the network. Any node desiring a specific service first contacts this centralized lookup service to determine which node provides the service. The remainder of the communication takes place between the client and the service provider.
✓An alternative scheme uses no centralized lookup service. Instead, a peer acting as a client must discover what node provides a desired service by broadcasting a request for the service to all other nodes in the network. The node (or nodes) providing that service responds to the peer making the request. To support this approach, a discovery protocol must be provided that allows peers to discover services provided by other peers in the network.

Peer-to-peer system with no centralized service


5. Cloud Computing
• Cloud computing is a technology that allows users to access and store data and applications over the internet instead of on a local computer or server.
• It provides flexible resources and services, such as storage, computing power, and applications, which can be scaled up or down based on demand.
• In some ways, it's a logical extension of virtualization, because it uses virtualization as a base for its functionality.
• Users pay per month based on how much of those resources they use.
• There are actually many types of cloud computing, including the following:
• Public cloud—a cloud available via the Internet to anyone willing to pay for the services
• Private cloud—a cloud run by a company for that company's own use
• Hybrid cloud—a cloud that includes both public and private cloud components
• Software as a service (SaaS)—one or more applications (such as word processors or spreadsheets) available via the Internet
• Platform as a service (PaaS)—a software stack ready for application use via the Internet (for example, a database server)
• Infrastructure as a service (IaaS)—servers or storage available over the Internet (for example, storage available for making backup copies of production data)
6. Real-Time Embedded Systems
• Real-time embedded systems are specialized computing systems that perform dedicated functions within a larger system and must operate within strict timing constraints.
• These systems are often found in applications where timely responses are critical, such as automotive control systems, medical devices, industrial automation, and consumer electronics.
• These devices are found everywhere, from car engines and manufacturing robots to optical drives and microwave ovens. They tend to have very specific tasks.
• The systems they run on are usually primitive, and so the operating systems provide limited features.
Chapter 2
Operating System structure
Operating-System Services
• An OS provides an environment for the execution of programs.
• It provides certain services to programs and the users of those programs.

1. User interface
• Almost all operating systems have a user interface (UI). This interface can take several forms. Most commonly, a graphical user interface (GUI) is used.
• Here, the interface is a window system with a mouse that serves as a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text.
• Mobile systems such as phones and tablets provide a touch-screen interface, enabling users to slide their fingers across the screen or press buttons on the screen to select choices.
• Another option is a command-line interface (CLI), which uses text commands and a method for entering them.
2. Program execution
• The system must be able to load a program into memory and to run
that program. The program must be able to end its execution, either
normally or abnormally.
3. I/O operations
• A running program may require I/O, which may involve a file or an
I/O device. For specific devices, special functions may be desired.

• For efficiency and protection, users usually cannot control I/O devices
directly. Therefore, the operating system must provide a means to do
I/O.
4. File-system manipulation
• The file system is of particular interest. Obviously, programs need to read and write files and directories.
• They also need to create and delete them by name, search for a given file, and list file information.
• Finally, some operating systems include permissions management to allow or deny access to files or directories based on file ownership.
• Many operating systems provide a variety of file systems, sometimes to allow personal choice and sometimes to provide specific features or performance characteristics.
5. Communications
• There are many circumstances in which one process needs to exchange information with another process.
• Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a network.
• Communications may be implemented via shared memory, in which two or more processes read and write to a shared section of memory, or message passing, in which packets of information in predefined formats are moved between processes by the operating system.
6. Error detection
• The operating system needs to detect and correct errors constantly.
• Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).
• For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing.
• Sometimes, it has no choice but to halt the system.
• At other times, it might terminate an error-causing process or return an error code to a process for the process to detect and possibly correct.
7. Resource allocation
• When there are multiple processes running at the same time, resources must be allocated to each of them.
• The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code.
• For instance, in determining how best to use the CPU, operating systems have CPU-scheduling routines that take into account the speed of the CPU, the processes that must be executed, the number of processing cores on the CPU, and other factors.
8. Logging
• We want to keep track of which programs use how much and what kinds of computer resources.
• This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics.
• Usage statistics may be a valuable tool for system administrators who wish to reconfigure the system to improve computing services.
9. Protection and security
• The owners of information stored in a multiuser or networked computer system may want to control use of that information.
• When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself.
• Protection involves ensuring that all access to system resources is controlled.
• Security of the system from outsiders is also important. Such security starts with requiring each user to authenticate himself or herself to the system, usually by means of a password, to gain access to system resources.
User and Operating-System Interface
1. Command Interpreters
• Command interpreters, often referred to as shells, are programs that provide a user interface to the operating system. They allow users to interact with the system by entering commands, which the interpreter processes and executes.
• The main function of the command interpreter is to get and execute the next user-specified command.
• Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The various shells available on UNIX systems operate in this way.
2. Graphical User Interface
• A Graphical User Interface (GUI) is a visual interface that allows users to interact with a computer system through graphical elements, rather than relying solely on text-based commands.
• GUIs are designed to be intuitive and user-friendly, making it easier for users to navigate and perform tasks.
• Advantages of GUIs:
  • User-Friendly
  • Visual Interaction
  • Multitasking
  • Visual Feedback
3. Touch-Screen Interface
• A touch-screen interface is a type of user interface that allows users
to interact with a device through touch gestures on a screen.

• This technology has become prevalent in smartphones, tablets,


kiosks, laptops, and various embedded systems, providing a more
direct and intuitive way to navigate and control devices.
4. Choice of Interface
• The choice of whether to use a command-line or GUI interface is mostly one of personal preference.
• System administrators who manage computers and power users who have deep knowledge of a system frequently use the command-line interface. For them, it is more efficient, giving them faster access to the activities they need to perform.
• Further, command-line interfaces usually make repetitive tasks easier, in part because they have their own programmability.
• For example, if a frequent task requires a set of command-line steps, those steps can be recorded into a file, and that file can be run just like a program.
• The user interface can vary from system to system and even from user
to user within a system; however, it typically is substantially removed
from the actual system structure. The design of a useful and intuitive
user interface is therefore not a direct function of the operating
system.

UNIT 2
CHAPTER 1
PROCESSES
Process
• A question that arises in discussing operating systems involves what to call all the CPU activities.
• Early computers were batch systems that executed jobs, followed by the emergence of time-shared systems that ran user programs, or tasks.
• Even on a single-user system, a user may be able to run several programs at one time: a word processor, a web browser, and an e-mail package.
• Even if a computer can execute only one program at a time, such as on an embedded device that does not support multitasking, the operating system may need to support its own internal programmed activities, such as memory management.
• A process is a program in execution.
• The status of the current activity of a process is represented by the value of the program counter and the contents of the processor's registers.
• The memory layout of a process is typically divided into multiple sections:
1) Text section—the executable code
2) Data section—global variables
3) Heap section—memory that is dynamically allocated during program run time
4) Stack section—temporary data storage when invoking functions (such as function parameters, return addresses, and local variables)
Layout of a process in memory
• Notice that the sizes of the text and data sections are fixed, as their sizes do not change during program run time.
• However, the stack and heap sections can shrink and grow dynamically during program execution.
• Each time a function is called, an activation record containing function parameters, local variables, and the return address is pushed onto the stack; when control is returned from the function, the activation record is popped from the stack.
• Similarly, the heap will grow as memory is dynamically allocated, and will shrink when memory is returned to the system.
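A small C program (a sketch; exact addresses are implementation-dependent) that places one variable in each of the sections described above:

    #include <stdio.h>
    #include <stdlib.h>

    int global_var = 42;                 /* data section: global variable */

    int main(void) {                     /* the code itself lives in the text section */
        int local_var = 7;               /* stack section: local variable */
        int *heap_var = malloc(sizeof *heap_var);  /* heap section: dynamic allocation */

        printf("data:  %p\n", (void *)&global_var);
        printf("stack: %p\n", (void *)&local_var);
        printf("heap:  %p\n", (void *)heap_var);

        free(heap_var);                  /* the heap shrinks when memory is returned */
        return 0;
    }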
• We emphasize that a program by itself is not a process.
• A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file).
• In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.
• A program becomes a process when an executable file is loaded into memory.
• Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences.
Process State
• As a process executes, it changes state.
• The state of a process is defined in part by the current activity of that process.
• A process may be in one of the following states:
➢New. The process is being created.
➢Running. Instructions are being executed.
➢Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
➢Ready. The process is waiting to be assigned to a processor.
➢Terminated. The process has finished execution.
Diagram of process state
Process Control Block
• Each process is represented in the operating system by a process control block (PCB)—also called a task control block.
• It contains many pieces of information associated with a specific process, including these:
✓Process state. The state may be new, ready, running, waiting, halted, and so on.
✓Program counter. The counter indicates the address of the next instruction to be executed for this process.
✓CPU registers. The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward when it is rescheduled to run.
✓CPU-scheduling information. This information includes a process priority and pointers to scheduling queues.
✓Memory-management information. This information may include such items as the values of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
✓Accounting information. This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
✓I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
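As a hedged illustration, the PCB fields above might be collected in a C struct like this; the field names and sizes are assumptions for teaching purposes, not any particular kernel's layout:

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process identifier */
        enum proc_state state;            /* process state */
        uint64_t        program_counter;  /* next instruction to execute */
        uint64_t        registers[16];    /* saved CPU registers */
        int             priority;         /* CPU-scheduling information */
        void           *page_table;       /* memory-management information */
        uint64_t        cpu_time_used;    /* accounting information */
        int             open_files[16];   /* I/O status information */
        struct pcb     *next;             /* link field for ready/wait queues */
    };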
Threads
• In a process, a thread refers to a single sequential activity being executed.
• The process model discussed so far has implied that a process is a program that performs a single thread of execution.
• For example, when a process is running a word-processor program, a single thread of instructions is being executed.
• This single thread of control allows the process to perform only one task at a time.
• Thus, the user cannot simultaneously type in characters and run the spell checker.
• Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time.
• This feature is especially beneficial on multicore systems, where multiple threads can run in parallel.
• A multithreaded word processor could, for example, assign one thread to manage user input while another thread runs the spell checker.
• Threads within the same process share certain resources like memory address space and system resources, while having their own stack, program counter, and set of registers.
• Threads play a crucial role in modern computing environments by enabling parallel execution of tasks, which can significantly enhance the performance and responsiveness of applications.
• They allow multiple activities within a single process to proceed concurrently, making efficient use of CPU resources, especially in systems with multiple processors or cores.
• There are two primary types of threads in operating systems (a multithreading sketch follows this list):
✓User-Level Threads: These are managed at the user level, without the kernel's knowledge, making them faster and more efficient to create and manage. However, they lack coordination with the kernel, which can lead to issues if a thread performs a blocking operation, potentially stalling the entire process.
✓Kernel-Level Threads: Managed by the operating system kernel, these threads are slower to create and manage but have the advantage of being fully recognized and scheduled by the kernel, allowing for better coordination and handling of blocking operations without affecting other threads.
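A minimal POSIX threads sketch of the word-processor example above; the two task functions are hypothetical placeholders (compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical tasks: in a real word processor these would be long-running. */
    void *handle_input(void *arg) { puts("handling user input");   return NULL; }
    void *spell_check(void *arg)  { puts("running spell checker"); return NULL; }

    int main(void) {
        pthread_t t1, t2;

        /* Two threads of the same process, sharing its address space. */
        pthread_create(&t1, NULL, handle_input, NULL);
        pthread_create(&t2, NULL, spell_check, NULL);

        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        return 0;
    }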
Advantages of Using Threads
• Responsiveness: A multithreaded application can continue running even if part of it is performing a long operation, thus improving user interaction.
• Resource Sharing: Threads within the same process can share resources like memory and files, reducing the overhead of resource allocation.
• Economy: It's more economical in terms of system resource consumption to create and manage threads as opposed to processes.
• Scalability: Multithreading can lead to better utilization of multiprocessor architectures, as threads can run in parallel on different processors.
• Efficient Communication: Since threads share the same memory space, inter-thread communication can be more efficient than inter-process communication.
Process Scheduling
• The objective of multiprogramming is to have some process running at all times so as to maximize CPU utilization.
• The objective of time sharing is to switch a CPU core among processes so frequently that users can interact with each program while it is running.
• To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on a core.
• Each CPU core can run one process at a time.
• For a system with a single CPU core, there will never be more than one process running at a time, whereas a multicore system can run multiple processes at one time.
• If there are more processes than cores, excess processes will have to wait until a core is free and can be rescheduled.
• The number of processes currently in memory is known as the degree of multiprogramming.
• Balancing the objectives of multiprogramming and time sharing also requires taking the general behavior of a process into account.
• In general, most processes can be described as either I/O bound or CPU bound.
• An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
• A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.
Scheduling Queues
• The processes that are entering the system are stored in the Job Queue.
• Processes that are ready and waiting to execute on a CPU's core are put into a Ready Queue.
• This queue is generally stored as a linked list; a ready-queue header contains pointers to the first PCB in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue.
The ready queue and wait queues
• The system also includes other queues.
• When a process is allocated a CPU core, it executes for a while and eventually terminates, is interrupted, or waits for the occurrence of a particular event, such as an I/O event.
• Suppose the process makes an I/O request to a device such as a disk. Since devices run significantly slower than processors, the process will have to wait for the I/O to become available.
• Processes that are waiting for a certain event to occur — such as completion of I/O — are placed in a wait queue.
• A common representation of process scheduling is a queueing
diagram.

• Two types of queues are present: the ready queue and a set of wait
queues.

• The circles represent the resources that serve the queues, and the
arrows indicate the flow of processes in the system.

• A new process is initially put in the ready queue. It waits there until it
is selected for execution, or dispatched.
Queueing-diagram representation of process scheduling
• Once the process is allocated a CPU core and is executing, one of several
events could occur:

✓The process could issue an I/O request and then be placed in an I/O wait queue.

✓The process could create a new child process and then be placed in a wait queue
while it awaits the child’s termination.

✓The process could be removed forcibly from the core, as a result of an interrupt
or having its time slice expire, and be put back in the ready queue.
• In the first two cases, the process eventually switches from the
waiting state to the ready state and is then put back in the ready
queue.

• A process continues this cycle until it terminates, at which time it


is removed from all queues and has its PCB and resources
deallocated.
CPU Scheduling
• CPU scheduling is a critical function within an operating system that
determines the order and manner in which processes access the
central processing unit (CPU).

• The goal of CPU scheduling is to optimize the use of the CPU and
ensure that all processes are executed efficiently and fairly.

• The aim is to keep the CPU as busy as possible, ideally striving for
100% utilization.
• A process migrates among the ready queue and various wait queues throughout its lifetime.
• The role of the CPU scheduler is to select from among the processes that are in the ready queue and allocate a CPU core to one of them.
• The CPU scheduler must select a new process for the CPU frequently.
• An I/O-bound process may execute for only a few milliseconds before waiting for an I/O request.
• Although a CPU-bound process will require a CPU core for longer durations, the scheduler is unlikely to grant the core to a process for an extended period.
• Instead, it is likely designed to forcibly remove the CPU from a process and schedule another process to run.
• Therefore, the CPU scheduler executes at least once every 100 milliseconds, although typically much more frequently.
• Some operating systems have an intermediate form of scheduling, known as swapping, whose key idea is that sometimes it can be advantageous to remove a process from memory and thus reduce the degree of multiprogramming.
• Later, the process can be reintroduced into memory, and its execution can be continued where it left off.
• This scheme is known as swapping because a process can be "swapped out" from memory to disk, where its current status is saved, and later "swapped in" from disk back to memory, where its status is restored.
• Swapping in an operating system is a process that moves data or programs between the computer's main memory (RAM) and secondary storage (usually a hard disk or SSD). This helps manage the limited space in RAM and allows the system to run more programs than it could otherwise handle simultaneously.
• Swapping is typically only necessary when memory has been overcommitted and must be freed up.
Context Switch
• Interrupts cause the operating system to change a CPU core from its current task and to run a kernel routine.
• Such operations happen frequently on general-purpose systems.
• Context switching in an operating system involves saving the context or state of a running process so that it can be restored later, and then loading the context or state of another process and running it.
• Context switching refers to the method used by the system to change a process from one state to another using the CPUs present in the system to perform its job.
• Switching the CPU core to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch.
• When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
• Context-switch time is pure overhead, because the system does no useful work while switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions.
• A typical speed is a few microseconds.

Diagram showing context switch from process to process
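A hedged C sketch of the save/restore pair described above; in a real kernel these routines are written in assembly, and all names here are illustrative:

    struct pcb { unsigned long pc; unsigned long regs[16]; };  /* simplified PCB */

    static void save_context(struct pcb *p) { (void)p; /* real kernels save regs/PC in assembly */ }
    static void load_context(struct pcb *p) { (void)p; /* ... and restore them here */ }

    void context_switch(struct pcb *old_p, struct pcb *new_p) {
        save_context(old_p);   /* state save of the current process into its PCB */
        load_context(new_p);   /* state restore of the newly scheduled process */
    }                          /* pure overhead: no useful user work happens here */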
Unit 3
Chapter 1
CPU Scheduling
CPU Scheduling
• The process of assigning CPU time to various processes is known as CPU Scheduling.
• It allows the CPU to execute a process while another process is kept on standby (or is waiting for other system resources).
• Scheduling improves CPU utilization by reducing its idle time.
• It enhances the efficiency of the system by making it faster and more responsive.
Terminologies Used in CPU Scheduling
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which the process completes its execution.
• Burst Time: The amount of CPU time the process requires to complete its execution.
• Turn Around Time: Time difference between completion time and arrival time.
➢Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T): Time difference between turn around time and burst time.
➢Waiting Time = Turn Around Time – Burst Time
Scheduling Criteria
• Different CPU scheduling algorithms have different structures, and the choice of a particular algorithm depends on a variety of factors. Many criteria have been suggested for comparing CPU scheduling algorithms.
• The criteria include the following:
1. CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU usage can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the system load.
2. Throughput: The number of processes performed and completed per unit time is called throughput. Throughput may vary depending on the length or duration of the processes.
3. Turnaround Time: For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the time spent waiting for memory access, waiting in the ready queue, executing on the CPU, and waiting for I/O.
4. Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started executing. It affects only the waiting time of the process, i.e., the time the process spends waiting in the ready queue.
5. Response Time: Response time is the time from when the process enters the ready state until it gets the CPU for the first time. For example, here we use the First Come First Serve CPU scheduling algorithm for the 3 processes below:
• Here, the response times of the 3 processes are:
• P1: 0 ms
• P2: 7 ms, because P2 has to wait 8 ms for the execution of P1 before it gets the CPU for the first time. Since the arrival time of P2 is 1 ms, the response time is 8 - 1 = 7 ms.
• P3: 13 ms, because P3 has to wait for the execution of both P1 and P2, i.e., until 8 + 7 = 15 ms, before the CPU is allocated to it for the first time. Since the arrival time of P3 is 2 ms, the response time is 15 - 2 = 13 ms.
• Response time = Time at which the process gets the CPU for the first time - Arrival time
Types of CPU scheduling
There are two primary types of CPU scheduling:

➢Preemptive
• Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
• The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away, and the process is placed back in the ready queue if it still has CPU burst time remaining.
• That process stays in the ready queue till it gets its next chance to execute.

➢Non-preemptive
• Non-preemptive scheduling is used when a process terminates, or a process switches from the running state to the waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets terminated or reaches a waiting state.
• Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
• Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.
Comparison of preemptive and non-preemptive scheduling:

• Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
• Interrupt: A preemptively scheduled process can be interrupted in between; a non-preemptively scheduled process cannot be interrupted until it terminates itself or its time is up.
• Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.
• Scheduling Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not.
• CPU Utilization: High in preemptive scheduling; low in non-preemptive scheduling.
• Waiting Time: Less in preemptive scheduling; high in non-preemptive scheduling.
• Response Time: Less in preemptive scheduling; high in non-preemptive scheduling.
• Decision Making: In preemptive scheduling, decisions are made by the scheduler and are based on priority and time-slice allocation. In non-preemptive scheduling, decisions are made by the process itself, and the OS just follows the process's instructions.
• Process Control: The OS has greater control over the scheduling of processes under preemptive scheduling, and less control under non-preemptive scheduling.
• Context-Switch Overhead: Preemptive scheduling has higher overhead due to frequent context switching; non-preemptive scheduling has lower overhead since context switching is less frequent.
• Examples: Preemptive — Round Robin and Shortest Remaining Time First. Non-preemptive — First Come First Serve and Shortest Job First.
Dispatcher
• Another component involved in the CPU-scheduling function is the dispatcher.
• The dispatcher is the module that gives control of the CPU's core to the process selected by the CPU scheduler.
• This function involves the following:
✓Switching context from one process to another
✓Switching to user mode
✓Jumping to the proper location in the user program to resume that program
• The dispatcher should be as fast as possible, since it is invoked during every context switch.
• The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

The role of the dispatcher.


Scheduling Algorithms
1. First-Come First-Serve (FCFS) Scheduling
• By far the simplest CPU-scheduling algorithm.
• The process that requests the CPU first is allocated the CPU first.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
• The running process is then removed from the queue.
• The average waiting time under the FCFS policy is often quite long.
• To understand this, consider the following set of processes that arrive at time 0:

Process | Burst Time (ms)
P1 | 24
P2 | 3
P3 | 3

• If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:

P1 (0–24) | P2 (24–27) | P3 (27–30)

Waiting time for P1 = 0 ms
Waiting time for P2 = 24 ms
Waiting time for P3 = 27 ms
Average Waiting Time = (0+24+27)/3 = 17 ms
• However, if the processes arrive in the order P2, P3, P1:

P2 (0–3) | P3 (3–6) | P1 (6–30)

Waiting time for P1 = 6 ms
Waiting time for P2 = 0 ms
Waiting time for P3 = 3 ms
Average Waiting Time = (6+0+3)/3 = 3 ms

• This reduction is substantial. Thus, the average waiting time under the FCFS policy is generally not minimal and may vary substantially if the processes' burst times vary greatly.
• The FCFS scheduling algorithm is non-preemptive.
• Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O.
• The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
• It would be disastrous to allow one process to keep the CPU for an extended period.
Problem 1
Consider the set of 5 processes whose arrival time and burst time are
given below:
Process ID Arrival Time Burst Time
P1 4 5
P2 6 4
P3 0 3
P4 6 2
P5 5 4

Calculate the average waiting time and average turnaround time, if


FCFS Scheduling Algorithm is followed.
Solution (FCFS, by arrival time): the Gantt chart is

P3 (0–3) | idle (3–4) | P1 (4–9) | P5 (9–13) | P2 (13–17) | P4 (17–19)

The idle slot from 3 to 4 (the shaded box in the original chart) represents the idle time of the CPU, since no process has arrived yet.

Turnaround Time = Completion Time – Arrival Time
Waiting Time = Turnaround Time – Burst Time

Process ID | Completion Time | Turnaround Time | Waiting Time
P1 | 9 | 9 – 4 = 5 | 5 – 5 = 0
P2 | 17 | 17 – 6 = 11 | 11 – 4 = 7
P3 | 3 | 3 – 0 = 3 | 3 – 3 = 0
P4 | 19 | 19 – 6 = 13 | 13 – 2 = 11
P5 | 13 | 13 – 5 = 8 | 8 – 4 = 4

Average Turnaround Time = (5+11+3+13+8) / 5 = 40 / 5 = 8 units
Average Waiting Time = (0+7+0+11+4) / 5 = 22 / 5 = 4.4 units
Problem 2
The Arrival time and Burst time for a set of 6 processes are given in the
table below:
Process ID Arrival Time Burst Time
P1 0 3
P2 1 2
P3 2 1
P4 3 4
P5 4 5
P6 5 2
If the FCFS algorithm is followed and there is 1 unit of overhead in
scheduling each process, find the efficiency of the algorithm.
Solution: The useful time is the total burst time = 3 + 2 + 1 + 4 + 5 + 2 = 17 units.
With 1 unit of overhead (δ = 1) in scheduling each of the 6 processes:

Wasted Time = 6 × δ = 6 × 1 = 6 units
Total Time = 17 + 6 = 23 units
Useful Time = 23 – 6 = 17 units

Efficiency = Useful Time / Total Time = 17 / 23 = 0.7391 = 73.91%
Multilevel Queue Scheduling Algorithm
• A multilevel Queue scheduling partitions the ready queue into several
separate queues.
Multilevel Feedback Queue Scheduling Algorithm
Module 3
Chapter 2
Deadlocks
Introduction
• In a multiprogramming environment, several threads may compete
for a finite number of resources. A thread requests resources; if the
resources are not available at that time, the thread enters a waiting
state. Sometimes, a waiting thread can never again change state,
because the resources it has requested are held by other waiting
threads. This situation is called a deadlock.

• A deadlock is a situation in computing where two or more processes are unable to proceed because each is waiting for the other to release a resource. In essence, it's a standstill where no process can continue, and it can lead to significant performance issues if not managed properly.
System Model
• A system has a finite number of resources that can be allocated among competing threads, which may include various types such as CPU cycles, files, and I/O devices.
• Each resource type consists of identical instances (e.g., four CPUs or two network interfaces).
• When a thread requests a resource, granting any instance of that type should satisfy the request; if it does not, the instances are not identical, and the resource classes have not been defined properly.
• Synchronization tools like mutex locks and semaphores are also considered system resources and are common sources of deadlock.
• Threads must request resources before use and release them afterward, ensuring that requests do not exceed available resources (e.g., not requesting two network interfaces if only one exists).
Under the normal mode of operation, a thread may utilize a resource in only the following sequence:
• Request. The thread requests the resource. If the request cannot be granted immediately (for example, if a mutex lock is currently held by another thread), then the requesting thread must wait until it can acquire the resource.
• Use. The thread can operate on the resource (for example, if the resource is a mutex lock, the thread can access its critical section).
• Release. The thread releases the resource.
• For each use of a kernel-managed resource by a thread, the operating system checks to make sure that the thread has requested and has been allocated the resource.
• A system table records whether each resource is free or allocated.
• For each resource that is allocated, the table also records the thread to which it is allocated.
• If a thread requests a resource that is currently allocated to another thread, it can be added to a queue of threads waiting for this resource.
Deadlock Characterization
• A deadlock situation can arise if the following four conditions hold simultaneously in a system (a two-thread instance is sketched after this list):
➢Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only one thread at a time can use the resource. If another thread requests that resource, the requesting thread must be delayed until the resource has been released.
➢Hold and wait. A thread must be holding at least one resource and waiting to acquire additional resources that are currently being held by other threads.
➢No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the thread holding it, after that thread has completed its task.
➢Circular wait. A set {T0, T1, ..., Tn} of waiting threads must exist such that T0 is waiting for a resource held by T1, T1 is waiting for a resource held by T2, ..., Tn−1 is waiting for a resource held by Tn, and Tn is waiting for a resource held by T0.
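A hedged C sketch of the circular-wait condition with POSIX mutexes: two threads acquire the same two locks in opposite orders, so each may end up holding one lock while waiting for the other. Deadlock is possible, not guaranteed, on any given run.

    #include <pthread.h>

    pthread_mutex_t first  = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t second = PTHREAD_MUTEX_INITIALIZER;

    void *thread_one(void *arg) {
        pthread_mutex_lock(&first);    /* holds first (mutual exclusion, hold and wait) */
        pthread_mutex_lock(&second);   /* ... and waits for second */
        /* critical section */
        pthread_mutex_unlock(&second);
        pthread_mutex_unlock(&first);
        return NULL;
    }

    void *thread_two(void *arg) {
        pthread_mutex_lock(&second);   /* holds second ... */
        pthread_mutex_lock(&first);    /* ... and waits for first: circular wait */
        /* critical section */
        pthread_mutex_unlock(&first);
        pthread_mutex_unlock(&second);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread_one, NULL);
        pthread_create(&t2, NULL, thread_two, NULL);
        pthread_join(t1, NULL);        /* may never return if deadlock occurs */
        pthread_join(t2, NULL);
        return 0;
    }

Imposing a single global lock ordering (both threads take first, then second) removes the circular wait and with it the possibility of this deadlock.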
Livelock example
• Livelock is a situation in an operating system where two or more
processes continuously change their state in response to each other
without making any actual progress.

• Unlike a deadlock, where processes are stuck waiting for each other
indefinitely, in a livelock, the processes remain active and responsive
but fail to advance in their tasks.
Resource allocation graph
• A Resource Allocation Graph (RAG) is a graphical representation used in operating systems to illustrate the allocation of resources to processes and to help in detecting deadlocks.
• It visually maps the relationships between processes and the resources they require, making it easier to understand resource allocation and contention.
• This graph consists of a set of vertices V and a set of edges E.
• The set of vertices V is partitioned into two different types of nodes: T = {T1, T2, ..., Tn}, the set consisting of all the active threads in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.
• A directed edge from thread Ti to resource type Rj is denoted by Ti → Rj; it signifies that thread Ti has requested an instance of resource type Rj and is currently waiting for that resource.
• A directed edge from resource type Rj to thread Ti is denoted by Rj → Ti; it signifies that an instance of resource type Rj has been allocated to thread Ti.
• A directed edge Ti → Rj is called a request edge; a directed edge Rj → Ti is called an assignment edge.
• Pictorially, we represent each thread Ti as a circle and each resource type Rj as a rectangle.
Example
• The resource-allocation graph shown in Figure 8.4 depicts the following
situation.

➢The sets T, R, and E:
◦ T = {T1, T2, T3}
◦ R = {R1, R2, R3, R4}
◦ E = {T1 → R1, T2 → R3, R1 → T2, R2 → T2, R2 → T1, R3 → T3}

• Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4

• Thread states:
◦ Thread T1 is holding an instance of resource type R2 and is waiting for
an instance of resource type R1.
◦ Thread T2 is holding an instance of R1 and an instance of R2 and is
waiting for an instance of R3.
◦ Thread T3 is holding an instance of R3.
• Given the definition of a resource-allocation graph, it can be shown
that, if the graph contains no cycles, then no thread in the system is
deadlocked.

• If the graph does contain a cycle, then a deadlock may exist.

• If each resource type has several instances, then a cycle does not
necessarily imply that a deadlock has occurred.

• To illustrate this concept, we return to the resource-allocation graph
depicted in the figure above.

• Suppose that thread T3 requests an instance of resource type R2.

• Since no resource instance is currently available, we add a request
edge T3 → R2 to the graph. At this point, two minimal cycles exist in
the system:

T1 → R1 → T2 → R3 → T3 → R2 → T1
T2 → R3 → T3 → R2 → T2

Threads T1, T2, and T3 are deadlocked. Thread T2 is waiting for the
resource R3, which is held by thread T3. Thread T3 is waiting for either
thread T1 or thread T2 to release resource R2. In addition, thread T1 is
waiting for thread T2 to release resource R1.
• Now consider the resource-allocation graph in Figure 8.6. In this
example, we also have a cycle:

T1 → R1 → T3 → R2 → T1

However, there is no deadlock. Observe that thread T4 may release its
instance of resource type R2. That resource can then be allocated to
T3, breaking the cycle.

In summary, if a resource-allocation graph does not have a cycle, then
the system is not in a deadlocked state. If there is a cycle, then the
system may or may not be in a deadlocked state. This observation is
important when we deal with the deadlock problem.
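Because a cycle is necessary for deadlock, a detector only needs a depth-first search over the graph. The C sketch below encodes the Figure 8.4 edges under an assumed node numbering (T1–T3 as 0–2, R1–R4 as 3–6; the numbering is an illustration, not from the text). It reports no cycle until the request edge T3 → R2 is added. Keep in mind that with several instances per resource type, a detected cycle still only means a deadlock may exist.

    #include <stdbool.h>
    #include <stdio.h>

    enum { T1, T2, T3, R1, R2, R3, R4, N_NODES };   /* assumed numbering */

    bool adj[N_NODES][N_NODES];   /* adj[u][v]: edge u -> v       */
    bool visited[N_NODES];        /* node fully explored          */
    bool on_path[N_NODES];        /* node on the current DFS path */

    /* A back edge to a node already on the DFS path means a cycle. */
    bool dfs(int u) {
        visited[u] = on_path[u] = true;
        for (int v = 0; v < N_NODES; v++) {
            if (!adj[u][v]) continue;
            if (on_path[v] || (!visited[v] && dfs(v)))
                return true;
        }
        on_path[u] = false;
        return false;
    }

    bool has_cycle(void) {
        for (int u = 0; u < N_NODES; u++)
            visited[u] = on_path[u] = false;
        for (int u = 0; u < N_NODES; u++)
            if (!visited[u] && dfs(u))
                return true;
        return false;
    }

    int main(void) {
        /* Edges of Figure 8.4: request edges Ti -> Rj,
           assignment edges Rj -> Ti.                    */
        adj[T1][R1] = adj[T2][R3] = true;
        adj[R1][T2] = adj[R2][T2] = adj[R2][T1] = adj[R3][T3] = true;
        printf("cycle? %s\n", has_cycle() ? "yes" : "no");   /* no  */

        adj[T3][R2] = true;        /* T3 requests an instance of R2 */
        printf("cycle? %s\n", has_cycle() ? "yes" : "no");   /* yes */
        return 0;
    }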
Methods for Handling Deadlocks
• Generally speaking, we can deal with the deadlock problem in one of
three ways:

✓We can ignore the problem altogether and pretend that deadlocks
never occur in the system.

✓We can use a protocol to prevent or avoid deadlocks, ensuring
that the system will never enter a deadlocked state.

✓We can allow the system to enter a deadlocked state, detect it,
and recover.

• Deadlock prevention provides a set of methods to ensure that at least
one of the necessary conditions cannot hold. These methods prevent
deadlocks by constraining how requests for resources can be made.

• Deadlock avoidance requires that the operating system be given
additional information in advance concerning which resources a
thread will request and use during its lifetime.

• With this additional knowledge, the operating system can decide for
each request whether or not the thread should wait.

• To decide whether the current request can be satisfied or must be
delayed, the system must consider the resources currently available,
the resources currently allocated to each thread, and the future
requests and releases of each thread.
Deadlock Prevention
1. Mutual Exclusion
• The mutual-exclusion condition must hold. That is, at least one
resource must be non-sharable.

• Sharable resources do not require mutually exclusive access and thus
cannot be involved in a deadlock.

• Read-only files are a good example of a sharable resource. If several
threads attempt to open a read-only file at the same time, they can
be granted simultaneous access to the file.

• A thread never needs to wait for a sharable resource.

• In general, however, we cannot prevent deadlocks by denying the
mutual-exclusion condition, because some resources are intrinsically
non-sharable.
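The read-only-file case corresponds closely to a readers-writer lock. In the C sketch below (an illustrative assumption using POSIX rwlocks, not an example from the text), any number of reader threads may hold the lock simultaneously, so threads that only read never wait for one another and cannot deadlock among themselves:

    #include <pthread.h>
    #include <stdio.h>

    pthread_rwlock_t file_lock = PTHREAD_RWLOCK_INITIALIZER;
    char file_data[4096];          /* stands in for a read-only file */

    /* Many readers may hold the lock at the same time, like several
       threads opening a read-only file for simultaneous access.    */
    void *reader(void *arg) {
        pthread_rwlock_rdlock(&file_lock);
        printf("first byte = %d\n", file_data[0]);
        pthread_rwlock_unlock(&file_lock);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2;
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }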
2. Hold and Wait
• To ensure that the hold-and-wait condition never occurs in the
system, we must guarantee that, whenever a thread requests a
resource, it does not hold any other resources.

• One protocol that we can use requires each thread to request and be
allocated all its resources before it begins execution (a sketch of this
protocol follows below). This is, of course, impractical for most
applications due to the dynamic nature of resource requests.

• An alternative protocol allows a thread to request resources only
when it has none. A thread may request some resources and use
them. Before it can request any additional resources, it must release
all the resources that it is currently allocated.

• Both these protocols have two main disadvantages. First, resource
utilization may be low, since resources may be allocated but unused
for a long period. Second, starvation is possible.
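A sketch of the first protocol, under the assumption that a thread needs a scanner and a printer (hypothetical resource names): a single allocator lock and condition variable let the thread wait until both resources are free and then take them together, so it never holds one resource while waiting for the other.

    #include <pthread.h>
    #include <stdbool.h>

    pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  alloc_cond = PTHREAD_COND_INITIALIZER;
    bool scanner_free = true, printer_free = true;

    /* Wait atomically until BOTH resources are free, then take them
       together: the thread never holds one while waiting for the other. */
    void acquire_scanner_and_printer(void) {
        pthread_mutex_lock(&alloc_lock);
        while (!(scanner_free && printer_free))
            pthread_cond_wait(&alloc_cond, &alloc_lock);
        scanner_free = printer_free = false;
        pthread_mutex_unlock(&alloc_lock);
    }

    void release_scanner_and_printer(void) {
        pthread_mutex_lock(&alloc_lock);
        scanner_free = printer_free = true;
        pthread_cond_broadcast(&alloc_cond);   /* wake all waiting jobs */
        pthread_mutex_unlock(&alloc_lock);
    }

    void *copy_job(void *arg) {
        acquire_scanner_and_printer();
        /* ... scan the document, then print it ... */
        release_scanner_and_printer();
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, copy_job, NULL);
        pthread_join(t, NULL);
        return 0;
    }

The utilization cost noted above is visible here: the printer is claimed before scanning even begins and sits idle until it is actually needed.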
3. No Preemption
• The third necessary condition for deadlocks is that there be no
preemption of resources that have already been allocated.

• To ensure that this condition does not hold, we can use the following
protocol. If a thread is holding some resources and requests another
resource that cannot be immediately allocated to it (that is, the
thread must wait), then all resources the thread is currently holding
are preempted. In other words, these resources are implicitly
released (a sketch of this protocol follows below).

• The preempted resources are added to the list of resources for which
the thread is waiting. The thread will be restarted only when it can
regain its old resources, as well as the new ones that it is requesting.

• Alternatively, if a thread requests some resources, we first check
whether they are available. If they are, we allocate them.

• If they are not, we check whether they are allocated to some other
thread that is waiting for additional resources.

• If so, we preempt the desired resources from the waiting thread and
allocate them to the requesting thread.

• If the resources are neither available nor held by a waiting thread, the
requesting thread must wait.

• This protocol is often applied to resources whose state can be easily
saved and restored later, such as CPU registers and database
transactions.
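One way to realize the first protocol for locks is sketched below (an assumption, not the text's implementation): a thread tries to collect its whole set of locks, and the moment one trylock fails, everything already held is released, mirroring the implicit preemption described above. Like all trylock back-off schemes, this trades deadlock for a risk of livelock or starvation.

    #include <pthread.h>
    #include <sched.h>

    enum { NLOCKS = 3 };
    pthread_mutex_t locks[NLOCKS] = {
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER
    };

    /* Try to collect the whole set of locks; if one cannot be taken,
       every lock already held is "preempted" (released) and the thread
       starts over, regaining its old locks plus the new one.          */
    void acquire_all(void) {
        for (;;) {
            int got = 0;
            while (got < NLOCKS && pthread_mutex_trylock(&locks[got]) == 0)
                got++;
            if (got == NLOCKS)
                return;                             /* holds everything  */
            while (got-- > 0)
                pthread_mutex_unlock(&locks[got]);  /* implicit release  */
            sched_yield();                          /* back off, retry   */
        }
    }

    void release_all(void) {
        for (int i = NLOCKS - 1; i >= 0; i--)
            pthread_mutex_unlock(&locks[i]);
    }

    int main(void) {
        acquire_all();
        /* ... use the resources ... */
        release_all();
        return 0;
    }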
4. Circular Wait
• The three options presented thus far for deadlock prevention are
generally impractical in most situations.

• However, the fourth and final condition for deadlocks, the circular-wait
condition, presents an opportunity for a practical solution by
invalidating one of the necessary conditions.

• One way to ensure that this condition never holds is to impose a total
ordering of all resource types and to require that each thread
requests resources in an increasing order of enumeration.

• To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types.
We assign to each resource type a unique integer number, which
allows us to compare two resources and to determine whether one
precedes another in our ordering.

• Formally, we define a one-to-one function F: R → N, where N is the
set of natural numbers (a sketch of lock ordering in code follows below).
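In code, F is simply a number attached to each lock, and "request in increasing order of enumeration" becomes "always acquire the lower-numbered lock first." A minimal C sketch, with the numbering scheme as an illustrative assumption:

    #include <pthread.h>

    /* Realizing F: give every lock a unique order number. */
    typedef struct {
        pthread_mutex_t mutex;
        int order;            /* F(resource): position in the total order */
    } ordered_lock_t;

    ordered_lock_t r1 = { PTHREAD_MUTEX_INITIALIZER, 1 };
    ordered_lock_t r2 = { PTHREAD_MUTEX_INITIALIZER, 2 };

    /* Always acquire the lock with the smaller F-value first. */
    void lock_pair(ordered_lock_t *a, ordered_lock_t *b) {
        if (a->order < b->order) {
            pthread_mutex_lock(&a->mutex);
            pthread_mutex_lock(&b->mutex);
        } else {
            pthread_mutex_lock(&b->mutex);
            pthread_mutex_lock(&a->mutex);
        }
    }

    void unlock_pair(ordered_lock_t *a, ordered_lock_t *b) {
        pthread_mutex_unlock(&a->mutex);   /* unlock order is immaterial */
        pthread_mutex_unlock(&b->mutex);
    }

    int main(void) {
        lock_pair(&r2, &r1);   /* argument order no longer matters */
        unlock_pair(&r2, &r1);
        return 0;
    }

If every thread takes pairs of locks only through lock_pair, waits can only point from lower to higher F-values, so a circular wait, and hence this kind of deadlock, cannot form.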