Operating Systems

Basic Introduction

What do you mean by an operating system? What are its basic functions?
An Operating System (OS) is software that manages and handles the hardware and software resources of
a computer system. It provides the interface between the users of a computer and the computer hardware,
and it is responsible for managing and controlling all activities and the sharing of computer
resources. An operating system is low-level software that provides basic functions such as processor
management, memory management, and error detection.

Characteristics of Operating Systems


Let us now discuss some of the important characteristic features of operating systems:
Device Management: The operating system keeps track of all devices. It is also called the
Input/Output controller because it decides which process gets a device, when, and for how long.
File Management: It allocates and de-allocates files and directories, and decides which process gets
which resource.
Job Accounting: It keeps track of the time and resources used by various jobs or users.
Error-detecting Aids: These include the production of dumps, traces, error messages, and other
debugging and error-detecting methods.
Memory Management: It keeps track of primary memory, such as which part is in use by whom and
which part is free, and it allocates memory when a process or program requests it.
Processor Management: It allocates the processor to a process and de-allocates the processor when
it is no longer required or the job is done.
Control over System Performance: It records the delay between a request for a service and the
system's response.
Security: It prevents unauthorized access to programs and data using passwords or other
protection techniques.
Convenience: An OS makes a computer more convenient to use.
Efficiency: An OS allows the computer system's resources to be used efficiently.
Ability to Evolve: An OS should be constructed in a way that permits the effective development,
testing, and introduction of new system functions without interfering with service.
Throughput: An OS should be constructed so that it can deliver maximum throughput (number of
tasks per unit time).
Functionalities of Operating System
Resource Management: When multiple users access the system concurrently, the OS acts as a
resource manager; its responsibility is to share the hardware among the users, which decreases
the load on the system.
Process Management: It includes various tasks such as the scheduling and termination of processes,
carried out with the help of CPU scheduling algorithms.
Storage Management: The file system mechanism is used for the management of storage.
NTFS, CIFS, NFS, etc. are some file systems. All data is stored on the various tracks of hard
disks, which are managed by the storage manager.
Memory Management: Refers to the management of primary memory. The operating system
has to keep track of how much memory has been used and by whom. It has to decide which
process needs memory space and how much. The OS also has to allocate and deallocate memory
space.
Security/Privacy Management: Privacy is also provided by the operating system using
passwords, so that unauthorized applications cannot access programs or data. For example,
Windows uses Kerberos authentication to prevent unauthorized access to data.
Types of Operating Systems
There are several types of Operating Systems which are mentioned below.
1. Batch Operating System
2. Multi-Programming System
3. Multi-Processing System
4. Multi-Tasking Operating System
5. Time-Sharing Operating System
6. Distributed Operating System
7. Network Operating System
8. Real-Time Operating System
1. Batch Operating System
This type of operating system does not interact with the computer directly. Instead, an operator
takes similar jobs with the same requirements and groups them into batches. It is the operator's
responsibility to sort jobs with similar needs.

Advantages of Batch Operating System


Processors of batch systems know how long a job will take once it is in the queue, even though it
is otherwise very difficult to guess the time required for a job to complete.
Multiple users can share batch systems.
The idle time of a batch system is very low.
It is easy to manage large, repetitive work in batch systems.
Disadvantages of Batch Operating System
Computer operators must be familiar with batch systems.
Batch systems are hard to debug.
It is sometimes costly.
The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System


In a multiprogramming operating system, more than one program is present in main memory, and any
one of them can be in execution at a given time. This is done mainly for better utilization of
resources.

Advantages of Multi-Programming Operating System


Multiprogramming increases the throughput of the system.
It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
There is no facility for user interaction with the system while jobs are executing.

3. Multi-Processing Operating System


A multi-processing operating system is a type of operating system in which more than one CPU is used
for the execution of processes. It improves the throughput of the system.
Advantages of Multi-Processing Operating System
It increases the throughput of the system.
Since it has several processors, if one processor fails, the system can continue with another processor.
Disadvantages of Multi-Processing Operating System
Due to the multiple CPUs, it is more complex and somewhat harder to understand.

4. Multi-Tasking Operating System


A multitasking operating system is simply a multiprogramming operating system extended with time
slicing, typically via a Round-Robin scheduling algorithm. It can run multiple programs simultaneously.
There are two types of Multi-Tasking Systems which are listed below.
Preemptive Multi-Tasking
Cooperative Multi-Tasking

Advantages of Multi-Tasking Operating System


Multiple Programs can be executed simultaneously in Multi-Tasking Operating System.
It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
The system can overheat when several heavy programs are run at once.

5. Time-Sharing Operating Systems


Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of
CPU time on a single system. These systems are also known as multitasking systems. The tasks can
come from a single user or from different users. The time each task gets to execute is called a quantum.
After this time interval is over, the OS switches to the next task.

Advantages of Time-Sharing OS
Each task gets an equal opportunity.
Fewer chances of duplication of software.
CPU idle time can be reduced.
Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such
as the CPU, memory, and peripherals, reducing the cost of hardware and increasing efficiency.
Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the
waiting time for their turn to use the computer. This increased productivity translates to more
work getting done in less time.
Improved User Experience: Time-sharing provides an interactive environment that allows users
to communicate with the computer in real time, providing a better user experience than batch
processing.
Disadvantages of Time-Sharing OS
Reliability problem.
The security and integrity of user programs and data must be taken care of.
Data communication problem.
High Overhead: Time-sharing systems have a higher overhead than other operating systems due
to the need for scheduling, context switching, and other overheads that come with supporting
multiple users.
Complexity: Time-sharing systems are complex and require advanced software to manage
multiple users simultaneously. This complexity increases the chance of bugs and errors.
Security Risks: With multiple users sharing resources, the risk of security breaches increases.
Time-sharing systems require careful management of user access, authentication, and
authorization to ensure the security of data and software.
Examples of Time-Sharing OS with explanation
IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first introduced in
1972. It is still in use today, providing a virtual machine environment that allows multiple users
to run their own instances of operating systems and applications.
TSO (Time Sharing Option): TSO is a time-sharing operating system that was first introduced in
the 1960s by IBM for the IBM System/360 mainframe computer. It allowed multiple users to
access the same computer simultaneously, running their own applications.
Windows Terminal Services: Windows Terminal Services is a time-sharing operating system that
allows multiple users to access a Windows server remotely. Users can run their own applications
and access shared resources, such as printers and network storage, in real-time.
6. Distributed Operating System
These types of operating systems are a relatively recent advancement in the world of computer
technology and are being widely adopted all over the world, at a great pace. Various autonomous
interconnected computers communicate with each other over a shared communication network.
The independent systems possess their own memory unit and CPU, and are referred to as loosely
coupled systems or distributed systems. The processors of these systems differ in size and function.
The major benefit of working with this type of operating system is that a user can always access
files or software that are not actually present on his own system but on some other system connected
to the network; that is, remote access is enabled among the devices connected to that network.

Advantages of Distributed Operating System


Failure of one will not affect the other network communication, as all systems are independent
of each other.
Electronic mail increases the data exchange speed.
Since resources are being shared, computation is highly fast and durable.
Load on host computer reduces.
These systems are easily scalable as many systems can be easily added to the network.
Delay in data processing reduces.
Disadvantages of Distributed Operating System
Failure of the main network will stop the entire communication.
The languages used to establish distributed systems are not yet well defined.
These types of systems are not readily available, as they are very expensive. Moreover, the
underlying software is highly complex and not yet well understood.
Examples of Distributed Operating Systems are LOCUS, etc.
A distributed OS must tackle the following issues:
Networking causes delays in the transfer of data between nodes of a distributed system. Such
delays may lead to an inconsistent view of data located in different nodes, and make it difficult
to know the chronological order in which events occurred in the system.
Control functions like scheduling, resource allocation, and deadlock detection have to be
performed in several nodes to achieve computation speedup and provide reliable operation
when computers or networking components fail.
Messages exchanged by processes present in different nodes may travel over public networks
and pass through computer systems that are not controlled by the distributed operating system.
An intruder may exploit this feature to tamper with messages, or create fake messages to fool
the authentication procedure and masquerade as a user of the system.
7. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups, security,
applications, and other networking functions. These types of operating systems allow shared access to
files, printers, security, applications, and other networking functions over a small private network. One
important aspect of network operating systems is that all users are well aware of the underlying
configuration, of all other users within the network, and of their individual connections; that is why
these computers are popularly known as tightly coupled systems.

Advantages of Network Operating System


● Highly stable centralized servers.
● Security concerns are handled through servers.
● New technologies and hardware up-gradation are easily integrated into the system.
● Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System
● Servers are costly.
● User has to depend on a central location for most operations.
● Maintenance and updates are required regularly.
Examples of Network Operating Systems are Microsoft Windows Server 2003, Microsoft Windows
Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

8. Real-Time Operating System


These types of OSs serve real-time systems. The time interval required to process and respond to inputs
is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict like missile systems,
air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
Hard Real-Time Systems:
Hard real-time OSs are meant for applications where the time constraints are very strict and even
the shortest possible delay is not acceptable. These systems are built for life-saving applications,
like automatic parachutes or airbags, which must be ready immediately in case of an accident.
Virtual memory is rarely found in these systems.
Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.
For more, refer to the Difference Between Hard Real-Time OS and Soft Real-Time OS.

Advantages of RTOS
Maximum Utilization: maximum utilization of devices and systems, and thus more output from
all the resources.
Task Shifting: the time assigned to shifting tasks in these systems is very short. For example,
older systems take about 10 microseconds to shift from one task to another, while the latest
systems take about 3 microseconds.
Focus on Applications: the system focuses on running applications and gives less importance to
applications waiting in the queue.
Real-time operating systems in embedded systems: since program sizes are small, an RTOS
can also be used in embedded systems, such as in transport and other domains.
Error-Free: these types of systems are designed to minimize errors.
Memory Allocation: memory allocation is best managed in these types of systems.
Disadvantages of RTOS
Limited Tasks: very few tasks run at the same time, and the system concentrates on a few
applications to avoid errors.
Heavy use of system resources: an RTOS can consume a large share of system resources, and
those resources are expensive.
Complex Algorithms: the algorithms are very complex and difficult for designers to write.
Device drivers and interrupt signals: an RTOS needs specific device drivers and interrupt
signals so that it can respond to interrupts as quickly as possible.
Thread Priority: setting thread priorities is tricky, as these systems rarely switch between tasks.
Difference Between 32-bit and 64-bit Operating Systems
Memory: A 32-bit OS can address a maximum of 4 GB of RAM; a 64-bit OS can address up to several terabytes of RAM.
Processor: A 32-bit OS can run on both 32-bit and 64-bit processors; a 64-bit OS requires a 64-bit processor.
Performance: A 32-bit OS is limited by the maximum amount of RAM it can access; a 64-bit OS can take advantage of more memory, enabling faster performance.
Compatibility: A 32-bit OS can run 32-bit and 16-bit applications; a 64-bit OS can run 32-bit and 64-bit applications.
Address Space: A 32-bit OS uses a 32-bit address space; a 64-bit OS uses a 64-bit address space.
Hardware Support: A 32-bit OS may not support newer hardware; a 64-bit OS supports newer hardware with 64-bit drivers.
Security: A 32-bit OS has limited security features; a 64-bit OS has more advanced security features, such as hardware-level protection.
Application Support: A 32-bit OS has limited support for new software; a 64-bit OS supports newer software designed for the 64-bit architecture.
Price: A 32-bit OS is less expensive than a 64-bit OS; a 64-bit OS is more expensive.
Multitasking: A 32-bit OS can handle multiple tasks, but with limited efficiency; a 64-bit OS handles multiple tasks more efficiently.
Gaming: A 32-bit OS can run graphics-heavy games, but not as efficiently as a 64-bit OS, which can also handle complex software more efficiently.
Virtualization: A 32-bit OS has limited support for virtualization; a 64-bit OS has better support for it.

Difference between Multiprogramming, multitasking, multithreading and multiprocessing


Multiprogramming
Multiple programs run concurrently on a single CPU. This technique was developed for early
computers with limited memory to allow several programs to be resident at the same time.
Multiprogramming maximizes CPU time by loading multiple programs into main memory and
protecting them from modifying each other.
Multitasking
Multiple tasks, such as processes, programs, or threads, run simultaneously on a single CPU, sharing a
common processing resource. Multitasking is an extension of multiprogramming that allows multiple
programs to run simultaneously with memory protection.
Multithreading
Multiple threads, or lightweight processes, run simultaneously on a single CPU. Multithreading is an
extension of multitasking that allows a single process to have multiple code segments running at the
same time.
Multiprocessing
Two or more CPUs are used within a single computer system, allowing multiple processes to run at the
same time.
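To make the thread/process distinction above concrete, here is a small POSIX sketch (the function name thread_fn is illustrative, and the pthread/fork calls assume a Unix-like system, compiled with gcc demo.c -pthread): a write made by a thread is visible to the whole process, while a write made by a forked child is not, because the child gets its own copy of memory.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;

void *thread_fn(void *arg) {
    (void)arg;
    value = 1;                 /* threads share the process's address space */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* prints 1 */

    pid_t pid = fork();        /* a new process gets its own copy of `value` */
    if (pid == 0) { value = 2; _exit(0); }         /* child's write stays private */
    waitpid(pid, NULL, 0);
    printf("after child:  value = %d\n", value);   /* still 1 in the parent */
    return 0;
}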

Definition: Multiprogramming runs multiple programs on a single CPU; multitasking runs multiple tasks (applications) on a single CPU; multithreading runs multiple threads within a single task (application); multiprocessing runs multiple processes on multiple CPUs (or cores).
Resource Sharing: In multiprogramming, resources (CPU, memory) are shared among programs; in multitasking, among tasks; in multithreading, among threads; in multiprocessing, each process has its own set of resources (CPU, memory).
Scheduling: Multiprogramming uses round-robin or priority-based scheduling to allocate CPU time to programs; multitasking and multithreading use priority-based or time-slicing scheduling to allocate CPU time to tasks and threads; in multiprocessing, each process can have its own scheduling algorithm.
Memory Management: In multiprogramming and multitasking, each program or task has its own memory space; in multithreading, threads share the memory space within a task; in multiprocessing, each process has its own memory space.
Context Switching: All four require a context switch to switch between programs, tasks, threads, or processes, respectively.
Inter-Process Communication (IPC): Multiprogramming and multitasking use message passing or shared memory for IPC; multithreading uses thread synchronization mechanisms (e.g., locks, semaphores); multiprocessing uses inter-process communication mechanisms (e.g., pipes, sockets).

What is UEFI?
UEFI stands for Unified Extensible Firmware Interface. It is a modern replacement for the traditional BIOS
(Basic Input/Output System) firmware interface found in computers. UEFI is the first software that runs
when the computer is powered on, providing the functions needed to initialize the hardware components
and start the operating system. It is the most recent firmware interface and is found on all but older,
BIOS-based computers. It improves performance, security, and compatibility by bridging the gap between
the computer's operating system and its hardware.
Features of UEFI
● Support for modern hardware: UEFI supports new hardware technologies and features such
as larger hard drives, faster boot times, and improved security measures.
● Graphical User Interface (GUI): Unlike the text-based interface of the BIOS, UEFI typically
includes a graphical interface that makes it easier to access and edit system settings
● Secure Boot: UEFI includes a Secure Boot feature, which helps prevent the installation of
malicious software during boot by checking the digital signature of the bootloader and OS
components
● Compatible disk sizes: UEFI supports GUID Partition Table (GPT) disks, allowing for larger
partitions and more partitions compared to the older Master Boot Record (MBR) partition
scheme
● Network capabilities: UEFI firmware can be network capable, allowing it to operate like
other firmware over the network.
What is BIOS?
It stands for Basic Input/Output System. It is a firmware interface that acts as the first software layer
between the hardware components and the operating system of a PC. The BIOS is responsible for
performing vital duties during the boot process and provides basic input/output services for the
operating system and installed software.
Difference Between UEFI and BIOS

First to run: UEFI acts as the first software that runs when the computer is powered on, providing the functions needed to initialize the operating system and the hardware components. BIOS is likewise a firmware interface that acts as the first software layer between the hardware components and the operating system.
Drivers: UEFI provides a unified driver model, which allows drivers to be used by both the firmware and the operating system. BIOS drivers are specific to the BIOS firmware and may not be compatible with the operating system.
Boot speed: UEFI initializes hardware in parallel, which speeds up boot time. BIOS initializes hardware sequentially, which can cause slow boot times.
Interface: UEFI often includes a graphical user interface (GUI) for easy navigation and configuration. BIOS setup screens are usually text-based, which can be harder for users.
Disk support: UEFI supports GUID Partition Table (GPT) disks, allowing larger partitions and more partitions to be created. BIOS is usually limited to the Master Boot Record (MBR) partitioning scheme, with limits on partition size and number.
Networking: UEFI can be network-capable, allowing firmware updates and other tasks to be performed over the network. BIOS generally lacks network capabilities, requiring manual firmware updates.

What is Kernel and write its main functions?


The kernel is a computer program usually considered the central component or module of an OS.
It is responsible for handling, managing, and controlling all operations of the computer system and its
hardware. Whenever the system starts, the kernel is loaded first and remains in main memory. It also
acts as an interface between user applications and the hardware.
Functions of Kernel:
It is responsible for managing all computer resources such as CPU, memory, files, processes, etc.
It facilitates or initiates the interaction between components of hardware and software.
It manages RAM memory so that all running processes and programs can work effectively and efficiently.
It also controls and manages all primary tasks of the OS as well as manages access and use of various
peripherals connected to the computer.
It schedules the work done by the CPU so that the work of each user is executed as efficiently as
possible.
Write the difference between a microkernel and a monolithic kernel.
MicroKernel: It is a minimal OS that executes only the important functions of an OS. It contains only a
near-minimum number of the features and functions required to implement an OS.
Example: QNX, Mac OS X, K42, etc.

Monolithic Kernel: It is an OS architecture that supports all basic features of computer components
such as resource management, memory, file, etc.
Example: Solaris, DOS, OpenVMS, Linux, etc.
Address space: In a microkernel, kernel services and user services are kept in separate address spaces; in a monolithic kernel, kernel services and user services usually run in the same address space.
Size: A microkernel is smaller in size than a monolithic kernel; a monolithic kernel is larger.
Extensibility: A microkernel is easily extendible compared to a monolithic kernel; a monolithic kernel is hard to extend.
Fault isolation: If a service crashes in a microkernel, it does not affect the working of the rest of the system; if a service crashes in a monolithic kernel, the whole system crashes.
Communication: A microkernel uses message queues to achieve inter-process communication; a monolithic kernel uses signals and sockets.

What is the difference between the operating system and the kernel?
The operating system is system software; the kernel is system software that is the core part of the operating system.
The operating system provides an interface between the user and the hardware; the kernel provides an interface between applications and the hardware.
The operating system also provides protection and security; the kernel's main purpose is memory management, disk management, process management, and task management.
Every system needs an operating system to run; every operating system needs a kernel to run.
Types of operating systems include single-user and multi-user OS, multiprocessor OS, real-time OS, and distributed OS; types of kernels include monolithic kernels and microkernels.
The operating system is the first program to load when the computer boots up; the kernel is the first program to load when the operating system loads.

CPU Scheduling

What is a process?
In computing, a process is the instance of a computer program that is being executed by one or many
threads. It contains the program code and its activity. Depending on the operating system (OS), a process
may be made up of multiple threads of execution that execute instructions concurrently.
States of Process
A process is in one of the following states:
New: a newly created process, or a process being created.
Ready: after creation, the process moves to the ready state, i.e., it is ready for execution.
Run: the process currently running on the CPU (on a single processor, only one process can be
executing at a time).
Wait (or Block): when a process requests I/O access.
Complete (or Terminated): the process has completed its execution.
Suspended Ready: when the ready queue becomes full, some processes are moved to the
suspended ready state.
Suspended Block: when the waiting queue becomes full, some blocked processes are moved to
the suspended block state.
Process management
Process management includes various tools and techniques such as process mapping, process analysis,
process improvement, process automation, and process control. By applying these tools and techniques,
organizations can streamline their processes, eliminate waste, and improve productivity. Overall, process
management is a critical aspect of modern business operations and can help organizations achieve their
goals and stay competitive in today’s rapidly changing marketplace.
Key Components of Process Management
Below are some key component of process management.
Process mapping: Creating visual representations of processes to understand how tasks flow,
identify dependencies, and uncover improvement opportunities.
Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and areas for
improvement.
Process redesign: Making changes to existing processes or creating new ones to optimize
workflows and enhance performance.
Process implementation: Introducing the redesigned processes into the organization and
ensuring proper execution.
Process monitoring and control: Tracking process performance, measuring key metrics, and
implementing control mechanisms to maintain efficiency and effectiveness.
Advantages of Process Management
Improved Efficiency: Process management can help organizations identify bottlenecks and
inefficiencies in their processes, allowing them to make changes to streamline workflows and
increase productivity.
Cost Savings: By identifying and eliminating waste and inefficiencies, process management can
help organizations reduce costs associated with their business operations.
Improved Quality: Process management can help organizations improve the quality of their
products or services by standardizing processes and reducing errors.
Increased Customer Satisfaction: By improving efficiency and quality, process management can
enhance the customer experience and increase satisfaction.
Compliance with Regulations: Process management can help organizations comply with
regulatory requirements by ensuring that processes are properly documented, controlled, and
monitored.
What is Context Switching?
Context switching is the process of saving the context of one process and loading the context of another
process. It is one of the cost-effective and time-saving measures executed by the CPU, because it allows
multiple processes to share a single CPU. It is therefore considered an important part of a modern OS. The
OS uses this technique to switch a process from one state to another, i.e., from the running state to the
ready state. It also allows a single CPU to handle and control various processes or threads without the
need for additional resources.
Why is context switching necessary?
Switching context is a requirement for the operating system to run different processes concurrently despite having
only one CPU. By promptly alternating between these processes, the operating system is capable of presenting the
impression of parallel execution, a vital feature for contemporary multi-tasking systems.
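As a rough user-space illustration of what a context switch does, the sketch below uses the POSIX <ucontext.h> API (marked obsolescent in POSIX.1-2008, but still available on Linux/glibc) to save one execution context and restore another; a real kernel performs the same save-and-restore of CPU state, just in privileged code and together with PCB bookkeeping.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];       /* stack for the second context */

static void task(void) {
    printf("task: running, now switching back\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task state, restore main */
}

int main(void) {
    getcontext(&task_ctx);               /* initialize the task's context */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;        /* where to go if task() returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* save main state, restore task */
    printf("main: resumed after context switch\n");
    return 0;
}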
Schedulers
Schedulers are special system software that handles process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run.
There are three types of Scheduler:
1. Long-term (job) scheduler – Due to the smaller size of main memory initially all programs are
stored in secondary memory. When they are stored or loaded in the main memory they are
called processes. This is the decision of the long-term scheduler to determine how many
processes will stay in the ready queue. Hence, in simple words, the long-term scheduler decides
the degree of multi-programming of the system.
2. Medium-term scheduler – Most often, a running process needs I/O operation which doesn’t
require a CPU. Hence during the execution of a process when an I/O operation is required then
the operating system sends that process from the running queue to the blocked queue. When a
process completes its I/O operation, it should be shifted back to the ready queue. All these
decisions are taken by the medium-term scheduler. Medium-term scheduling is a part of
swapping.
3. Short-term (CPU) scheduler – When there are many processes in main memory, initially all of
them are present in the ready queue. From among all of these processes, a single process is
selected for execution. This decision is handled by the short-term scheduler.
Dispatcher
A dispatcher is a special program which comes into play after the scheduler. When the scheduler
completes its job of selecting a process, it is the dispatcher which takes that process to the desired
state/queue. The dispatcher is the module that gives a process control over the CPU after it has been
selected by the short-term scheduler. This function involves the following:
● Switching context
● Switching to user mode
● Jumping to the proper location in the user program to restart that program

Definition: The dispatcher is a module that gives control of the CPU to the process selected by the short-term scheduler; the scheduler is what selects a process from among the various processes.
Types: The dispatcher has no different types; it is just a code segment. There are three types of scheduler: long-term, short-term, and medium-term.
Dependency: The dispatcher depends on the scheduler; it has to wait until the scheduler selects a process. The scheduler works independently and acts immediately when needed.
Algorithm: The dispatcher has no specific algorithm for its implementation; the scheduler works on various algorithms such as FCFS, SJF, and RR.
Time Taken: The time taken by the dispatcher is called dispatch latency; the time taken by the scheduler is usually negligible, so it is often ignored.
Functions: The dispatcher is also responsible for context switching, switching to user mode, and jumping to the proper location when a process is restarted; the scheduler's only work is the selection of processes.
Tasks: The dispatcher allocates the CPU to the process selected by the short-term scheduler; the scheduler performs three tasks: job scheduling (long-term scheduler), CPU scheduling (short-term scheduler), and swapping (medium-term scheduler).
Purpose: The dispatcher moves the selected process from the ready queue to the CPU; the scheduler selects the process and decides which process to run.
Execution Time: The dispatcher takes a very short execution time; the scheduler takes a longer execution time than the dispatcher.
Interaction: The dispatcher works with the CPU and the selected process; the scheduler works with the ready queue and the dispatcher.

What is a Process Control Block (PCB)?


A process control block (PCB) is a data structure used by operating systems to store important
information about running processes. It contains information such as the unique identifier of the process
(Process ID, or PID), the current status, the program counter, CPU registers, memory allocation, open file
descriptors, and accounting information. The PCB is critical to context switching because it allows the
operating system to efficiently manage and control multiple processes.
The process table is an array of PCBs; that is, it logically contains a PCB for every current process in
the system.
Pointer: a stack pointer that must be saved when the process is switched from one state to
another, to retain the current position of the process.
Process state: stores the current state of the process.
Process number: every process is assigned a unique ID, known as the process ID or PID, which is
stored here.
Program counter: stores the address of the next instruction to be executed for the process.
Registers: when a process is running and its time slice expires, the current values of the
process-specific registers are stored in the PCB and the process is swapped out. When the process
is scheduled to run again, the register values are read from the PCB and written back to the CPU
registers. This is the main purpose of the registers field in the PCB.
Memory limits: this field contains information about the memory management system used by
the operating system, which may include page tables, segment tables, etc.
Open files list: this field holds the list of files opened by the process.
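A simplified sketch of what a PCB might look like as a C structure is given below; the field names and sizes are assumptions for illustration, not any real kernel's layout (compare Linux's far larger task_struct).

#include <stdint.h>

/* process states, matching the list given earlier */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* process number (unique identifier)     */
    proc_state_t  state;           /* current process state                  */
    uint64_t      program_counter; /* address of the next instruction        */
    uint64_t      registers[16];   /* saved general-purpose registers        */
    void         *page_table;      /* memory-management info (memory limits) */
    int           open_files[16];  /* open file descriptors                  */
    struct pcb   *next;            /* pointer used to link PCBs in queues    */
} pcb_t;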

What is Process Scheduling?


Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such operating systems
allow more than one process to be loaded into executable memory at a time, and the loaded processes
share the CPU using time multiplexing.
There are three types of process schedulers:
Long term or Job Scheduler
Short term or CPU Scheduler
Medium-term Scheduler
Objectives of Process Scheduling Algorithm:
Utilization of CPU at maximum level. Keep CPU as busy as possible.
Allocation of CPU should be fair.
Throughput should be Maximum. i.e. Number of processes that complete their execution per
time unit should be maximized.
Minimum turnaround time, i.e. time taken by a process to finish execution should be the least.
There should be a minimum waiting time and the process should not starve in the ready queue.
Minimum response time: the time at which a process produces its first response should be as low
as possible.
Process Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule processes. Here are some
commonly used scheduling algorithms:
First-come, first-served (FCFS): This is the simplest scheduling algorithm, where the process is
executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a
process starts executing, it continues until it is finished or waiting for I/O.
Shortest Job First (SJF): SJF is a scheduling algorithm that selects the process with the
shortest burst time. The burst time is the time a process takes to complete its execution. SJF
minimizes the average waiting time of processes.
Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives each process a
fixed time slice in turn. If a process does not complete its execution within the allotted time, it
is preempted and added to the end of the queue. RR ensures fair distribution of CPU time to all
processes and avoids starvation.
Priority Scheduling: This scheduling algorithm assigns priority to each process and the process
with the highest priority is executed first. Priority can be set based on process type, importance,
or resource requirements.
Multilevel queue: This scheduling algorithm divides the ready queue into several separate
queues, each queue having a different priority. Processes are queued based on their priority,
and each queue uses its own scheduling algorithm. This scheduling algorithm is useful in
scenarios where different types of processes have different priorities.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
Preemptive Scheduling: Preemptive scheduling is used when a process switches from running
state to ready state or from the waiting state to the ready state.
Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a process terminates , or
when a process switches from running state to waiting state.
Different types of CPU Scheduling Algorithms
Let us now learn about these CPU scheduling algorithms in operating systems one by one:
1. First Come First Serve:
FCFS is considered the simplest of all operating system scheduling algorithms. The first-come,
first-served scheduling algorithm states that the process that requests the CPU first is allocated the
CPU first; it is implemented using a FIFO queue.
Characteristics of FCFS:
FCFS supports non-preemptive and preemptive CPU scheduling algorithms.
Tasks are always executed on a First-come, First-serve concept.
FCFS is easy to implement and use.
This algorithm is not much efficient in performance, and the wait time is quite high.
Advantages of FCFS:
Easy to implement
First come, first serve method
Disadvantages of FCFS:
FCFS suffers from Convoy effect.
The average waiting time is much higher than the other algorithms.
Because FCFS is so simple and easy to implement, it is not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer to our detailed article on
First come, First serve Scheduling.
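As a quick illustration of FCFS arithmetic, the sketch below computes waiting and turnaround times for three processes that all arrive at time 0; the burst times (24, 3, 3) are sample values chosen for illustration, not taken from this document.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* sample burst times */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];     /* finish time of process i */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];                     /* next process starts here */
    }
    /* prints avg waiting=17.00 avg turnaround=27.00 for this order */
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}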
2. Shortest Job First(SJF):
Shortest Job First (SJF) is a scheduling policy that selects the waiting process with the smallest
execution time to execute next. This scheduling method may or may not be preemptive. It
significantly reduces the average waiting time of the other processes waiting to be executed.

Characteristics of SJF:
Shortest Job first has the advantage of having a minimum average waiting time among all
operating system scheduling algorithms.
Each task is associated with the unit of time it requires to complete.
It may cause starvation if shorter processes keep coming. This problem can be solved using the
concept of ageing.
Advantages of Shortest Job first:
As SJF reduces the average waiting time thus, it is better than the first come first serve
scheduling algorithm.
SJF is generally used for long term scheduling
Disadvantages of SJF:
One of the demerit SJF has is starvation.
Many times it becomes complicated to predict the length of the upcoming CPU request
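Here is a minimal sketch of non-preemptive SJF under the simplifying assumption that all processes arrive at time 0: sort the jobs by burst time, then accumulate waiting times exactly as in FCFS. The burst values are invented sample data.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* shortest burst first */
}

int main(void) {
    int burst[] = {6, 8, 7, 3};
    int n = 4, wait = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], cmp);      /* order jobs by burst time */
    for (int i = 0; i < n; i++) {
        total_wait += wait;                     /* this job waited `wait` units */
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n); /* 7.00 */
    return 0;
}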

3. Longest Job First(LJF):


The Longest Job First (LJF) scheduling policy is just the opposite of Shortest Job First (SJF): as the
name suggests, the process with the largest burst time is processed first. In its basic form,
Longest Job First is non-preemptive.
Characteristics of LJF:
Among all the processes waiting in a waiting queue, CPU is always assigned to the process having
largest burst time.
If two processes have the same burst time then the tie is broken using FCFS i.e. the process that
arrived first is processed first.
LJF CPU scheduling also exists in a preemptive variant (Longest Remaining Time First).
Advantages of LJF:
No other task can schedule until the longest job or process executes completely.
All the jobs or processes finish at the same time approximately.
Disadvantages of LJF:
Generally, the LJF algorithm gives a very high average waiting time and average turn-around time
for a given set of processes.
This may lead to convoy effect.

4. Priority Scheduling:
The preemptive priority CPU scheduling algorithm is a pre-emptive method of CPU scheduling
that works based on the priority of a process. In this algorithm, the scheduler assigns each process a
priority, meaning that the most important process is run first. In the case of a conflict, that
is, when there is more than one process with the same priority, the algorithm falls back on the
FCFS (First Come First Serve) policy.
Characteristics of Priority Scheduling:
Schedules tasks based on priority.
When a higher-priority task arrives while a lower-priority task is executing, the
higher-priority process takes the place of the lower-priority one, and
the latter is suspended until its execution completes.
The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
The average waiting time is less than FCFS
Less complex
Disadvantages of Priority Scheduling:
One of the most common demerits of the Preemptive priority CPU scheduling algorithm is the
Starvation Problem. This is the problem in which a process has to wait for a longer amount of
time to get scheduled into the CPU. This condition is called the starvation problem.
5. Round robin:
Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot.
It is the preemptive version of the First Come First Serve CPU scheduling algorithm, and it is generally
used as a time-sharing technique.
Characteristics of Round robin:
It's simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
It is one of the most widely used methods in CPU scheduling.
It is considered preemptive, as processes are given the CPU for only a limited time.
Advantages of Round robin:
Round robin seems to be fair as every process gets an equal share of CPU.
The newly created process is added to the end of the ready queue.
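A rough Round Robin simulation is sketched below, assuming a fixed quantum of 4 and three processes arriving at time 0 (the same sample bursts as the FCFS example); each pass of the loop hands one time slice to each unfinished process.

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};     /* remaining burst time per process */
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {          /* cycle through the queue */
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                     /* run for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at t=%d\n", i + 1, time);
                done++;                        /* P2 at 7, P3 at 10, P1 at 30 */
            }
        }
    }
    return 0;
}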

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system
to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve
the problem of race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors,
and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important
aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.
Race Condition
When more than one process executes the same code, or accesses the same memory or a shared
variable, there is a possibility that the value of the shared variable ends up wrong; since all of the
processes race to make their own output the final one, this condition is known as a race condition.
When several processes access and manipulate the same data concurrently, the outcome depends on the
particular order in which the accesses take place. A race condition is a situation that may occur inside
a critical section: it happens when the result of multiple threads executing in the critical section
differs according to the order in which the threads execute. Race conditions in critical sections can be
avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using
locks or atomic variables can also prevent race conditions.
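The sketch below demonstrates such a race with POSIX threads (compile with gcc race.c -pthread): two threads perform an unsynchronized read-modify-write on a shared counter, so the final value is usually less than the expected 200000; uncommenting the mutex calls treats the increment as a critical section and restores the correct result.

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* pthread_mutex_lock(&lock);   -- fix: enter the critical section */
        counter++;                      /* read-modify-write: not atomic   */
        /* pthread_mutex_unlock(&lock); -- fix: leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}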
Semaphores
A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by
another thread. This is different from a mutex, which can be signaled only by the thread that called
the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and
signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
Binary Semaphores: They can take only the values 0 and 1. They are also known as mutex locks, as
they can provide mutual exclusion. All of the processes share the same mutex semaphore, which
is initialized to 1. A process must wait until the semaphore's value is 1; it then sets the value to
0 and enters its critical section. When it completes its critical section, it resets the value of the
mutex semaphore to 1 so that some other process can enter its critical section.
Counting Semaphores: They can have any value and are not restricted to a certain domain. They
can be used to control access to a resource that has a limitation on the number of simultaneous
accesses. The semaphore can be initialized to the number of instances of the resource.
Whenever a process wants to use that resource, it checks if the number of remaining instances is
more than zero, i.e., the process has an instance available. Then, the process can enter its critical
section thereby decreasing the value of the counting semaphore by 1. After the process is over
with the use of the instance of the resource, it can leave the critical section thereby adding 1 to
the number of available instances of the resource.
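A minimal sketch of a counting semaphore in practice, using POSIX unnamed semaphores (sem_init/sem_wait/sem_post, linked with -pthread; note that sem_init is not available on macOS, where named semaphores would be used instead): the semaphore is initialized to 3, so at most three of the five threads hold a resource at once.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

sem_t pool;                       /* counting semaphore, initialized to 3 */

void *use_resource(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);              /* wait(): decrement, block if count is 0 */
    printf("thread %ld acquired a resource\n", id);
    sleep(1);                     /* pretend to use the resource */
    printf("thread %ld released a resource\n", id);
    sem_post(&pool);              /* signal(): increment, wake a waiter */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 3);        /* 3 instances of the resource */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, use_resource, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}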
What is Peterson's approach?
It is a concurrent programming algorithm used to synchronize two processes while maintaining
mutual exclusion on a shared resource. It uses two variables to accomplish this: a bool array flag of
size 2 and an int variable turn.
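The textbook form of Peterson's algorithm is sketched below for two processes with ids 0 and 1; note that on modern out-of-order hardware this form additionally needs memory barriers (or C11 sequentially consistent atomics) to be safe.

#include <stdbool.h>

volatile bool flag[2] = {false, false}; /* flag[i]: process i wants to enter */
volatile int turn = 0;                  /* whose turn it is to yield         */

void lock(int self) {
    int other = 1 - self;
    flag[self] = true;                  /* announce interest                 */
    turn = other;                       /* politely give the other the turn  */
    while (flag[other] && turn == other)
        ;                               /* busy-wait until it is safe        */
}

void unlock(int self) {
    flag[self] = false;                 /* leave the critical section        */
}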

Memory Management
Memory Hierarchy Design and its Characteristics
In the Computer System Design, Memory Hierarchy is an enhancement to organize the memory such
that it can minimize the access time. The Memory Hierarchy was developed based on a program
behavior known as locality of references. The figure below clearly demonstrates the different levels of
the memory hierarchy.
Why Memory Hierarchy is Required in the System?
Memory hierarchy is one of the most essential ideas in computer memory, as it helps in optimizing the
memory available in the computer. There are multiple levels of memory, each with a different size,
cost, and speed. Some types of memory, like cache and main memory, are faster than others, but they
are smaller and more expensive, whereas other types offer more storage but are slower. Access speed
likewise differs across memory types: some have faster access, others slower.
Types of Memory Hierarchy
This Memory Hierarchy Design is divided into 2 main types:
External Memory or Secondary Memory: comprising magnetic disk, optical disk, and
magnetic tape, i.e., peripheral storage devices that are accessible by the processor via an I/O
module.
Internal Memory or Primary Memory: comprising main memory, cache memory, and CPU
registers. This is directly accessible by the processor.

Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to store the most
frequently used data and instructions. Registers have the fastest access time and the smallest storage
capacity, typically ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used data and
instructions that have been recently accessed from the main memory. Cache memory is designed to
minimize the time it takes to access data by providing the CPU with quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a computer
system. It has a larger storage capacity than cache memory, but it is slower. Main memory is used to
store data and instructions that are currently in use by the CPU.
Types of Main Memory
Static RAM: Static RAM stores binary information in flip-flops, and the information remains valid
as long as power is supplied. It has a faster access time and is used to implement cache memory.
Dynamic RAM: It stores binary information as charge on a capacitor. It requires refresh
circuitry to maintain the charge on the capacitors every few milliseconds. It contains
more memory cells per unit area than SRAM.

4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory
unit that has a larger storage capacity than main memory. It is used to store data and instructions that
are not currently in use by the CPU. Secondary storage has the slowest access time and is typically the
least expensive type of memory in the memory hierarchy.
5. Magnetic Disk
Magnetic Disks are simply circular plates that are fabricated with either a metal or a plastic or a
magnetized material. The Magnetic disks work at a high speed inside the computer and these are
frequently used.
6. Magnetic Tape
Magnetic Tape is simply a magnetic recording device that is covered with a plastic film. It is generally
used for the backup of data. In the case of a magnetic tape, the access time for a computer is a little
slower and therefore, it requires some amount of time for accessing the strip.
7. ROM: ROM stands for Read Only Memory. ROM is a non-volatile memory used to store
important information needed to operate the system. We can only read the programs and data
stored on it; we cannot modify or delete them.
● MROM(Masked ROM): Hard-wired devices with a pre-programmed collection of data or
instructions were the first ROMs. Masked ROMs are a type of low-cost ROM that works in this
way.
● PROM (Programmable Read Only Memory): This read-only memory is modifiable once by the
user. The user purchases a blank PROM and uses a PROM program to put the required contents
into the PROM. Its content can’t be erased once written.
● EPROM (Erasable Programmable Read Only Memory): EPROM is an extension to PROM where
you can erase the content of ROM by exposing it to Ultraviolet rays for nearly 40 minutes.
● EEPROM (Electrically Erasable Programmable Read Only Memory): Here the written contents
can be erased electrically. You can erase and reprogram an EEPROM up to about 10,000 times.
Erasing and programming take very little time, roughly 4–10 ms (milliseconds). Any area in an
EEPROM can be wiped and programmed selectively.

Characteristics of Memory Hierarchy


Capacity: It is the global volume of information the memory can store. As we move from top to
bottom in the Hierarchy, the capacity increases.
Access Time: It is the time interval between the read/write request and the availability of the
data. As we move from top to bottom in the Hierarchy, the access time increases.
Performance: Earlier, when computer systems were designed without a memory hierarchy, the
speed gap between the CPU registers and main memory grew because of the large difference in
access time. This resulted in lower system performance, so an enhancement was required; that
enhancement took the form of memory hierarchy design, which increases the performance of the
system. One of the most significant ways to increase system performance is to minimize how far
down the memory hierarchy one has to go to manipulate data.
Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e.
Internal Memory is costlier than External Memory.
Virtual Memory in Operating System
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though
it were part of the main memory. The addresses a program may use to reference memory are
distinguished from the addresses the memory system uses to identify physical storage sites and
program-generated addresses are translated automatically to the corresponding machine addresses.
Virtual memory is built on a memory hierarchy, consisting of a computer system's memory and a disk,
that enables a process to operate with only some portions of its address space in memory. Virtual
memory is what its name indicates: an illusion of a memory that is larger than the real memory. We
refer to the software component of virtual memory as the virtual memory manager. The basis of virtual
memory is the noncontiguous memory allocation model: the virtual memory manager removes some
components from memory to make room for other components.
The size of virtual storage is limited by the addressing scheme of the computer system and by the
amount of secondary memory available, not by the actual number of main storage locations.
It is a technique that is implemented using both hardware and software. It maps memory addresses used
by a program, called virtual addresses, into physical addresses in computer memory.
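As a concrete illustration of this translation, the sketch below splits a virtual address into a page number and an offset and looks the page up in a toy page table; the 4 KiB page size and the table entries are made-up sample values, and a real MMU performs this lookup in hardware.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                       /* 2^12 bytes per page */

int main(void) {
    /* page_table[virtual page number] = physical frame number (sample data) */
    uint32_t page_table[] = {5, 9, 7, 2};

    uint32_t vaddr  = 0x2ABC;                 /* a program-generated address */
    uint32_t page   = vaddr / PAGE_SIZE;      /* upper bits: page number     */
    uint32_t offset = vaddr % PAGE_SIZE;      /* lower 12 bits: offset       */
    uint32_t paddr  = page_table[page] * PAGE_SIZE + offset;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           vaddr, page, offset, paddr);       /* page 2 maps to frame 7 */
    return 0;
}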
What are the benefits of virtual memory?
Virtual memory provides several benefits:
Increased memory capacity: It allows programs to use more memory than is physically available,
enabling the execution of larger programs or multiple programs simultaneously.
Memory isolation: Each program operates in its own virtual address space, ensuring that one
program cannot access or modify the memory of another program.
Simplified memory management: Virtual memory simplifies memory management for both the
operating system and application developers by providing a uniform memory model.
Improved system stability: Virtual memory helps prevent crashes and system instability by
allowing the operating system to handle memory shortages and prioritize memory usage
efficiently.
What is virtual memory?
It is a memory management technique of the OS that creates for users the illusion of a very large
main memory. It is simply space where a greater number of programs can be stored in the form of
pages. It enables us to extend the use of physical memory with disk space and also provides memory
protection. The OS commonly manages it in two ways, paging and segmentation, and it acts as
temporary storage that can be used alongside RAM for computer processes.
What is the difference between main memory and secondary memory?
Main memory: Main memory in a computer is RAM (Random Access Memory). It is also known as
primary memory or read-write memory or internal memory. The programs and data that the CPU
requires during the execution of a program are stored in this memory.
Secondary memory: Secondary memory in a computer are storage devices that can store data and
programs. It is also known as external memory or additional memory or backup memory or auxiliary
memory. Such storage devices are capable of storing high-volume data. Storage devices can be hard
drives, USB flash drives, CDs, etc.
Access: Data in primary memory can be directly accessed by the processing unit; data in secondary memory is first transferred to primary memory and then routed to the processing unit.
Volatility: Primary memory can be either volatile or non-volatile; secondary memory is non-volatile.
Cost: Primary memory is more costly than secondary memory; secondary memory is more cost-effective.
Persistence: Primary memory is temporary, because data is stored only temporarily; secondary memory is permanent, because data is stored permanently.
Power failure: In primary memory, data can be lost on a power failure; in secondary memory, data is stored permanently and is not lost even on a power failure.
Speed: Primary memory is much faster and holds the data currently used by the computer; secondary memory is slower and stores different kinds of data in different formats.
Connection: Primary memory is accessed over the data bus; secondary memory is accessed via I/O channels.

Write the difference between internal fragmentation and external fragmentation.
1. In internal fragmentation, fixed-sized memory blocks are assigned to processes. In external fragmentation, variable-sized memory blocks are assigned to processes.
2. Internal fragmentation happens when the assigned memory block is larger than what the process requires. External fragmentation happens when a process is removed from memory.
3. The solution to internal fragmentation is the best-fit block. Solutions to external fragmentation are compaction, paging, and segmentation.
4. Internal fragmentation occurs when memory is divided into fixed-sized partitions. External fragmentation occurs when memory is divided into variable-size partitions based on the size of processes.
5. The difference between the memory allocated and the space actually required is called internal fragmentation. The unused spaces formed between non-contiguous memory fragments, too small to serve a new process, are called external fragmentation.
84. Write the difference between paging and segmentation.
1. In paging, the program is divided into fixed-size pages. In segmentation, the program is divided into variable-size segments.
2. For paging, the operating system is accountable. For segmentation, the compiler is accountable.
3. Page size is determined by the hardware. Segment size is given by the user.
4. Paging is faster in comparison to segmentation. Segmentation is slower.
5. Paging can result in internal fragmentation. Segmentation can result in external fragmentation.
6. In paging, the logical address is split into a page number and a page offset. In segmentation, the logical address is split into a segment number and a segment offset.
7. Paging uses a page table that holds the base address of every page. Segmentation uses a segment table that holds the base address and length of every segment.
8. A page table is employed to keep the page data. A segment table maintains the segment data.
9. In paging, the operating system must maintain a free-frame list. In segmentation, the operating system maintains a list of holes in main memory.
10. Paging is invisible to the user. Segmentation is visible to the user.
11. In paging, the processor uses the page number and offset to calculate the absolute address. In segmentation, the processor uses the segment number and offset to calculate the full address.
Segmentation in Operating System
A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments. Segmentation provides the user’s view of the process, which paging does not; here, the user’s view is mapped onto physical memory.
Types of Segmentation in Operating System
Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the run
time of the program.
Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
State the main difference between logical and physical address space.
● Basic: A logical address is generated by the CPU. A physical address is a location in a memory unit.
● Address Space: Logical address space is the set of all logical addresses generated by the CPU in reference to a program. Physical address space is the set of all physical addresses mapped to the corresponding logical addresses.
● Visibility: Users can view the logical address of a program. Users can never view the physical address of the program.
● Generation: The logical address is generated by the CPU. The physical address is computed by the MMU.
● Access: The user can use the logical address to access the physical address. The user can access physical addresses only indirectly, never directly.
Paging
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. The process of retrieving processes in the form of pages from secondary storage into main memory is known as paging. The basic purpose of paging is to divide each process into pages; correspondingly, main memory is split into frames. This scheme permits the physical address space of a process to be non-contiguous.
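To make the translation mechanics concrete, here is a minimal Python sketch of how a paged address could be resolved. The page size and the page-table contents are hypothetical values chosen for illustration, not a real MMU:

```python
PAGE_SIZE = 4096  # assume 4 KB pages for this sketch

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 9, 2: 2}

def translate(virtual_address: int) -> int:
    """Split a virtual address into (page number, offset) and map it to a physical address."""
    page_number, offset = divmod(virtual_address, PAGE_SIZE)
    if page_number not in page_table:
        raise LookupError(f"page fault: page {page_number} is not in memory")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```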
Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when a new page comes in. Page replacement
becomes necessary when a page fault occurs and there are no free page frames in memory. However,
another page fault would arise if the replaced page is referenced again. Hence it is important to replace a
page that is not likely to be referenced in the immediate future. If no page frame is free, the virtual
memory manager performs a page replacement operation to replace one of the pages existing in
memory with the page whose reference caused the page fault. It is performed as follows: The virtual
memory manager uses a page replacement algorithm to select one of the pages currently in memory for
replacement, accesses the page table entry of the selected page to mark it as “not present” in memory,
and initiates a page-out operation for it if the modified bit of its page table entry indicates that it is a
dirty page.
Page Fault: A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space but not loaded in physical memory. Since actual physical memory is much
smaller than virtual memory, page faults happen. In case of a page fault, Operating System might have to
replace one of the existing pages with the newly needed page. Different page replacement algorithms
suggest different ways to decide which page to replace. The target for all algorithms is to reduce the
number of page faults.
Page Replacement Algorithms:
1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory —> 0 Page Faults. Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes it is not in memory, so it replaces 0 —> 1 Page Fault. That gives 6 page faults in total.
Belady’s anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get 9 total page faults, but if we increase the frames to 4, we get 10 page faults.
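As a sanity check on these counts, here is a short, self-contained Python sketch of the FIFO policy; the function name and the use of a deque are illustrative choices, not part of the original text:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement for a given reference string."""
    frames = deque()          # the oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page in frames:
            continue          # hit: nothing to do
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))                 # 6
print(fifo_page_faults([3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4], 3))  # 9
print(fifo_page_faults([3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4], 4))  # 10 (Belady's anomaly)
```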
2. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 Page Fault. 0 is already there —> 0 Page Fault. 4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page Faults, because the pages are already available in memory.
Optimal page replacement is perfect, but it is not possible in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.
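Although no real OS can run it online, the optimal policy is easy to simulate offline, where the whole reference string (the “future”) is known. A minimal, quadratic-time Python sketch, intended only as a benchmark illustration:

```python
def optimal_page_faults(reference_string, num_frames):
    """Count page faults under Optimal (Belady's) replacement.

    On a fault with full frames, evict the resident page whose next use
    lies farthest in the future (or that is never used again)."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # For each resident page, find the index of its next use.
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```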
3. Least Recently Used (LRU): In this algorithm, the page that has been least recently used is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault. When 3 comes it takes the place of 7, because 7 is the least recently used page —> 1 Page Fault.
0 is already in memory —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the reference string —> 0 Page Faults, because the pages are already available in memory.
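LRU is straightforward to express with an ordered dictionary that keeps pages in recency order. Note this is an illustrative sketch; real kernels usually only approximate LRU (for example, with reference bits):

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU; an OrderedDict tracks recency of use."""
    frames = OrderedDict()       # least recently used page sits at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)      # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)    # evict the least recently used page
        frames[page] = True
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```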
4. Most Recently Used (MRU): In this algorithm, the page that has been most recently used is replaced. Belady’s anomaly can occur in this algorithm.
Example 4: Consider the same page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there —> 0 Page Fault.
When 3 comes it takes the place of 0, because 0 is the most recently used page —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 4 comes it takes the place of 0 —> 1 Page Fault.
2 is already in memory —> 0 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault.
When 0 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 0 —> 1 Page Fault.
When 2 comes it takes the place of 3 —> 1 Page Fault.
When 3 comes it takes the place of 2 —> 1 Page Fault.
Processes & Threads
2. What is a thread in an OS?
A thread is a path of execution within a process, composed of a program counter, a thread ID, a stack, and a set of registers. It is the basic unit of CPU utilization; it makes communication more effective and efficient, enables utilization of multiprocessor architectures at greater scale and efficiency, and reduces the time required for context switching. It simply provides a way to improve the performance of applications through parallelism. Threads are sometimes called lightweight processes because they have their own stack but can access shared data.
Multiple threads running in a process share: Address space, Heap, Static data, Code segments, File
descriptors, Global variables, Child processes, Pending alarms, Signals, and signal handlers.
Each thread has its own: Program counter, Registers, Stack, and State.
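A small Python sketch can make these sharing rules visible: the global counter lives in the address space shared by all threads, while local_value lives on each thread’s private stack. All names here are illustrative:

```python
import threading

counter = 0                         # shared: lives in the process's address space
lock = threading.Lock()

def worker():
    global counter
    local_value = 0                 # private: lives on this thread's own stack
    for _ in range(100_000):
        local_value += 1
    with lock:                      # updates to shared data need synchronization
        counter += local_value

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                      # 400000: every thread updated the same variable
```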
9. What is the difference between a process and a thread?
Process: It is basically a program that is currently under execution by one or more threads. It is a very important part of a modern OS.
Thread: It is a path of execution within a process, composed of the program counter, thread ID, stack, and set of registers.
● A process is a computer program under execution. A thread is a component of the process and its smallest execution unit.
● Processes are heavyweight. Threads are lightweight.
● A process has its own memory space. A thread uses the memory of the process it belongs to.
● Creating a process is more difficult than creating a thread. Creating a thread is easier.
● A process requires more resources than a thread. A thread requires fewer resources than a process.
● A process takes more time to create and terminate than a thread. A thread takes less time to create and terminate.
● Processes usually run in separate memory spaces. Threads usually run in a shared memory space.
● Processes do not share data with each other. Threads share data with each other.
● A process can be divided into multiple threads. A thread cannot be further subdivided.
50. What is the difference between a process and a thread?
1. A process is any program in execution. A thread is a segment of a process.
2. A process is less efficient in terms of communication. A thread is more efficient in terms of communication.
3. Processes are isolated. Threads share memory.
4. A process is called a heavyweight process. A thread is called a lightweight process.
5. Process switching uses an interface to the operating system. Thread switching does not require a call to the operating system or an interrupt to the kernel.
6. If one process is blocked, the execution of other processes is not affected. If one user-level thread is blocked, the other threads in the same process cannot run until it unblocks.
7. A process has its own Process Control Block, stack, and address space. A thread shares its parent’s PCB and address space, and has its own Thread Control Block and stack.
1. What is a process and a process table?
A process is an instance of a program in execution. For example, a Web Browser is a process, and a shell
(or command prompt) is a process. The operating system is responsible for managing all the processes
that are running on a computer and allocates each process a certain amount of time to use the
processor. In addition, the operating system also allocates various other resources that processes will
need, such as computer memory or disks. To keep track of the state of all the processes, the operating
system maintains a table known as the process table. Inside this table, every process is listed along with
the resources the process is using and the current state of the process.
3. What is a Thread?
A thread is a single sequence stream within a process. Because threads have some of the properties of
processes, they are sometimes called lightweight processes. Threads are a popular way to improve the
application through parallelism. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads, one thread to format the text, another thread to process inputs, etc.
Deadlock
A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process.
Consider an example in which two trains are coming toward each other on the same track and there is only one track: neither train can move once they are in front of each other. A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s). For example, suppose Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.
Deadlock can arise if the following four conditions hold simultaneously (necessary conditions):
1. Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a time).
2. Hold and Wait: A process is holding at least one resource and waiting for additional resources.
3. No Preemption: A resource cannot be taken from a process unless the process releases it.
4. Circular Wait: A set of processes wait for each other in circular fashion.
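All four conditions can be reproduced with two locks acquired in opposite orders. The following Python sketch (names and timings are illustrative) deliberately drives two daemon threads into a deadlock and then observes that neither finishes:

```python
import threading
import time

lock_a = threading.Lock()   # stands in for "Resource 1"
lock_b = threading.Lock()   # stands in for "Resource 2"

def process_1():
    with lock_a:                    # hold Resource 1 ...
        time.sleep(0.1)
        with lock_b:                # ... and wait for Resource 2 (hold and wait)
            pass

def process_2():
    with lock_b:                    # hold Resource 2 ...
        time.sleep(0.1)
        with lock_a:                # ... and wait for Resource 1 (circular wait)
            pass

t1 = threading.Thread(target=process_1, daemon=True)
t2 = threading.Thread(target=process_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=1); t2.join(timeout=1)
# Almost certainly True: each thread holds one lock and waits for the other.
print("deadlocked:", t1.is_alive() and t2.is_alive())
```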
Methods for handling deadlock
There are three ways to handle deadlock.
1) Deadlock prevention or avoidance:
Prevention:
The idea is to never let the system enter a deadlock state, by making sure that at least one of the four necessary conditions mentioned above cannot arise. These techniques can be very costly, so we use them when our priority is keeping the system deadlock-free. Prevention is done by negating one of the necessary conditions for deadlock, and it can be done in four different ways:
● Eliminate mutual exclusion
● Allow preemption
● Solve hold and wait
● Solve circular wait
Avoidance:
Avoidance is forward-looking: we must assume that all information about the resources a process will need is known to us before the execution of the process. We use the Banker’s algorithm (due to Dijkstra) to avoid deadlock.
In prevention and avoidance, we get correctness of data, but performance decreases.
2) Deadlock detection and recovery: If deadlock prevention or avoidance is not applied, we can handle deadlock by detection and recovery, which consists of two phases:
● In the first phase, we examine the state of the processes and check whether there is a deadlock in the system.
● If a deadlock is found in the first phase, we apply an algorithm to recover from it.
In deadlock detection and recovery, we get correctness of data, but performance decreases.
3) Deadlock ignorance: If deadlock is very rare, we let it happen and reboot the system. This is the approach that both Windows and UNIX take; it is known as the ostrich algorithm.
With deadlock ignorance, performance is better than with the above two methods, but correctness of data is not guaranteed.
Safe State:
A safe state is a state in which there is no deadlock. A state is safe if there exists a sequence in which all requested resources can eventually be allocated to every process: if a process needs an unavailable resource, it can wait until the resource is released by a process to which it has already been allocated. If no such sequence exists, the state is unsafe.
What is Banker’s algorithm?
The Banker’s algorithm is a resource allocation and deadlock avoidance algorithm. It tests for safety by simulating allocation up to the predetermined maximum possible amounts of all resources, then makes a “safe-state” check to test for possible activities, before deciding whether the allocation should be allowed to continue.
Details …………………..
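As an illustration, here is a hedged Python sketch of the safety check at the core of the Banker’s algorithm. The function and the sample matrices are a classic textbook-style instance, not taken from this document:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: return a safe sequence of process indices, or None.

    available:  list of free units per resource type
    max_need:   max_need[i][j] = maximum demand of process i for resource j
    allocation: allocation[i][j] = units currently held by process i
    """
    n = len(max_need)
    work = available[:]
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None   # no process can finish: the state is unsafe
    return sequence

# Hypothetical instance: 5 processes, 3 resource types.
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))   # [1, 3, 0, 2, 4] (a safe sequence exists)
```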
File and Disk Management
What is a File System?
A file system is a method an operating system uses to store, organize, and manage files and directories
on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of Windows and other
operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A newer file system introduced by Apple for its Macs and iOS devices.
A file is a collection of related information recorded on secondary storage; alternatively, a file is a collection of logically related entities. From the user’s perspective, a file is the smallest allotment of logical secondary storage.
Unix File System
Unix File System is a logical method of organizing and storing large amounts of information in a way that
makes it easy to manage. A file is the smallest unit in which the information is stored. Unix file system
has several important features. All data in Unix is organized into files, and all files are organized into directories. These directories are organized into a tree-like structure called the file system. Files in a Unix system are organized into a multi-level hierarchy known as a directory tree. At the very top of the file system is a directory called “root”, which is represented by “/”. All other files are “descendants” of root.
Types of Unix Files
The UNIX file system contains several different types of files:
Ordinary Files
An ordinary file is a file on the system that contains data, text, or program instructions.
● Used to store your information, such as some text you have written or an image you have
drawn. This is the type of file that you usually work with.
● Always located within/under a directory file.
● Do not contain other files.
● In long-format output of ls -l, this type of file is specified by the “-” symbol.
Directories
Directories store both special and ordinary files. For users familiar with Windows or Mac OS, UNIX
directories are equivalent to folders. A directory file contains an entry for every file and subdirectory that
it houses. If you have 10 files in a directory, there will be 10 entries in the directory. Each entry has two
components. (1) The Filename (2) A unique identification number for the file or directory (called the
inode number)
● Branching points in the hierarchical tree.
● Used to organize groups of files.
● May contain ordinary files, special files or other directories.
● Never contain “real” information which you would work with (such as text). Basically, just
used for organizing files.
● All files are descendants of the root directory, ( named / ) located at the top of the tree.
In long-format output of ls -l, this type of file is specified by the “d” symbol.
Special Files
Used to represent a real physical device such as a printer, tape drive or terminal, used for Input/Output
(I/O) operations. Device or special files are used for device Input/Output(I/O) on UNIX and Linux systems.
They appear in a file system just like an ordinary file or a directory. On UNIX systems there are two flavors of special files for each device: character special files and block special files.
● When a character special file is used for device Input/Output(I/O), data is transferred one
character at a time. This type of access is called raw device access.
● When a block special file is used for device Input/Output(I/O), data is transferred in large
fixed-size blocks. This type of access is called block device access.
For terminal devices, it’s one character at a time. For disk devices though, raw access means reading or
writing in whole chunks of data – blocks, which are native to your disk.
● In long-format output of ls -l, character special files are marked by the “c” symbol.
● In long-format output of ls -l, block special files are marked by the “b” symbol.
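These one-character type codes can also be checked programmatically. Here is a short Python sketch using the standard stat module; the helper name and example paths are illustrative and assume a Unix-like system:

```python
import os
import stat

def file_type(path):
    """Classify a path the way the first character of `ls -l` output does."""
    mode = os.lstat(path).st_mode      # lstat: do not follow symbolic links
    for test, symbol in [(stat.S_ISREG, "-"), (stat.S_ISDIR, "d"),
                         (stat.S_ISCHR, "c"), (stat.S_ISBLK, "b"),
                         (stat.S_ISFIFO, "p"), (stat.S_ISSOCK, "s"),
                         (stat.S_ISLNK, "l")]:
        if test(mode):
            return symbol
    return "?"

print(file_type("/dev/null"))   # "c": character special file
print(file_type("/tmp"))        # "d": directory
```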
Pipes
UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file which exists only to hold data from one command until it is read by another. A Unix pipe provides a one-way flow of data: the output of the first command sequence is used as the input to the second command sequence. To make a pipe, put a vertical bar (|) on the command line between two commands, for example: who | wc -l. In long-format output of ls -l, named pipes are marked by the “p” symbol.
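The same pipeline can also be built from within a program. A minimal Python sketch using the standard subprocess module, assuming a Unix-like system where who and wc are available:

```python
import subprocess

# Connect two commands with a pipe, like the shell pipeline: who | wc -l
who = subprocess.Popen(["who"], stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=who.stdout, stdout=subprocess.PIPE)
who.stdout.close()                 # let `who` receive SIGPIPE if `wc` exits first
output, _ = wc.communicate()
print(output.decode().strip())     # number of logged-in users
```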
Sockets
A Unix socket (or Inter-process communication socket) is a special file which allows for advanced
inter-process communication. A Unix Socket is used in a client-server application framework. In essence,
it is a stream of data, very similar to network stream (and network sockets), but all the transactions are
local to the filesystem. In long-format output of ls -l, Unix sockets are marked by “s” symbol.
Symbolic Link
A symbolic link is used for referencing some other file of the file system. A symbolic link is also known as a soft link. It contains a text form of the path to the file it references. To an end user, a symbolic link appears to have its own name, but when you try reading or writing data to this file, the operations are instead redirected to the file it points to. If we delete the soft link itself, the data file is still there. If we delete the source file or move it to a different location, the symbolic link will no longer function properly. In long-format output of ls -l, symbolic links are marked by the “l” symbol (a lowercase L).
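This behavior is easy to observe from Python on a Unix-like system (on Windows, creating symbolic links may require extra privileges). The file names are hypothetical:

```python
import os
import tempfile

# Create a data file and a symbolic link to it in a temporary directory.
d = tempfile.mkdtemp()
target = os.path.join(d, "data.txt")
link = os.path.join(d, "shortcut.txt")
with open(target, "w") as f:
    f.write("hello")

os.symlink(target, link)           # the link stores the path to the target
print(os.readlink(link))           # .../data.txt
print(open(link).read())           # "hello": reads are redirected to the target

os.remove(link)                    # deleting the link leaves data.txt intact
os.remove(target)                  # deleting the target leaves any remaining link dangling
```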
What are blocking and buffering in an operating system?
Blocking: the process of grouping several components into one block.
Clustering: grouping file components according to access behaviour.
Considerations affecting block size:
● size of available main memory
● space reserved for programs (and their internal data space) that use the files
● size of one component of the block
● characteristics of the external storage device used
Buffering: a software interface that reconciles the blocked components of a file with a program that accesses the information as single components. A buffering interface is of one of two types: a blocking routine or a deblocking routine.
Alternatively: buffering means that when we run an application, the OS loads it into a buffer (RAM); blocking means that the OS blocks applications that attempt malicious operations, such as corrupting the registry.
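In the first sense above, a deblocking routine reads whole blocks from the device but hands the program one logical component at a time. A minimal Python sketch of that idea, with an assumed block size of 4 KB and lines as the logical components:

```python
# A deblocking sketch: read a file in fixed-size blocks through a buffer,
# while the caller still consumes it one logical record (line) at a time.
BLOCK_SIZE = 4096                 # assumed block size; real values depend on the device

def records(path):
    """Deblocking routine: yield single components (lines) from block-sized reads."""
    leftover = b""
    with open(path, "rb", buffering=0) as f:   # unbuffered; we do our own blocking
        while block := f.read(BLOCK_SIZE):
            data = leftover + block
            *lines, leftover = data.split(b"\n")
            yield from lines
    if leftover:
        yield leftover             # final record if the file lacks a trailing newline
```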