Operating Systems
What do you mean by an operating system? What are its basic functions?
An Operating System (OS) is software that manages and handles the hardware and software resources of
a computer system. It provides an interface between the users of a computer and the computer hardware. An
operating system is responsible for managing and controlling all activities and the sharing of computer
resources. An operating system is low-level software that includes all the basic functions such as processor
management, memory management, error detection, etc.
Time-Sharing OS
Advantages of Time-Sharing OS
Each task gets an equal opportunity.
Fewer chances of duplication of software.
CPU idle time can be reduced.
Resource Sharing: Time-sharing systems allow multiple users to share hardware resources such
as the CPU, memory, and peripherals, reducing the cost of hardware and increasing efficiency.
Improved Productivity: Time-sharing allows users to work concurrently, thereby reducing the
waiting time for their turn to use the computer. This increased productivity translates to more
work getting done in less time.
Improved User Experience: Time-sharing provides an interactive environment that allows users
to communicate with the computer in real time, providing a better user experience than batch
processing.
Disadvantages of Time-Sharing OS
Reliability problem.
One must take care of the security and integrity of user programs and data.
Data communication problem.
High Overhead: Time-sharing systems have a higher overhead than other operating systems due
to the need for scheduling, context switching, and other overheads that come with supporting
multiple users.
Complexity: Time-sharing systems are complex and require advanced software to manage
multiple users simultaneously. This complexity increases the chance of bugs and errors.
Security Risks: With multiple users sharing resources, the risk of security breaches increases.
Time-sharing systems require careful management of user access, authentication, and
authorization to ensure the security of data and software.
Examples of Time-Sharing OS with explanation
IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was first introduced in
1972. It is still in use today, providing a virtual machine environment that allows multiple users
to run their own instances of operating systems and applications.
TSO (Time Sharing Option): TSO is a time-sharing operating system that was first introduced in
the 1960s by IBM for the IBM System/360 mainframe computer. It allowed multiple users to
access the same computer simultaneously, running their own applications.
Windows Terminal Services: Windows Terminal Services is a time-sharing service that
allows multiple users to access a Windows server remotely. Users can run their own applications
and access shared resources, such as printers and network storage, in real time.
6. Distributed Operating System
Distributed operating systems are a recent advancement in the world of computer technology and are
being widely accepted all over the world at a great pace. Various autonomous
interconnected computers communicate with each other over a shared communication network.
Independent systems possess their own memory unit and CPU, and are referred to as loosely coupled
systems or distributed systems. These systems' processors differ in size and function. The major benefit
of working with this type of operating system is that a user can always access
files or software that are not actually present on their own system but on some other system connected
within this network, i.e., remote access is enabled within the devices connected to that network.
Real-Time Operating System (RTOS)
Advantages of RTOS
Maximum Consumption: Maximum utilization of devices and systems, thus more output from
all the resources.
Task Shifting: The time taken for shifting tasks in these systems is very short. For example,
older systems take about 10 microseconds to shift from one task to another, while the
latest systems take about 3 microseconds.
Focus on Application: The focus is on running applications, with less importance given to
applications waiting in the queue.
Real-time operating systems in embedded systems: Since the size of programs is small, an RTOS
can also be used in embedded systems such as transport systems and others.
Error Free: These types of systems are designed to be error-free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS
Limited Tasks: Very few tasks run at the same time, and concentration is kept on a few
applications in order to avoid errors.
Use of heavy system resources: The system resources used are sometimes substandard and they are
expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals
so that it can respond to interrupts as quickly as possible.
Thread Priority: Setting thread priorities is difficult, as these systems are not very prone to
switching tasks.
Difference Between 32-bit and 64-bit Operating Systems
Memory: A 32-bit OS supports a maximum of 4 GB of RAM; a 64-bit OS supports a maximum of several terabytes of RAM.
Processor: A 32-bit OS can run on both 32-bit and 64-bit processors; a 64-bit OS requires a 64-bit processor.
Performance: A 32-bit OS is limited by the maximum amount of RAM it can access; a 64-bit OS can take advantage of more memory, enabling faster performance.
Compatibility: A 32-bit OS can run 32-bit and 16-bit applications; a 64-bit OS can run 32-bit and 64-bit applications.
Address Space: A 32-bit OS uses a 32-bit address space; a 64-bit OS uses a 64-bit address space.
Hardware support: A 32-bit OS may not support newer hardware; a 64-bit OS supports newer hardware with 64-bit drivers.
Security: A 32-bit OS has limited security features; a 64-bit OS has more advanced security features, such as hardware-level protection.
Application support: A 32-bit OS has limited support for new software; a 64-bit OS supports newer software designed for 64-bit architecture.
Price: A 32-bit OS is less expensive than a 64-bit OS; a 64-bit OS is more expensive than a 32-bit OS.
Multitasking: A 32-bit OS can handle multiple tasks but with limited efficiency; a 64-bit OS can handle multiple tasks more efficiently.
Gaming: A 32-bit OS can run graphics-heavy games, but may not be as efficient as a 64-bit OS; a 64-bit OS can run graphics-heavy games and handle complex software more efficiently.
Virtualization: A 32-bit OS has limited support for virtualization; a 64-bit OS has better support for virtualization.
Resource sharing and memory management across programs, tasks, threads, and processes:
Resource Sharing: Resources (CPU, memory) are shared among programs; shared among tasks; shared among threads; each process, by contrast, has its own set of resources (CPU, memory).
Memory Management: Each program has its own memory space; each task has its own memory space; threads share memory space within a task; each process has its own memory space.
What is UEFI?
UEFI stands for Unified Extensible Firmware Interface. It is a modern replacement for the traditional BIOS
(Basic Input/Output System) firmware interface found in computers. UEFI is the first software that runs
when the computer is powered on, providing the functions needed to initialize hardware components and
boot the operating system. It improves performance, security, and compatibility by bridging the gap
between the computer's operating system and its hardware.
Features of UEFI
● Support for modern hardware: UEFI supports new hardware technologies and features such
as larger hard drives, faster boot times, and improved security measures.
● Graphical User Interface (GUI): Unlike the text-based interface of the BIOS, UEFI typically
includes a graphical interface that makes it easier to access and edit system settings.
● Secure Boot: UEFI includes a Secure Boot feature, which helps prevent the installation of
malicious software during boot by checking the digital signatures of the bootloader and OS
components.
● Compatible disk sizes: UEFI supports GUID Partition Table (GPT) disks, allowing for larger
partitions and more partitions compared to the older Master Boot Record (MBR) partition
scheme.
● Network capabilities: UEFI firmware can be network capable, allowing it to be accessed and
used over the network (for example, for network booting).
What is BIOS?
It stands for Basic Input Output System. It is a firmware interface that acts as the first software layer
between the hardware components and the operating system of a PC. The BIOS is responsible for performing
vital tasks during the boot process and provides basic input/output services for the
operating system and installed software.
Difference Between UEFI and BIOS
Role: UEFI acts as the first software that runs when the computer is powered on, providing the necessary functions to initialize the operating system and the hardware components. BIOS stands for Basic Input/Output System; it is a firmware interface that acts as the first software layer between hardware components and the operating system of a computer system.
Drivers: UEFI provides a unified driver model, which allows drivers to be used for both firmware and operating systems. BIOS drivers are specific to the BIOS firmware and may not be compatible with the operating system.
Boot speed: UEFI starts hardware in parallel, which speeds up boot time. BIOS starts hardware slowly, one device at a time, which can cause slow boot times.
Interface: UEFI often includes a graphical user interface (GUI) for easy navigation and configuration. BIOS interfaces are often text-based, which can be difficult for users.
Disk support: UEFI supports GUID Partition Table (GPT) disks, allowing larger partitions and more partitions to be created. BIOS is usually limited to the Master Boot Record (MBR) partition scheme, with limitations on partition size and number.
Monolithic Kernel: It is an OS architecture that supports all basic features of computer components
such as resource management, memory, file, etc.
Example: Solaris, DOS, OpenVMS, Linux, etc.
Difference Between Microkernel and Monolithic Kernel
In a microkernel, kernel services and user services are present in different address spaces; in a monolithic kernel, kernel services and user services are usually present in the same address space.
A microkernel is smaller in size than a monolithic kernel.
A microkernel is easily extendible; a monolithic kernel is hard to extend.
In a microkernel, if a service crashes, it does not affect the working of the rest of the kernel; in a monolithic kernel, if a service crashes, the whole system crashes.
A microkernel uses message queues to achieve inter-process communication; a monolithic kernel uses signals and sockets.
49. What is the difference between the Operating system and kernel?
The operating system is system software; the kernel is the system software that is part of the operating system.
The operating system provides an interface between the user and the hardware; the kernel provides an interface between applications and the hardware.
The operating system also provides protection and security; the kernel's main purposes are memory management, disk management, process management, and task management.
All systems need an operating system to run; all operating systems need a kernel to run.
Types of operating systems include single-user and multi-user OS, multiprocessor OS, real-time OS, and distributed OS; types of kernels include monolithic kernels and microkernels.
The operating system is the first program to load when the computer boots up; the kernel is the first program to load when the operating system loads.
CPU Scheduling
What is a process?
In computing, a process is the instance of a computer program that is being executed by one or many
threads. It contains the program code and its activity. Depending on the operating system (OS), a process
may be made up of multiple threads of execution that execute instructions concurrently.
States of Process
A process is in one of the following states:
New: A newly created process (a process being created).
Ready: After creation, the process moves to the ready state, i.e., the process is ready for
execution.
Run: The process currently running on the CPU (only one process at a time can be under execution on a
single processor).
Wait (or Block): When a process requests I/O access.
Complete (or Terminated): The process has completed its execution.
Suspended Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
Suspended Block: When the waiting queue becomes full, some blocked processes are moved to a suspended block state.
Process management
Process management includes various tools and techniques such as process mapping, process analysis,
process improvement, process automation, and process control. By applying these tools and techniques,
organizations can streamline their processes, eliminate waste, and improve productivity. Overall, process
management is a critical aspect of modern business operations and can help organizations achieve their
goals and stay competitive in today’s rapidly changing marketplace.
Key Components of Process Management
Below are some key component of process management.
Process mapping: Creating visual representations of processes to understand how tasks flow,
identify dependencies, and uncover improvement opportunities.
Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and areas for
improvement.
Process redesign: Making changes to existing processes or creating new ones to optimize
workflows and enhance performance.
Process implementation: Introducing the redesigned processes into the organization and
ensuring proper execution.
Process monitoring and control: Tracking process performance, measuring key metrics, and
implementing control mechanisms to maintain efficiency and effectiveness.
Advantages of Process Management
Improved Efficiency: Process management can help organizations identify bottlenecks and
inefficiencies in their processes, allowing them to make changes to streamline workflows and
increase productivity.
Cost Savings: By identifying and eliminating waste and inefficiencies, process management can
help organizations reduce costs associated with their business operations.
Improved Quality: Process management can help organizations improve the quality of their
products or services by standardizing processes and reducing errors.
Increased Customer Satisfaction: By improving efficiency and quality, process management can
enhance the customer experience and increase satisfaction.
Compliance with Regulations: Process management can help organizations comply with
regulatory requirements by ensuring that processes are properly documented, controlled, and
monitored.
What is Context Switching?
Context switching is basically the process of saving the context of one process and loading the context of another
process. It is one of the cost-effective and time-saving measures executed by the CPU, because it allows multiple
processes to share a single CPU. Therefore, it is considered an important part of a modern OS. This technique is
used by the OS to switch a process from one state to another, i.e., from the running state to the ready state. It also
allows a single CPU to handle and control various processes or threads without the need for additional
resources.
Why is context switching necessary?
Switching context is a requirement for the operating system to run different processes concurrently despite having
only one CPU. By promptly alternating between these processes, the operating system is capable of presenting the
impression of parallel execution, a vital feature for contemporary multi-tasking systems.
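To make the saved-and-restored context concrete, here is a toy Python sketch (not real kernel code): the PCB class, the register names, and the context_switch function are all illustrative assumptions standing in for the kernel's real data structures.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Hypothetical, heavily simplified process control block: the
    state the kernel saves and restores on a context switch."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, current, nxt):
    # Save the context of the currently running process into its PCB...
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    # ...then load the next process's saved context onto the CPU.
    cpu["pc"] = nxt.program_counter
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "running"

cpu = {"pc": 104, "regs": {"r0": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=200, registers={"r0": 42})
context_switch(cpu, p1, p2)
print(cpu, p1, p2, sep="\n")  # CPU now holds P2's context; P1's is saved in its PCB
```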
Schedulers
Schedulers are special system software that handles process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run.
There are three types of Scheduler:
1. Long-term (job) scheduler – Due to the small size of main memory, all programs are initially
stored in secondary memory. When they are loaded into main memory, they are
called processes. It is the decision of the long-term scheduler how many
processes will stay in the ready queue. Hence, in simple words, the long-term scheduler decides
the degree of multiprogramming of the system.
2. Medium-term scheduler – Most often, a running process needs an I/O operation, which doesn't
require the CPU. Hence, during the execution of a process, when an I/O operation is required,
the operating system sends that process from the running queue to the blocked queue. When a
process completes its I/O operation, it should again be shifted to the ready queue. All these
decisions are taken by the medium-term scheduler. Medium-term scheduling is a part of
swapping.
3. Short-term (CPU) scheduler – When there are many processes in main memory, all of them are
initially present in the ready queue. From among all of these processes, a single process is selected for
execution. This decision is handled by the short-term scheduler.
Dispatcher
A dispatcher is a special program which comes into play after the scheduler. When the scheduler
completes its job of selecting a process, it is the dispatcher which takes that process to the desired
state/queue. The dispatcher is the module that gives a process control over the CPU after it has been
selected by the short-term scheduler. This function involves the following:
● Switching context
● Switching to user mode
● Jumping to the proper location in the user program to restart that program
Difference Between Dispatcher and Scheduler
Time taken: The time taken by the dispatcher is called dispatch latency. The time taken by the scheduler is usually negligible, so we neglect it.
Functions: The dispatcher is also responsible for context switching, switching to user mode, and jumping to the proper location when the process is restarted. The scheduler's only work is the selection of processes.
Tasks: The dispatcher allocates the CPU to the process selected by the short-term scheduler. The scheduler performs three tasks: job scheduling (long-term scheduler), CPU scheduling (short-term scheduler), and swapping (medium-term scheduler).
Purpose: The dispatcher moves the process from the ready queue to the CPU. The scheduler selects a process and decides which process to run.
Execution time: The dispatcher takes a very short execution time. The scheduler takes a longer execution time than the dispatcher.
Interaction: The dispatcher works with the CPU and the selected process. The scheduler works with the ready queue and the dispatcher.
Characteristics of SJF:
Shortest Job First has the advantage of having the minimum average waiting time among all
operating system scheduling algorithms.
Each job is associated with a unit of time it needs to complete.
It may cause starvation if shorter processes keep coming. This problem can be solved using the
concept of aging.
Advantages of Shortest Job first:
As SJF reduces the average waiting time, it is better than the first come, first serve
scheduling algorithm.
SJF is generally used for long-term scheduling.
Disadvantages of SJF:
One of the demerits of SJF is starvation.
It is often difficult to predict the length of the upcoming CPU burst.
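As a concrete illustration, the Python sketch below computes per-process waiting times under non-preemptive SJF, assuming all jobs arrive at time 0; the burst values are made up for the example.

```python
def sjf_waiting_times(burst_times):
    """Non-preemptive SJF with all jobs arriving at time 0: run the
    shortest job first and accumulate each job's waiting time."""
    waits, elapsed = [], 0
    for burst in sorted(burst_times):
        waits.append(elapsed)       # a job waits for all shorter jobs before it
        elapsed += burst
    return waits

bursts = [6, 8, 7, 3]                    # hypothetical CPU burst lengths
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))    # [0, 3, 9, 16] -> average 7.0
```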
4. Priority Scheduling:
Preemptive priority CPU scheduling is a preemptive method of CPU scheduling
that works based on the priority of a process. In this algorithm, each process is assigned a
priority, and the most important (highest-priority) process must be executed first. In the case of any conflict, that
is, where there is more than one process with equal priority, the algorithm falls back on the
FCFS (First Come First Serve) ordering.
Characteristics of Priority Scheduling:
Schedules tasks based on priority.
When higher-priority work arrives while a task with lower priority is executing, the higher-priority
process takes the place of the lower-priority process, and
the latter is suspended until the execution is complete.
The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
The average waiting time is less than FCFS
Less complex
Disadvantages of Priority Scheduling:
One of the most common demerits of the preemptive priority CPU scheduling algorithm is the
starvation problem: a process may have to wait a long time before it gets scheduled onto the
CPU (see the sketch below).
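The following Python sketch simulates this preemptive priority scheme one time unit at a time, with hypothetical processes and arrival times; lower numbers mean higher priority, and FCFS (earlier arrival) breaks ties, as described above.

```python
def preemptive_priority(processes):
    """Simulate preemptive priority scheduling (lower number = higher
    priority). Each time unit the highest-priority ready process runs;
    FCFS (arrival order) breaks ties. Returns completion times."""
    remaining = {p["pid"]: p["burst"] for p in processes}
    completion, time = {}, 0
    while remaining:
        ready = [p for p in processes
                 if p["arrival"] <= time and p["pid"] in remaining]
        if not ready:
            time += 1                      # CPU idle until the next arrival
            continue
        # Pick the highest priority; the earlier arrival wins a tie (FCFS).
        current = min(ready, key=lambda p: (p["priority"], p["arrival"]))
        remaining[current["pid"]] -= 1     # run for one time unit
        time += 1
        if remaining[current["pid"]] == 0:
            completion[current["pid"]] = time
            del remaining[current["pid"]]
    return completion

procs = [
    {"pid": "P1", "arrival": 0, "burst": 4, "priority": 2},
    {"pid": "P2", "arrival": 1, "burst": 3, "priority": 1},  # preempts P1
    {"pid": "P3", "arrival": 2, "burst": 2, "priority": 3},
]
print(preemptive_priority(procs))  # {'P2': 4, 'P1': 7, 'P3': 9}
```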
5. Round robin:
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It
is the preemptive version of the First Come First Serve CPU scheduling algorithm and generally
focuses on the time-sharing technique.
Characteristics of Round robin:
It is simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
It is one of the most widely used methods in CPU scheduling.
It is considered preemptive, as processes are given the CPU for only a very limited time.
Advantages of Round robin:
Round robin seems to be fair as every process gets an equal share of CPU.
The newly created process is added to the end of the ready queue.
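A minimal Python simulation of Round Robin is sketched below, using a hypothetical set of bursts and a fixed time quantum; it shows how preempted processes cycle to the back of the ready queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin: each process runs for at most `quantum`
    time units, then is preempted and sent to the back of the queue."""
    queue = deque(bursts.items())
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))   # preempted: requeue
        else:
            completion[pid] = time                 # finished
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```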
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system
to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve
the problem of race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors,
and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important
aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.
Race Condition
When more than one process executes the same code or accesses the same memory or any shared
variable, there is a possibility that the output or the value of the shared variable is
wrong; all the processes are effectively racing to claim that their output is correct. This condition is known
as a race condition. When several processes access and manipulate the same data
concurrently, the outcome depends on the particular order in which the accesses take place. A
race condition is a situation that may occur inside a critical section: the result of
multiple threads executing in the critical section differs according to the order in which the threads
execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic
instruction. Proper thread synchronization using locks or atomic variables can also prevent race
conditions.
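The following Python sketch demonstrates a race condition on a shared counter and how a lock (treating the increment as an atomic critical section) prevents it; the counter names and loop sizes are arbitrary, and the exact number of lost updates varies from run to run.

```python
import threading

counter = 0          # updated without synchronization (racy)
safe_counter = 0     # updated inside a lock-protected critical section
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write is not atomic: updates can be lost

def safe_increment(n):
    global safe_counter
    for _ in range(n):
        with lock:            # the critical section now executes atomically
            safe_counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(4)]
threads += [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter, safe_counter)  # counter may fall short of 400000; safe_counter is exact
```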
Semaphores
A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by
another thread. This is different from a mutex, as a mutex can be signaled only by the thread that
called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
A Semaphore is an integer variable, which can be accessed only through two operations wait() and
signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as they
can provide mutual exclusion. All the processes can share the same mutex semaphore, which
is initialized to 1. A process has to wait until the semaphore's value becomes 1; it then
sets the value to 0 and enters its critical section. When it completes its critical section,
it resets the value of the mutex semaphore to 1, and some other process can enter its critical
section.
Counting Semaphores: They can have any value and are not restricted to a certain domain. They
can be used to control access to a resource that has a limitation on the number of simultaneous
accesses. The semaphore can be initialized to the number of instances of the resource.
Whenever a process wants to use that resource, it checks if the number of remaining instances is
more than zero, i.e., the process has an instance available. Then, the process can enter its critical
section thereby decreasing the value of the counting semaphore by 1. After the process is over
with the use of the instance of the resource, it can leave the critical section thereby adding 1 to
the number of available instances of the resource.
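As an illustration, this Python sketch uses a counting semaphore (via threading.Semaphore) initialized to the number of instances of a hypothetical resource pool of 3 connections; acquire() plays the role of wait() and release() the role of signal().

```python
import threading, time

# A counting semaphore initialized to the number of resource instances;
# here, a hypothetical pool of 3 identical connections.
pool = threading.Semaphore(3)

def worker(i):
    pool.acquire()              # wait(): blocks while all 3 instances are in use
    try:
        print(f"worker {i} is using a connection")
        time.sleep(0.1)         # simulate using the resource
    finally:
        pool.release()          # signal(): return the instance to the pool

threads = [threading.Thread(target=worker, args=(i,)) for i in range(6)]
for t in threads: t.start()
for t in threads: t.join()
```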
65. What is Peterson’s approach?
It is a concurrent programming algorithm. It is used to synchronize two processes that maintain the
mutual exclusion for the shared resource. It uses two variables, a bool array flag of size 2 and an int
variable turn to accomplish it.
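A Python sketch of Peterson's algorithm follows. It is illustrative only: it relies on CPython's interpreter behaving close to sequentially consistent, whereas real implementations on modern hardware need memory fences or atomic instructions, and the iteration count and switch interval are arbitrary choices to keep the busy-waiting demo fast.

```python
import sys, threading

sys.setswitchinterval(5e-5)  # switch threads often; busy-waiting is slow otherwise

# Shared state for Peterson's algorithm (two processes, ids 0 and 1).
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared variable protected by the algorithm

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(10_000):
        flag[i] = True               # announce intent to enter
        turn = other                 # give the other process priority
        while flag[other] and turn == other:
            pass                     # busy-wait while the other is inside
        counter += 1                 # critical section
        flag[i] = False              # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000 if mutual exclusion held
```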
Memory Management
Memory Hierarchy Design and its Characteristics
In the Computer System Design, Memory Hierarchy is an enhancement to organize the memory such
that it can minimize the access time. The Memory Hierarchy was developed based on a program
behavior known as locality of references. The figure below clearly demonstrates the different levels of
the memory hierarchy.
Why Memory Hierarchy is Required in the System?
Memory hierarchy is one of the most essential concepts in computer memory, as it helps optimize the
use of the memory available in the computer. There are multiple levels in the hierarchy, each with a
different size, cost, and speed. Some types of memory, like cache and main memory, are faster than
other types of memory, but they have a smaller size and are also more costly, whereas
other types of memory have a higher storage capacity but are slower. Access to data is likewise not
uniform across memory types: some have faster access, whereas some have slower access.
Types of Memory Hierarchy
This Memory Hierarchy Design is divided into 2 main types:
External Memory or Secondary Memory: Comprising magnetic disk, optical disk, and
magnetic tape, i.e., peripheral storage devices that are accessible by the processor via an I/O
module.
Internal Memory or Primary Memory: Comprising main memory, cache memory, and CPU
registers. This is directly accessible by the processor.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile memory
unit that has a larger storage capacity than main memory. It is used to store data and instructions that
are not currently in use by the CPU. Secondary storage has the slowest access time and is typically the
least expensive type of memory in the memory hierarchy.
5. Magnetic Disk
Magnetic disks are simply circular plates fabricated from metal or plastic and coated with a
magnetizable material. Magnetic disks work at high speed inside the computer and are
frequently used.
6. Magnetic Tape
Magnetic tape is simply a magnetic recording medium consisting of a plastic film coated with a
magnetizable layer. It is generally used for the backup of data. In the case of magnetic tape, access
time is slower because the tape must be wound to the correct position before the data can be accessed.
7. ROM: ROM stands for Read Only Memory. ROM is a non-volatile memory, and it is used to store
important information which is used to operate the system. We can only read the programs and data
stored on it and cannot modify or delete them.
● MROM(Masked ROM): Hard-wired devices with a pre-programmed collection of data or
instructions were the first ROMs. Masked ROMs are a type of low-cost ROM that works in this
way.
● PROM (Programmable Read Only Memory): This read-only memory is modifiable once by the
user. The user purchases a blank PROM and uses a PROM program to put the required contents
into the PROM. Its content can’t be erased once written.
● EPROM (Erasable Programmable Read Only Memory): EPROM is an extension to PROM where
you can erase the content of ROM by exposing it to Ultraviolet rays for nearly 40 minutes.
● EEPROM (Electrically Erasable Programmable Read Only Memory): Here the written contents
can be erased electrically. You can erase and reprogram an EEPROM up to about 10,000 times. Erasing
and programming take very little time, i.e., roughly 4-10 ms (milliseconds). Any area in an
EEPROM can be wiped and programmed selectively.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical
memory. The process of retrieving processes in the form of pages from secondary storage into the
main memory is known as paging. The basic purpose of paging is to divide each process into pages;
main memory is correspondingly divided into frames. This scheme permits the physical address
space of a process to be non-contiguous.
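A minimal Python sketch of address translation under paging is shown below, assuming 4 KB pages (a common but not universal size) and a small hypothetical page table mapping page numbers to frame numbers.

```python
PAGE_SIZE = 4096  # assume 4 KB pages

# Hypothetical page table for one process: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Split a logical address into (page, offset), then map the page
    to its frame to build the physical address."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]  # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(8200))  # page 2, offset 8 -> frame 7 -> physical address 28680
```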
Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when a new page comes in. Page replacement
becomes necessary when a page fault occurs and there are no free page frames in memory. However,
another page fault would arise if the replaced page is referenced again. Hence it is important to replace a
page that is not likely to be referenced in the immediate future. If no page frame is free, the virtual
memory manager performs a page replacement operation to replace one of the pages existing in
memory with the page whose reference caused the page fault. It is performed as follows: The virtual
memory manager uses a page replacement algorithm to select one of the pages currently in memory for
replacement, accesses the page table entry of the selected page to mark it as “not present” in memory,
and initiates a page-out operation for it if the modified bit of its page table entry indicates that it is a
dirty page.
Page Fault: A page fault happens when a running program accesses a memory page that is mapped into
the virtual address space but not loaded in physical memory. Since actual physical memory is much
smaller than virtual memory, page faults happen. In case of a page fault, Operating System might have to
replace one of the existing pages with the newly needed page. Different page replacement algorithms
suggest different ways to decide which page to replace. The target for all algorithms is to reduce the
number of page faults.
Page Replacement Algorithms:
1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the
operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the
queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page
faults.
Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not available in memory,
so it replaces the oldest page slot, i.e., 1 —> 1 Page Fault. 6 comes; it is also not available in memory, so it
replaces the oldest page slot, i.e., 3 —> 1 Page Fault. Finally, when 3 comes, it is not available, so it replaces 0
—> 1 Page Fault.
Belady’s anomaly proves that it is possible to have more page faults when increasing the number of
page frames while using the First In First Out (FIFO) page replacement algorithm. For example, if we
consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 slots, we get 9 total page faults, but if we
increase the number of slots to 4, we get 10 page faults.
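The following Python sketch counts FIFO page faults; the function name is illustrative. It reproduces Example 1 (6 total faults) and the Belady's anomaly figures above (9 faults with 3 frames vs. 10 with 4).

```python
from collections import deque

def fifo_faults(reference_string, n_frames):
    """Count page faults under FIFO: evict the page that has been
    resident in memory the longest."""
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                         # hit: no fault
        faults += 1
        if len(frames) == n_frames:
            frames.remove(queue.popleft())   # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))           # 6, as in Example 1
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 vs 10: Belady's anomaly
```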
2. Optimal Page replacement: In this algorithm, pages are replaced which would not be used for the
longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find
the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault. When 3 comes, it takes the place of 7 because 7 is not used for
the longest duration of time in the future —> 1 Page fault. 0 is already there so —> 0 Page fault. 4
takes the place of 1 —> 1 Page Fault.
For the remaining page reference string —> 0 Page faults, because the pages are already available in the
memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know
future requests. The use of Optimal Page replacement is to set up a benchmark so that other
replacement algorithms can be analyzed against it.
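A Python sketch of Optimal replacement follows; it needs the future of the reference string, which is exactly why the algorithm serves only as a benchmark. It reproduces the 6 faults of Example 2.

```python
def optimal_faults(ref, n_frames):
    """Count page faults under Optimal replacement: evict the resident
    page whose next use lies farthest in the future (or never comes)."""
    frames, faults = [], 0
    for i, page in enumerate(ref):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
            continue
        # Next-use distance of each resident page; infinity if never used again.
        def next_use(p):
            future = ref[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(ref, 4))  # 6, matching Example 2
```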
3. Least Recently Used: In this algorithm, page will be replaced which is least recently used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find
the number of page faults.
Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault. When 3 comes, it takes the place of 7 because 7 is the least recently
used —> 1 Page fault
0 is already in memory so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault
For the remaining page reference string —> 0 Page faults, because the pages are already available in the
memory.
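A Python sketch of LRU follows, using an ordered dictionary to track recency; it reproduces the 6 faults of Example 3.

```python
from collections import OrderedDict

def lru_faults(ref, n_frames):
    """Count page faults under LRU: evict the page whose last use
    is furthest in the past."""
    frames, faults = OrderedDict(), 0
    for page in ref:
        if page in frames:
            frames.move_to_end(page)       # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)     # evict the least recently used
        frames[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(ref, 4))  # 6, matching Example 3
```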
4. Most Recently Used (MRU): In this algorithm, the page that has been used most recently is replaced.
Belady's anomaly can occur in this algorithm. Consider the same reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page faults
0 is already there so —> 0 Page fault
When 3 comes, it takes the place of 0 because 0 is the most recently used —> 1 Page fault
When 0 comes, it takes the place of 3 —> 1 Page fault
When 4 comes, it takes the place of 0 —> 1 Page fault
2 is already in memory so —> 0 Page fault
When 3 comes, it takes the place of 2 —> 1 Page fault
When 0 comes, it takes the place of 3 —> 1 Page fault
When 3 comes, it takes the place of 0 —> 1 Page fault
When 2 comes, it takes the place of 3 —> 1 Page fault
When 3 comes, it takes the place of 2 —> 1 Page fault
Multiple threads running in a process share: Address space, Heap, Static data, Code segments, File
descriptors, Global variables, Child processes, Pending alarms, Signals, and signal handlers.
Each thread has its own: Program counter, Registers, Stack, and State.
9. What is difference between process and thread?
Process: It is basically a program that is currently under execution by one or more threads. It is a very
important part of the modern-day OS.
Thread: It is a path of execution that is composed of the program counter, thread id, stack, and set of
registers within the process.
A process is a computer program that is under execution; a thread is the component or entity of the process that is the smallest execution unit.
Processes are heavyweight; threads are lightweight.
A process has its own memory space; a thread uses the memory of the process it belongs to.
It is more difficult to create a process than to create a thread.
A process requires more resources than a thread.
It takes more time to create and terminate a process than a thread.
Processes usually run in separate memory spaces; threads of the same process run in a shared memory space.
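The memory-space difference can be seen directly in Python: a thread's write to a global variable is visible to its process, while a child process (with its own memory space) leaves the parent's copy untouched. A small sketch, with the variable and function names chosen for illustration:

```python
import multiprocessing
import threading

value = 0

def bump():
    global value
    value += 1

if __name__ == "__main__":
    # A thread shares its process's address space: the update is visible.
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print("after thread:", value)      # 1

    # A child process has its own memory space: the parent's copy is untouched.
    p = multiprocessing.Process(target=bump)
    p.start(); p.join()
    print("after process:", value)     # still 1
```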
Deadlock
A deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and there is only
one track, none of the trains can move once they are in front of each other. A similar situation occurs in
operating systems when there are two or more processes that hold some resources and wait for
resources held by others. For example, Process 1 is holding Resource 1 and
waiting for Resource 2, which is acquired by Process 2, and Process 2 is waiting for Resource 1.
Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)
1. Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
2. Hold and Wait: A process is holding at least one resource and waiting for resources.
3. No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
4. Circular Wait: A set of processes waiting for each other in circular form.
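The following Python sketch reproduces these four conditions with two locks acquired in opposite orders; the acquire timeout is used only so the demonstration terminates instead of hanging, and the sleep is there to make the interleaving (and thus the deadlock) near-certain.

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first, second, name):
    with first:                    # mutual exclusion: hold one resource...
        time.sleep(0.1)            # ...long enough for the other thread to grab its lock
        # Hold and wait: keep `first` while requesting `second`.
        if second.acquire(timeout=1):
            print(name, "got both locks")
            second.release()
        else:
            # Circular wait with no preemption: each thread waits on the
            # lock the other holds. The timeout only ends the demonstration.
            print(name, "gave up waiting: deadlock")

# The two threads acquire the locks in opposite orders.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
t1.start(); t2.start(); t1.join(); t2.join()
```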
Ordinary Files
An ordinary file is a file on the system that contains data, text, or program instructions.
● Used to store your information, such as some text you have written or an image you have
drawn. This is the type of file that you usually work with.
● Always located within/under a directory file.
● Do not contain other files.
● In long-format output of ls -l, this type of file is specified by the “-” symbol.
Directories
Directories store both special and ordinary files. For users familiar with Windows or Mac OS, UNIX
directories are equivalent to folders. A directory file contains an entry for every file and subdirectory that
it houses. If you have 10 files in a directory, there will be 10 entries in the directory. Each entry has two
components. (1) The Filename (2) A unique identification number for the file or directory (called the
inode number)
● Branching points in the hierarchical tree.
● Used to organize groups of files.
● May contain ordinary files, special files or other directories.
● Never contain “real” information which you would work with (such as text). Basically, just
used for organizing files.
● All files are descendants of the root directory (named /) located at the top of the tree.
In long-format output of ls –l , this type of file is specified by the “d” symbol.
Special Files
Used to represent a real physical device such as a printer, tape drive or terminal, used for Input/Output
(I/O) operations. Device or special files are used for device Input/Output(I/O) on UNIX and Linux systems.
They appear in a file system just like an ordinary file or a directory. On UNIX systems there are two
flavors of special files for each device, character special files and block special files :
● When a character special file is used for device Input/Output(I/O), data is transferred one
character at a time. This type of access is called raw device access.
● When a block special file is used for device Input/Output(I/O), data is transferred in large
fixed-size blocks. This type of access is called block device access.
For terminal devices, it’s one character at a time. For disk devices though, raw access means reading or
writing in whole chunks of data – blocks, which are native to your disk.
● In long-format output of ls -l, character special files are marked by the “c” symbol.
● In long-format output of ls -l, block special files are marked by the “b” symbol.
Pipes
UNIX allows you to link commands together using a pipe. The pipe acts as a temporary file which only exists
to hold data from one command until it is read by another. A Unix pipe provides a one-way flow of
data: the output or result of the first command sequence is used as the input to the second command
sequence. To make a pipe, put a vertical bar (|) on the command line between two commands. For
example: who | wc -l. In long-format output of ls -l, named pipes are marked by the “p” symbol.
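The same pipeline can also be built programmatically. A Python sketch (Unix-only, since it runs who and wc) connecting one process's stdout to another's stdin through a pipe:

```python
import subprocess

# Equivalent of the shell pipeline `who | wc -l`: the first command's
# stdout is connected by a pipe to the second command's stdin.
who = subprocess.Popen(["who"], stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"], stdin=who.stdout, stdout=subprocess.PIPE)
who.stdout.close()  # let `wc` see end-of-file once `who` exits
print(wc.communicate()[0].decode().strip())
```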
Sockets
A Unix socket (or Inter-process communication socket) is a special file which allows for advanced
inter-process communication. A Unix Socket is used in a client-server application framework. In essence,
it is a stream of data, very similar to network stream (and network sockets), but all the transactions are
local to the filesystem. In long-format output of ls -l, Unix sockets are marked by “s” symbol.
Symbolic Link
A symbolic link is used for referencing some other file of the file system. A symbolic link is also known as a soft
link. It contains a text form of the path to the file it references. To an end user, a symbolic link appears
to have its own name, but when you try reading or writing data to this file, it instead redirects those
operations to the file it points to. If we delete the soft link itself, the data file is still there. If we
delete the source file or move it to a different location, the symbolic link will not function properly. In
long-format output of ls -l, symbolic links are marked by the “l” symbol (that's a lowercase L).
What is blocking and buffering in an operating system?
Blocking: the process of grouping several components into one block
Clustering: grouping file components according to access behaviour
Considerations affecting block size:
size of available main memory
space reserved for programs (and their internal data space) that use the files
size of one component of the block
characteristics of the external storage device used
Buffering: Software interface that reconciles blocked components of the file with the program that
accesses information as single components. A buffering interface is of one of two types: blocking routine
or deblocking routine.
Or:
Buffering means that when we run any application, the OS loads it into the buffer (RAM). Blocking means
the OS will block applications that perform malicious operations, such as corrupting the registry.