Operating System Interview Question
The operating system is a software program that enables the computer hardware to communicate and
operate with the computer software. It is the most important part of a computer system; without it,
the computer is just a box.
o It is designed to make sure that a computer system performs well by managing its
computational activities.
Batched Operating Systems: Programs are collected and executed in batches without user
interaction during execution.
Distributed Operating Systems: Multiple computers are networked together to share resources
and process tasks as a unified system.
Time-Sharing Operating Systems: Multiple users can use the computer simultaneously by
quickly switching between tasks, providing the illusion of simultaneous execution.
Multi-Programmed Operating Systems: The system keeps multiple programs in memory and
switches between them to maximize CPU usage.
Real-Time Operating Systems: These systems process tasks immediately or within strict time
constraints, crucial for critical applications like medical devices or industrial systems.
4) What is a socket?
A socket is used to make a connection between two applications. The endpoints of the connection are
called sockets.
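As a rough illustration of two endpoints exchanging data, here is a minimal Python sketch using the standard `socket` module; `socketpair()` creates two already-connected endpoints, and the function name `socket_demo` is an illustrative choice, not a standard API:

```python
import socket

def socket_demo():
    # socketpair() returns two connected sockets: the two endpoints
    a, b = socket.socketpair()
    a.sendall(b"hello")       # application 1 sends through its endpoint
    msg = b.recv(1024)        # application 2 receives at the other endpoint
    a.close()
    b.close()
    return msg
```

In a real client/server setting the two endpoints would instead be created with `socket()`, `bind()`/`listen()`/`accept()` on one side and `connect()` on the other.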
5) What is a real-time system?
A real-time system is used when rigid time requirements are placed on the operation of a processor.
It has well-defined, fixed time constraints.
6) What is kernel?
The kernel is the core and most important part of a computer operating system. It provides basic
services for all parts of the OS.
A monolithic kernel is a kernel in which all operating system code is contained in a single executable image.
o User Processes
o New Process
o Running Process
o Waiting Process
o Ready Process
o Terminated Process
10) What is the difference between micro kernel and macro kernel?
Micro kernel: A micro kernel is a kernel which runs only the minimal services needed by the operating
system in kernel space. In a micro kernel operating system, all other operations are performed in user space.
Macro kernel: A macro kernel (monolithic kernel) is a kernel in which all operating system services run in kernel space.
11) What is re-entrancy?
Re-entrancy is a very useful memory-saving technique used in multi-programmed time-sharing systems.
It provides the functionality that multiple users can share a single copy of a program during the same
period.
o The local data for each user process must be stored separately.
12) What is the difference between process and program?
A program is a passive set of instructions stored on disk, while a process is a program in execution:
an active entity.
13) What is paging?
Paging is used to solve the external fragmentation problem in the operating system. This technique
ensures that the data you need is available as quickly as possible.
14) What is demand paging?
Demand paging specifies that if an area of memory is not currently being used, it is swapped to disk
to make room for an application's needs.
15) What are the advantages of a multiprocessor system?
As the number of processors increases, you get a considerable increase in throughput. It is also cost
effective, because the processors can share resources. So, the overall reliability increases.
16) What is virtual memory?
Virtual memory is a very useful memory management technique which enables processes to execute
outside of physical memory. This technique is especially useful when an executing program cannot fit in the
physical memory.
17) What is thrashing?
Thrashing is a phenomenon in virtual memory schemes in which the processor spends most of its time
swapping pages rather than executing instructions.
18) What are the four necessary and sufficient conditions behind the deadlock?
1) Mutual Exclusion Condition: It specifies that the resources involved are non-sharable.
2) Hold and Wait Condition: It specifies that there must be a process that is holding a resource
already allocated to it while waiting for additional resources that are currently being held by other
processes.
3) No Preemption Condition: Resources cannot be taken away while they are being used by
processes.
4) Circular Wait Condition: It specifies that the processes in the system form a circular list or a
chain where each process in the chain is waiting for a resource held by the next process in the chain.
19) What is a thread?
A thread is a basic unit of CPU utilization. It consists of a thread ID, program counter, register set, and
a stack.
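As an illustration, threads within a process share the process's memory while each keeps its own stack and program counter. A minimal sketch using Python's standard `threading` module (the `results` list and `task` function are illustrative names):

```python
import threading

results = []    # shared process memory, visible to all threads

def task(name):
    # each thread runs this function on its own stack,
    # but appends to the list shared by the whole process
    results.append(f"done-{name}")

threads = [threading.Thread(target=task, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()    # wait for every thread to finish
```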
20) What is FCFS?
FCFS stands for First Come, First Served. It is a type of scheduling algorithm. In this scheme, if a
process requests the CPU first, the CPU is allocated to it first. Its implementation is managed by a
FIFO queue.
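The FIFO behavior can be sketched by computing each process's waiting time: every process waits for the bursts of all processes ahead of it in the queue. This is a minimal Python sketch (function name and list layout are illustrative assumptions):

```python
def fcfs_waiting_times(burst_times):
    """Given CPU burst times of processes in arrival (FIFO) order,
    return each process's waiting time under FCFS scheduling."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # this process waits for all earlier bursts
        elapsed += burst        # the CPU then runs this process to completion
    return waits

# e.g. bursts [5, 3, 8] give waiting times [0, 5, 8]
```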
21) What is SMP?
SMP stands for Symmetric MultiProcessing. It is the most common type of multiple processor
system. In SMP, each processor runs an identical copy of the operating system, and these copies
communicate with one another when required.
22) What is RAID?
RAID stands for Redundant Array of Independent Disks. It is used to store the same data redundantly
to improve the overall performance.
23) What is deadlock?
Deadlock is a specific situation or condition in which two processes are each waiting for the other to
complete so that they can proceed. This situation causes both of them to hang.
24) Which are the necessary conditions to achieve a deadlock?
o Mutual Exclusion: At least one resource must be held in a non-sharable mode. If any other
process requests this resource, then that process must wait for the resource to be released.
o Hold and Wait: A process must be simultaneously holding at least one resource and waiting
for at least one resource that is currently being held by some other process.
o No preemption: Once a process is holding a resource ( i.e. once its request has been
granted ), then that resource cannot be taken away from that process until the process
voluntarily releases it.
o Circular Wait: A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is
waiting for P[ ( i + 1 ) % ( N + 1 ) ].
Note: This condition implies the hold-and-wait condition, but it is easier to deal with the conditions if
the four are considered separately.
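The circular-wait condition above can be checked programmatically on a wait-for graph. Below is a minimal Python sketch; representing the graph as a dictionary mapping each process to the single process it waits on is an illustrative assumption:

```python
def has_circular_wait(wait_for):
    """wait_for maps each process to the process it is waiting on
    (or None if it waits on nothing). A cycle in this graph means
    the circular-wait condition holds."""
    for start in wait_for:
        seen = set()
        p = start
        while p is not None and p not in seen:
            seen.add(p)
            p = wait_for.get(p)
        if p is not None:       # the walk revisited a process: a cycle exists
            return True
    return False
```

Real deadlock detectors generalize this to resources with multiple instances, but the single-instance case reduces to exactly this cycle check.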
25) What is Banker's algorithm?
Banker's algorithm is used to avoid deadlock. It is one of the deadlock-avoidance methods. It is named
after the banking system, in which a bank never allocates available cash in such a
manner that it can no longer satisfy the requirements of all of its customers.
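The core of the algorithm is a safety check: grant a request only if some ordering still lets every process finish. A minimal Python sketch (the function name and the row-per-process matrix layout are illustrative assumptions, not a standard API):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True if some ordering lets every process
    finish. Each argument uses one row (or entry) per process."""
    n = len(max_need)
    work = list(available)          # resources currently free
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            need = [m - a for m, a in zip(max_need[i], allocation[i])]
            if not finished[i] and all(nd <= w for nd, w in zip(need, work)):
                # pretend process i runs to completion and releases everything
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic textbook instance (3 resource types, 5 processes) is safe:
# is_safe([3, 3, 2],
#         [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]],
#         [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]])
```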
26) What is the difference between logical address space and physical address space?
Logical address space refers to the set of addresses generated by the CPU. On the other hand, physical
address space refers to the set of addresses seen by the memory unit.
o Internal fragmentation: It occurs when we deal with systems that have fixed-size
allocation units.
o External fragmentation: It occurs when we deal with systems that have variable-size
allocation units.
29) What is spooling?
Spooling is a process in which data is temporarily gathered to be used and executed by a device,
program, or the system. It is commonly associated with printing: when different applications send output to the
printer at the same time, spooling keeps all these jobs in a disk file and queues them for
the printer.
30) What is the difference between internal commands and external commands?
Internal commands are the built-in part of the operating system while external commands are the
separate file programs that are stored in a separate folder or directory.
Semaphore is a protected variable or abstract data type that is used to lock the resource being used.
The value of the semaphore indicates the status of a common resource.
There are two types of semaphores:
o Binary semaphores
o Counting semaphores
A binary semaphore takes only 0 and 1 as values and is used to implement mutual exclusion and to
synchronize concurrent processes.
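As a minimal sketch, a binary semaphore (initialized to 1) can protect a shared counter so that only one thread increments it at a time; this uses Python's standard `threading.Semaphore`, and the variable names are illustrative:

```python
import threading

counter = 0
sem = threading.Semaphore(1)   # binary semaphore: 1 = resource free

def worker(times):
    global counter
    for _ in range(times):
        sem.acquire()          # P / wait operation: value goes 1 -> 0
        counter += 1           # critical section: only one thread here
        sem.release()          # V / signal operation: value goes 0 -> 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# with the semaphore, all 4 * 10000 increments are preserved
```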
Belady's Anomaly is also called the FIFO anomaly. Usually, increasing the number of frames allocated
to a process's virtual memory makes the process execute faster, because fewer page faults occur.
Sometimes the reverse happens, i.e., the execution time increases even when more frames are
allocated to the process. This is Belady's Anomaly, and it occurs for certain page reference patterns.
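The anomaly can be demonstrated by simulating FIFO page replacement on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, where 3 frames produce 9 page faults but 4 frames produce 10. A minimal Python sketch (function name is an illustrative choice):

```python
def fifo_page_faults(refs, frames):
    """Count page faults for FIFO replacement with `frames` frames."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)      # evict the page loaded earliest
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo_page_faults(refs, 3) == 9, but fifo_page_faults(refs, 4) == 10
```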
Starvation is a resource management problem in which a waiting process does not get the
resources it needs for a long time, because the resources are being allocated to other processes.
o Economical
37) What is the difference between logical and physical address space?
Logical address specifies the address which is generated by the CPU, whereas physical address
specifies the address which is seen by the memory unit.
Overlays allow a process to be larger than the amount of memory allocated to it. They ensure that
only the important instructions and data needed at any given time are kept in memory.
Thrashing refers to an instance of high paging activity. It happens when a process spends more time
paging than executing.
40) What is a Batch Operating System?
A Batch Operating System is a type of Operating System which creates batches for the execution of
certain jobs or processes.
Each batch contains jobs or processes that are very similar in the procedure they follow. The Batch
Operating System has an operator who performs this task: the operator is in charge of grouping jobs
or processes with comparable requirements into batches.
41) Does the Batch Operating System interact with the computer to process the needs of jobs or
processes?
No, the Batch Operating System does not interact with the computer user directly. This job
is taken up by the operator present in the Batch Operating System.
42) What are the advantages of Batch Operating Systems?
1. The idle time of the Operating System is very small.
2. Very big tasks can be managed easily with the help of Batch Operating Systems.
3. Batch system processors know how long a job will take to complete when it is in the
queue.
43) What are the disadvantages of Batch Operating Systems?
1. If any job fails in the Batch Operating System, the other jobs have to wait for an
indeterminate period of time.
2. It is challenging to estimate or determine how long it will take to finish any task before it
is submitted.
3. The computer operators who use Batch Operating Systems must be knowledgeable
about batch systems.
44) Where are Batch Operating Systems used?
They are used in payroll systems and for generating bank statements.
45) What are the functions of the Operating System?
1. File Management
2. Job Management
3. Process Management
4. Device Management
5. Memory Management
46) What are the Services provided by the Operating System?
1. File Management
2. Program Execution
47) What is a System Call?
Programs can communicate with the operating system by making a system call. When a computer
application requests anything from the kernel of the operating system, it performs a system
call. System calls use Application Programming Interfaces (APIs) to deliver operating system services to
user programs.
48) What are the types of System Calls?
1. Communication
2. Information Maintenance
3. File Management
4. Device Management
5. Process Control
49) What are the functions which are present in the Process Control System Call?
1. Create
2. Allocate
3. Abort
4. End
5. Terminate
6. Free Memory
50) What are the functions which are present in the File Management System Call?
1. Create
2. Open
3. Read
4. Close
5. Delete
51) What is a Process?
A process is essentially software that is being run on the Operating System. A process is a
procedure which must be carried out in a sequential manner.
The fundamental unit of work that has to be implemented in the system is called a process.
An active program, known as a process, is the basis of all computing. Although closely related, a
process is not the same as program code. A process is an "active" entity, in contrast to the program,
which is thought of as a "passive" entity.
52) What are the types of Processes?
1. Operating System Process
2. User Process
53) What is a Process Control Block (PCB)?
A Process Control Block is a data structure that houses details about the processes connected to it.
The term "process control block" can also refer to a "task control block," "process table entry," etc.
Since the data structure for processes is defined in terms of the Process Control Block (PCB), it is crucial for
process management. Additionally, it describes the operating system's present state.
54) What is stored in the Process Control Block?
1. Process State
2. Process Number
3. Program Counter
4. Registers
5. Memory Limits
55) What are the Fields used in the Process Control Block?
3. Accounting Information
Thread vs Process:
o Threads are executed within the same process, while processes are executed in different memory spaces.
o Threads are not independent of each other, while processes are independent of each other.
2. Threads ensure that communication between threads is much easier.
3. The Throughput of the system is increased if the process is divided into multiple threads
4. When a thread in a multi-threaded process completes its execution, its output can be
returned right away.
1. The code becomes more challenging to maintain and debug as there are more threads.
2. The process of creating threads uses up system resources like memory and CPU.
3. Because unhandled exceptions might cause the application to crash, we must manage them
inside the worker method.
The kernel is unaware of user-level threads, since they are implemented at the user level.
They are treated like single-threaded processes under this system. User-level threads are smaller
and quicker than kernel-level threads.
User-level threads are represented by a small thread control block, a program counter, a register
set, and a stack.
Here, user-level threads are synchronized independently, without kernel involvement.
1. Creating user-level threads is quicker and simpler than creating kernel-level threads. They are
also simpler to handle.
3. Thread switching in user-level threads does not need kernel mode privileges.
2. If one user-level thread engages in a blocking action, the entire process is halted.
Kernel Level Threads are the threads which are handled by the Operating System directly. The kernel
controls both the process's threads and the context information for each one. As a result, kernel-
level threads execute more slowly than user-level threads.
63) What are Kernel Level Threads Advantages and Disadvantages?
1. Kernel-level threads allow the scheduling of many instances of the same process across
several CPUs.
3. Another thread of the same process may be scheduled by the kernel if a kernel-level thread
is stalled.
1. To pass control from one thread in a process to another, a mode switch to kernel mode is
necessary.
2. Compared to user-level threads, kernel-level threads take longer to create and maintain.
The task of the process manager that deals with removing the active process from the CPU and
choosing a different process based on a certain strategy is known as process scheduling.
In Pre-Emptive Process Scheduling, the OS allots the resources to a process for a
predetermined period of time. The process transitions from the running state to the ready state, or from
the waiting state to the ready state, during resource allocation. This switching happens because the CPU may
give precedence to other processes and replace the currently active process with a higher-priority
process.
In Non-Pre-Emptive Process Scheduling, a resource cannot be withdrawn from a
process before the process has finished running. Resources are switched only when a running process
finishes and transitions to the waiting state.
68) What is Context Switching?
Context switching is a technique or approach that the operating system uses to move a process from
one state to another so that it can carry out its intended function using the system CPUs.
When a system performs a switch, it saves the state of the old running process in the
form of registers and allots the CPU to the new process to carry out its operations.
The old process must wait in the ready queue while the new one runs in the system. The old
process later resumes execution from the point at which it was interrupted.
Multiprogramming is the feature of an operating system that supports numerous workloads at once
without the use of extra processors, by allowing several processes to share a single CPU.
After the scheduler completes the process scheduling, a unique program called a dispatcher enters
the picture. The dispatcher is the module that moves a process to the desired state or queue once the
scheduler has finished its selection task, and it is what grants a process control over the CPU once the
short-term scheduler has chosen it.
Dispatcher vs Scheduler:
o The dispatcher moves the process to the desired state, while the scheduler selects a process which is feasible to be executed at this point of time.
o The time taken by the dispatcher is known as dispatch latency, while the time taken by the scheduler is generally not counted.
o The dispatcher allows context switching to occur, while the scheduler only admits the process to the ready queue.
Process synchronization, often known as synchronization, is the method an operating system uses to
manage processes that share the same memory space. By limiting the number of processes that may
modify the shared memory at once via hardware or variables, it helps ensure the consistency of the
data.
Peterson's solution is a classic solution to the critical section problem. The critical section problem makes
sure that no two processes or jobs alter or modify the value of a shared resource at the same time.
1. Wait or P Function ()
2. Signal or V Function ()
The section of a program known as the Critical Section attempts to access shared resources.
The operating system has trouble admitting and blocking processes at the critical
section, because no more than one process can operate in the critical section at once.
1. Deadlock Prevention
2. Deadlock Detection and Recovery
3. Deadlock Avoidance
4. Deadlock Ignorance
78) How can we detect and recover the Deadlock occurred in Operating System?
In this approach, the system first allows processes to enter the deadlock state; once the deadlock is
detected, it is time for recovery.
We can recover from a deadlock state by terminating or aborting the deadlocked processes
one at a time.
Process pre-emption is another technique used for deadlocked-process recovery.
Paging is a storage mechanism. Paging is used to retrieve processes from secondary memory to
primary memory.
Logical memory is divided into small fixed-size blocks called pages, and main memory is divided into
blocks of the same size called frames. Each page of a process retrieved into main memory is stored in
one frame of memory.
It is very important that pages and frames be of equal size, which is very useful for
mapping and complete utilization of memory.
Two distinct types of memory addresses are employed in the paging process: logical and physical.
The logical address is the address that the CPU generates for each page, while the physical address is
the actual location of the frame where that page will be allocated. We therefore require a technique
known as address translation, carried out by the page table, to translate a logical address into a
physical address.
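The translation splits the logical address into a page number and an offset, then maps the page number through the page table to a frame number. A minimal Python sketch (the page size, page-table contents, and function name are illustrative assumptions):

```python
def translate(logical_addr, page_table, page_size):
    """Split a logical address into (page number, offset) and map the
    page through the page table to obtain the physical address."""
    page_number = logical_addr // page_size   # which page the address is on
    offset = logical_addr % page_size         # position within that page
    frame_number = page_table[page_number]    # page table lookup
    return frame_number * page_size + offset

# e.g. with 1024-byte pages and page table {0: 5, 1: 2},
# logical address 1030 is page 1, offset 6 -> frame 2 -> physical 2054
```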
Whenever a logical address is generated by the Central Processing Unit (CPU), the page number is stored
in the Translation Lookaside Buffer (TLB), along with its frame number.
2. Optimal
First Come First Serve (FCFS) CPU Scheduling Algorithm: Processes are executed in the order
they arrive, with no preemption.
Priority Scheduling CPU Scheduling Algorithm: Processes are executed based on their priority,
with higher-priority tasks going first.
Shortest Job First (SJF) CPU Scheduling Algorithm: Processes with the shortest execution time
are completed first.
Round Robin (RR) CPU Scheduling Algorithm: Each process gets a fixed time slice
(quantum), and processes are rotated in a circular queue.
Longest Job First (LJF) CPU Scheduling Algorithm: Processes with the longest execution time
are completed first.
Shortest Remaining Time First (SRTF) CPU Scheduling Algorithm: The process with the
shortest remaining execution time is given the CPU next.
Multiple Queue CPU Scheduling Algorithm: Processes are divided into different queues based
on priority or type, with each queue having its own scheduling algorithm.
Round Robin is a CPU scheduling mechanism that cycles around, assigning each task a specific time
slot. It is a pre-emptive version of the First Come First Serve CPU scheduling method. The Round
Robin CPU algorithm is frequently used in time-sharing systems.
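Round Robin's rotation through a circular ready queue can be sketched in a few lines of Python; this minimal simulation assumes all processes arrive at time 0, and the function name and return value (per-process completion times) are illustrative choices:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin scheduling; return each process's
    completion time, assuming all processes arrive at time 0."""
    queue = deque(range(len(bursts)))     # circular ready queue of indices
    remaining = list(bursts)              # CPU time still needed per process
    time, completion = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run for one quantum at most
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # not finished: back of the queue
        else:
            completion[i] = time          # finished at the current time
    return completion

# e.g. bursts [5, 3, 1] with quantum 2 complete at times [9, 8, 5]
```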
Operating systems use disk scheduling to plan when Input or Output requests for the disk will arrive.
Input or Output scheduling is another name for disk scheduling.
2. The movement of the disk arm might increase if two or more requests are placed far apart
from one another.
3. Since hard disks are among the slower components of the computer system, they must be
accessed quickly.
88) What are the Disk Scheduling Algorithms used in Operating Systems?
1. FCFS (First Come First Serve)
2. SSTF (Shortest Seek Time First)
3. LOOK
4. SCAN
5. C-SCAN
6. C-LOOK
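As an illustration of one of these, SCAN (the "elevator" algorithm) services requests in the head's current direction of travel and then reverses. A minimal Python sketch; the request list, head position, and function name below are made-up illustrative values:

```python
def scan_order(requests, head, direction="up"):
    """Return the SCAN ('elevator') service order: serve requests in the
    current direction of head movement, then sweep back the other way."""
    up = sorted(r for r in requests if r >= head)                 # ahead of head
    down = sorted((r for r in requests if r < head), reverse=True)  # behind head
    return up + down if direction == "up" else down + up

# e.g. requests [82, 170, 43, 140, 24, 16, 190] with head at 50, moving up:
# serve 82, 140, 170, 190, then reverse: 43, 24, 16
```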
Monitors are a feature of programming languages that helps in controlled access to shared data.
A monitor is a collection of shared actions, data structures, and synchronization between parallel
procedure calls. A monitor is therefore sometimes referred to as a synchronization tool. Some of the
languages that support the usage of monitors are Java, C#, Visual Basic, Ada, and Concurrent Euclid.
Although other processes cannot access a monitor's internal variables, they can invoke its methods.
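Python has no built-in monitor construct, but the idea can be approximated with a lock plus a condition variable guarding the internal data. This hedged sketch of a monitor-style bounded buffer uses the standard `threading.Condition`; the class and method names are illustrative:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: internal state is private to the
    class, and every public method synchronizes on one condition."""
    def __init__(self, capacity):
        self.items = deque()
        self.capacity = capacity
        self.cond = threading.Condition()   # lock + wait/notify in one object

    def put(self, item):
        with self.cond:                     # enter the monitor
            while len(self.items) >= self.capacity:
                self.cond.wait()            # buffer full: wait inside monitor
            self.items.append(item)
            self.cond.notify_all()

    def get(self):
        with self.cond:                     # enter the monitor
            while not self.items:
                self.cond.wait()            # buffer empty: wait inside monitor
            item = self.items.popleft()
            self.cond.notify_all()
            return item

def run_demo():
    # one producer, one consumer, capacity smaller than the workload
    buf, out = BoundedBuffer(2), []
    producer = threading.Thread(target=lambda: [buf.put(i) for i in range(10)])
    consumer = threading.Thread(target=lambda: [out.append(buf.get()) for _ in range(10)])
    producer.start(); consumer.start()
    producer.join(); consumer.join()
    return out
```

Callers only ever invoke `put` and `get`; they never touch `items` directly, mirroring how a monitor hides its internal variables.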