

Operating System Interview Questions


Last Updated : 20 Sep, 2024

An operating system acts as an interface between the user and the computer
system. In other words, an OS acts as an intermediary between the user
and the computer hardware, managing resources such as memory,
processing power, and input/output operations. Some examples of
popular operating systems are Windows, macOS, Linux, and Android.

In this article, we provide you with the top 100+ OS interview questions
with answers that cover everything from the basics of OS architecture
to advanced operating systems concepts such as file systems,
scheduling algorithms, and multithreading. Whether you are a fresher
or an experienced IT professional, this article gives you all the
confidence you need to ace your next OS interview.

Table of Contents

Basic OS Interview Questions


Intermediate OS Interview Questions
Advanced OS Interview Questions

Basic OS Interview Questions

1. What is a process and process table?

A process is an instance of a program in execution. For example, a Web


Browser is a process, and a shell (or command prompt) is a
process. The operating system is responsible for managing all the
processes that are running on a computer and allocates each process a
certain amount of time to use the processor. In addition, the operating
system also allocates various other resources that processes will need,
such as computer memory or disks. To keep track of the state of all the
processes, the operating system maintains a table known as the process
table. Inside this table, every process is listed along with the resources
the process is using and the current state of the process.
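
Below is a minimal sketch (assuming a POSIX system) of how a new process is created with fork(); both the parent and the child then appear as separate entries in the process table:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */

    if (pid < 0) {
        perror("fork");              /* process creation failed */
        return 1;
    } else if (pid == 0) {
        /* child process: has its own entry in the process table */
        printf("child: pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        /* parent process: waits for the child to finish */
        wait(NULL);
        printf("parent: pid=%d created child %d\n", (int)getpid(), (int)pid);
    }
    return 0;
}
```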

2. What are the different states of the process?

Processes can be in one of three states: running, ready, or waiting. The


running state means that the process has all the resources it needs for
execution and it has been given permission by the operating system to
use the processor. Only one process can be in the running state at any
given time. The remaining processes are either in a waiting state (i.e.,
waiting for some external event to occur such as user input or disk
access) or a ready state (i.e., waiting for permission to use the
processor). In a real operating system, the waiting and ready states are
implemented as queues that hold the processes in these states.

For more details, you can refer to the States of a Process in Operating Systems article.

3. What is a Thread?

A thread is a single sequence stream within a process. Because threads


have some of the properties of processes, they are sometimes called
lightweight processes. Threads are a popular way to improve the
application through parallelism. For example, in a browser, multiple
tabs can be different threads. MS Word uses multiple threads, one
thread to format the text, another thread to process inputs, etc.
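
As a small illustrative sketch (assuming POSIX threads, compiled with -pthread), here are two threads running inside one process and sharing its address space:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;               /* visible to every thread in the process */

void *worker(void *arg) {
    const char *name = (const char *)arg;
    shared_counter++;                 /* both threads update the same variable
                                         (real code would guard this with a mutex) */
    printf("%s running, counter=%d\n", name, shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```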

4. What are the differences between process and thread?

A process is a program under execution, whereas a thread is the smallest
segment of instructions (a segment of a process) that can be handled
independently by a scheduler.
Threads are lightweight processes that share the same address space,
including the code section, data section, and operating system resources
such as open files and signals. However, each thread has its own
program counter (PC), register set, and stack space, allowing it to
execute independently within the same process context. Unlike
processes, threads are not fully independent entities and can
communicate and synchronize more efficiently, making them suitable for
concurrent and parallel execution in a multithreaded environment.

For more details, you can refer to the Difference between Process and Thread article.

5. What are the benefits of multithreaded programming?

Multithreaded programming makes the system more responsive and

enables resource sharing. It allows an application to take advantage of
multiprocessor architectures and is more economical than creating
multiple processes.

6. What is Thrashing?

Thrashing is a situation when the performance of a computer degrades


or collapses. Thrashing occurs when a system spends more time
processing page faults than executing transactions. While processing
page faults is necessary in order to appreciate the benefits of virtual
memory, thrashing has a negative effect on the system. As the page
fault rate increases, more transactions need processing from the paging
device. The queue at the paging device increases, resulting in increased
service time for a page fault.

7. What is Buffer?

A buffer is a memory area that stores data being transferred between


two devices or between a device and an application.
8. What is virtual memory?

Virtual memory creates an illusion that each user has one or more
contiguous address spaces, each beginning at address zero. The sizes of
such virtual address spaces are generally very high. The idea of virtual
memory is to use disk space to extend the RAM. Running processes
don’t need to care whether the memory is from RAM or disk. The
illusion of such a large amount of memory is created by subdividing the
virtual memory into smaller pieces, which can be loaded into physical
memory whenever they are needed by a process.

9. Explain the main purpose of an operating system?

An operating system acts as an intermediary between the user of a


computer and computer hardware. The purpose of an operating system
is to provide an environment in which a user can execute programs
conveniently and efficiently.

An operating system is a software that manages computer hardware.


The hardware must provide appropriate mechanisms to ensure the
correct operation of the computer system and to prevent user programs
from interfering with the proper operation of the system.

10. What is demand paging?

The process of loading the page into memory on demand (whenever a


page fault occurs) is known as demand paging.

11. What is a kernel?

A kernel is the central component of an operating system that manages


the operations of computers and hardware. It basically manages
operations of memory and CPU time. It is a core component of an
operating system. Kernel acts as a bridge between applications and
data processing performed at the hardware level using inter-process
communication and system calls.
12. What are the different scheduling algorithms?

1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-Next (SJN) Scheduling
3. Priority Scheduling
4. Shortest Remaining Time
5. Round Robin(RR) Scheduling
6. Multiple-Level Queues Scheduling

13. Describe the objective of multi-programming?

Multi-programming increases CPU utilization by organizing jobs (code


and data) so that the CPU always has one to execute. The main
objective of multi-programming is to keep multiple jobs in the main
memory. If one job gets occupied with IO, the CPU can be assigned to
other jobs.

14. What is the time-sharing system?

Time-sharing is a logical extension of multiprogramming. The CPU

executes multiple jobs by switching among them so frequently that the
user can interact with each program while it is running. A time-shared
operating system allows many users to share the computer simultaneously.

15. What problem we face in computer system without OS?

Poor resource management


Lack of User Interface
No File System
No Networking
No error handling

16. Give some benefits of multithreaded programming?

A thread is also known as a lightweight process. The idea is to achieve


parallelism by dividing a process into multiple threads. Threads within
the same process run in a shared memory space.
acknowledge that you have read and understood our Cookie Policy & Privacy Policy
17. Briefly explain FCFS?
FCFS stands for First Come First Served. In the FCFS scheduling
algorithm, the job that arrives first in the ready queue is allocated the
CPU first, then the job that arrives second, and so on. FCFS is a
non-preemptive scheduling algorithm: a process holds the CPU until it
either terminates or performs I/O. Thus, if a longer job has been
assigned to the CPU, the many shorter jobs after it will have to wait.
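
A hedged illustration (the burst times below are made up, not from the article) of how FCFS waiting times are computed: each job waits for the combined burst time of every job that arrived before it.

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                  /* hypothetical burst times, in arrival order */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("job %d waits %d time units\n", i, wait);
        total_wait += wait;
        wait += burst[i];                      /* the next job waits for everything before it */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```

With these numbers the waits are 0, 24, and 27 units (an average of 17), which shows how one long job in front can delay all the shorter jobs behind it.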

18. What is the RR scheduling algorithm?

A round-robin scheduling algorithm schedules processes fairly by giving

each job a fixed time slot or quantum. If a job does not complete within
its quantum, it is preempted and placed at the back of the ready queue so
that the next job can run; cycling through the jobs in this way keeps the
scheduling fair.

Round-robin is cyclic in nature, so starvation doesn’t occur


Round-robin is a variant of first-come, first-served scheduling
No priority or special importance is given to any process or task
RR scheduling is also known as Time slicing scheduling
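
A rough simulation sketch (burst times and the quantum are invented for illustration) of round-robin scheduling: every pass gives each unfinished job at most one quantum of CPU time.

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};                /* hypothetical remaining burst times */
    int n = sizeof(remaining) / sizeof(remaining[0]);
    int quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;    /* this job has already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            printf("t=%2d: job %d ran for %d unit(s)\n", time, i, slice);
            if (remaining[i] == 0) done++;      /* job completes in this slice */
        }
    }
    return 0;
}
```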

19. Enumerate the different RAID levels?

A redundant array of independent disks is a set of several physical disk


drives that the operating system sees as a single logical unit. It played a
significant role in narrowing the gap between increasingly fast
processors and slow disk drives. RAID has different levels:

Level-0
Level-1
Level-2
Level-3
Level-4
Level-5
Level-6

20. What is Banker’s algorithm?


The banker’s algorithm is a resource allocation and deadlock avoidance
algorithm that tests for safety by simulating the allocation of the
predetermined maximum possible amounts of all resources, and then makes
a “safe-state” check to test for possible activities before deciding
whether the allocation should be allowed to continue.
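
A compact sketch of the safety check at the heart of the Banker's algorithm (the matrices below are hypothetical, chosen only to make the sketch runnable): repeatedly find a process whose remaining need fits within the available resources, pretend it runs to completion, and reclaim its allocation; if every process can finish this way, the state is safe.

```c
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes (hypothetical) */
#define R 3   /* number of resource types (hypothetical) */

int main(void) {
    int avail[R]    = {3, 3, 2};
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int need[P][R]  = {{1, 2, 2}, {0, 1, 1}, {1, 0, 0}};
    bool finished[P] = {false};
    int done = 0;

    while (done < P) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > avail[r]) fits = false;
            if (!fits) continue;
            for (int r = 0; r < R; r++)         /* pretend p finishes and releases */
                avail[r] += alloc[p][r];
            finished[p] = true;
            done++;
            progressed = true;
            printf("P%d can run to completion\n", p);
        }
        if (!progressed) { printf("unsafe state\n"); return 1; }
    }
    printf("safe state\n");
    return 0;
}
```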

21. State the main difference between logical and physical


address space?

Basic: The logical address is generated by the CPU, while the physical
address is located in a memory unit.
Address Space: The logical address space is the set of all logical
addresses generated by the CPU in reference to a program, while the
physical address space is the set of all physical addresses mapped to
the corresponding logical addresses.
Visibility: Users can view the logical address of a program but can
never view its physical address.
Generation: The logical address is generated by the CPU, while the
physical address is computed by the MMU.
Access: The user can use the logical address to access the physical
address, but cannot access the physical address directly.

22. How does dynamic loading aid in better memory space


utilization?

With dynamic loading, a routine is not loaded until it is called. This


method is especially useful when large amounts of code are needed in
order to handle infrequently occurring cases such as error routines.
23. What are overlays?
The concept of overlays is that a running process does not use its
complete program at the same time; it uses only some part of it. The
overlay technique loads only the part that is currently required, and
once that part is finished, it is unloaded so that the next required
part can be brought in and run. Formally, it is “the process of
transferring a block of program code or other data into internal
memory, replacing what is already stored”.

24. What is fragmentation?

As processes are loaded into and removed from memory, the free memory
space gets broken into pieces that are too small to be used by other
processes. Fragmentation is the situation in which memory blocks cannot
be allocated to processes because they are too small, so the memory
remains unused. This problem occurs in dynamic memory allocation
systems when the free blocks are too small to satisfy any request.

25. What is the basic function of paging?

Paging is a technique used for non-contiguous memory allocation. It is a
fixed-size partitioning scheme: both main memory and secondary memory
are divided into equal, fixed-size partitions. The partitions of
secondary memory are called pages, and the partitions of main memory are
called frames.

Paging is a memory-management method used to fetch processes from
secondary memory into main memory in the form of pages. Each process is
split into parts where the size of each part is the same as the page
size (the size of the last part may be less than the page size). The
pages of a process are stored in the frames of main memory depending on
their availability.
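
A hedged worked example (the page size and address are made up): with 4 KB pages, logical address 10000 lies on page 2 at offset 1808, because the page number is the quotient and the offset is the remainder when dividing by the page size.

```c
#include <stdio.h>

int main(void) {
    unsigned page_size = 4096;                  /* illustrative 4 KB page size */
    unsigned logical   = 10000;                 /* illustrative logical address */
    unsigned page      = logical / page_size;   /* 10000 / 4096 = 2    */
    unsigned offset    = logical % page_size;   /* 10000 % 4096 = 1808 */
    printf("page number = %u, offset = %u\n", page, offset);
    return 0;
}
```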

26. How does swapping result in better memory management?

Swapping is a simple memory/process management technique used by
the operating system(os) to increase the utilization of the processor by
moving some blocked processes from the main memory to the
secondary memory thus forming a queue of the temporarily suspended
processes and the execution continues with the newly arrived process.
During regular intervals that are set by the operating system, processes
can be copied from the main memory to a backing store and then copied
back later. Swapping allows more processes to be run than can fit into
memory at one time.

27. Write a name of classic synchronization problems?

Bounded-buffer
Readers-writers
Dining philosophers
Sleeping barber

28. What is the Direct Access Method?

The direct access method is based on a disk model of a file, such that it
is viewed as a numbered sequence of blocks or records. It allows
arbitrary blocks to be read or written. Direct access is advantageous
when accessing large amounts of information. Direct memory access
(DMA) is a method that allows an input/output (I/O) device to send or
receive data directly to or from the main memory, bypassing the CPU to
speed up memory operations. The process is managed by a chip known
as a DMA controller (DMAC).

29. When does thrashing occur?

Thrashing occurs when processes on the system frequently access pages
that are not available in main memory.

30. What is the best page size when designing an operating


system?
The best paging size varies from system to system, so there is no single
best when it comes to page size. There are different factors to consider
in order to come up with a suitable page size, such as page table,
paging time, and its effect on the overall efficiency of the operating
system.

31. What is multitasking?

Multitasking is a logical extension of a multiprogramming system that


supports multiple programs to run concurrently. In multitasking, more
than one task is executed at the same time. In this technique, the
multiple tasks, also known as processes, share common processing
resources such as a CPU.

32. What is caching?

The cache is a smaller and faster memory that stores copies of the data
from frequently used main memory locations. There are various
different independent caches in a CPU, which store instructions and
data. Cache memory is used to reduce the average time to access data
from the Main memory.

33. What is spooling?

Spooling stands for simultaneous peripheral operations online. It
refers to putting jobs in a buffer, a special area in memory or on a
disk, where a device can access them when it is ready. Spooling is
useful because devices access data at different rates.

34. What is the functionality of an Assembler?

The Assembler is used to translate the program written in Assembly


language into machine code. The source program is an input of an
assembler that contains assembly language instructions. The output
generated by the assembler is the object code or machine code
understandable by the computer.
35. What are interrupts?

An interrupt is a signal emitted by hardware or software when a

process or an event needs immediate attention. It alerts the processor to
a high-priority process requiring interruption of the current working
process. In I/O devices, one of the bus control lines is dedicated to this
purpose and is called the interrupt request line; the routine the
processor executes in response is called the Interrupt Service Routine (ISR).

36. What is GUI?

GUI is short for Graphical User Interface. It provides users with an


interface wherein actions can be performed by interacting with icons
and graphical symbols.

37. What is preemptive multitasking?

Preemptive multitasking is a type of multitasking that allows computer


programs to share operating systems (OS) and underlying hardware
resources. It divides the overall operating and computing time between
processes, and the switching of resources between different processes
occurs through predefined criteria.

38. What is a pipe and when is it used?

A Pipe is a technique used for inter-process communication. A pipe is a


mechanism by which the output of one process is directed into the input
of another process. Thus it provides a one-way flow of data between
two related processes.
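
A minimal sketch (assuming a POSIX system) of a pipe between a parent and its child: the child writes into one end and the parent reads from the other, giving a one-way flow of data between two related processes.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child: writes a message */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                       /* parent: reads the message */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```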

39. What are the advantages of semaphores?

They are machine-independent.


Easy to implement.
Correctness is easy to determine.
Can have many different critical sections with different semaphores.
Semaphores acquire many resources simultaneously.

No waste of resources due to busy waiting.
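
As a small sketch of these properties (assuming POSIX unnamed semaphores, compiled with -pthread), a semaphore initialized to the number of available resource slots lets threads acquire and release them without busy waiting:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

sem_t slots;                           /* counts the free resource slots */

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&slots);                  /* acquire a slot (blocks, no busy waiting) */
    printf("thread %ld acquired a slot\n", id);
    sleep(1);                          /* use the resource */
    printf("thread %ld releasing its slot\n", id);
    sem_post(&slots);                  /* release the slot */
    return NULL;
}

int main(void) {
    pthread_t t[3];
    sem_init(&slots, 0, 2);            /* 2 resources shared by 3 threads */
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}
```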


40. What is a bootstrap program in the OS?

Bootstrapping is the process of loading a set of instructions when a


computer is first turned on or booted. During the startup process,
diagnostic tests are performed, such as the power-on self-test (POST),
which sets or checks configurations for devices and implements routine
testing for the connection of peripherals, hardware, and external
memory devices. The bootloader or bootstrap program is then loaded to
initialize the OS.

41. What is IPC?

Inter-process communication (IPC) is a mechanism that allows


processes to communicate with each other and synchronize their
actions. The communication between these processes can be seen as a
method of cooperation between them.

42. What are the different IPC mechanisms?

These are the methods in IPC:

Pipes (Same Process): This allows a flow of data in one direction


only, analogous to a simplex system (e.g., a keyboard). Data from the
output end is usually buffered until the input process receives it; the
communicating processes must have a common origin.
Named Pipes (Different Processes): This is a pipe with a specific
name; it can be used by processes that don’t share a common origin.
For example, a FIFO, where data written to the pipe is read out in
first-in, first-out order.
Message Queuing: This allows messages to be passed between
processes using either a single queue or several message queues.
This is managed by the system kernel; the messages are
coordinated using an API.
Semaphores: This is used in solving problems associated with
synchronization
and avoiding race conditions. These are integer
values that are greater than or equal to 0.
Shared Memory: This allows the interchange of data through a
defined area of memory. Semaphore values have to be obtained before a
process can access the shared memory.
Sockets: This method is mostly used to communicate over a network
between a client and a server. It allows for a standard connection
which is computer and OS independent
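
A rough sketch of the shared memory mechanism listed above (assuming POSIX shared memory; the object name /demo_shm is made up, and on some Linux systems you may need to link with -lrt): two related processes exchange data through a memory segment mapped into both of their address spaces.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";             /* hypothetical object name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                        /* size the shared segment */
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                          /* child writes into shared memory */
        strcpy(mem, "data placed in shared memory");
        _exit(0);
    }
    wait(NULL);                                 /* parent waits, then reads it back */
    printf("parent read: %s\n", mem);

    munmap(mem, 4096);
    close(fd);
    shm_unlink(name);                           /* remove the shared object */
    return 0;
}
```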

43. What is the difference between preemptive and non-


preemptive scheduling?

In preemptive scheduling, the CPU is allocated to the processes for a


limited time, whereas in non-preemptive scheduling, the CPU is
allocated to the process till it terminates or switches to the waiting
state.
The executing process in preemptive scheduling is interrupted in the
middle of execution when a higher-priority process arrives, whereas the
executing process in non-preemptive scheduling is not interrupted in
the middle of execution and runs till it completes.
In preemptive scheduling, there is the overhead of switching the
process between the ready and running states and of maintaining the
ready queue, whereas non-preemptive scheduling has no overhead of
switching the process from the running state to the ready state.
In preemptive scheduling, if high-priority processes frequently arrive
in the ready queue, then a low-priority process may have to wait for a
long time and may starve. On the other hand, in non-preemptive
scheduling, if the CPU is allocated to a process with a large burst
time, then processes with small burst times may have to starve.
Preemptive scheduling attains flexibility by allowing the critical
processes to access the CPU as they arrive in the ready queue, no
matter what process is executing currently. Non-preemptive
scheduling is called rigid, as even if a critical process enters the
ready queue, the process running on the CPU is not disturbed.
Preemptive scheduling has to maintain the integrity of shared data,
which adds cost; this is not the case with non-preemptive scheduling.

44. What is the zombie process?

A process that has finished the execution but still has an entry in the
process table to report to its parent process is known as a zombie
process. A child process always first becomes a zombie before being
removed from the process table. The parent process reads the exit
status of the child process which reaps off the child process entry from
the process table.
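
A brief sketch (assuming a POSIX system): between the child's exit and the parent's wait() call, the child is a zombie whose process-table entry is kept only so its exit status can be reported.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        _exit(42);                     /* child exits; it stays a zombie until reaped */
    } else {
        sleep(2);                      /* during this sleep the child is a zombie */
        int status;
        waitpid(pid, &status, 0);      /* parent reaps the child, removing its entry */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```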

45. What are orphan processes?

A process whose parent process no longer exists (i.e., the parent either

finished or was terminated without waiting for its child process to
terminate) is called an orphan process.

46. What are starvation and aging in OS?

Starvation: Starvation is a resource management problem where a


process does not get the resources it needs for a long time because the
resources are being allocated to other processes.

Aging: Aging is a technique to avoid starvation in a scheduling system.


It works by adding an aging factor to the priority of each request. The
aging factor must increase the priority of the request as time passes and
must ensure that a request will eventually become the highest-priority
request.

47. Write about monolithic kernel?

A Monolithic Kernel is another classification of Kernel. Like microkernel,


this one also manages system resources between application and
hardware, but user services and kernel services are implemented under
the same address space. It increases the size of the kernel, thus
increasing the size of the operating system as well. This kernel provides
CPU scheduling, memory management, file management, and other
operating system functions through system calls. As both services are
implemented under the same address space, this makes operating
system execution faster.

48. What is Context Switching?

Switching the CPU to another process means saving the state of the old
process and loading the saved state of the new process. In context
switching, the state of the old process is stored in its Process Control
Block so that the CPU can serve the new process, and the old process can
later be resumed from the point where it left off.

49. What is the difference between the Operating system and


kernel?

The operating system is system software, whereas the kernel is the core
component of an operating system and serves as the main interface
between the computer’s physical hardware and the processes running on
it.
The operating system provides an interface between the user and the
computer hardware, whereas the kernel provides an interface between
applications and the hardware.
The main purposes of the operating system are memory management, disk
management, process management, and task management, whereas the kernel
manages the system resources, including the processor, memory, and
device drivers.
Types of operating systems include single-user and multi-user OS,
multiprocessor OS, real-time OS, and distributed OS, whereas types of
kernels include monolithic kernels and microkernels.
50. What is the difference between process and thread?

A process means any program in execution, whereas a thread is a segment
of a process.
A process is less efficient in terms of communication, whereas a thread
is more efficient in terms of communication.
Processes are isolated, whereas threads share memory.
A process is called a heavyweight process, whereas a thread is called a
lightweight process.
Process switching requires an operating system interface, whereas thread
switching does not require a call to the operating system or an
interrupt to the kernel.
If one process is blocked, the execution of other processes is not
affected, whereas if one thread is blocked, other threads in the same
task may not be able to run.
A process has its own Process Control Block, stack, and address space,
whereas a thread has its parent’s PCB, its own Thread Control Block and
stack, and a shared address space.

51. What is PCB?

The process control block (PCB) is a data structure used to track a
process’s execution status. A PCB contains information about the
process, such as its registers, quantum, priority, etc. The process
table is an array of PCBs, which means it logically contains a PCB for
every current process in the system.
52. When is a system in a safe state?
The set of dispatchable processes is in a safe state if there exists at
least one temporal order in which all processes can be run to
completion without resulting in a deadlock.

53. What is Cycle Stealing?

Cycle stealing is a method of accessing computer memory (RAM) or a bus


without interfering with the CPU. It is similar to direct memory access
(DMA) for allowing I/O controllers to read or write RAM without CPU
intervention.

54. What are a Trap and Trapdoor?

A trap is a software interrupt, usually the result of an error condition;

it is a non-maskable interrupt and has the highest priority.
A trapdoor is a secret, undocumented entry point into a program, used to
grant access without the normal methods of access authentication.

55. Write a difference between program and process?

A program contains a set of instructions designed to complete a specific
task, whereas a process is an instance of an executing program.
A program is a passive entity that resides in secondary memory, whereas
a process is an active entity that is created during execution and
loaded into main memory.
A program exists in a single place and continues to exist until it is
deleted, whereas a process exists for a limited span of time and is
terminated after the completion of its task.
A program is a static entity, whereas a process is a dynamic entity.
A program does not have any resource requirement; it only requires
memory space for storing its instructions. A process has high resource
requirements; it needs resources like the CPU, memory addresses, and I/O
during its lifetime.
A program does not have a control block, whereas a process has its own
control block, called the Process Control Block.

56. What is a dispatcher?

The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler. This function involves the
following:

Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that
program

57. Define the term dispatch latency?

Dispatch latency can be described as the amount of time it takes for a


system to respond to a request for a process to begin operation. With a
scheduler written specifically to honor application priorities, real-time
applications can be developed with a bounded dispatch latency.

58. What are the goals of CPU scheduling?

Max CPU utilization [Keep CPU as busy as possible]
Fair allocation of CPU.
Max throughput [Number of processes that complete their execution
per time unit]
Min turnaround time [Time taken by a process to finish execution]
Min waiting time [Time a process waits in ready queue]
Min response time [Time when a process produces the first response]

59. What is a critical- section?

When more than one process accesses the same code segment, that
segment is known as the critical section. The critical section contains
shared variables or resources which are needed to be synchronized to
maintain the consistency of data variables. In simple terms, a critical
section is a group of instructions/statements or regions of code that
need to be executed atomically such as accessing a resource (file, input
or output port, global data, etc.).

60. Write the name of synchronization techniques?

Mutexes
Condition variables
Semaphores
File locks
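
As a quick sketch of the first technique above (POSIX threads, compiled with -pthread), here is a mutex protecting a critical section that updates a shared counter:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter++;                     /* shared data is updated by one thread at a time */
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000, thanks to the mutex */
    return 0;
}
```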

Intermediate OS Interview Questions

61. Write a difference between a user-level thread and a kernel-


level thread?
User threads are implemented by users, whereas kernel threads are
implemented by the OS.
The OS does not recognize user-level threads, whereas kernel threads are
recognized by the OS.
Implementation of user threads is easy, whereas implementation of kernel
threads is complicated.
Context switch time is less for user-level threads and more for
kernel-level threads.
User-level thread context switches require no hardware support, whereas
kernel-level thread context switches need hardware support.
If one user-level thread performs a blocking operation, the entire
process is blocked, whereas if one kernel thread performs a blocking
operation, another thread can continue execution.
User-level threads are designed as dependent threads, whereas
kernel-level threads are designed as independent threads.

62. Write down the advantages of multithreading?

Some of the most important benefits of MT are:

Improved throughput. Many concurrent compute operations and I/O


requests within a single process.
Simultaneous and fully symmetric use of multiple processors for
computation and I/O.
Superior application responsiveness. If a request can be launched on
its own thread, applications do not freeze or show the “hourglass”.
An entire application will not block or otherwise wait, pending the
completion of another request.
Improved server responsiveness. Large or complex requests or slow
clients don’t block other requests for service. The overall throughput
of the server is much greater.
Minimized system resource usage. Threads impose minimal impact
on system resources. Threads require less overhead to create,
maintain, and manage than a traditional process.
Program structure simplification. Threads can be used to simplify the
structure of complex applications, such as server-class and
multimedia applications. Simple routines can be written for each
activity, making complex programs easier to design and code, and
more adaptive to a wide variation in user demands.
Better communication. Thread synchronization functions can be used
to provide enhanced process-to-process communication. In addition,
sharing large amounts of data through separate threads of execution
within the same address space provides extremely high-bandwidth,
low-latency communication between separate tasks within an
application

63. Difference between Multithreading and Multitasking?

In multithreading, multiple threads execute at the same time within the
same or different parts of a program, whereas in multitasking, several
programs are executed concurrently.
In multithreading, the CPU switches between multiple threads, whereas in
multitasking, the CPU switches between multiple tasks and processes.
Multithreading is lightweight, whereas multitasking involves heavyweight
processes.
Multithreading is a feature of the process, whereas multitasking is a
feature of the OS.
Multithreading is the sharing of computing resources among the threads
of a single process, whereas multitasking is the sharing of computing
resources (CPU, memory, devices, etc.) among processes.
64. What are the drawbacks of semaphores?

Priority Inversion is a big limitation of semaphores.


Their use is not enforced but is by convention only.
The programmer has to keep track of all calls to wait and signal the
semaphore.
With improper use, a process may block indefinitely. Such a situation
is called Deadlock.

65. What is Peterson’s approach?

It is a concurrent programming algorithm. It is used to synchronize two


processes that maintain the mutual exclusion for the shared resource. It
uses two variables, a bool array flag of size 2 and an int variable turn to
accomplish it.
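
Here is a sketch of those two variables in the textbook form of Peterson's solution for processes 0 and 1 (on modern hardware the flags would additionally need to be atomic or fenced for this to be reliable):

```c
#include <stdbool.h>

bool flag[2] = {false, false};   /* flag[i]: process i wants to enter */
int turn = 0;                    /* whose turn it is to yield */

void enter_region(int i) {
    int other = 1 - i;
    flag[i] = true;              /* announce the intent to enter */
    turn = other;                /* politely give priority to the other process */
    while (flag[other] && turn == other)
        ;                        /* busy-wait until it is safe to proceed */
}

void leave_region(int i) {
    flag[i] = false;             /* leaving the critical section */
}
```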

66. Define the term Bounded waiting?

A system is said to satisfy the bounded waiting condition if a process

that wants to enter its critical section is guaranteed to enter it
within some finite time.

67. What are the solutions to the critical section problem?

There are three solutions to the critical section problem:

Software solutions
Hardware solutions
Semaphores

68. What is a Banker’s algorithm?

The banker’s algorithm is a resource allocation and deadlock avoidance


algorithm that tests for safety by simulating the allocation of the
predetermined maximum possible amounts of all resources, and then makes
a “safe-state” check to test for possible activities before deciding
whether the allocation should be allowed to continue.
69. What is concurrency?

Concurrency is a state in which a process exists and executes

simultaneously with other processes.

70. Write a drawback of concurrency?

It is required to protect multiple applications from one another.


It is required to coordinate multiple applications through additional
mechanisms.
Additional performance overheads and complexities in operating
systems are required for switching among applications.
Sometimes running too many applications concurrently leads to
severely degraded performance.

71. What are the necessary conditions which can lead to a


deadlock in a system?

Mutual Exclusion: There is a resource that cannot be shared.


Hold and Wait: A process is holding at least one resource and waiting
for another resource, which is with some other process.
No Preemption: The operating system is not allowed to take a resource
back from a process until the process gives it back.
Circular Wait: A set of processes waiting for each other in circular form.

72. What are the issues related to concurrency?

Non-atomic: Operations that are non-atomic but interruptible by


multiple processes can cause problems.
Race conditions: A race condition occurs if the outcome depends on
which of several processes gets to a point first.
Blocking: Processes can block waiting for resources. A process could
be blocked for a long period of time waiting for input from a terminal.
If the process is required to periodically update some data, this
would be very undesirable.
Starvation: It occurs when a process does not obtain service to
progress.
Deadlock: It occurs when two processes are blocked and hence
neither can proceed to execute

73. Why do we use precedence graphs?

A precedence graph is a directed acyclic graph that is used to show the


execution level of several processes in the operating system. It has the
following properties also:

Nodes of graphs correspond to individual statements of program


code.
An edge between two nodes represents the execution order.
A directed edge from node A to node B shows that statement A
executes first and then Statement B executes

74. Explain the resource allocation graph?

The resource allocation graph (RAG) shows the state of the system in

terms of processes and resources. One of the advantages of having such a
diagram is that it is sometimes possible to see a deadlock directly by
using the RAG.

75. What is a deadlock?

Deadlock is a situation when two or more processes wait for each other
to finish and none of them ever finish. Consider an example when two
trains are coming toward each other on the same track and there is only
one track, none of the trains can move once they are in front of each
other. A similar situation occurs in operating systems when there are
two or more processes that hold some resources and wait for resources
held by other(s).

76. What is the goal and functionality of memory management?


The goal and functionality of memory management are as follows;
Relocation
Protection
Sharing
Logical organization
Physical organization

77. Write a difference between physical address and logical


address?

Basic: The logical address is the virtual address generated by the CPU,
while the physical address is a location in a memory unit.
Address Space: The set of all logical addresses generated by the CPU in
reference to a program is referred to as the logical address space,
while the set of all physical addresses mapped to the corresponding
logical addresses is referred to as the physical address space.
Visibility: The user can view the logical address of a program but can
never view its physical address.
Access: The user uses the logical address to access the physical
address; the physical address cannot be accessed directly.
Generation: The logical address is generated by the CPU, while the
physical address is computed by the MMU.

78. Explain address binding?

The association of program instructions and data with the actual
physical memory locations is called address binding.
79. Write different types of address binding?
Address Binding is divided into three types as follows.

Compile-time Address Binding


Load time Address Binding
Execution time Address Binding

80. Write an advantage of dynamic allocation algorithms?

When we do not know beforehand how much memory will be

needed for the program.
When we want data structures without any upper limit of memory
space.
When you want to use your memory space more efficiently.
Dynamically created lists insertions and deletions can be done very
easily just by the manipulation of addresses whereas in the case of
statically allocated memory insertions and deletions lead to more
movements and wastage of memory.
When you want to use the concept of structures and linked lists in
programming, dynamic memory allocation is a must

81. Write a difference between internal fragmentation and


external fragmentation?

In internal fragmentation, fixed-sized memory blocks are assigned to
processes, whereas in external fragmentation, variable-sized memory
blocks are assigned to processes.
Internal fragmentation happens when the memory block assigned to a
process is bigger than the memory the process actually requires, whereas
external fragmentation happens when processes are removed from memory.
The solution to internal fragmentation is the best-fit block, whereas
the solutions to external fragmentation are compaction, paging, and
segmentation.
Internal fragmentation occurs when memory is divided into fixed-sized
partitions, whereas external fragmentation occurs when memory is divided
into variable-size partitions based on the sizes of processes.
The difference between the memory allocated and the space actually
required is called internal fragmentation, whereas the unused spaces
formed between non-contiguous memory fragments that are too small to
serve a new process are called external fragmentation.

82. Define the Compaction?

Compaction is the process of collecting fragments of available memory
space into contiguous blocks by moving programs and data in a
computer’s memory or disk.

83. Write about the advantages and disadvantages of a hashed-


page table?

Advantages

The main advantage is synchronization.


In many situations, hash tables turn out to be more efficient than
search trees or any other table lookup structure. For this reason, they
are widely used in many kinds of computer software, particularly for
associative arrays, database indexing, caches, and sets.

Disadvantages

Hash collisions are practically unavoidable. when hashing a random


subset of a large set of possible keys.
Hash tables become quite inefficient when there are many collisions.
A hash table does not allow null values, unlike a hash map.

84. Write a difference between paging and segmentation?


In paging, the program is divided into fixed-size pages, whereas in
segmentation, the program is divided into variable-size sections.
For paging, the operating system is accountable, whereas for
segmentation, the compiler is accountable.
Page size is determined by the hardware, whereas section size is given
by the user.
Paging is faster in comparison with segmentation, which is slower.
Paging can result in internal fragmentation, whereas segmentation can
result in external fragmentation.
In paging, the logical address is split into a page number and a page
offset, whereas in segmentation, the logical address is split into a
section number and a section offset.
Paging uses a page table that contains the base address of every page,
whereas segmentation uses a segment table that contains the segment
number and segment offset.
A page table is employed to keep the page data, whereas a section table
maintains the section data.
In paging, the operating system must maintain a free-frame list, whereas
in segmentation, the operating system maintains a list of holes in main
memory.
Paging is invisible to the user, whereas segmentation is visible to the
user.
In paging, the processor needs the page number and offset to calculate
the absolute address, whereas in segmentation, the processor uses the
segment number and offset to calculate the full address.

85. Write a definition of Associative Memory and Cache


Memory?

A memory unit accessed by content is called associative memory, whereas
fast and small memory is called cache memory.
Associative memory reduces the time required to find an item stored in
memory, whereas cache memory reduces the average memory access time.
In associative memory, data is accessed by its content, whereas in cache
memory, data is accessed by its address.
Associative memory is used where the search time must be very short,
whereas cache memory is used when a particular group of data is accessed
repeatedly.
The basic characteristic of associative memory is its logic circuit for
matching its content, whereas the basic characteristic of cache memory
is its fast access.

86. What is “Locality of reference”?

The locality of reference refers to a phenomenon in which a computer


program tends to access the same set of memory locations for a
particular time period. In other words, Locality of Reference refers to the
tendency of the computer program to access instructions whose
addresses are near one another.
87. Write down the advantages of virtual memory?
A higher degree of multiprogramming.
Allocating memory is easy and cheap
Eliminates external fragmentation
Data (page frames) can be scattered all over the PM
Pages are mapped appropriately anyway
Large programs can be written, as the virtual space available is huge
compared to physical memory.
Less I/O required leads to faster and easy swapping of processes.
More physical memory is available, as programs are stored on virtual
memory, so they occupy very little space in actual physical memory.
More efficient swapping

88. How to calculate performance in virtual memory?

The performance of a virtual memory management system depends on the

total number of page faults, which in turn depends on the “paging
policies” and “frame allocation”.

Effective access time = (1-p) x Memory access time + p x page fault time
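
A short worked example with assumed numbers (not from the article): if the memory access time is 200 ns, servicing a page fault takes 8 ms, and the page-fault rate p is 1/1000, then

Effective access time = (1 - 0.001) x 200 ns + 0.001 x 8,000,000 ns
                      = 199.8 ns + 8,000 ns
                      ≈ 8.2 microseconds

so even a very small page-fault rate dominates the effective access time, which is why keeping page faults rare matters so much.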

89. Write down the basic concept of the file system?

A file is a collection of related information that is recorded on secondary


storage. Or file is a collection of logically related entities. From the
user’s perspective, a file is the smallest allotment of logical secondary
storage.

90. Write the names of different operations on file?

Operation on file:

Create
Open
Read
Write
Rename
We use cookies to ensure you have the best browsing experience on our website. By using our site, you
acknowledge that you have read and understood our Cookie Policy & Privacy Policy
Delete
Append
Truncate
Close

91. Define the term Bit-Vector?

A Bitmap or Bit Vector is a series or collection of bits where each bit


corresponds to a disk block. The bit can take two values: 0 and 1: 0
indicates that the block is allocated and 1 indicates a free block.

92. What is a File allocation table?

FAT stands for File Allocation Table and this is called so because it
allocates different files and folders using tables. This was originally
designed to handle small file systems and disks. A file allocation table
(FAT) is a table that an operating system maintains on a hard disk that
provides a map of the cluster (the basic units of logical storage on a
hard disk) that a file has been stored in.

93. What is rotational latency?

Rotational Latency: Rotational latency is the time taken by the desired

sector of the disk to rotate into a position where it can be accessed by
the read/write heads. A disk scheduling algorithm that gives the minimum
rotational latency is better.

94. What is seek time?

Seek Time: Seek time is the time taken to locate the disk arm to a
specified track where the data is to be read or written. So the disk
scheduling algorithm that gives a minimum average seek time is better.

Advanced OS Interview Questions

95. What is Belady’s Anomaly?

Bélády’s anomaly is an anomaly exhibited by some page replacement
policies, in which increasing the number of page frames results in an
increase in the number of page faults. It occurs when the First In First
Out (FIFO) page replacement algorithm is used.
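
A small demonstration sketch of the anomaly: counting FIFO page faults for the classic textbook reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (the string is the standard example, not taken from this article) gives 9 faults with 3 frames but 10 faults with 4 frames.

```c
#include <stdio.h>
#include <string.h>

/* Count page faults under FIFO replacement with the given number of frames. */
static int fifo_faults(const int *refs, int n, int frames) {
    int mem[16];
    memset(mem, -1, sizeof(mem));            /* all frames start empty */
    int next = 0, faults = 0;                /* next: position of the oldest page */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < frames; f++)
            if (mem[f] == refs[i]) hit = 1;
        if (!hit) {
            mem[next] = refs[i];             /* replace the oldest page */
            next = (next + 1) % frames;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* prints 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* prints 10 */
    return 0;
}
```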

96. What happens if a non-recursive mutex is locked more than


once?

Deadlock. If a thread that had already locked a mutex, tries to lock the
mutex again, it will enter into the waiting list of that mutex, which
results in a deadlock. It is because no other thread can unlock the
mutex. An operating system implementer can exercise care in
identifying the owner of the mutex and return it if it is already locked by
the same thread to prevent deadlocks.

97. What are the advantages of a multiprocessor system?

There are some main advantages of a multiprocessor system:

Enhanced performance.
Multiple applications.
Multi-tasking inside an application.
High throughput and responsiveness.
Hardware sharing among CPUs.

98. What are real-time systems?

A real-time system means that the system is subjected to real-time, i.e.,


the response should be guaranteed within a specified timing constraint
or the system should meet the specified deadline.

99. How to recover from a deadlock?

We can recover from a deadlock by following methods:

Process termination
Abort all the deadlock processes
Abort one process at a time until the deadlock is eliminated
Resource preemption
Rollback
Selecting a victim

100. What factors determine whether a detection algorithm


must be utilized in a deadlock avoidance system?

One is that it depends on how often a deadlock is likely to occur under


the implementation of this algorithm. The other has to do with how
many processes will be affected by deadlock when this algorithm is
applied.

101. Explain the resource allocation graph?

The resource allocation graph (RAG) shows the state of the system in

terms of processes and resources. One of the advantages of having such a
diagram is that it is sometimes possible to see a deadlock directly by
using the RAG.

Also check: Last Minute Notes – Operating Systems

We will soon be covering more Operating System questions.

Conclusion
In conclusion, the field of operating systems is a crucial aspect of
computer science, and a thorough understanding of its concepts is
essential for anyone looking to excel in this area. By reviewing the top
100+ operating system interview questions we have compiled,
you can gain a deeper understanding of the key principles and concepts
of OS and be better prepared to tackle any interview questions that may
come your way. Remember to study and practice regularly, and use
these questions as a starting point to delve deeper into the complex
world of operating systems. With dedication and hard work, you can
become an expert in this field and succeed in any OS-related job or
interview.
