Operating System Model Answer Paper
System calls :
Definition : System calls are special functions used by user programs to request services
from the operating system (like reading files, creating processes, etc.).
Types of System Calls :
1. Process Control : These system calls manage processes (start, end, or wait).
Examples: {i} exit() – Terminate a process {ii} wait() – Wait for a child process to finish
{iii} exec() – Replace the current process with another program {iv} fork() – Create a new process
2. File Management : Used to work with files — open, read, write, or close them.
Examples: {i} open(), close() {ii} read(), write()
{iii} create(), delete() {iv} seek() – Move the file pointer.
3. Device Management : Used to control hardware devices like printers, keyboards, etc.
Examples : {i} request_device() – Ask the OS to use a device {ii} release_device() – Free the
device {iii} read_device(), write_device()
Conclusion : System calls help user programs perform important tasks by asking the OS
for help. They are grouped into: Process control, File management, Device management,
Information maintenance, and Communication. A small example of the process-control calls
is shown below.
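Example (a minimal sketch in C of the process-control calls above; the program that is run, /bin/ls, is only illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                       /* fork(): create a new process */
    if (pid == 0) {                           /* child process */
        execl("/bin/ls", "ls", (char *)NULL); /* exec(): replace it with another program */
        exit(1);                              /* exit(): reached only if exec fails */
    }
    wait(NULL);                               /* wait(): parent waits for the child to finish */
    return 0;
}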
Diagram :
Deadlock :
Definition : A deadlock is a situation in a multiprogramming system where a set of
processes are blocked forever, waiting for each other to release resources. This happens
when the system enters a state where each process in the set is waiting for an event that
can only be caused by another process in the set.
Necessary Conditions for Deadlock :
1) Mutual Exclusion: At least one resource is non-sharable, so only one process can use it
at a time.
2) Hold and Wait: A process holding at least one resource is waiting for additional
resources.
3) No Preemption: A resource cannot be forcibly taken from the process holding it; it must
be released voluntarily.
4) Circular Wait: A circular chain of processes exists, where each process is waiting for
a resource held by the next process.
Diagram
Methods for Deadlock Detection :
1) Resource Allocation Graph: Processes and resources are shown as nodes with edges for
requests and allocations; a cycle in the graph indicates a possible deadlock.
2) Wait-for Graph: This is a simplified version of the resource allocation graph, where
we only consider processes that are waiting for other processes. A cycle here means a
deadlock exists (a small detection sketch in C follows this list).
3) Banker's Algorithm: In some systems, algorithms like the Banker's algorithm can be
used to avoid deadlock by checking whether granting a resource request would leave the
system in an unsafe state.
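A small sketch in C of deadlock detection on a wait-for graph (a depth-first search that looks for a cycle); the 4-process graph below is made up for illustration:

#include <stdbool.h>
#include <stdio.h>

#define N 4   /* number of processes (illustrative) */

/* wait_for[i][j] = 1 means process i is waiting for a resource held by process j */
int wait_for[N][N] = {
    {0, 1, 0, 0},
    {0, 0, 1, 0},
    {0, 0, 0, 1},
    {0, 1, 0, 0},   /* P3 -> P1 closes a cycle: deadlock among P1, P2, P3 */
};

bool has_cycle(int p, int visited[], int on_stack[]) {
    visited[p] = on_stack[p] = 1;
    for (int q = 0; q < N; q++) {
        if (!wait_for[p][q]) continue;
        if (on_stack[q]) return true;                        /* back edge = cycle = deadlock */
        if (!visited[q] && has_cycle(q, visited, on_stack)) return true;
    }
    on_stack[p] = 0;
    return false;
}

int main(void) {
    int visited[N] = {0}, on_stack[N] = {0};
    for (int p = 0; p < N; p++)
        if (!visited[p] && has_cycle(p, visited, on_stack)) {
            printf("Deadlock detected (cycle in wait-for graph)\n");
            return 0;
        }
    printf("No deadlock\n");
    return 0;
}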
Methods for Deadlock Recovery:
1) Process Termination :
Abort all deadlocked processes: This approach simply stops all the processes involved in
the deadlock. After termination, resources can be freed and reassigned.
Abort one process at a time: In some cases, only one process is terminated at a time to
break the deadlock. The system can check which process is causing the deadlock and
remove it from the resource cycle.
2) Resource Preemption:
Pre-empt resources from processes: The system can forcefully take resources away from
one process and allocate them to another process, breaking the circular wait. However,
this method is tricky, as it might cause inconsistency or corruption in data.
Rollback: Some systems allow processes to roll back to a safe state before they entered
the deadlock, so the system can try re-allocating resources in a different order.
Diagram
Mutual Exclusion :
Definition : Mutual Exclusion is one of the four necessary conditions for deadlock. It
means that a resource can only be used by one process at a time. If a process holds a
resource, no other process can use it until the process releases it.
Example :- Printer : If two processes want to print a document, only one process can use
the printer at a time. The other must wait until the printer is available.
Diagram
Deadlock Prevention : A deadlock occurs in a system when two or more processes are
blocked forever, waiting for each other to release resources. Deadlock prevention is a
method of ensuring that deadlock does not occur by eliminating one or more of the
necessary conditions that lead to it.
1) Eliminate Mutual Exclusion:
Description: Some resources can be used by only one process at a time.
Prevention: Make resources sharable wherever possible (for example, read-only files); this is
not possible for devices like printers.
2) Eliminate Hold and Wait:
Description: A process holds some resources while waiting for others.
Prevention: Require a process to request all the resources it needs at once, before it starts
executing.
3) Eliminate No Preemption:
Description: Once a process has acquired a resource, it cannot be forcibly taken away.
Prevention: If a process holding some resources requests another resource that cannot be
granted immediately, the OS takes back (preempts) the resources it already holds and gives
them to other processes; the process is restarted later.
4) Eliminate Circular Wait:
Description: A circular chain of waiting processes can form.
Prevention: Impose an ordering on resource types and require processes to request resources
in increasing order.
Diagram
Summary : By removing one of the four necessary conditions (Mutual Exclusion, Hold and
Wait, No Preemption, Circular Wait), we can prevent deadlock from occurring. For
instance, forcing processes to request all resources at once (eliminating hold and wait) or
allowing resources to be preempted when needed (eliminating no preemption) ensures that
deadlock is avoided.
Operating System
Definition : An Operating System is system software that acts as an interface between
the user and the computer hardware. It manages all hardware and software resources,
such as the CPU, memory, input/output devices, and files.
Functions of OS:
1) Process management : Handles creation, scheduling, and termination of processes.
2) Memory management : Keeps track of main memory and allocates or frees it as processes
need it.
3) File management : Manages reading, writing, and storing of files. Maintains file
structure, names, and permissions. Ensures data is stored and retrieved properly.
4) Device management : Controls input/output devices like keyboard, printer, etc.
Uses device drivers for communication.
Diagram
Multiprogramming : It means many programs are in memory at the same time. The CPU picks
one program, runs it, and if that program has to wait (like for input), the CPU switches to
another. The waiting program continues once its event occurs (it may even be moved out of
memory and brought back while it waits).
Goal: To keep the CPU busy all the time. Improves performance and CPU efficiency.
Example: If one program waits for input, the CPU works on another, so time is not wasted.
Diagram :
Function : The Operating System manages all computer resources. It controls processes,
memory, files, and devices. Its main function is to run the system smoothly.
Use : We use the OS to operate the computer system. It helps run applications, access
files, and manage hardware. Users interact with the system through the OS.
Services: OS provides services like file handling, memory management, and device
control. These services help programs run properly. They ensure communication between
software and hardware.
Diagram
Features : OS features include multitasking, security, and user interface. These are the
special qualities of an OS. They make the system user-friendly and powerful.
OS as a Resource Manager
The Operating System (OS) is like the manager of a computer. It controls and manages the
hardware resources like the CPU, memory, devices, and files.
1) CPU Management: OS decides which process gets the CPU's time to run. It makes
sure no process is left waiting too long.
2) Memory Management: OS keeps track of which parts of memory are in use and gives
each program the memory it needs.
3) File Management: OS organizes and stores files on the computer. It makes sure the
right program can access the right file at the right time.
4) Device Management: OS controls input/output devices and shares them among programs
through device drivers.
5) Resource Allocation: OS makes sure different programs do not fight over the same
resources, like the printer or memory.
Diagram
Summary : In simple words, the OS is responsible for sharing all the computer's resources
in a way that makes everything work together smoothly.
Components of PCB:
1) Process ID (PID) : Every process is assigned a unique identification number called
the Process ID. This helps the OS differentiate between processes.
2) Process State : It stores the current state of the process, like whether it’s running,
waiting, ready, or terminated. This helps the OS know what the process is doing.
3) Program Counter : The program counter stores the address of the next instruction to
be executed for the process. It helps in keeping track of where the process was in its
execution.
4) Process Priority : The priority of the process may be stored in the PCB, especially in
systems that use priority scheduling for process execution. It helps the OS decide
which process should run next.
5) CPU Registers : These are the CPU registers used by the process during execution. It
includes values like accumulator, general-purpose registers, etc. When a process is
interrupted, the values in these registers are saved in the PCB.
6) Memory Management Information : The OS keeps track of the memory allocated to
the process, such as the base and limit registers. This helps the OS manage
memory for different processes.
7) Accounting Information : It includes details like CPU time used, process priority, and
other statistics related to the process. This information helps the OS in process
scheduling and resource allocation.
8) I/O Status Information : This contains information about the I/O devices being used
by the process, such as files opened or devices requested. The OS uses this to
manage I/O operations.
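A rough sketch of a PCB as a C structure covering only the fields listed above; the field names, types, and sizes are illustrative and not taken from any real OS:

#include <stdint.h>
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;                 /* 1) Process ID */
    proc_state_t  state;               /* 2) Process state */
    uint64_t      program_counter;     /* 3) Address of the next instruction */
    int           priority;            /* 4) Scheduling priority */
    uint64_t      registers[16];       /* 5) Saved CPU registers */
    uint64_t      mem_base, mem_limit; /* 6) Memory-management info (base and limit) */
    uint64_t      cpu_time_used;       /* 7) Accounting information */
    int           open_files[16];      /* 8) I/O status: open file descriptors */
    struct pcb   *next;                /* link used by ready/waiting queues */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = READY, .priority = 5 };
    printf("process %d is in state %d\n", p.pid, p.state);
    return 0;
}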
Role of PCB :
1) Process Management: The OS uses the PCB to manage processes efficiently. It
helps in switching between processes (context switching).
Conclusion : In simple terms, the Process Control Block (PCB) is like a file where the OS
stores all the important information about a process. Without the PCB, the OS would not
be able to keep track of processes or switch between them properly.
Diagram
Inter-Process Communication (IPC)
3) Indirect Communication: Messages are sent to a queue and then read by another
process.
Eg : A process A sends a message to process B to request data or perform an action.
4) Shared Memory: Processes can share a section of memory. One process writes to it,
& another reads from it. It’s faster because the processes directly access the
memory.
Eg : Process A writes data to the shared memory, and Process B reads from it.
IPC Methods
1) Pipes: A pipe allows one process to send data to another process. Anonymous
pipes are for related processes (like parent-child). Named pipes allow different
processes to communicate.
Eg : A producer process writes to the pipe, and a consumer process reads from it.
2) Message Queues: Messages are placed in a queue, and another process can
retrieve them. This method is used to store data temporarily.
Eg : Process A sends a message to a queue, and process B retrieves it when needed.
3) Semaphores: These are used to control access to shared resources. They make sure
only one process accesses a resource at a time.
Eg : A semaphore ensures that only one process can access a critical section of the code at
a time.
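A minimal sketch in C of method 1 above (an anonymous pipe between a parent and its child); the message text is illustrative:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) return 1;        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                   /* child acts as the producer */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                        /* parent acts as the consumer */
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}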
Advantages of IPC
• Data Sharing: Allows processes to share large amounts of data efficiently.
• Resource Sharing: Enables multiple processes to share resources like memory, devices, or
data without conflicts.
• Synchronization: Helps processes synchronize and cooperate, avoiding race conditions and
deadlocks.
Diagram
Conclusion : IPC allows processes to communicate, share data, and coordinate without
interfering with each other. Methods like message passing, shared memory,
and semaphores help make this possible.
Memory Management with Bitmap
How it works:
Memory is divided into fixed-size blocks.
A bit map keeps track of the status of each block.
To allocate memory, the OS searches for consecutive 0s (free blocks).
To free memory, the corresponding bits are set back to 0.
Example :
Bitmap: 1 1 0 0 0 1 0
(1 = used block, 0 = free block: the first two blocks are used, the next three are free, and so on)
Advantages : Simple and easy to implement. Fast checking of free/used memory blocks.
Disadvantages : Searching for large free blocks can be slow. Not flexible for variable-sized
memory requests.
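A simple sketch in C of bitmap allocation as described above (one byte per block for clarity; a real bitmap packs 8 blocks into each byte). The block count is illustrative:

#include <stdio.h>

#define BLOCKS 16

static unsigned char bitmap[BLOCKS];   /* 1 = block in use, 0 = block free */

/* Find 'count' consecutive free blocks, mark them used, return the start index or -1 */
int bitmap_alloc(int count) {
    for (int start = 0; start + count <= BLOCKS; start++) {
        int free_run = 1;
        for (int i = 0; i < count; i++)
            if (bitmap[start + i]) { free_run = 0; break; }
        if (free_run) {
            for (int i = 0; i < count; i++) bitmap[start + i] = 1;
            return start;
        }
    }
    return -1;   /* no run of free blocks is large enough */
}

/* Free 'count' blocks starting at 'start' by clearing their bits */
void bitmap_free(int start, int count) {
    for (int i = 0; i < count; i++) bitmap[start + i] = 0;
}

int main(void) {
    int a = bitmap_alloc(3);
    printf("allocated 3 blocks starting at block %d\n", a);
    bitmap_free(a, 3);
    return 0;
}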
Memory Management with Linked List
How it works:
When a program requests memory, the OS searches the list for a free block large enough.
The block is split or allocated, and the list is updated.
When memory is freed, blocks may be merged (coalesced) if adjacent blocks are also free.
Example :
[Free: 0-99] → [Allocated: 100-199] → [Free: 200-299]
Advantages : Flexible for variable-size allocations. Supports dynamic memory usage.
Disadvantages : Searching can be slow. Needs extra memory for pointers and status info.
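A small sketch in C of the linked-list method above, using first fit to find a free block; the block layout matches the example ([Free: 0-99] → [Allocated: 100-199] → [Free: 200-299]):

#include <stddef.h>
#include <stdio.h>

/* Each node describes one region of memory: its start, size, and whether it is free */
struct block {
    size_t start, size;
    int    is_free;
    struct block *next;
};

/* First fit: walk the list and return the first free block that is big enough */
struct block *first_fit(struct block *head, size_t need) {
    for (struct block *b = head; b != NULL; b = b->next)
        if (b->is_free && b->size >= need)
            return b;   /* the caller would split this block and mark part of it allocated */
    return NULL;
}

int main(void) {
    struct block b3 = {200, 100, 1, NULL};
    struct block b2 = {100, 100, 0, &b3};
    struct block b1 = {0,   100, 1, &b2};   /* [Free 0-99] -> [Alloc 100-199] -> [Free 200-299] */

    struct block *hit = first_fit(&b1, 60);
    if (hit) printf("first fit found a free block at %zu\n", hit->start);
    return 0;
}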
Conclusion : Both bitmap and linked list methods are used for tracking memory usage.
Bitmap is best for fixed-size memory allocation.
Linked List is useful for dynamic, variable-size memory allocation.
Choosing the method depends on system design, speed, and flexibility needs.
Diagram
Paging
Definition : Paging is a memory management technique used by the Operating System.
It divides the process into small fixed-size blocks called pages, and divides the main
memory into blocks of the same size called frames.
Pages are loaded into available memory frames, so processes don’t need to be in one
continuous block, which helps reduce memory wastage (called fragmentation).
Types of Paging:
1) Simple Paging: All pages and frames are of equal size. Pages are loaded into any free
memory frame. There is no external fragmentation, but internal fragmentation may
occur.
2) Demand Paging : In this type, only the required pages are loaded into memory when
needed. This saves memory and increases efficiency. Pages not in memory cause a
page fault and are then loaded.
3) Virtual Paging (or Virtual Memory Paging): Uses secondary memory (like a hard disk)
as an extension of RAM. Pages are moved between RAM and disk as needed. It
allows large programs to run even if RAM is small.
Components of Paging
• Pages : Fixed-size blocks of logical memory.
• Frames : Fixed-size blocks of physical memory (RAM).
• Page Table : A table used by OS to map logical pages to physical frames.
• Logical Address (Virtual Address) : The address used by programs.
• Physical Address: Actual location in RAM where data is stored.
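A short sketch in C of how a logical address is split into a page number and offset and translated through the page table; the page size and table contents are illustrative:

#include <stdio.h>

#define PAGE_SIZE 4096                          /* illustrative page size (4 KB) */

int page_table[8] = {5, 2, 7, 0, 3, 1, 6, 4};   /* page number -> frame number */

/* Translate a logical (virtual) address into a physical address */
unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;      /* page number */
    unsigned offset = logical % PAGE_SIZE;      /* offset inside the page */
    unsigned frame  = page_table[page];         /* look up the frame in the page table */
    return frame * PAGE_SIZE + offset;          /* physical address */
}

int main(void) {
    unsigned logical = 2 * PAGE_SIZE + 100;     /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}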
Advantages : No external fragmentation. Efficient memory usage. Simple to implement.
Supports virtual memory.
Disadvantages : Page table can be large. Internal fragmentation (the last page may be
partially empty). Extra overhead due to address translation.
Paging Techniques :
1) Simple Paging : Fixed-size pages and frames, one page table per process. Easy, but page
tables can become large.
2) Hierarchical Paging : Breaks the page table into multiple levels, which reduces the size of
each table. Used in large address spaces (like 32-bit or 64-bit).
3) Hashed Paging : Uses a hash table to map pages. Used when the address space is very
large (like in 64-bit systems).
4) Inverted Paging : One page table for the whole system, not per process. Each entry
contains info about which process/page is stored in that frame. This saves memory, but
lookup is slower.
Diagram
Conclusion : Paging helps manage memory better by breaking programs and memory into
fixed-size parts, and the types like simple, demand, and virtual paging help improve
efficiency and memory usage.
Various paging techniques like simple, hierarchical, and inverted paging are used based on
system architecture.
Linked List
Definition : A Linked List is a linear data structure used to store a collection of elements.
Unlike arrays, linked lists do not store elements in continuous memory locations. Instead,
each element (called a node) contains two parts:
Data – The value or information.
Pointer (Link) – The address of the next node in the list.
1) Singly Linked List: Each node points to the next node only. The last node points to
NULL (end of the list).
2) Doubly Linked List : Each node has two pointers: one to the next node and one to
the previous node. It allows forward and backward movement.
3) Circular Linked List: In this type, the last node points back to the first node, forming
a circle. It can be singly or doubly circular.
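A minimal sketch in C of a singly linked list (insert at the front, then traverse); the function name push_front is illustrative:

#include <stdio.h>
#include <stdlib.h>

/* A node stores the data and a pointer to the next node */
struct node {
    int data;
    struct node *next;
};

/* Insert a new node at the front of the list and return the new head */
struct node *push_front(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    n->data = value;
    n->next = head;
    return n;
}

int main(void) {
    struct node *head = NULL;
    head = push_front(head, 3);
    head = push_front(head, 2);
    head = push_front(head, 1);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);          /* prints 1 -> 2 -> 3 -> NULL */
    printf("NULL\n");
    return 0;
}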
Advantages : Dynamic memory usage (no fixed size). Easy to insert/delete nodes (especially
in the middle).
Disadvantages : Slower access (no direct index like arrays). Extra memory used for storing
pointers.
Diagram
Logical Address
Definition : A logical address is the address that the CPU generates when a program is
running. It is also called a virtual address. This address is used by the program, but it is not
the real location in physical memory. The program thinks it is using this address, but the
actual data may be somewhere else. Logical addresses are useful in multi-tasking, as they
give each process its own memory view.
Physical Address
Definition : A physical address is the actual address in RAM (main memory) where the data
or instruction is stored. It is used by the memory unit to fetch or store data. This address is
calculated by the Memory Management Unit (MMU). The user or program cannot see
physical addresses directly. All data in memory finally goes to or comes from a physical
address.
Conclusion : The logical address is what the program uses. The physical address is where
the actual data is. The MMU converts logical addresses into physical addresses during
execution. This helps in safe, fast, and efficient memory management.
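A tiny sketch in C of one simple translation scheme (base and limit registers, mentioned earlier under the PCB); the register values are illustrative, and real MMUs usually use paging instead:

#include <stdio.h>

unsigned base_reg  = 40000;   /* where the process is loaded in RAM (illustrative) */
unsigned limit_reg = 10000;   /* size of the process's address space */

/* physical = base + logical, after checking the logical address against the limit */
int mmu_translate(unsigned logical, unsigned *physical) {
    if (logical >= limit_reg) return -1;   /* illegal address: the MMU traps to the OS */
    *physical = base_reg + logical;
    return 0;
}

int main(void) {
    unsigned phys;
    if (mmu_translate(1234, &phys) == 0)
        printf("logical 1234 -> physical %u\n", phys);
    return 0;
}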
Swapping
Definition : Swapping is a memory management technique used by the operating system
to increase the number of processes that can run. It means moving a process from main
memory (RAM) to the hard disk (secondary memory), and later bringing it back to main
memory when needed.
Why Swapping is Needed?
The main memory is limited and can’t hold all running processes at the same time.
So, when memory is full, and a new process needs to run, the OS swaps out (removes) a
process from RAM to free up space. When the swapped-out process is needed again, it’s
swapped back in.
Advantages : Increases multiprogramming – more processes can be managed at a time.
Efficient use of memory – frees up RAM space for active processes. Supports big programs
even if RAM is small.
Disadvantages : Time-consuming – moving data between RAM and disk takes time (called swap
time). Can cause thrashing if swapping happens too frequently. The hard disk is slower than
RAM, so performance may reduce.
Usage : Swapping was more common in older systems. It is now used as part of virtual memory
systems in modern OSes like Windows and Linux (via swap files or partitions).
Diagram
Conclusion : Swapping is a technique where processes are moved between RAM and disk
to manage memory efficiently. It helps in handling more processes, but too much swapping
can slow down the system.
Goal of I/O Software in Operating System
Definition : I/O (Input/Output) software controls how a computer interacts with external
devices like keyboard, mouse, printer, disk, etc.
1. Device Independence:
The same I/O software should work for different devices.
Example: You can print from any printer, not just one brand.
2. Uniform Naming:
Devices should be accessed using simple names (like disk1, printer) instead of hardware
details.
3. Error Handling:
It detects and handles device errors (like paper jam, read/write failure) automatically.
Diagram
Conclusion : The goal of I/O software is to make communication with devices simple,
reliable, and efficient, while hiding hardware complexity from users and applications.
Buddy System Reallocation in Operating System
What is the Buddy System?
The buddy system is a memory allocation method.
It divides memory into blocks of sizes that are powers of 2 (like 4 KB, 8 KB, 16 KB, etc.).
It helps in efficient allocation and deallocation of memory.
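A short sketch in C of two buddy-system calculations: rounding a request up to a power-of-2 block size, and finding a block's buddy by flipping one address bit; the sizes are illustrative:

#include <stdio.h>

/* Round a request up to the next power of two (the block size the buddy system allocates) */
unsigned next_pow2(unsigned n) {
    unsigned size = 1;
    while (size < n) size <<= 1;
    return size;
}

/* Buddies of size 'size' differ only in the 'size' bit of their offset,
   which is what makes splitting and merging (coalescing) cheap. */
unsigned buddy_of(unsigned offset, unsigned size) {
    return offset ^ size;
}

int main(void) {
    unsigned request = 5000;                   /* bytes requested (illustrative) */
    unsigned size = next_pow2(request);        /* -> 8192, so 3192 bytes are internal fragmentation */
    printf("request %u -> block of %u bytes\n", request, size);
    printf("buddy of the block at offset 16384 is at offset %u\n", buddy_of(16384, size));
    return 0;
}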
Advantages : Fast allocation and deallocation. Easy to implement. Less external
fragmentation. Efficient coalescing (merging of freed buddy blocks).
Disadvantages : Internal fragmentation. Memory wastage. Limited block sizes (powers of 2
only). Extra memory needed for management.
Diagram
Conclusion : The buddy system reallocation is a smart way to manage memory by using
splitting and merging of memory blocks in powers of 2, making memory reuse and
management easier and faster.
Virtual Memory
Definition : Virtual Memory is a memory management technique used by the operating
system. It allows a computer to run big programs or many programs at the same time, even
if there is not enough RAM. It uses a part of the hard disk as extra memory. So, the system
thinks it has more RAM than it actually does.
Why Virtual Memory is Needed?
RAM is limited, so large programs (or many programs together) may not fit into main memory
at once; virtual memory lets them run anyway by keeping only the needed parts in RAM.
Techniques Used in Virtual Memory :
1) Paging : Breaks memory into fixed-size blocks called pages (in virtual memory) and
frames (in physical memory). Only required pages are loaded into RAM.
2) Segmentation : Divides memory into logical segments like code, stack, and data.
Each segment is loaded when needed.
3) Demand Paging : Loads pages only when they are required, not in advance.
Saves RAM and increases performance.
Advantages : Can run large applications even with small RAM. More processes can be kept in
memory. Increases CPU utilization. Provides memory isolation and protection.
Disadvantages : Slower than real RAM (because the hard disk is slower). Too much usage
causes thrashing (the system keeps swapping). Needs good page replacement algorithms.
Diagram
Conclusion : Virtual memory is a smart system that uses hard disk as extra RAM.
It helps in running big programs, improves multitasking, and ensures the system does not
crash when RAM is full.
Disk Scheduling
Definition : Disk scheduling is a way used by the Operating System to decide the order in
which disk read/write requests are processed. It is needed because multiple processes
may request data from the hard disk at the same time, and the disk head takes time to
move.
Goal : Reduce seek time (time taken to move the disk arm).
Increase efficiency and speed of disk operations.
1) FCFS (First Come First Served) : Requests are served in the order in which they arrive.
Simple, but the head may move back and forth a lot, giving long seek times.
2) SSTF (Shortest Seek Time First) : Picks the request that is closest to the current head
position. Faster than FCFS but may cause starvation (some requests wait too long); a small
SSTF sketch follows this list.
Example: Head at 50, requests = [40, 20, 60]
Go to 60 (closest), then 40, then 20.
3) SCAN (Elevator Algorithm) : Head moves in one direction serving requests, then
reverses at the end. Like an elevator going up and down.
Example : Head at 50, moves right to max (say 200), then reverses and serves left-side
requests.
4) C-SCAN (Circular SCAN) : Like SCAN but instead of reversing, the head goes back to
start without serving in reverse. Provides uniform wait time.
5) LOOK : Like SCAN, but the head only goes as far as the last request in each
direction. It does not go to the end unless needed.
6) C-LOOK : Like C-SCAN but the head goes only as far as the last request, then jumps
to the start.
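A small sketch in C of SSTF (method 2 above) on the example requests [40, 20, 60] with the head at 50; it prints the order in which requests are served and the total head movement:

#include <stdio.h>
#include <stdlib.h>

#define N 3

int main(void) {
    int requests[N] = {40, 20, 60};            /* pending cylinder requests (example above) */
    int served[N]   = {0};
    int head = 50, total = 0;

    for (int k = 0; k < N; k++) {              /* SSTF: always pick the closest request */
        int best = -1, best_dist = 1 << 30;
        for (int i = 0; i < N; i++) {
            if (served[i]) continue;
            int d = abs(requests[i] - head);
            if (d < best_dist) { best_dist = d; best = i; }
        }
        served[best] = 1;
        total += best_dist;
        head = requests[best];
        printf("serve %d (seek distance %d)\n", requests[best], best_dist);
    }
    printf("total head movement = %d\n", total);
    return 0;
}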
Diagram
Fragmentation
Definition : Fragmentation in Operating Systems happens when memory is wasted or
unused due to inefficient memory allocation. It reduces the efficiency of memory usage
and affects performance.
1) External Fragmentation: Occurs when free memory is scattered in small blocks between
allocated blocks. Even if there is enough total memory, a process may not get it because
the memory is not continuous. Happens in Contiguous memory allocation and
Segmentation.
Example : Free memory = 100 MB, but split as 30MB + 20MB + 50MB.
A process needing 60 MB cannot be allocated.
2) Internal Fragmentation : Occurs when allocated memory is slightly larger than needed,
and the unused part is wasted inside the allocated space. Happens in Paging where fixed-
size frames may not fully fit the data.
Example: Page size = 4 KB, a process needs 6 KB.
It will use 2 pages = 8 KB, and 2 KB is wasted.
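A tiny sketch in C of the internal-fragmentation arithmetic in the example above (a 6 KB process with 4 KB pages):

#include <stdio.h>

#define PAGE_SIZE_KB 4

int main(void) {
    int need_kb = 6;                                            /* process needs 6 KB */
    int pages   = (need_kb + PAGE_SIZE_KB - 1) / PAGE_SIZE_KB;  /* round up -> 2 pages */
    int wasted  = pages * PAGE_SIZE_KB - need_kb;               /* internal fragmentation */
    printf("pages = %d, allocated = %d KB, wasted = %d KB\n",
           pages, pages * PAGE_SIZE_KB, wasted);
    return 0;
}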
RAID (Redundant Array of Independent Disks)
Definition : RAID combines multiple physical disks into one logical unit to improve speed
and/or protect data against disk failure.
Goals of RAID:
1. High performance (faster read/write)
2. Fault tolerance (no data loss if a disk fails)
3. Increased storage capacity
RAID Levels :
Diagram
Conclusion : RAID is used to improve speed, security, and reliability of data storage.
Different RAID levels are selected based on cost, performance, and fault tolerance
requirements.
Requirements of a Critical Section Solution :
Mutual Exclusion: Only one process must be in the critical section at any time.
Progress: If no process is in the critical section, one of the waiting processes should be
allowed to enter.
Bounded Waiting: A process should get its turn within a finite time.
No Preemption: A process should not be forcibly removed from the critical section.
Busy Waiting (optional): Process waits actively by looping, not sleeping.
Two Methods of Achieving Mutual Exclusion with Busy Waiting
1) Peterson’s Algorithm (for 2 processes): Uses two variables: flag[] and turn.
Each process sets its flag to true and gives turn to the other.
Only one process can enter the critical section.
Key Idea : A process waits in a loop (busy waiting) if the other process has priority (a C
sketch is given after method 2 below).
2) Lock Variables (Software Lock):A simple boolean variable used to indicate if the
critical section is occupied.
If lock = false, a process can enter and sets lock = true.
Other processes keep checking the lock (busy waiting) until it becomes false.
Eg. : while (lock) ;   // busy wait until the lock becomes false
lock = true;           // mark the critical section as occupied
// critical section
lock = false;          // release the lock
(Note: the test and the set above are not atomic, so two processes can still enter together;
hardware support such as a test-and-set instruction is needed to make this safe.)
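A sketch in C of Peterson's algorithm (method 1 above); C11 atomics are used so the shared variables behave as intended. This shows only the entry and exit code, not a full two-thread demo:

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];    /* flag[i] = true: process i wants to enter the critical section */
atomic_int  turn;       /* which process must wait if both are interested */

void enter_region(int i) {               /* i is 0 or 1 */
    int other = 1 - i;
    atomic_store(&flag[i], true);        /* show interest */
    atomic_store(&turn, other);          /* give the turn to the other process */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                                /* busy wait while the other process has priority */
}

void leave_region(int i) {
    atomic_store(&flag[i], false);       /* done with the critical section */
}

int main(void) {
    enter_region(0);
    /* ... critical section ... */
    leave_region(0);
    return 0;
}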
Diagram
Conclusion : Mutual Exclusion is necessary to protect shared resources.
With busy waiting, the CPU is actively used while the process waits.
Algorithms like Peterson’s and Lock Variable help achieve this, but they can be inefficient
due to CPU wastage.
File System
Definition : A File System is a part of the Operating System that manages data storage on
devices like hard disks and SSDs. It decides how data is stored, organized, retrieved, and
protected.
Different operating systems use different types of file systems. Some common ones:
a) FAT (File Allocation Table) : Used in older systems and USB drives. Simple and
widely supported. Slower for large drives.
b) NTFS (New Technology File System) : Used in Windows OS . Supports large files, file
permissions, encryption. More secure and reliable.
c) ext3/ext4 (Extended File System) : Used in Linux. ext4 is faster and supports
journaling (for recovery). Supports large volumes and better performance.
d) HFS+ / APFS (Apple File Systems) : Used in macOS. APFS is optimized for SSDs and
supports snapshots.
Diagram
File Operations in OS
Definition: File operations are actions performed on files by the Operating System (OS) to
manage data stored on storage devices (a small example using these operations follows the
list).
1. Create
A new file is created; the OS allocates space for it and adds a directory entry.
2. Open
The file is opened before use; the OS locates it and returns a file handle to the program.
3. Read
Data is read from the file into memory.
OS uses file pointer to know where to start reading.
4. Write
Data is written into a file from memory.
OS updates the file contents and metadata.
5. Append
Adds new data at the end of the file without erasing existing data.
6. Close
After operations are done, the file is closed.
OS saves file info and releases resources.
7. Delete
File is removed from memory/storage.
OS frees up the space.
8. Seek (Reposition)
Changes the file pointer to read/write from a specific position in the file.
Helps in random access.
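A small sketch in C that uses the operations listed above (create/open, write, seek, read, close, delete); the file name notes.txt is illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[16];

    int fd = open("notes.txt", O_CREAT | O_RDWR, 0644);  /* create the file and open it */
    if (fd < 0) return 1;

    write(fd, "hello world", 11);        /* write data into the file */
    lseek(fd, 6, SEEK_SET);              /* seek: move the file pointer to offset 6 */
    read(fd, buf, 5);                    /* read back "world" */
    buf[5] = '\0';
    printf("read back: %s\n", buf);

    close(fd);                           /* close the file */
    unlink("notes.txt");                 /* delete the file */
    return 0;
}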
Diagram
Conclusion : A file system handles how data is stored and retrieved.
Different types are used by different OS. Consistency ensures reliability, and proper
implementation gives performance and safety.