Operating System Model Answer Paper

A Model Answer Paper of Operating System for B.E level. @Bamu
OS

System calls :
Definition : System calls are special functions used by user programs to request services
from the operating system (like reading files, creating processes, etc.).
Types of System Calls :
1. Process Control : These system calls manage processes (start, end, or wait).
Examples: {i} exit() – Terminate a process {ii} wait() – Wait for a child process
{iii} exec() – Replace a process image with another {iv} fork() – Create a new process

2. File Management : Used to work with files: open, read, write, or close them.
Examples: {i} open(), close() {ii} read(), write()
{iii} create(), delete() {iv} seek() – Move the file pointer

3. Device Management : Used to control hardware devices like printers, keyboards, etc.
Examples : {i} request_device() – Ask the OS to use a device {ii} release_device() – Free the
device {iii} read_device(), write_device()

4. Information Maintenance : Provides useful info about the system or a process.
Examples : {i} getpid() – Get process ID {ii} alarm() – Set a timer {iii} gettime(), settime()

5. Communication (Interprocess Communication) : Used when processes need to talk
or share data with each other.
Examples: {i} pipe() – Create a communication channel {ii} shmget() – Get a shared memory segment
{iii} send(), receive() – Message passing

Conclusion : System calls help user programs perform important tasks by asking the OS
for help. They are grouped into: process control, file management, device management,
information maintenance, and communication.
Diagram :
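The process-control calls above can be sketched in Python (a minimal illustration, assuming a POSIX system — os.fork is not available on Windows):

```python
import os

# fork() creates a child process; each process then takes its own branch.
pid = os.fork()
if pid == 0:
    # Child process: report its PID, then terminate via the exit call.
    print("child pid:", os.getpid())
    os._exit(0)
else:
    # Parent process: wait() blocks until the child terminates.
    finished, status = os.waitpid(pid, 0)
    print("parent reaped child:", finished == pid)
```

The parent resumes only after waitpid() returns, which is exactly the wait() behaviour described above.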
Deadlock :
Definition : A deadlock is a situation in a multiprogramming system where a set of
processes are blocked forever, waiting for each other to release resources. This happens
when the system enters a state where each process in the set is waiting for an event that
can only be caused by another process in the set.

Conditions for Deadlock:

1) Mutual Exclusion: Only one process can use a resource at any given time.

2) Hold and Wait: A process holding at least one resource is waiting for additional
resources.

3) No Preemption: Resources cannot be forcibly taken away from a process holding
them.

4) Circular Wait: A circular chain of processes exists, where each process is waiting for
a resource held by the next process.

Diagram

Deadlock Detection Process:


1) Resource Allocation Graph: In this method, a system graph is maintained, where
each node represents either a process or a resource.

2) Wait-for Graph: This is a simplified version of the resource allocation graph, where
we only consider processes that are waiting for other processes.

3) Banker’s Algorithm: Strictly speaking, the Banker’s algorithm is a deadlock
avoidance technique: it grants a resource request only if the resulting allocation
leaves the system in a safe state, so the system never reaches deadlock.
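The wait-for graph method above amounts to cycle detection. A minimal sketch (an assumed illustration, not from the paper), using depth-first search over a wait-for graph:

```python
def has_deadlock(wait_for):
    """wait_for maps each process to the set of processes it waits on."""
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, ()):
            if q in on_stack:              # back edge => cycle => deadlock
                return True
            if q not in visited and dfs(q):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits for P2 and P2 waits for P1: a circular wait, hence deadlock.
print(has_deadlock({"P1": {"P2"}, "P2": {"P1"}}))   # True
print(has_deadlock({"P1": {"P2"}, "P2": set()}))    # False
```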
Methods for Deadlock Recovery:
1) Process Termination :
Abort all deadlocked processes: This approach simply stops all the processes involved in
the deadlock. After termination, resources can be freed and reassigned.

Abort one process at a time: In some cases, only one process is terminated at a time to
break the deadlock. The system can check which process is causing the deadlock and
remove it from the resource cycle.

2) Resource Preemption:
Pre-empt resources from processes: The system can forcefully take resources away from
one process and allocate them to another process, breaking the circular wait. However,
this method is tricky, as it might cause inconsistency or corruption in data.

Rollback: Some systems allow processes to roll back to a safe state before they entered
the deadlock, so the system can try re-allocating resources in a different order.

Diagram

Mutual Exclusion :
Definition : Mutual Exclusion is one of the four necessary conditions for deadlock. It
means that a resource can only be used by one process at a time. If a process holds a
resource, no other process can use it until the process releases it.

Example : Printer : If two processes want to print a document, only one process can use
the printer at a time. The other must wait until the printer is available.
Diagram

Deadlock Prevention : A deadlock occurs in a system when two or more processes are
blocked forever, waiting for each other to release resources. Deadlock prevention is a
method of ensuring that deadlock does not occur by eliminating one or more of the
necessary conditions that lead to it.

To prevent deadlock, we must break one of the four necessary conditions.


1) Eliminate Mutual Exclusion:
Description: Some resources, like printers or disk drives, require mutual exclusion to
function properly.
Prevention: We can’t always eliminate mutual exclusion, but virtual resources (like shared
memory) or stateless resources (e.g., read-only data) can be designed to avoid mutual
exclusion.

2) Eliminate Hold and Wait:
Description: A process holds a resource while waiting for another.
Prevention: Require processes to request all resources at once, or to release all held
resources before requesting more and then re-request them.

3) Eliminate No Preemption:
Description: Once a process has acquired a resource, it cannot be forcibly taken away.
Prevention: If a process holding resources requests another resource that cannot be
granted immediately, the OS preempts (forcibly takes away) its held resources and
assigns them to other processes.

4) Eliminate Circular Wait:
Description: A set of processes exists such that each process is waiting for a resource held
by another process in the set.
Prevention: Resource Ordering : Assign a global order to all resource types. Processes must
request resources in increasing order of their assigned numbers. This ensures that circular
waiting is impossible because a process can only wait for resources with higher numbers.

Diagram
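Resource ordering can be sketched with locks (an assumed illustration: the lock numbers stand in for the global resource order):

```python
import threading

# Each resource type gets a fixed number; locks stand in for resources.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire the requested locks in increasing order of their numbers,
    so no two processes can ever wait on each other in a cycle."""
    acquired = []
    for num in sorted(needed):
        locks[num].acquire()
        acquired.append(num)
    return acquired

order = acquire_in_order({3, 1, 2})
print(order)            # locks are always taken as [1, 2, 3]
for num in order:
    locks[num].release()
```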

Summary : By removing one of the four necessary conditions (Mutual Exclusion, Hold and
Wait, No Preemption, Circular Wait), we can prevent deadlock from occurring. For
instance, forcing processes to request all resources at once (hold and wait) or requiring
resources to be preempted when needed (no preemption) can ensure that deadlock is
avoided.

Different/Various Process States.

Definition of Process : A process is a program in execution. It is an active entity with
program code, data, and execution state. When a process runs, it moves through different
states, managed by the OS.

Various Process States:

1) New : The process is being created. It has not yet started execution.
2) Ready : The process is waiting to be assigned to the CPU. It is ready to run but
waiting in the ready queue.
3) Running : The process is currently using the CPU. Only one process can be in this
state on a single-core CPU.
4) Waiting / Blocked : The process is waiting for an I/O operation or some event to
complete. Example: waiting for user input or a file to load.
5) Terminated / Exit : The process has finished execution or has been killed. It is
removed from the system.
6) Suspended Ready : The process is ready but swapped out of memory (kept on disk).
It will go back to the ready state when brought back into memory.
7) Suspended Blocked : The process is blocked and also swapped out. It waits for the
event and to be moved back into memory.

State Transition Diagram
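The legal moves between states can be written down as a small table (an assumed sketch; the exact transition set varies between textbooks):

```python
# Each state maps to the states it may move to next.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspended_ready"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready", "suspended_blocked"},
    "suspended_ready": {"ready"},
    "suspended_blocked": {"suspended_ready", "waiting"},
    "terminated": set(),
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: the process starts I/O
print(can_move("waiting", "running"))  # False: it must pass through ready
```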

Conclusion : Understanding process states helps the OS manage CPU scheduling,
memory, and I/O efficiently. Every state transition is controlled by the OS to ensure smooth
multitasking.

Operating System
Definition : An Operating System is system software that acts as an interface between
the user and the computer hardware. It manages all hardware and software resources,
such as CPU, memory, input/output devices, and files.

Functions of OS:
1) Process management : Handles creation, scheduling, and termination of processes.

2) Memory management : Keeps track of each byte in memory. Allocates memory to
processes and frees it after use. Avoids memory wastage and protects memory
access.

3) File management : Manages reading, writing, and storing of files. Maintains file
structure, names, and permissions. Ensures data is stored and retrieved properly.

4) Device management : Controls input/output devices like keyboard, printer, etc.
Uses device drivers for communication.

5) Security and control : Protects the system from unauthorized access. Manages
passwords, permissions, and encryption. Ensures data privacy and system safety.

Diagram

Types of Operating Systems:

1) Batch OS : i) No user interaction. ii) Jobs are collected and run in groups.

2) Time-Sharing OS : i) Many users use the system at the same time.
ii) CPU time is shared among tasks.

3) Distributed OS : i) Connects multiple computers to work like one system.
ii) Resources are shared.

4) Real-Time OS : i) Gives quick response.
ii) Used in critical systems like robots or hospital machines.

5) Network OS : i) Manages computers connected in a network. Eg : Windows Server.

6) Mobile OS : Used in smartphones and tablets. Example: Android, iOS.


Diagram

Multiprogramming : It means many programs are in memory at the same time. The CPU
picks one program, runs it, and if it is waiting (like for input), the CPU switches to another.
Goal: To keep the CPU busy all the time. Improves performance and CPU efficiency.
Example: If one program waits for input, the CPU works on another. So time is not wasted.

Diagram :

Function : The Operating System manages all computer resources. It controls processes,
memory, files, and devices. Its main function is to run the system smoothly.

Objective : The objective of an OS is to make the system fast and efficient.
It connects the user with the hardware. The goal is to complete all tasks without errors or delays.

Use : We use the OS to operate the computer system. It helps run applications, access
files, and manage hardware. Users interact with the system through the OS.
Services: OS provides services like file handling, memory management, and device
control. These services help programs run properly. They ensure communication between
software and hardware.

Diagram

Features : OS features include multitasking, security, and user interface. These are the
special qualities of an OS. They make the system user-friendly and powerful.

OS as a Resource Manager
The Operating System (OS) is like the manager of a computer. It controls and manages the
hardware resources like the CPU, memory, devices, and files.

1) CPU Management: OS decides which process gets the CPU’s time to run. It makes
sure no process is left waiting too long.

2) Memory Management: OS keeps track of the computer’s memory. It gives memory
to programs when needed and takes it back when they are done.

3) File Management: OS organizes and stores files on the computer. It makes sure the
right program can access the right file at the right time.

4) Device Management: OS manages devices like printers and keyboards. It makes
sure that programs can use these devices properly.

5) Resource Allocation: OS makes sure different programs do not fight over the same
resources, like the printer or memory.
Diagram

Summary : In simple words, the OS is responsible for sharing all the computer’s resources
in a way that makes everything work together smoothly.

Process Control Block (PCB)


Definition : The Process Control Block (PCB) is a data structure used by the Operating
System (OS) to keep track of all the information about a process. Each process that is
created by the OS gets its own PCB, which holds essential information required to manage
and control the process.

Components of PCB:
1) Process ID (PID) : Every process is assigned a unique identification number called
the Process ID. This helps the OS differentiate between processes.

2) Process State : It stores the current state of the process, like whether it’s running,
waiting, ready, or terminated. This helps the OS know what the process is doing.

3) Program Counter : The program counter stores the address of the next instruction to
be executed for the process. It helps in keeping track of where the process was in its
execution.

4) Process Priority : The priority of the process may be stored in the PCB, especially in
systems that use priority scheduling for process execution. It helps the OS decide
which process should run next.

5) CPU Registers : These are the CPU registers used by the process during execution. It
includes values like accumulator, general-purpose registers, etc. When a process is
interrupted, the values in these registers are saved in the PCB.
6) Memory Management Information : The OS keeps track of the memory allocated to
the process, such as the base and limit registers. This helps the OS manage
memory for different processes.

7) Accounting Information : It includes details like CPU time used, process priority, and
other statistics related to the process. This information helps the OS in process
scheduling and resource allocation.

8) I/O Status Information : This contains information about the I/O devices being used
by the process, such as files opened or devices requested. The OS uses this to
manage I/O operations.

Role of PCB :
1) Process Management: The OS uses the PCB to manage processes efficiently. It
helps in switching between processes (context switching).

2) Context Switching: When the OS switches between processes, the current
process's state is saved in its PCB, and the new process's state is loaded from its
PCB. This allows the process to resume exactly where it left off.
3) Resource Management: By storing information about memory, CPU usage, and I/O
devices in the PCB, the OS can allocate resources effectively.

Conclusion : In simple terms, the Process Control Block (PCB) is like a file where the OS
stores all the important information about a process. Without the PCB, the OS would not
be able to keep track of processes or switch between them properly.
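The PCB fields listed above can be illustrated as a record (an assumed, simplified sketch; real PCBs are kernel structures with many more fields):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                      # unique process ID
    state: str = "new"            # new / ready / running / waiting / terminated
    program_counter: int = 0      # address of the next instruction
    priority: int = 0             # used by priority scheduling
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

# The OS creates a PCB per process and updates it on every state change.
pcb = PCB(pid=42, priority=5)
pcb.state = "ready"
print(pcb.pid, pcb.state)
```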

Diagram

Inter-Process Communication (IPC)


Definition: Inter-Process Communication (IPC) is a mechanism that allows processes
(running programs) to communicate with each other and share data. Since processes run
independently in an operating system, IPC is essential for them to work together, exchange
information, or synchronize their actions.
Types of IPC
1) Message Passing: Processes send and receive messages to communicate.
i) Direct Communication: One process sends a message directly to another.
ii) Indirect Communication: Messages are sent to a queue and then read by another
process.
Eg : A process A sends a message to process B to request data or perform an action.

2) Shared Memory: Processes can share a section of memory. One process writes to it,
and another reads from it. It is faster because the processes directly access the
memory.
Eg : Process A writes data to the shared memory, and Process B reads from it.

IPC Methods
1) Pipes: A pipe allows one process to send data to another process. Anonymous
pipes are for related processes (like parent-child). Named pipes allow unrelated
processes to communicate.
Eg : A producer process writes to the pipe, and a consumer process reads from it.

2) Message Queues: Messages are placed in a queue, and another process can
retrieve them. This method is used to store data temporarily.
Eg : Process A sends a message to a queue, and process B retrieves it when needed.

3) Semaphores: These are used to control access to shared resources. They make sure
only one process accesses a resource at a time.
Eg : A semaphore ensures that only one process can access a critical section of the code at
a time.

4) Sockets: Used for communication between processes on different computers over
a network. Example: a client-server setup.
Eg : A web browser (client) communicates with a web server using sockets to request and
receive data.
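The pipe method above can be sketched in Python (a minimal illustration, assuming a POSIX system — os.fork is not available on Windows):

```python
import os

# Parent and child communicate through an anonymous pipe.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(r)
    os.write(w, b"hello from child")   # producer writes to the pipe
    os._exit(0)
else:
    os.close(w)
    msg = os.read(r, 1024)             # consumer reads from the pipe
    os.close(r)
    os.waitpid(pid, 0)
    print(msg.decode())
```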

Why IPC is Needed:


• Sharing Data: Processes need to share data with each other.
• Synchronization: It makes sure processes do not interfere with each other.
• Cooperation: Processes need to work together for a task.
• Coordination: IPC is necessary for processes to work together and coordinate tasks.
Eg : one process might need to wait for another process to complete before it can continue.

Advantages of IPC
• Data Sharing: Allows processes to share large amounts of data efficiently.
• Resource Sharing: Enables multiple processes to share resources like memory, devices,
or data without conflicts.
• Synchronization: Helps processes synchronize and cooperate, avoiding race conditions
and deadlocks.

Diagram

Conclusion : IPC allows processes to communicate, share data, and coordinate without
interfering with each other. Methods like message passing, shared memory, and
semaphores help make this possible.

Memory management with bitmap and linked list.


Introduction : Memory management is a function of the operating system that keeps track
of each byte in a computer’s memory & allocates memory blocks to programs & processes.
Two common techniques used by OS for memory management are:
Bitmap Method & Linked List Method

Bitmap / Bit Vector Method


What is Bitmap?
A bitmap is a sequence of bits where each bit represents a memory block.
Bit value 0 = block is free
Bit value 1 = block is occupied

How it works:
Memory is divided into fixed-size blocks.
A bit map keeps track of the status of each block.
To allocate memory, the OS searches for consecutive 0s (free blocks).
To free memory, the corresponding bits are set back to 0.

Example :
Bitmap: 1 1 0 0 0 1 0
(blocks 0 and 1 are used, blocks 2 to 4 are free, block 5 is used, block 6 is free)

Advantages : Simple and easy to implement. Fast checking of free/used memory blocks.
Disadvantages : Searching for large free blocks can be slow. Not flexible for
variable-sized memory requests.
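The search for consecutive 0s described above can be sketched as a first-fit bitmap allocator (an assumed illustration, not from the paper):

```python
def allocate(bitmap, n):
    """Find n consecutive free blocks (0s), mark them used (1s),
    and return the start index, or -1 if no hole is big enough."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            start = i - n + 1
            for j in range(start, i + 1):
                bitmap[j] = 1
            return start
    return -1

bm = [1, 1, 0, 0, 0, 1, 0]
print(allocate(bm, 2))   # 2: blocks 2 and 3 are allocated
print(bm)                # [1, 1, 1, 1, 0, 1, 0]
```

Freeing memory is the reverse: set the corresponding bits back to 0.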

Linked List Method


Free and allocated memory blocks are stored as nodes in a linked list.
Each node contains: Start address , Length of the block , Status (Free or Allocated)
Pointer to next block

How it works:
When a program requests memory, the OS searches the list for a free block large enough.
The block is split or allocated, and the list is updated.
When memory is freed, blocks may be merged (coalesced) if adjacent blocks are also free.

Example :
[Free: 0-99] → [Allocated: 100-199] → [Free: 200-299]

Advantages : Flexible for variable-size allocations. Supports dynamic memory usage.
Disadvantages : Searching can be slow. Needs extra memory for pointers and status info.
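The search-and-split step described above can be sketched as follows (an assumed illustration using (start, length, status) tuples in place of real list nodes):

```python
def alloc(blocks, size):
    """First-fit: find a free block large enough, split off the
    remainder, and return the start address (or -1 on failure)."""
    for i, (start, length, status) in enumerate(blocks):
        if status == "free" and length >= size:
            blocks[i] = (start, size, "alloc")
            if length > size:   # split: remainder stays free
                blocks.insert(i + 1, (start + size, length - size, "free"))
            return start
    return -1

mem = [(0, 100, "free"), (100, 100, "alloc"), (200, 100, "free")]
print(alloc(mem, 60))    # 0
print(mem[0], mem[1])    # (0, 60, 'alloc') (60, 40, 'free')
```

Freeing would mark a block "free" again and merge (coalesce) it with free neighbours.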

Conclusion : Both bitmap and linked list methods are used for tracking memory usage.
Bitmap is best for fixed-size memory allocation.
Linked List is useful for dynamic, variable-size memory allocation.
Choosing the method depends on system design, speed, and flexibility needs.

Diagram
Paging
Definition : Paging is a memory management technique used by the Operating System.
It divides the process into small fixed-size blocks called pages, and divides the main
memory into blocks of the same size called frames.
Pages are loaded into available memory frames, so processes don’t need to be in one
continuous block, which helps reduce memory wastage (called fragmentation).

Types of Paging:

1) Simple Paging: All pages and frames are of equal size. Pages are loaded into any free
memory frame. There is no external fragmentation, but internal fragmentation may
occur.

2) Demand Paging : In this type, only the required pages are loaded into memory when
needed. This saves memory and increases efficiency. Pages not in memory cause a
page fault and are then loaded.

3) Virtual Paging (or Virtual Memory Paging): Uses secondary memory (like a hard disk)
as an extension of RAM. Pages are moved between RAM and disk as needed. It
allows large programs to run even if RAM is small.

Why Paging is Needed?


In real systems, free memory is not continuous.
To avoid external fragmentation, the OS divides both physical and logical memory into
fixed-size blocks.
It ensures efficient memory allocation and allows multiprogramming.

Components of Paging
• Pages : Fixed-size blocks of logical memory.
• Frames : Fixed-size blocks of physical memory (RAM).
• Page Table : A table used by OS to map logical pages to physical frames.
• Logical Address (Virtual Address) : The address used by programs.
• Physical Address: Actual location in RAM where data is stored.

Advantages : No external fragmentation. Efficient memory usage. Simple to implement.
Supports virtual memory.
Disadvantages : Page tables can be large. Internal fragmentation (last page may be
partially empty). Extra overhead due to address translation.

Basic Paging Techniques (Types):

1) Simple Paging : Fixed-size pages and frames, one page table per process. Easy, but
page tables can become large.
2) Hierarchical Paging : Breaks the page table into multiple levels. Reduces the size of
each table. Used for large address spaces (like 32-bit or 64-bit).
3) Hashed Paging : Uses a hash table to map pages. Used when the address space is
very large (like in 64-bit systems).
4) Inverted Paging : One page table for the whole system, not per process. Each entry
records which process/page is stored in that frame. Saves memory, but lookup is
slower.
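The page-to-frame mapping can be sketched as a simple translation function (an assumed illustration with a 4 KB page size and a dict standing in for the page table):

```python
PAGE_SIZE = 4096

def translate(page_table, logical):
    """Split a logical address into (page, offset) and map the page
    to its frame; a missing entry would trigger a page fault."""
    page, offset = divmod(logical, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault")  # demand paging would load it here
    return page_table[page] * PAGE_SIZE + offset

table = {0: 5, 1: 2}           # page -> frame
print(translate(table, 4100))  # page 1, offset 4 -> frame 2 -> 8196
```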

Diagram

Conclusion : Paging helps manage memory better by breaking programs and memory into
fixed-size parts, and the types like simple, demand, and virtual paging help improve
efficiency and memory usage.
Various paging techniques like simple, hierarchical, and inverted paging are used based on
system architecture.

Linked List
Definition : A Linked List is a linear data structure used to store a collection of elements.
Unlike arrays, linked lists do not store elements in continuous memory locations. Instead,
each element (called a node) contains two parts:
Data – The value or information.
Pointer (Link) – The address of the next node in the list.

Types of Linked Lists :

1) Singly Linked List: Each node points to the next node only. The last node points to
NULL (end of the list).
2) Doubly Linked List : Each node has two pointers: one to the next node and one to
the previous node. It allows forward and backward movement.
3) Circular Linked List: In this type, the last node points back to the first node, forming
a circle. It can be singly or doubly circular.

Advantages : Dynamic memory usage (no fixed size). Easy to insert/delete nodes
(especially in the middle).
Disadvantages : Slower access (no direct index like arrays). Extra memory used for
storing pointers.
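A singly linked list as described above, in a minimal sketch:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data    # the value or information
        self.next = next    # pointer (link) to the next node

head = Node(1, Node(2, Node(3)))   # 1 -> 2 -> 3 -> None

def to_list(node):
    """Walk the chain of next pointers and collect the data."""
    out = []
    while node:
        out.append(node.data)
        node = node.next
    return out

print(to_list(head))   # [1, 2, 3]
```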

Diagram

Logical Address
Definition : A logical address is the address that the CPU generates when a program is
running. It is also called a virtual address. This address is used by the program, but it is not
the real location in physical memory. The program thinks it is using this address, but the
actual data may be somewhere else. Logical addresses are useful in multi-tasking, as they
give each process its own memory view.

Physical Address
Definition : A physical address is the actual address in RAM (main memory) where the data
or instruction is stored. It is used by the memory unit to fetch or store data. This address is
calculated by the Memory Management Unit (MMU). The user or program cannot see
physical addresses directly. All data in memory finally goes to or comes from a physical
address.

How They Work Together (Address Translation):

The CPU generates a logical address.


The MMU adds a base address (also called relocation address) to it and converts it into a
physical address.
Formula: Physical Address = Base Address + Logical Address
Example : If base = 1000, logical address = 50
Then physical address = 1000 + 50 = 1050
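The relocation step, with the limit check the MMU uses for protection (a minimal sketch; the limit value is an assumed parameter):

```python
def mmu(base, limit, logical):
    """Physical Address = Base Address + Logical Address,
    rejecting any logical address outside the process's range."""
    if logical >= limit:
        raise MemoryError("address out of range")
    return base + logical

print(mmu(1000, 500, 50))   # 1050, matching the example above
```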
Diagram

Conclusion : The logical address is what the program uses. The physical address is where
the actual data is. MMU converts logical into physical addresses during execution.This
helps in safe, fast, and efficient memory management.

Swapping
Definition : Swapping is a memory management technique used by the operating system
to increase the number of processes that can run. It means moving a process from main
memory (RAM) to the hard disk (secondary memory), and later bringing it back to main
memory when needed.
Why Swapping is Needed?
The main memory is limited and can’t hold all running processes at the same time.
So, when memory is full, and a new process needs to run, the OS swaps out (removes) a
process from RAM to free up space. When the swapped-out process is needed again, it’s
swapped back in.

How Swapping Works (Steps):


A process is running in main memory.
If another process needs memory and RAM is full, the OS selects a process to swap out.
That process is moved to the hard disk (usually in a space called swap space).
Now the new process is loaded into the free memory.
When the old process is needed again, it is swapped back into RAM.

Advantages : Increases multiprogramming: more processes can be managed at a time.
Efficient use of memory: frees up RAM space for active processes. Supports big programs
even if RAM is small.
Disadvantages : Time-consuming: moving data between RAM and disk takes time (called
swap time). Can cause thrashing if swapping happens too frequently. Hard disk is slower
than RAM, so performance may reduce.

Swapping : Used more commonly in older systems. It is now part of the virtual memory
system in modern OS like Windows and Linux (via swap files or partitions).

Diagram

Conclusion : Swapping is a technique where processes are moved between RAM and disk
to manage memory efficiently. It helps in handling more processes, but too much swapping
can slow down the system.
Goal of I/O Software in Operating System
Definition : I/O (Input/Output) software controls how a computer interacts with external
devices like keyboard, mouse, printer, disk, etc.

Main Goals of I/O Software:

1. Device Independence:
The same I/O software should work for different devices.
Example: You can print from any printer, not just one brand.

2. Uniform Naming:
Devices should be accessed using simple names (like disk1, printer) instead of hardware
details.

3. Error Handling:
It detects and handles device errors (like paper jam, read/write failure) automatically.

4. Buffering and Caching:
It uses buffers to store data temporarily for smooth transfer.
Caching helps store frequently used data for faster access.

Diagram

Conclusion : The goal of I/O software is to make communication with devices simple,
reliable, and efficient, while hiding hardware complexity from users and applications.
Buddy System Reallocation in Operating System
What is the Buddy System?
The buddy system is a memory allocation method.
It divides memory into blocks of sizes that are powers of 2 (like 4 KB, 8 KB, 16 KB, etc.).
It helps in efficient allocation and deallocation of memory.

Reallocation in Buddy System:


1. Allocation : If a process asks for memory (say 6 KB), the system finds the smallest block
that is power of 2 and ≥ 6 KB, which is 8 KB.
If no 8 KB block is available, a larger block (say 16 KB) is split into two 8 KB “buddies”, and
one is given to the process.

2. Deallocation (Reallocation) : When the process finishes, the 8 KB block is freed.
The OS checks if its buddy (the other 8 KB block) is also free.
If yes, both buddies are merged (coalesced) into a 16 KB block again.
This merging continues recursively, reducing fragmentation.
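Two facts from the steps above can be sketched directly: requests are rounded up to the next power of two, and a block's buddy address differs from it in exactly one bit (an assumed illustration):

```python
def block_size(request):
    """Round a request up to the smallest power of 2 that fits it."""
    size = 1
    while size < request:
        size <<= 1
    return size

def buddy_of(addr, size):
    """A block and its buddy differ only in the bit equal to the size."""
    return addr ^ size

print(block_size(6 * 1024))   # 8192: a 6 KB request gets an 8 KB block
print(buddy_of(0, 8192))      # 8192: blocks at 0 and 8192 are buddies
```

This bit trick is what makes finding the buddy for merging fast.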

Advantages : Fast allocation and deallocation. Easy to implement. Less external
fragmentation. Efficient coalescing.
Disadvantages : Internal fragmentation. Memory wastage. Limited block sizes. Extra
memory needed for management.

Diagram

Conclusion : The buddy system reallocation is a smart way to manage memory by using
splitting and merging of memory blocks in powers of 2, making memory reuse and
management easier and faster.

Virtual Memory
Definition : Virtual Memory is a memory management technique used by the operating
system. It allows a computer to run big programs or many programs at the same time, even
if there is not enough RAM. It uses a part of the hard disk as extra memory. So, the system
thinks it has more RAM than it actually does.
Why Virtual Memory is Needed?

RAM is limited, but programs can be large.
To avoid memory shortage, the OS stores part of programs on the hard disk and loads
only the needed parts into RAM.
It helps in multiprogramming and avoids out-of-memory errors.

How Virtual Memory Works


The OS gives each program a large logical address space (virtual addresses).
When a part of the program is needed, it is loaded into RAM.
The rest stays on the hard disk (called swap space or page file).
If a program needs a part that is not in RAM, a page fault occurs.
The OS swaps the required part from disk to RAM.

Techniques Used in Virtual Memory

1) Paging : Breaks memory into fixed-size blocks called pages (in virtual memory) and
frames (in physical memory). Only required pages are loaded into RAM.
2) Segmentation : Divides memory into logical segments like code, stack, and data.
Each segment is loaded when needed.
3) Demand Paging : Loads pages only when they are required, not in advance. Saves
RAM and increases performance.
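Demand paging can be sketched by counting page faults over a reference string (an assumed illustration using a FIFO replacement policy):

```python
from collections import deque

def page_faults(refs, frames):
    """Load pages only on demand; evict the oldest page when RAM is full."""
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1                # page fault: load from disk
            if len(memory) == frames:
                memory.popleft()       # evict the oldest page (FIFO)
            memory.append(page)
    return faults

print(page_faults([1, 2, 3, 1, 4, 1], 3))   # 5 faults with 3 frames
```

Better replacement algorithms (like LRU) try to lower this fault count.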

Advantages : Can run large applications even with small RAM. More processes can be
kept in memory. Increases CPU utilization. Provides memory isolation and protection.
Disadvantages : Slower than real RAM (because the hard disk is slower). Too much usage
causes thrashing (the system keeps swapping). Needs good page replacement algorithms.

Diagram
Conclusion : Virtual memory is a smart system that uses hard disk as extra RAM.
It helps in running big programs, improves multitasking, and ensures the system does not
crash when RAM is full.

Disk Scheduling
Definition : Disk scheduling is a method used by the Operating System to decide the order
in which disk read/write requests are processed. It is needed because multiple processes
may request data from the hard disk at the same time, and the disk head takes time to
move.

Goal : Reduce seek time (time taken to move the disk arm).
Increase efficiency and speed of disk operations.

Why Disk Scheduling is Important

Disk is slower than CPU and RAM.
Efficient scheduling helps in better performance of the system.
It improves response time and throughput, and reduces waiting time.

Types of Disk Scheduling Algorithms.


1) FCFS (First-Come First-Serve) Requests are handled in the order they arrive.
Simple but may be inefficient if requests are far apart.
Example: Requests = [10, 50, 20]
If head is at 0 → Moves: 0→10→50→20
Total movement = 10+40+30 = 80

2) SSTF (Shortest Seek Time First) : Picks the request that is closest to the current head
position. Faster than FCFS but may cause starvation (some requests wait too long).
Example: Head at 50, requests = [40, 20, 60]
Go to 60 (closest), then 40, then 20.

3) SCAN (Elevator Algorithm) : Head moves in one direction serving requests, then
reverses at the end. Like an elevator going up and down.
Example : Head at 50, moves right to max (say 200), then reverses and serves left-side
requests.

4) C-SCAN (Circular SCAN) : Like SCAN but instead of reversing, the head goes back to
start without serving in reverse. Provides uniform wait time.

5) LOOK : Like SCAN, but the head only goes as far as the last request in each
direction. It doesn't go to the end unless needed.

6) C-LOOK : Like C-SCAN but the head goes only as far as the last request, then jumps
to the start.
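The FCFS and SSTF examples above can be computed directly (a minimal sketch; ties in SSTF are broken by list order here):

```python
def fcfs(head, requests):
    """Serve requests in arrival order; sum the head movement."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf(head, requests):
    """Always serve the pending request nearest the current head."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(fcfs(0, [10, 50, 20]))    # 80, as in the FCFS example above
print(sstf(50, [60, 40, 20]))   # 50: path 50 -> 60 -> 40 -> 20
```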
Diagram

Conclusion: Disk scheduling is needed to efficiently manage multiple disk requests.
Different algorithms like FCFS, SSTF, SCAN, LOOK etc. are used based on system needs.
Modern OS often use a mix of these for best performance.

Fragmentation
Definition : Fragmentation in Operating Systems happens when memory is wasted or
unused due to inefficient memory allocation. It reduces the efficiency of memory usage
and affects performance.

Types of Fragmentation : There are two main types of fragmentation.

1) External Fragmentation : Occurs when free memory is scattered in small blocks between
allocated blocks. Even if the total free memory is enough, a process may not get it because
the memory is not contiguous. Happens in contiguous memory allocation and
Segmentation.
Example : Free memory = 100 MB, but split as 30MB + 20MB + 50MB.
A process needing 60 MB cannot be allocated.
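The example above can be verified in two lines — the total is sufficient, but no single hole is:

```python
holes = [30, 20, 50]  # free memory split into scattered blocks (MB)
need = 60             # contiguous memory the process requires (MB)

print(sum(holes) >= need)             # True: 100 MB free in total
print(any(h >= need for h in holes))  # False: no single hole fits 60 MB
```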

2) Internal Fragmentation : Occurs when allocated memory is slightly larger than needed,
and the unused part is wasted inside the allocated space. Happens in Paging where fixed-
size frames may not fully fit the data.
Example: Page size = 4 KB, a process needs 6 KB.
It will use 2 pages = 8 KB, and 2 KB is wasted.
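The wasted space in the last page can be computed directly (the helper name `internal_fragmentation` is illustrative):

```python
import math

def internal_fragmentation(process_kb, page_kb):
    # Pages are fixed size, so round the process size up to whole pages;
    # the difference is wasted inside the final frame.
    pages = math.ceil(process_kb / page_kb)
    return pages * page_kb - process_kb

print(internal_fragmentation(6, 4))  # 2 KB wasted, as in the example
```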

Occurrence in Paging: Paging causes internal fragmentation, not external.


Since all pages and frames are of fixed size, any extra space in a frame that is not filled by
data is wasted inside the frame.

Occurrence in Segmentation: Segmentation causes external fragmentation.


Segments are of variable size, so free memory gets scattered.
A new segment may not get memory even if enough is available in total, because it’s not in
one block.
Diagram : 1) Segmentation 2) External Fragmentation & Internal Fragmentation

Conclusion : Fragmentation is a common memory problem in operating systems. Paging
avoids external fragmentation but causes internal fragmentation. Segmentation avoids
internal fragmentation but causes external fragmentation. A good memory management
system aims to minimize both.
Linux & Windows File System Difference
1. File system type : Linux commonly uses ext3/ext4; Windows uses NTFS (older versions used FAT32).
2. Case sensitivity : Linux file names are case-sensitive; Windows file names are case-insensitive.
3. Path format : Linux uses / as separator with a single root; Windows uses \ with drive letters (C:\).
4. Permissions : Linux uses owner/group/other permission bits; NTFS uses Access Control Lists (ACLs).
5. Journaling : ext4 and NTFS both support journaling for crash recovery; FAT does not.

RAID and its level in detail.


Definition : RAID stands for Redundant Array of Independent (or Inexpensive) Disks.
It is a data storage technique used to combine multiple physical disks into one logical unit
to improve performance, reliability, and fault tolerance.

Goals of RAID:
1. High performance (faster read/write)
2. Fault tolerance (no data loss if a disk fails)
3. Increased storage capacity

RAID Levels :
1) RAID 0 (Striping) : Data is split across disks for speed; no redundancy, so one disk failure loses all data.
2) RAID 1 (Mirroring) : Data is duplicated on two disks; survives one disk failure but halves usable capacity.
3) RAID 5 (Striping with Distributed Parity) : Needs at least 3 disks; parity spread across disks lets it survive one disk failure.
4) RAID 6 (Dual Parity) : Like RAID 5 but with two parity blocks; survives two disk failures.
5) RAID 10 (1+0) : Mirrored pairs that are striped; fast and fault tolerant, but expensive.
Diagram

Conclusion : RAID is used to improve speed, security, and reliability of data storage.
Different RAID levels are selected based on cost, performance, and fault tolerance
requirements.
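The fault-tolerance idea behind parity-based levels such as RAID 5 can be sketched with XOR: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the others (toy 4-bit blocks for illustration):

```python
# Three data blocks and their parity, as a RAID-5 stripe would store them.
d1, d2, d3 = 0b1010, 0b0110, 0b1100
parity = d1 ^ d2 ^ d3

# Suppose the disk holding d2 fails: rebuild it from the survivors.
rebuilt = d1 ^ d3 ^ parity
print(rebuilt == d2)  # True
```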

What is Mutual Exclusion?


Definition : Mutual Exclusion is a concept in Operating System that ensures only one
process can enter the critical section at a time. The critical section is a part of the program
where the process accesses shared resources like variables, files, etc. If multiple
processes access it at the same time, it can cause data inconsistency or corruption.

What is Busy Waiting?


Definition : Busy Waiting is when a process continuously checks a condition (like a lock) to
enter the critical section. The CPU keeps the process running in a loop, using resources,
without doing any useful work.

Requirements for Mutual Exclusion with Busy Waiting

Mutual Exclusion: Only one process must be in the critical section at any time.
Progress: If no process is in the critical section, one of the waiting processes should be
allowed to enter.
Bounded Waiting: A process should get its turn within a finite time.
No Preemption: A process should not be forcibly removed from the critical section.
Busy Waiting (optional): Process waits actively by looping, not sleeping.
Two Methods of Achieving Mutual Exclusion with Busy Waiting
1) Peterson’s Algorithm (for 2 processes): Uses two variables: flag[] and turn.
Each process sets its flag to true and gives turn to the other.
Only one process can enter the critical section.
Key Idea : A process waits in a loop (busy waiting) if the other process has priority.
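Peterson's algorithm can be sketched with two Python threads. This is an illustration only: it relies on CPython's GIL making individual loads and stores effectively atomic and unreordered; a real implementation needs hardware memory barriers.

```python
import threading

flag = [False, False]  # flag[i]: process i wants to enter
turn = 0               # whose turn it is to wait
counter = 0            # shared resource touched in the critical section

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(100):
        flag[me] = True
        turn = other                        # politely yield priority
        while flag[other] and turn == other:
            pass                            # busy wait (spin)
        counter += 1                        # critical section
        flag[me] = False                    # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 200: no increment was lost
```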

2) Lock Variables (Software Lock) : A simple boolean variable indicates whether the
critical section is occupied.
If lock = false, a process sets lock = true and enters.
Other processes keep checking the lock (busy waiting) until it becomes false.
Eg. :
while (lock);   // busy wait
lock = true;
// critical section
lock = false;
Note : This naive version is not fully safe — two processes can both read lock = false
before either sets it to true. Hardware test-and-set instructions make the check and
the set a single atomic step.

Diagram
Conclusion : Mutual Exclusion is necessary to protect shared resources.
With busy waiting, the CPU is actively used while the process waits.
Algorithms like Peterson’s and Lock Variable help achieve this, but they can be inefficient
due to CPU wastage.

File System
Definition : A File System is a part of the Operating System that manages data storage on
devices like hard disks and SSDs. It decides how data is stored, organized, retrieved, and
protected.

Types of File Systems

Different operating systems use different types of file systems. Some common ones:
a) FAT (File Allocation Table) : Used in older systems and USB drives. Simple and
widely supported. Slower for large drives.
b) NTFS (New Technology File System) : Used in Windows OS. Supports large files, file
permissions, and encryption. More secure and reliable.
c) ext3/ext4 (Extended File System) : Used in Linux. ext4 is faster and supports
journaling (for recovery). Supports large volumes and better performance.
d) HFS+ / APFS (Apple File Systems) : Used in macOS. APFS is optimized for SSDs and
supports snapshots.

Diagram

File System Consistency


Consistency means the file system is in a correct and stable state.

Why it’s important?


If a system crashes (power failure or software bug), the file system may become corrupted.

How to maintain consistency:


Journaling: Changes are first recorded in a journal (log) before applying to the disk.
Helps recover after crash.
Consistency Check Tools:
Tools like fsck (Linux) or chkdsk (Windows) are used to scan and fix errors.

File System Implementation :


Implementing a file system involves:
Directory Structure : How files are organized (single level, two level, tree, etc.).
File Control Block (FCB) : Contains metadata of the file – name, size, location, permissions.
Allocation Methods : How space is allocated on disk:
i) Contiguous Allocation – a file is stored in consecutive blocks (fast access, but causes external fragmentation).
ii) Linked Allocation – each file is a linked list of blocks (no external fragmentation, but slow random access).
iii) Indexed Allocation – uses an index block to store the addresses of all file blocks.
Mounting : Attaching a file system to a directory structure so it becomes accessible.
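Linked allocation from the list above can be sketched as a toy disk where each block stores its data plus the index of the next block, so a file's blocks need not be contiguous (block numbers here are made up for illustration):

```python
NIL = -1  # marks the last block of a file

# block number -> (data, next block number)
disk = {4: ("He", 9), 9: ("ll", 2), 2: ("o!", NIL)}

def read_file(start_block):
    # Follow the chain of "next" pointers, concatenating each block's data.
    data, block = "", start_block
    while block != NIL:
        chunk, block = disk[block]
        data += chunk
    return data

print(read_file(4))  # "Hello!"
```

Random access is slow here because reaching block N of a file requires walking the chain from the start — exactly the drawback noted above.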

Diagram

File Operations in OS
Definition: File operations are actions performed on files by the Operating System (OS) to
manage data stored on storage devices.

Types of File Operations:


1. Create
A new file is created with a name.
OS allocates space and metadata.
2. Open
OS opens an existing file to use it.
Loads file information into memory (like location, access mode).

3. Read
Data is read from the file into memory.
OS uses file pointer to know where to start reading.

4. Write
Data is written into a file from memory.
OS updates the file contents and metadata.

5. Append
Adds new data at the end of the file without erasing existing data.

6. Close
After operations are done, the file is closed.
OS saves file info and releases resources.

7. Delete
File is removed from memory/storage.
OS frees up the space.

8. Seek (Reposition)
Changes the file pointer to read/write from a specific position in the file.
Helps in random access.
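The operations above map directly onto Python's built-in file interface; a short walkthrough (file name and temporary directory are arbitrary):

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo.txt")

f = open(path, "w")      # 1. Create + 2. Open for writing
f.write("Hello")         # 4. Write
f.close()                # 6. Close: flush data, release resources

f = open(path, "a")      # open in append mode
f.write(", world")       # 5. Append: adds at end without erasing
f.close()

f = open(path, "r")      # 2. Open for reading
f.seek(7)                # 8. Seek: move the file pointer to byte 7
tail = f.read()          # 3. Read from that position
print(tail)              # "world"
f.close()

os.remove(path)          # 7. Delete: OS frees the space
os.rmdir(tmpdir)
```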

Diagram
Conclusion : A file system handles how data is stored and retrieved.
Different types are used by different OS. Consistency ensures reliability, and proper
implementation gives performance and safety.
