Operating System Basics
01. What is an operating system?
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. It serves as an intermediary between users and the computer hardware, facilitating the execution of applications and managing system resources.
02. What are the main functions of an operating system?
The main functions of an operating system include process management, memory management, file system management, device management, and providing a user interface. It provides a platform for application software to run, ensures fair resource allocation, and facilitates communication between hardware components.
03. What is the difference between a process and a thread?
A process is an independent program in execution, with its own memory space, while a thread is a lightweight unit of execution within a process. Multiple threads within a process share the same resources but have their own program counter, register values, and stack.
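The sharing is easy to see in a short sketch (Python here, since the idea is language-independent; all names are illustrative): several threads update one counter that lives in their process's memory.

```python
import threading

# Minimal sketch: four threads increment one counter that lives in their
# process's shared memory. The lock keeps each read-modify-write atomic.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # without this, concurrent updates could be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every thread saw and updated the same variable
```

Separate processes would each get their own copy of `counter`; threads see one.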
04. What are the differences between multiprogramming, multitasking, and multiprocessing?
Multiprogramming: Keeping multiple programs in memory at once so the CPU always has work to do while one program waits for I/O.
Multitasking: Performing multiple tasks concurrently by rapidly switching the CPU between them, allowing users to run multiple applications simultaneously.
Multiprocessing: Using two or more CPUs (or cores) in a single system so that processes can run truly in parallel.
05. What is a context switch?
A context switch is the process of saving the state of a running process or thread and restoring the state of another. It allows the operating system to share the CPU among processes, providing the illusion of concurrent execution.
06. What are the differences between a monolithic kernel and a microkernel?
Monolithic Kernel: All operating system components, such as device drivers and file systems, run in a
single address space.
Microkernel: Only essential functions, like process scheduling and inter-process communication, run
in the kernel space. Other services run as user-level processes.
07. How does an operating system handle process creation and termination?
Process creation involves allocating resources, initializing the process control block (PCB), and loading the program into memory. Termination involves releasing resources and reclaiming memory.
08. What is the difference between preemptive and non-preemptive scheduling?
Preemptive Scheduling: The operating system can suspend a currently running process to start or
resume another.
Non-preemptive Scheduling: Once a process starts, it runs until it completes or voluntarily releases
the CPU.
09. What are system calls, and how are they different from normal function calls?
System calls are interfaces for applications to request services from the operating system. They are
different from normal function calls as they involve a switch from user mode to kernel mode to
access privileged instructions.
10. What is the difference between kernel mode and user mode?
Kernel mode allows unrestricted access to hardware and privileged instructions and is reserved for the operating system. User mode has restricted access, and normal application programs run in this mode.
Process Management:
11. What is process scheduling?
Process scheduling involves selecting processes from the ready queue and allocating the CPU to them. It aims to optimize resource utilization and system performance.
12. What are the different scheduling algorithms used in operating systems?
Common scheduling algorithms include First-Come-First-Serve (FCFS), Shortest Job Next (SJN),
Priority Scheduling, Round Robin, and Multilevel Queue Scheduling.
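These policies are easiest to compare with a toy simulator. The Python sketch below uses hypothetical burst times and assumes all processes arrive at time 0.

```python
from collections import deque

def fcfs_waiting_times(bursts):
    """First-Come-First-Serve: each process waits for all earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

def round_robin_completion(bursts, quantum):
    """Round Robin: each process runs at most `quantum` time units per turn."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    clock = 0
    finish = [0] * len(bursts)
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            queue.append(pid)      # not finished: back to the ready queue
        else:
            finish[pid] = clock
    return finish

print(fcfs_waiting_times([24, 3, 3]))         # [0, 24, 27]
print(round_robin_completion([24, 3, 3], 4))  # [30, 7, 10]
```

Under FCFS a long first burst makes everyone wait; Round Robin lets the short jobs finish early, at the cost of extra context switches.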
14. What is a context switch, and how does it affect the performance of a system?
A context switch saves the state of the running process and restores that of the next one. It is pure overhead: no useful work is done during the switch itself, and frequent switching also degrades performance indirectly by evicting CPU cache and TLB entries.
16. Explain the dining philosophers problem and how it can be solved.
The dining philosophers problem is a classic synchronization problem in which philosophers must avoid deadlock and starvation while sharing common resources (chopsticks). Solutions include controlling access with semaphores or mutexes and imposing a global order on resource acquisition.
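One standard deadlock-free solution numbers the chopsticks and has every philosopher pick up the lower-numbered one first, so no cycle of waiting philosophers can form. A Python sketch (philosopher and round counts are arbitrary):

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    # Global acquisition order: always take the lower-numbered chopstick
    # first, which rules out the circular wait needed for deadlock.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1      # "eating"

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # [100, 100, 100, 100, 100]: everyone ate, nobody deadlocked
```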
17. What is a critical section, and how is it protected?
A critical section is a portion of code that, when executed, must be exclusive to one process to prevent data inconsistency. It is protected using synchronization mechanisms like semaphores or mutex locks.
18. Explain the reader-writer problem and its solutions.
The reader-writer problem involves multiple processes trying to read from or write to a shared resource. Solutions include prioritizing readers or writers and using semaphores or mutex locks to control access.
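A readers-preference variant can be sketched with two locks; the class name and structure below are just illustrative.

```python
import threading

class ReadersWriterLock:
    """Readers-preference sketch: any number of readers may hold the lock
    together, while a writer needs exclusive access."""
    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()  # protects the reader count
        self._resource = threading.Lock()      # held by a writer, or by readers as a group

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:     # first reader locks writers out
                self._resource.acquire()

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:     # last reader lets writers back in
                self._resource.release()

    def acquire_write(self):
        self._resource.acquire()

    def release_write(self):
        self._resource.release()

rw = ReadersWriterLock()
rw.acquire_read()
rw.acquire_read()      # a second reader enters without blocking
rw.release_read()
rw.release_read()
rw.acquire_write()     # with no readers left, a writer gets exclusive access
rw.release_write()
```

Note the trade-off: with a steady stream of readers, this variant can starve writers; writer-preference versions invert the bias.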
19. Describe the process of process communication using inter-process communication (IPC).
IPC allows processes to communicate and synchronize. Methods include message passing, shared
memory, and synchronization primitives like semaphores.
20. What are the different IPC mechanisms available in operating systems?
IPC mechanisms include message passing, shared memory, pipes, sockets, and semaphores.
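A pipe is the simplest of these to demonstrate. In the Python sketch below, a thread stands in for a second process to keep the example portable; with `os.fork()` the same two descriptors would connect real processes.

```python
import os
import threading

# A kernel pipe: bytes written to one end come out the other.
read_fd, write_fd = os.pipe()

def producer():
    os.write(write_fd, b"ping")   # message passing through the kernel
    os.close(write_fd)

t = threading.Thread(target=producer)
t.start()
message = os.read(read_fd, 1024)  # blocks until data arrives
t.join()
os.close(read_fd)

print(message)  # b'ping'
```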
Memory Management:
21. What is virtual memory?
Virtual memory is a memory management technique that uses both RAM and disk space to create the illusion of a larger memory space. It allows running processes to use more memory than is physically available.
22. What is paging, and what problem does it solve?
Paging is a memory management scheme that allows the operating system to move fixed-size pages of a process in and out of physical memory. It provides better memory utilization and eliminates external fragmentation.
23. What is a page fault, and how is it handled by the operating system?
A page fault occurs when a program accesses a page that is not currently in RAM. The operating
system handles it by bringing the required page into memory, updating page tables, and allowing the
program to continue.
24. How do memory allocation and deallocation work?
Memory allocation involves reserving a block of memory for a process, and deallocation involves releasing that memory when it is no longer needed.
25. What is thrashing, and how does the working set model help prevent it?
Thrashing occurs when a system spends more time swapping pages than executing instructions. The working set model helps prevent thrashing by keeping track of the pages a process is actively using and keeping that set resident.
26. Describe the different page replacement algorithms, such as LRU, FIFO, and Optimal.
Page replacement algorithms decide which page to replace when a page fault occurs. LRU (Least
Recently Used), FIFO (First-In-First-Out), and Optimal are common algorithms with different
strategies for page replacement.
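Their behavior is easy to compare by counting faults on a reference string. The Python sketch below uses a well-known textbook reference string; with 3 frames, FIFO happens to beat LRU on it.

```python
from collections import OrderedDict

def count_faults_fifo(reference, frames):
    """FIFO: evict the page that has been resident longest."""
    resident, order, faults = set(), [], 0
    for page in reference:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.remove(order.pop(0))   # oldest arrival goes
            resident.add(page)
            order.append(page)
    return faults

def count_faults_lru(reference, frames):
    """LRU: evict the least recently used page."""
    resident = OrderedDict()   # insertion order doubles as recency order
    faults = 0
    for page in reference:
        if page in resident:
            resident.move_to_end(page)            # mark as most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)      # drop least recently used
            resident[page] = True
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults_fifo(ref, 3))  # 9 faults
print(count_faults_lru(ref, 3))   # 10 faults
```

Optimal replacement (evict the page used furthest in the future) is a benchmark rather than a practical policy, since it requires knowing future references.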
27. What is the purpose of a page table, and how is it used in virtual memory management?
A page table is used to map virtual addresses to physical addresses. It is a data structure maintained
by the operating system to facilitate translation between virtual and physical memory.
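The translation itself is mechanical, as this toy single-level table shows (page size and mappings are made up for illustration):

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common size

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    """Split the address into (page, offset), then swap the page for its frame."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # A real MMU would raise a page fault here and let the OS load the page.
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

Real systems use multi-level tables (plus a TLB cache) so the table itself stays sparse.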
28. What is demand paging?
Demand paging only loads pages into memory when they are needed, reducing the initial loading time and improving overall system efficiency.
29. What is a segmentation fault, and how is it handled by the operating system?
A segmentation fault (segfault) occurs when a program attempts to access a restricted area of
memory. The operating system typically terminates the offending process to prevent further issues.
30. What is process swapping?
Process swapping involves moving an entire process, including its memory image, from main memory to the disk and vice versa. It is done to free up memory for other processes.
File Systems:
31. What is a file system, and what are its components?
A file system is a method of organizing and storing computer files and the data they contain. Components include directories, files, file control blocks (FCBs), and data blocks.
32. Explain the different types of file systems, such as FAT, NTFS, and ext4.
File Allocation Table (FAT), New Technology File System (NTFS), and Extended File System (ext4) are
examples of file systems, each with its own structure and features.
33. How do file allocation and deallocation work?
File allocation involves assigning disk blocks to a file, and deallocation involves releasing these blocks when a file is deleted or modified.
34. What is a file control block (FCB) or an inode, and how is it used in file systems?
An FCB or inode contains metadata about a file, including file attributes, location, and ownership information. It is used by the file system to manage files.
35. Explain the concepts of file descriptors and file descriptor tables.
A file descriptor is a unique identifier for an open file in a process. The file descriptor table is a data
structure that manages open files for a process.
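The descriptors are visible from user code. In this Python sketch (using a throwaway temp file), `os.open` returns the small integer index that later calls use to name the open file:

```python
import os
import tempfile

# Create a scratch file to work with.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name

fd = os.open(path, os.O_WRONLY)   # fd is an index into the process's FD table
os.write(fd, b"hello")
os.close(fd)                      # frees the table slot

fd = os.open(path, os.O_RDONLY)   # a fresh descriptor for the same file
data = os.read(fd, 1024)
os.close(fd)
os.remove(path)

print(isinstance(fd, int), data)  # True b'hello'
```

By convention descriptors 0, 1, and 2 are standard input, output, and error.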
36. What is a file allocation table (FAT), and how does it work?
FAT is a file system structure that records, for every cluster on the disk, whether it is free and which cluster comes next in a file's chain. It enables the operating system to locate and retrieve files by following these cluster chains.
37. Describe the differences between sequential, direct, and indexed file allocation methods.
Sequential (contiguous) allocation stores a file in consecutive disk blocks, giving fast access but suffering from external fragmentation. Direct (linked) allocation chains a file's blocks together with pointers, which avoids external fragmentation but makes random access slow. Indexed allocation gathers all of a file's block pointers into an index block, supporting efficient direct access.
38. What is file buffering, and how does it improve I/O performance?
File buffering involves temporarily storing data in memory before writing it to or reading it from a file. It improves I/O performance by reducing the number of physical disk accesses.
39. What is a symbolic link, and how does it work in file systems?
A symbolic link is a reference to another file or directory. It works by storing the pathname of the
target file or directory, allowing for indirect access.
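The indirection is observable directly. A Python sketch under POSIX assumptions (on Windows, creating symlinks may require extra privileges; file names are just for illustration):

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "alias.txt")

with open(target, "w") as f:
    f.write("real data")

os.symlink(target, link)   # the link stores only the target's pathname
with open(link) as f:      # opening the link follows that stored path
    contents = f.read()

print(os.path.islink(link), contents)  # True real data
```

If the target is later removed, the link remains but dangles: opening it fails.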
40. What is file permission management?
File permission management involves setting and controlling access rights to files and directories, specifying which users or groups can read, write, or execute them.
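On POSIX systems these rights are permission bits that can be set and read back; a minimal sketch (on Windows, `chmod` only toggles the read-only flag):

```python
import os
import stat
import tempfile

# Create a scratch file, then restrict it to owner read/write.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # i.e. mode 0o600
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600 on POSIX: owner read/write, nothing for group/others
os.remove(path)
```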
Device Management:
41. What is a device driver, and what is its role in an operating system?
A device driver is software that enables communication between the operating system and
hardware devices. It serves as an interface, translating high-level operating system commands into
device-specific commands.
42. How do device allocation and deallocation work?
Device allocation involves assigning a device to a process, and deallocation involves releasing the device when it is no longer needed.
43. What are the different types of device scheduling algorithms used in operating systems?
Device scheduling algorithms determine the order in which processes access I/O devices. Examples
include FCFS (First-Come-First-Serve) and SSTF (Shortest Seek Time First).
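The difference shows up in total head movement. The Python sketch below replays a classic textbook request queue against both policies (head position and cylinder numbers are just the standard example values):

```python
def total_seek_fcfs(head, requests):
    """Serve requests in arrival order, summing head movement."""
    total = 0
    for cylinder in requests:
        total += abs(cylinder - head)
        head = cylinder
    return total

def total_seek_sstf(head, requests):
    """Always serve the pending request closest to the current head position."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(total_seek_fcfs(53, queue))  # 640 cylinders of head movement
print(total_seek_sstf(53, queue))  # 236 cylinders
```

SSTF cuts seek time sharply here, but like SJF scheduling it can starve requests far from the head.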
44. How does the operating system handle device interrupts?
When a device generates an interrupt, the operating system transfers control to the interrupt service routine (ISR), which manages the interrupt and executes the necessary actions.
45. What is a device control block (DCB), and how is it used in device management?
A device control block is a data structure that contains information about a specific I/O device, such
as its status, location, and mode of operation. It is used by the operating system to manage devices.
47. What is a device register, and how does it relate to device management?
A device register is a hardware register within a device that the operating system uses to
communicate with and control the device. It plays a crucial role in managing device operations.
48. What is the difference between polling and interrupt-driven I/O?
Polling: The operating system regularly checks the status of a device to determine whether it needs attention.
Interrupt-driven I/O: The device interrupts the CPU when it needs attention, allowing the CPU to perform other tasks until the interrupt occurs.
49. What is a device queue?
A device queue manages the order in which processes request and release access to a device. It ensures fair and efficient use of the device.
50. What is device management?
Device management involves coordinating the use of hardware devices by the operating system, including device allocation, interrupt handling, and communication with device drivers. It ensures efficient and reliable interaction between software and hardware components.