2. Essential Properties of System Types
i) Time-Sharing Systems
Essential Properties:
Multitasking: Shares the CPU among multiple users by rapidly switching between processes, so each
user's work appears to run simultaneously.
Interactive: Provides a responsive environment where users can interact with their programs in
real-time.
Resource Sharing: Manages resources like CPU, memory, and I/O devices efficiently among
multiple users.
Context Switching: Rapidly switches between processes to give the illusion of parallel
execution.
Fairness: Aims to provide fair allocation of CPU time to all users.
Use Cases: General-purpose operating systems, server environments, interactive applications.
ii) Multi-Processor Systems
Essential Properties:
Parallel Processing: Uses multiple CPUs to execute tasks concurrently, significantly improving
performance.
Shared Memory: Processors often share a common memory space, allowing for efficient
communication.
Increased Throughput: Can handle a larger workload compared to single-processor systems.
Fault Tolerance: In some configurations, if one processor fails, others can continue operating.
Complex Scheduling: Requires advanced scheduling algorithms to distribute tasks among
processors.
Use Cases: High-performance computing, servers, scientific simulations, real-time systems.
iii) Distributed Systems
Essential Properties:
Distributed Resources: Resources (hardware, software, data) are spread across multiple
computers connected by a network.
Transparency: Users and applications should ideally be unaware of the distributed nature of the
system.
Scalability: Can easily add or remove nodes to accommodate changing workloads.
Fault Tolerance: Failure of one node does not necessarily bring down the entire system.
Communication: Relies heavily on network communication between nodes.
Use Cases: Cloud computing, large-scale databases, web services, peer-to-peer networks.
3. System Calls
Definition: System calls are the interface between user-level applications and the operating
system kernel. They provide a way for applications to request services from the OS, such as file
I/O, memory allocation, and process management.
Examples:
read(): Reads data from a file or device.
write(): Writes data to a file or device.
fork(): Creates a new process.
exec(): Replaces the current process with a new program.
open(): Opens a file or device.
close(): Closes a file or device.
Advantages of a Unified File/Device Interface:
Abstraction: Simplifies programming by providing a consistent interface for accessing different
types of resources.
Flexibility: Allows applications to work with various devices without needing to know the
specifics of each device.
Code Reusability: Code written for file I/O can often be reused for device I/O.
Disadvantages:
Performance Overhead: The generic interface may introduce overhead for certain devices that
could benefit from specialized operations.
Limited Functionality: Some device-specific features may not be accessible through the generic
interface.
Security Risks: If not implemented carefully, a unified interface can create security
vulnerabilities.
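A minimal sketch of these calls in action (the file name "input.txt" is hypothetical): the same
read()/write() interface is used for a regular file and for the terminal, which is the unified
file/device interface described above.

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[256];
    ssize_t n;

    int fd = open("input.txt", O_RDONLY);    /* system call: open a file */
    if (fd < 0)
        return 1;

    /* read() from the file, write() to the terminal (fd 1): same interface for both */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);                               /* system call: release the descriptor */
    return 0;
}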
4. FCFS Scheduling (First-Come, First-Served)
Concept: FCFS is a non-preemptive scheduling algorithm that executes processes in the order
they arrive in the ready queue.
Advantages:
Simple to implement.
Fair in the sense that processes are served in the order they arrive.
Disadvantages:
Convoy Effect: A long process can block all subsequent processes, leading to poor CPU
utilization.
Not optimal for short processes: Short processes may have to wait a long time if a long process
arrives first.
Average waiting time can be high.
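A short sketch of how FCFS waiting time builds up; the burst times (24, 3, 3) are the classic
convoy-effect numbers and all processes are assumed to arrive at time 0.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};    /* assumed burst times of P1, P2, P3 */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);   /* each process waits for everything ahead of it */
        total += wait;
        wait += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total / n);   /* 17.00 here */
    return 0;
}

Running the two short processes first would drop the average waiting time from 17 to 3, which is
exactly why FCFS penalizes short processes stuck behind a long one.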
5. System Programs, Loader, and Linker
System Programs: Utilities that provide a convenient environment for program development and
execution. Examples: compilers, debuggers, text editors.
Loader: A program that loads executable files into memory for execution.
Linker: A program that combines multiple object files and libraries into a single executable file.
Importance: These components are essential for the software development lifecycle, enabling
developers to create, compile, and run programs.
Diagram:
Source Code -> Compiler -> Object Files -> Linker -> Executable File -> Loader -> Memory ->
CPU
Explanation:
The compiler translates source code into object files, which contain machine code.
The linker combines object files and libraries to create an executable file.
The loader loads the executable file into memory, and the CPU executes the program.
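A minimal illustration of this pipeline, assuming two hypothetical source files (util.c, main.c)
and a generic cc compiler driver; the build commands are shown as comments and are illustrative.

/* util.c -- compiled by the compiler into the object file util.o */
int add(int a, int b) {
    return a + b;
}

/* main.c -- compiled into main.o; the call to add() is left unresolved
 * in main.o and is fixed up by the linker. */
#include <stdio.h>
int add(int a, int b);            /* declaration; the definition lives in util.o */

int main(void) {
    printf("%d\n", add(2, 3));
    return 0;
}

/* Build and run:
 *   cc -c util.c               -> util.o       (compiler)
 *   cc -c main.c               -> main.o       (compiler)
 *   cc main.o util.o -o app    -> executable   (linker)
 *   ./app                      -> loader places "app" in memory; the CPU runs it
 */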
6. Process Control Block (PCB)
Definition: A data structure used by the operating system to store information about a process.
Contents:
Process ID (PID)
Program counter (PC)
CPU registers
Memory management information
I/O status information
Scheduling information
Process state
Process Transition States:
New: Process is being created.
Ready: Process is waiting to be assigned to a CPU.
Running: Process is being executed by the CPU.
Waiting (Blocked): Process is waiting for an event (e.g., I/O completion).
Terminated: Process has finished execution.
Block Diagram:
New -> Ready -> Running -> Waiting -> Ready -> Running -> Terminated (a running process may also
be preempted and return directly to Ready).
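A simplified C sketch of what a PCB might hold; the field names and sizes are illustrative and not
taken from any particular kernel.

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process ID (PID)               */
    proc_state_t   state;            /* current transition state       */
    unsigned long  program_counter;  /* next instruction to execute    */
    unsigned long  registers[16];    /* saved CPU register contents    */
    int            priority;         /* scheduling information         */
    void          *page_table;       /* memory-management information  */
    int            open_files[16];   /* I/O status information         */
    unsigned long  cpu_time_used;    /* accounting information         */
    struct pcb    *next;             /* link in a ready/waiting queue  */
} pcb_t;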
7. Unicast and Multicast IPC
Unicast IPC: One-to-one communication between two processes.
Multicast IPC: One-to-many communication, where a single message is sent to multiple
processes.
8. Message Passing in IPC
Concept: A form of inter-process communication where processes exchange messages to
communicate.
Synchronous Message Passing:
Sender and receiver are blocked until the message is delivered.
Provides strong synchronization.
Examples: Rendezvous in Ada.
Asynchronous Message Passing:
Sender sends the message and continues execution without waiting for the receiver.
Receiver can retrieve messages at its own pace.
Provides weaker synchronization.
Examples: Mailboxes, message queues.
Comparison:
Synchronous: Higher synchronization, lower concurrency.
Asynchronous: Lower synchronization, higher concurrency.
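As a sketch of asynchronous message passing, the POSIX message-queue calls below queue a message
without blocking the sender (as long as the queue is not full) and let the receiver collect it
later. The queue name "/demo_mq" is arbitrary; on Linux the program is linked with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1)
        return 1;

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);     /* sender continues immediately */

    char buf[64];                            /* must hold at least mq_msgsize bytes */
    mq_receive(q, buf, sizeof(buf), NULL);   /* receiver retrieves it at its own pace */
    printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_mq");
    return 0;
}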
9. System Calls (Repeated Question)
See answer to question 3.
10. Multitasking vs. Multiprogramming & Round Robin
Multitasking: Allows multiple processes to share a single CPU by rapidly switching between
them, giving the illusion of parallel execution.
Multiprogramming: Allows multiple processes to reside in memory simultaneously, increasing
CPU utilization by overlapping CPU and I/O operations.
Round Robin Scheduling:
A preemptive scheduling algorithm that assigns a fixed time quantum to each process.
Processes are executed in a circular manner.
If a process does not complete within its time quantum, it is moved to the back of the ready
queue.
Steps:
1. Processes arrive and are placed in the ready queue.
2. A time quantum is defined.
3. The CPU is assigned to the first process in the ready queue.
4. If the process completes within the time quantum, it is terminated.
5. If the process does not complete, it is preempted and moved to the back of the ready queue.
6. Steps 3-5 are repeated until all processes are completed.
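The steps above can be sketched in C as follows; for simplicity all processes are assumed to
arrive at time 0, so the ready queue is simply cycled in index order (burst times are illustrative).

#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 1};             /* assumed burst times of P1..P3 */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                    /* this process already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                   /* give the CPU for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}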
11. Round Robin Scheduling and Average Waiting Time
● Process Information:
○ P1: Arrival Time (AT) = 0, Burst Time (BT) = 5
○ P2: AT = 1, BT = 3
○ P3: AT = 2, BT = 1
○ P4: AT = 3, BT = 2
○ P5: AT = 4, BT = 3
● Time Quantum (TQ): Let's assume a TQ of 2 for this example.
● Gantt Chart (assuming the ready queue is serviced cyclically in arrival order P1 → P2 → P3 → P4 → P5):
○ 0-2: P1
○ 2-4: P2
○ 4-5: P3
○ 5-7: P4
○ 7-9: P5
○ 9-11: P1
○ 11-12: P2
○ 12-13: P5
○ 13-14: P1
● Completion Time (CT):
○ P1: 14
○ P2: 12
○ P3: 5
○ P4: 7
○ P5: 13
● Turnaround Time (TAT) = CT - AT:
○ P1: 14 - 0 = 14
○ P2: 12 - 1 = 11
○ P3: 5 - 2 = 3
○ P4: 7 - 3 = 4
○ P5: 13 - 4 = 9
● Waiting Time (WT) = TAT - BT:
○ P1: 14 - 5 = 9
○ P2: 11 - 3 = 8
○ P3: 3 - 1 = 2
○ P4: 4 - 2 = 2
○ P5: 9 - 3 = 6
● Average Waiting Time: (9 + 8 + 2 + 2 + 6) / 5 = 27 / 5 = 5.4
12. Essential Properties of Systems
● i) Batch Processing Systems:
○ Job-oriented: Processes jobs in batches without user interaction.
○ Sequential processing: Jobs are executed in the order they are submitted.
○ High throughput: Efficient for large volumes of similar jobs.
○ Offline processing: Input data is prepared offline and submitted as a batch.
○ Low resource utilization: Resources may be idle between batches.
● ii) Multi-User Systems:
○ Simultaneous access: Multiple users can access and use the system concurrently.
○ Resource sharing: Resources (CPU, memory, peripherals) are shared among
users.
○ Time-sharing: CPU time is allocated to users in time slices.
○ Security and protection: Mechanisms to protect user data and prevent
interference.
○ User management: Features for managing user accounts and permissions.
● iii) Distributed Systems:
○ Multiple nodes: Consists of multiple independent computers (nodes) connected by
a network.
○ Resource sharing: Resources are distributed across the network.
○ Concurrency: Multiple nodes can execute tasks concurrently.
○ Fault tolerance: System can continue operating even if some nodes fail.
○ Scalability: System can be expanded by adding more nodes.
○ Communication: Nodes communicate using message passing.
13. System Calls
● Definition: System calls are the interface between user-level programs and the operating
system kernel. They provide a way for programs to request services from the OS, such as
file I/O, process management, and memory allocation.
● Types:
○ Process Control: fork(), exec(), wait(), exit().
○ File Management: open(), read(), write(), close().
○ Device Management: request device(), release device().
○ Information Maintenance: get time(), get date().
○ Communication: create connection(), send message(), receive message().
○ Memory Management: allocate memory(), free memory().
● Diagram:
+------------------+
|   User Program   |
+------------------+
         |
         |  system call
         v
+------------------+
|    OS Kernel     |
+------------------+
         |
         |  system call implementation
         v
+------------------+
|     Hardware     |
+------------------+
14. Services of the Operating System
● For Users:
○ User Interface: Provides a way for users to interact with the system (GUI, CLI).
○ Program Execution: Loads and executes programs.
○ File System Manipulation: Creates, deletes, reads, writes, and manages files.
○ Communication: Enables communication between processes and users.
○ Error Detection: Detects and handles errors.
● For the System:
○ Resource Allocation: Allocates resources (CPU, memory, I/O devices).
○ Accounting: Tracks resource usage.
○ Security and Protection: Protects system resources and user data.
○ I/O Operations: Manages I/O devices.
15. Inter-Process Communication (IPC)
● Definition: IPC is the mechanism that allows processes to communicate and synchronize
their actions.
● Methods:
○ Shared Memory: Processes share a region of memory.
○ Message Passing: Processes exchange messages through the OS.
○ Pipes: Unidirectional data flow between related processes.
○ Sockets: Endpoints for communication, typically over a network.
● Diagram:
+-----------+         +-----------+
| Process A | <-----> | Process B |
+-----------+         +-----------+
      |                     |
      |    IPC mechanism    |
      v                     v
+---------------------------------+
|        Operating System         |
+---------------------------------+
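● Example (pipe, sketch): a minimal version of the diagram above in C, where the parent writes a
message and the child reads it through the operating system (the message text is illustrative).

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0)                    /* fd[0] is the read end, fd[1] the write end */
        return 1;

    pid_t pid = fork();
    if (pid == 0) {                      /* child: receiver */
        char buf[32];
        close(fd[1]);                    /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                             /* parent: sender */
        close(fd[0]);                    /* close unused read end */
        write(fd[1], "ping", 4);
        close(fd[1]);
        wait(NULL);                      /* wait for the child to finish */
    }
    return 0;
}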
16. (Repeated Question) Services of the Operating System
● See answer for question 14.
17. Operating System Structures
● Monolithic Structure:
○ All OS components are integrated into a single kernel.
○ Simple and efficient, but difficult to maintain.
○ Diagram: All OS modules inside one kernel.
● Layered Structure:
○ OS is organized into layers, each layer using services from the layer below.
○ Easier to debug and modify.
○ Diagram: Layers one on top of the other, with hardware at the bottom layer.
● Microkernel Structure:
○ Only essential functions are in the kernel; other services are in user space.
○ Highly modular and reliable.
○ Diagram: Small kernel with user level servers.
● Modular Structure:
○ Core kernel with dynamically loadable modules.
○ Flexible and customizable.
○ Diagram: Core kernel with modules that can be added and removed.
18. Mode-bit and Dual-Mode Operation
● Mode-bit: A hardware bit that indicates the current execution mode (kernel mode or user
mode).
● Dual-Mode Operation:
○ Kernel Mode (0): OS has full access to hardware and memory.
○ User Mode (1): User programs have limited access.
○ Protects the OS from user programs.
● Diagram:
        +-------------------+
        | Mode-bit (0 or 1) |
        +-------------------+
             /         \
            v           v
+-----------------+   +-----------------+
| Kernel Mode (0) |   |  User Mode (1)  |
+-----------------+   +-----------------+
        |                     |
   full access          limited access
        |                     |
        v                     v
+-------------------------------------+
|         Hardware and Memory         |
+-------------------------------------+
19. Describe the structure of an operating system with a supporting diagram.
● The structure of an operating system can be viewed as a layered architecture, where each layer provides services to the layer above it. Common structures include:
● Simple Structure (Monolithic):
○ All OS components reside in the kernel.
○ Fast but difficult to maintain.
○ Example: Early versions of MS-DOS.
● Layered Structure:
○ The OS is divided into layers, each with specific functionality.
○ Layer N uses the services of layer N-1.
○ Easier debugging and modularity.
○ Example: Some versions of UNIX.
● Microkernel Structure:
○ Only essential services (e.g., process management, memory management) are in the kernel.
○ Other services (e.g., file system, device drivers) run as user-level processes.
○ Increased reliability and security.
○ Example: QNX, macOS (XNU).
● Modular Structure:
○ The kernel has a set of core components and dynamically loadable modules.
○ Flexibility and extensibility.
○ Example: Modern Linux.
● Supporting Diagram (Layered Structure Example):
+-----------------+
|  User Programs  |
+-----------------+
| System Programs |
+-----------------+
|   File System   |
+-----------------+
| Device Drivers  |
+-----------------+
| Memory Manager  |
+-----------------+
| Process Manager |
+-----------------+
|     Kernel      |
+-----------------+
|    Hardware     |
+-----------------+
20. Explain in detail the multithreading model, its advantages and disadvantages with suitable illustration.
● Multithreading allows multiple threads of execution to exist within a single process.
● Many-to-One Model:
○ Multiple user-level threads map to a single kernel thread.
○ Thread management is done by the thread library in user space.
○ If one thread blocks, the entire process blocks.
○ Example: Some older versions of Solaris.
● One-to-One Model:
○ Each user-level thread maps to a separate kernel thread.
○ Provides better concurrency.
○ Overhead due to creating and managing kernel threads.
○ Example: Linux, Windows.
● Many-to-Many Model:
○ Multiple user-level threads map to multiple kernel threads.
○ Combines the advantages of the other two models.
○ Allows for greater concurrency while reducing overhead.
○ Example: Solaris, Windows.
● Illustration (Many-to-Many):
User threads:      T1    T2    T3    T4
                     \   /       \   /
Kernel threads:       K1     K2     K3
● Advantages:
○ Responsiveness: A blocked thread does not block the entire process.
○ Resource Sharing: Threads within a process share resources, reducing overhead.
○ Economy: Creating and managing threads is less expensive than processes.
○ Utilization of Multiprocessor Architectures: Multiple threads can run concurrently on different processors.
● Disadvantages:
○ Complexity: Multithreaded programming can be more complex.
○ Synchronization: Requires careful synchronization to avoid race conditions and deadlocks.
○ Debugging: Debugging multithreaded applications can be challenging.
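● Example (POSIX threads, sketch): a minimal multithreaded program compiled with -pthread; on
Linux each pthread maps to its own kernel thread, i.e., the one-to-one model described above.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running in the shared address space\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);   /* create two threads in one process */
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);                    /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}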
21. Describe the following concepts in detail:
● Context Switching:
○ The process of storing and restoring the state of a CPU so that execution can be resumed from the same point later.
○ Occurs when the OS switches from one process to another.
○ Involves saving the CPU registers, program counter, and process state.
○ Overhead is incurred due to the time taken to save and restore the context.
● Schedulers:
○ OS components that determine which process or thread gets to run on the CPU.
○ Types:
○ Long-Term Scheduler (Job Scheduler): Selects processes from the job queue and loads them into memory.
○ Short-Term Scheduler (CPU Scheduler): Selects which process from the ready queue should be executed next.
○ Medium-Term Scheduler (Swapper): Removes processes from memory (swapping) to reduce the degree of multiprogramming.
22. What is the User-OS interface? Explain Command-Line Interface (CLI), Graphical User Interface (GUI), and system calls with suitable examples.
● The User-OS interface is the means by which users interact with the operating system.
● Command-Line Interface (CLI):
○ Text-based interface where users type commands.
○ Powerful and efficient for experienced users.
○ Examples: ls -l (Linux/macOS) to list files in detail; dir (Windows) to list files.
● Graphical User Interface (GUI):
○ Uses windows, icons, menus, and pointers (WIMP).
○ User-friendly and intuitive.
○ Examples: Windows desktop, macOS Finder.
● System Calls:
○ Interface between user-level programs and the OS kernel.
○ Provide access to OS services.
○ Examples: open() opens a file, read() reads data from a file, write() writes data to a file, fork() creates a new process.
23. Illustrate the Process Control Block (PCB) in detail.
● The Process Control Block (PCB) is a data structure used by the OS to store information about a process.
● PCB Components:
○ Process State: Running, waiting, ready, etc.
○ Program Counter: Address of the next instruction to be executed.
○ CPU Registers: Accumulators, index registers, stack pointers, etc.
○ CPU Scheduling Information: Process priority, pointers to scheduling queues.
○ Memory Management Information: Memory allocated to the process, page tables.
○ Accounting Information: CPU time used, time limits.
○ I/O Status Information: I/O devices allocated, open files.
● Illustration:
+-----------------------------------+
|           Process State           |
+-----------------------------------+
|          Program Counter          |
+-----------------------------------+
|           CPU Registers           |
+-----------------------------------+
|     CPU Scheduling Information    |
+-----------------------------------+
|   Memory Management Information   |
+-----------------------------------+
|       Accounting Information      |
+-----------------------------------+
|       I/O Status Information      |
+-----------------------------------+
24. Discuss types of Multithreading models in detail.
● See answer to question 20 for the Many-to-One, One-to-One, and Many-to-Many models.
25. Explain process creation and termination with suitable examples.
● Process Creation:
○ A new process is created by an existing process (the parent process).
○ The parent process can create child processes.
○ The fork() system call is commonly used to create a new process.
○ Example (C using fork()):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid;

    pid = fork();

    if (pid < 0) {
        fprintf(stderr, "Fork Failed");
        return 1;
    } else if (pid == 0) {
        // Child process
        printf("Child Process: PID = %d\n", getpid());
    } else {
        // Parent process
        printf("Parent Process: PID = %d, Child PID = %d\n", getpid(), pid);
        wait(NULL); // Wait for child to finish
    }

    return 0;
}

● Process Termination:
○ A process terminates when it completes its execution or encounters an error.
○ The exit() system call is used to terminate a process.
○ The parent process can terminate a child process.
○ Examples:
○ A user closing a program window.
○ A program encountering a fatal error (e.g., division by zero).
○ Using the kill command in Linux.
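○ Example (C, termination sketch): to complement the fork() example above, the child terminates
itself with exit() and the parent collects the exit status with waitpid() (the status value 42 is
arbitrary).

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                          /* child terminates itself */
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);          /* reap the terminated child */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}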