Key Concepts in Operating Systems

The document covers various concepts related to operating systems, including process management, memory management, and synchronization techniques. It explains key terms such as I/O bound processes, context switching, semaphores, and scheduling algorithms, along with their roles and implications. Additionally, it discusses memory allocation strategies, fragmentation, and the advantages of distributed operating systems.

Uploaded by

maheshnile92
I/O bound process: A process that spends more time performing I/O operations than computations.
Purpose of the `fork()` system call: To create a new process that is a copy of the calling process.
Bootstrap loader: A small program that loads the operating system kernel into memory during the boot process.
Context switch: The act of saving the state of one process and loading the state of another for CPU execution.
CPU-I/O burst cycle: The alternating pattern of CPU execution and I/O wait in a process's lifecycle.
Response time: The time between submitting a request and receiving the first response.
Semaphore: A synchronization tool used to manage access to shared resources in concurrent processing.
Thread library: A collection of APIs for creating and managing threads.
Synchronization: Coordination of processes to prevent conflicts and ensure correct results.
Physical address space: The set of all physical memory addresses accessible by the CPU.
Page: A fixed-size block of virtual memory.
Booting: The process of starting a computer and initializing the operating system.
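As a concrete illustration of `fork()`, here is a minimal Python sketch (using the POSIX-only `os.fork`, so it assumes a Unix-like system; the exit status 7 is an arbitrary choice for the example): the child is a copy of the calling process, and the parent waits for it and collects its exit code.

```python
import os

def spawn_child():
    """Fork a child that exits with status 7; the parent reaps it
    and returns the child's exit code."""
    pid = os.fork()                    # returns 0 in the child, child's PID in the parent
    if pid == 0:                       # child: a copy of the calling process
        os._exit(7)                    # exit immediately with a known status
    _, status = os.waitpid(pid, 0)     # parent: wait for the child to finish
    return os.waitstatus_to_exitcode(status)

print(spawn_child())  # → 7
```

The `if pid == 0` branch is the standard way both processes distinguish themselves after the fork, since both continue from the same point in the code.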
"Priority scheduling suffers from starvation." Justify: True. Low-priority processes may be indefinitely delayed if higher-priority processes keep executing.
Mutual exclusion: A property ensuring that only one process accesses a critical section at a time.
Race condition: A situation where multiple processes access shared data simultaneously, leading to unpredictable results.
Limit register: A hardware register that specifies the maximum size of a process's address space.
Frame: A fixed-size block of physical memory in a paging system.
Advantages of open-source OS: Cost-effective; customizable; secure and transparent.
Shell: A command interpreter that allows user interaction with the operating system.
Thread: A lightweight process that shares resources with other threads in the same process.
Types of system calls: Process control; file management; device management; communication.
Role of the medium-term scheduler: To remove processes temporarily from memory to optimize CPU and memory utilization.
Page table: A data structure used to map virtual memory addresses to physical memory addresses.
Segmentation: A memory management technique dividing a process into variable-sized segments.
Bootstrapping: The process of starting a computer by loading the operating system.
POSIX pthread: A standard API for creating and managing threads in Unix-based systems.
Role of the dispatcher: To hand over CPU control to the process selected by the scheduler.
Solutions to the critical section problem: Semaphores; mutex locks; monitors.
Page hit: When the required page is already in main memory.
Kernel: The core of the operating system that manages hardware and system resources.
Ready queue: A queue of processes ready for CPU execution.
Two types of semaphores: Binary semaphore; counting semaphore.
Virtual memory: A memory management technique that uses secondary storage as if it were main memory.
The critical section problem: Occurs when multiple processes access shared resources simultaneously, leading to data inconsistency. A solution requires: mutual exclusion (only one process accesses the critical section at a time); progress (processes outside the critical section shouldn't block others); bounded waiting (a limit on how long processes wait for access).
The dispatcher: Assigns CPU control to the process chosen by the scheduler. It performs context switching, switching to user mode, and jumping to the program's next instruction.
Benefits of virtual memory: Enables execution of programs larger than physical memory; increases multitasking by using paging and swapping; provides memory isolation, improving stability and security.
Two advantages of multithreading: Resource sharing (threads within a process share memory and resources, reducing overhead); responsiveness (concurrent execution makes applications faster).
Process management system calls: `fork()`, `exec()`, `exit()`, `wait()`.
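To make the binary-semaphore idea concrete, here is a small illustrative Python sketch (the `worker` function and the iteration counts are our own choices, not from the notes): a `threading.Semaphore(1)` acts as a mutex around the critical section `counter += 1`, so four threads incrementing 10,000 times each always produce exactly 40,000.

```python
import threading

counter = 0
mutex = threading.Semaphore(1)        # binary semaphore used as a mutex

def worker(iterations):
    global counter
    for _ in range(iterations):
        with mutex:                   # wait (P) on entry, signal (V) on exit
            counter += 1              # critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 40000
```

Without the semaphore, the read-modify-write on `counter` would be a textbook race condition.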
Device manipulation system calls: `open()`, `read()`, `write()`, `ioctl()`.
An operating system (OS) is software that manages hardware and software resources. Objectives: efficient resource utilization; a user-friendly environment; secure and reliable system operation.
Compare LFU and MFU (two points): LFU (Least Frequently Used) removes the least frequently accessed page and is suited to stable access patterns. MFU (Most Frequently Used) removes the most frequently accessed page, assuming older pages are less useful.
Multilevel queue scheduling: Divides processes into separate queues based on priority or type (e.g., system processes, interactive processes). Each queue can have its own scheduling algorithm, and higher-priority queues are served before lower-priority queues.
Purpose of scheduling algorithms: To allocate the CPU efficiently, optimizing CPU utilization, throughput, and response time. They ensure fairness and minimize waiting time.
Producer-consumer problem: A synchronization issue where the producer adds items to a shared buffer and the consumer removes items from it. Solution: semaphores or mutexes ensure mutual exclusion and prevent buffer underflow/overflow.
Paging: Divides physical memory into fixed-size blocks called frames and processes into pages; pages are mapped to frames using a page table. It avoids external fragmentation and enables virtual memory.
Preemptive vs. non-preemptive scheduling: In preemptive scheduling, the CPU can be taken from a running process (e.g., Round Robin); in non-preemptive scheduling, a process holds the CPU until it finishes (e.g., FCFS). Preemptive scheduling is responsive but complex, while non-preemptive scheduling is simpler but can delay high-priority processes.
Advantages of distributed operating systems: Resource sharing (access to remote resources increases efficiency); fault tolerance (the system remains operational despite component failures); scalability (easy to add more nodes for larger workloads).
Functions of memory management: Allocation and deallocation of memory; address translation between virtual and physical memory; managing fragmentation and handling page faults.
Independent vs. dependent processes: Independent processes operate without sharing data or resources, so no interference occurs. Dependent processes share data or resources and need synchronization to avoid conflicts.
Types of schedulers (short-term scheduler in detail): The long-term scheduler selects processes for execution; the medium-term scheduler swaps processes in and out of memory; the short-term scheduler quickly selects the next process for CPU execution. The short-term scheduler assigns the CPU to the highest-priority ready process, ensuring efficient utilization and responsiveness.
Three requirements for a critical section solution: 1. Mutual exclusion: only one process can execute in the critical section at any time. 2. Progress: if no process is in the critical section, the waiting processes should decide among themselves who enters next. 3. Bounded waiting: there must be a limit on how long a process waits to enter the critical section.
A process is a program in execution. It requires resources such as CPU, memory, I/O devices, and registers. A program becomes a process when it is loaded into memory and its Process Control Block (PCB) is created.
Process states: 1. New: the process is being created. 2. Ready: the process is loaded into memory and waiting for CPU allocation. 3. Running: the CPU is executing the process's instructions. 4. Waiting: the process is waiting for an I/O operation or event to complete. 5. Terminated: the process has completed execution.
State transitions: New → Ready (process admitted into memory); Ready → Running (CPU scheduler allocates the CPU); Running → Waiting (process waits for an I/O event); Waiting → Ready (I/O event completes); Running → Terminated (process finishes execution).
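The paging mechanism described above can be sketched as a toy address translation in Python (the 4 KB page size and the page-table contents are assumptions for illustration, not from the notes): the page number indexes the page table to find a frame, and the offset within the page is carried over unchanged.

```python
PAGE_SIZE = 4096   # assumed 4 KB pages

def translate(logical_addr, page_table):
    """Map a logical address to a physical address via a page table."""
    page_number = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page_number]   # a missing key here would model a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 9}       # page number -> frame number
print(translate(1 * PAGE_SIZE + 100, page_table))  # → 8292 (frame 2, offset 100)
```

Because every page is the same size, the translation is a single table lookup plus arithmetic, which is what makes paging hardware-friendly.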
-----------------------------------------
Fragmentation is a memory management issue where free memory blocks are too small or too scattered to be used effectively.
Types of fragmentation:
1. Internal fragmentation: occurs when fixed-size memory blocks allocate more space than needed. Example: allocating 10 KB for a 7 KB process wastes 3 KB.
2. External fragmentation: happens when free memory is split into small, non-contiguous blocks. Example: a process requiring 40 KB cannot be allocated even if total free space exceeds 40 KB, because that space is scattered.
Solutions: Compaction combines free memory into contiguous blocks; paging and segmentation allocate memory in fixed-size pages or logical segments.
Dining Philosophers Problem: Five philosophers alternately think and eat, sharing five chopsticks. The problem arises when all philosophers try to pick up chopsticks simultaneously, causing a deadlock. Solution using semaphores: use one semaphore per chopstick, and ensure a philosopher picks up both chopsticks before eating and releases them afterward.
Types of schedulers: 1. Long-term scheduler: admits processes into the ready queue based on system capacity. 2. Medium-term scheduler: swaps processes in and out of memory to optimize multitasking. 3. Short-term scheduler: allocates the CPU to processes in the ready queue. The short-term scheduler operates frequently to select the next process to execute, uses algorithms such as FCFS, Round Robin, and priority scheduling, and aims to maximize CPU utilization and minimize waiting time.
Logical and physical address binding (with a diagram): A logical address is generated by the CPU during program execution; a physical address is the actual memory address used to access data in RAM.
Address binding methods: 1. Compile-time: logical and physical addresses are the same if the memory location is fixed at compile time. 2. Load-time: logical addresses are mapped to physical addresses when the program is loaded. 3. Run-time: the MMU dynamically maps logical addresses to physical addresses.
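The Dining Philosophers solution sketched in these notes (one semaphore per chopstick) can be written in Python's `threading` module. One standard extra safeguard, added here as an assumption beyond the notes, is a "room" semaphore that admits at most four philosophers at once, so the circular wait that causes deadlock can never form:

```python
import threading

N = 5
chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
room = threading.Semaphore(N - 1)     # at most N-1 philosophers compete at once
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        with room:                            # blocks the 5th philosopher, preventing deadlock
            with chopsticks[i]:               # pick up left chopstick
                with chopsticks[(i + 1) % N]: # pick up right chopstick
                    meals[i] += 1             # eat; both chopsticks released on exit

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # → [50, 50, 50, 50, 50]
```

With only the five chopstick semaphores, all five threads could each grab their left chopstick and wait forever for the right one; limiting the room to four guarantees at least one philosopher can always acquire both.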
A thread is the smallest unit of execution within a process. Threads share resources like memory, code, and data but execute independently.
Multithreading models:
1. Many-to-one: maps multiple user threads to a single kernel thread. Advantages: simple implementation; one kernel thread controls multiple user threads. Disadvantages: no true parallelism, as only one kernel thread is active at a time.
2. One-to-one: each user thread maps to a kernel thread. Advantages: true parallelism; this model provides more concurrency than the many-to-one model. Disadvantages: high resource usage, which can reduce system performance.
A semaphore is a synchronization primitive used to control access to shared resources in concurrent systems.
PCB with all its fields: The Process Control Block (PCB) stores information about a process, including: 1. Process ID: unique identifier. 2. State: current process state (new, ready, running, waiting, terminated). 3. Program counter: address of the next instruction. 4. Registers: CPU register values for context switching. 5. Memory management info: details such as page tables or segment tables. 6. I/O status: list of I/O devices allocated to the process. 7. Accounting info: CPU usage, time limits, etc.
Reader-writer problem in brief: The reader-writer problem occurs when multiple processes access shared data. Readers can read concurrently without conflicts; writers require exclusive access to avoid inconsistencies. Solution using semaphores: a mutex ensures mutual exclusion, a read count tracks the number of active readers, and a writer semaphore ensures only one writer accesses the shared resource at a time.
-----------------------------------------
A layered operating system divides system functionality into layers, each performing specific tasks. Structure: 1. Hardware layer: manages physical resources such as CPU, memory, and I/O devices. 2. Kernel layer: provides core OS functionality such as process management and memory management. 3. System call layer: provides APIs for user applications to interact with the kernel. 4. User interface layer: includes graphical interfaces and command-line interpreters for user interaction. Advantages: simplifies debugging and development; each layer communicates only with adjacent layers. Disadvantages: overhead due to strict modularity; slower performance compared to monolithic systems.
Disadvantages of distributed operating systems: 1. Complexity: managing distributed nodes is challenging. 2. Security risks: data transfer between nodes can be vulnerable to attacks. 3. Network dependency: system performance depends on the network's reliability.
-----------------------------------------
Swapping is a memory management technique where processes are temporarily moved from main memory to secondary storage (disk) to free up memory for other processes. Steps in swapping: 1. Move to disk: an idle or low-priority process is moved from RAM to disk (swap space). 2. Bring back to memory: when the process becomes active, it is loaded back into RAM. Advantages: increases memory utilization; supports multitasking.
The bounded buffer problem involves a producer and a consumer sharing a fixed-size buffer. The producer adds items to the buffer but must avoid overflow; the consumer removes items from the buffer but must avoid underflow. Solution using semaphores: 1. Empty semaphore: tracks free buffer slots. 2. Full semaphore: tracks filled buffer slots. 3. Mutex semaphore: ensures mutual exclusion during buffer operations.
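The bounded buffer solution above maps directly onto Python's `threading` primitives. This is a minimal sketch under our own choices of buffer capacity (8) and item count (100): the empty and full semaphores count free and filled slots, and a lock plays the mutex role.

```python
import threading
from collections import deque

CAPACITY = 8
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # counts free slots; producer blocks at 0
full = threading.Semaphore(0)          # counts filled slots; consumer blocks at 0
mutex = threading.Lock()               # mutual exclusion on buffer operations
consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()                # wait for a free slot (prevents overflow)
        with mutex:
            buffer.append(item)
        full.release()                 # signal one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                 # wait for a filled slot (prevents underflow)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal one more free slot

p = threading.Thread(target=producer, args=(100,))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(100)))  # → True
```

With a single producer and consumer the FIFO buffer preserves order, so the consumer sees items 0 through 99 exactly once each.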
The Memory Management Unit (MMU) is hardware that handles the translation of logical addresses (generated by the CPU) into physical addresses (used by memory).
- Working: 1. The CPU generates a logical address. 2. The MMU translates the logical address into a physical address using mapping techniques such as paging or segmentation.
- Benefits: 1. Enables virtual memory, allowing programs to use more memory than is physically available. 2. Provides memory protection by isolating processes.
Example: a logical address `0x1A2B` is translated by the MMU to physical address `0xB2A1` for accessing RAM.
A Distributed Operating System (DOS) manages a group of independent computers so that they appear as a single system to users. Resources are distributed but accessible transparently. Advantages: 1. Resource sharing: access to remote resources such as files, printers, or databases. 2. Fault tolerance: the system remains operational even if one node fails. 3. Scalability: easy to add or remove nodes.
Explain first fit, best fit, worst fit, and next fit algorithms: These algorithms allocate memory to processes in contiguous memory allocation.
1. First fit: allocates the first available memory block large enough for the process.
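The MMU's segment-table lookup can be modeled with a short Python sketch (the base and limit values are invented for illustration): the segment number selects a (base, limit) pair, the offset is checked against the limit, and the physical address is base plus offset.

```python
def seg_translate(segment, offset, seg_table):
    """MMU-style segmentation lookup: seg_table maps segment -> (base, limit)."""
    base, limit = seg_table[segment]
    if offset >= limit:
        # an out-of-range offset models a segmentation fault
        raise MemoryError("segmentation fault: offset exceeds segment limit")
    return base + offset

seg_table = {0: (1400, 1000), 1: (6300, 400)}   # invented base/limit values
print(seg_translate(1, 53, seg_table))  # → 6353
```

The limit check is what gives segmentation its memory-protection property: a process cannot address beyond the segments it owns.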
- Advantage: simple and fast. - Disadvantage: leads to fragmentation.
2. Best fit: allocates the smallest available block that can fit the process. - Advantage: reduces wasted space. - Disadvantage: slower due to searching.
3. Worst fit: allocates the largest available block. - Advantage: leaves larger leftover blocks for future processes. - Disadvantage: inefficient for smaller processes.
4. Next fit: starts searching from the last allocated block. - Advantage: reduces search time. - Disadvantage: can lead to fragmentation.
Segmentation is a memory management technique dividing a process into segments of variable sizes, such as code, stack, and data segments. Working: each segment has an entry in a segment table with a base (start address) and a limit (size); the CPU generates a logical address as `(segment number, offset)`; the MMU maps the logical address to a physical address using the segment table. Advantages: 1. Logical division makes debugging easier. 2. No internal fragmentation, as segments are allocated dynamically. Disadvantages: 1. External fragmentation can occur. 2. Overhead of maintaining segment tables.
-----------------------------------------
Client-server computing: 1. Architecture: centralized, with a powerful server providing resources to multiple clients. 2. Resource management: centralized control, easier scalability. 3. Data storage: centralized on the server. 4. Security: centralized security measures. 5. Examples: web servers, email servers.
Peer-to-peer (P2P) computing: 1. Architecture: decentralized, with each node acting as both client and server. 2. Resource management: distributed control; scalable as more peers join. 3. Data storage: distributed across multiple peers. 4. Security: distributed security measures. 5. Examples: file-sharing networks like BitTorrent, and blockchain.
Key differences: 1. Control: centralized in client-server, decentralized in P2P. 2. Scalability: easier in P2P as more peers join. 3. Reliability: client-server can be disrupted by server failure; P2P is more resilient. 4. Cost: higher initial cost for client-server; lower for P2P, as resources are shared.
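The four placement strategies for contiguous allocation can be compared with a small Python sketch (the block sizes and request size are invented for illustration). Each function returns the index of the chosen free block, or None if nothing fits:

```python
def first_fit(blocks, size):
    """Index of the first free block that fits, else None."""
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return None

def best_fit(blocks, size):
    """Index of the smallest free block that fits, else None."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(blocks, size):
    """Index of the largest free block that fits, else None."""
    fits = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(fits)[1] if fits else None

def next_fit(blocks, size, start):
    """Like first fit, but the scan resumes from the last allocation point."""
    n = len(blocks)
    for k in range(n):
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None

free_blocks = [100, 500, 200, 300, 600]
print(first_fit(free_blocks, 212))    # → 1 (500 is the first block >= 212)
print(best_fit(free_blocks, 212))     # → 3 (300 is the tightest fit)
print(worst_fit(free_blocks, 212))    # → 4 (600 is the largest block)
print(next_fit(free_blocks, 212, 2))  # → 3 (scan resumes at index 2)
```

The same request lands in a different block under each policy, which is exactly the trade-off the notes describe: search speed versus how much usable space is left behind.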
