
Operating System Notes (Sem V)

Q1] What is a shell?

--> A shell in an operating system is a command-line interface or graphical user interface that allows users to interact with the system's kernel and perform tasks like executing programs, managing files, and controlling system processes.

Q2] Define an I/O-bound process.

--> An I/O-bound process is a type of process that spends more time performing
input/output (I/O) operations than using the CPU for computation. These processes are
often limited by the speed of I/O devices (e.g., disk, network, printers) rather than CPU
speed.

Q3] Define the term semaphore.

--> A semaphore is a synchronization mechanism used in operating systems to manage concurrent processes and prevent race conditions. It is an integer variable that is used to control access to shared resources by multiple processes in a concurrent system, such as a multitasking operating system.

There are two main types of semaphores:

1. Binary Semaphore (or Mutex): Can have only two values, 0 or 1, and is used for mutual exclusion, ensuring that only one process can access a resource at a time.

2. Counting Semaphore: Can have any non-negative integer value and is used to control access to a resource that has multiple instances, such as a fixed number of database connections.

Q4] What is a thread library?

--> A thread library is a collection of functions and routines that provide an application
programmer with an interface for creating, managing, and controlling threads within a
process. It abstracts the complexities of working with threads and provides an easy way to
implement multithreading in applications.
Thread libraries offer functions for:

1. Creating and terminating threads

2. Synchronizing threads

3. Scheduling threads

4. Managing thread attributes

Q5] What is synchronisation?

--> Synchronization in computer science refers to the coordination of multiple processes or threads to ensure they execute in a specific order without conflicting with each other, especially when they share resources like memory, files, or hardware. Synchronization is essential in concurrent systems to prevent issues such as race conditions, deadlocks, or inconsistent data.

# Common synchronization mechanisms include:

1. Locks/Mutexes: Used to enforce mutual exclusion, ensuring only one thread can access a resource at a time.

2. Semaphores: Counters that regulate access to resources by multiple threads, enabling both mutual exclusion and signaling between threads.

3. Monitors: High-level abstraction that provides a lock and condition variables to control thread execution.

4. Barriers: Synchronization points where threads must wait until all participating threads reach the barrier.

Q6] What is physical address space?

--> Physical address space refers to the range of all possible physical memory addresses
that can be accessed by the system's hardware, specifically the processor. It represents
the actual addresses used to access data stored in the system's physical memory (RAM).
The physical address space is determined by the size of the address bus in the processor.
For example, a 32-bit address bus can address up to 4 GB of memory (2³² addresses).

Q7] What is context switching?

--> Context switching is the process of saving the state of a currently running process or
thread and restoring the state of another process or thread to allow it to run. This
mechanism allows the CPU to switch between multiple processes or threads, enabling
multitasking and time-sharing in modern operating systems.

Q] What is a page?

--> In an operating system, a page is a fixed-size block of memory that is used in the context
of virtual memory management. Pages are part of how an OS efficiently manages memory
by dividing both physical memory (RAM) and virtual memory into chunks of equal size.

Q] Define the term dispatcher?

--> In an operating system, a dispatcher is a component of the CPU scheduler responsible for giving control of the CPU to the process selected by the scheduler. It handles the context switching, where it saves the state of the currently running process and loads the state of the next process to be executed.

Q] What is booting?

--> Booting is the process of starting up a computer and loading the operating system (OS)
into memory so that the system becomes ready for use. Booting involves a series of steps
that initialize hardware components and load essential software to make the computer
operational.

# There are two main types of booting:

1. Cold Booting (Hard Booting): The process of starting a computer from a completely powered-off state. It begins when the power button is pressed, initiating a sequence of steps to load the OS.

2. Warm Booting (Soft Booting): Restarting a computer that is already powered on, usually without turning off the power completely (e.g., by pressing "Restart" in the OS). This is done to reset the system, often to resolve issues or apply changes.

Q] What is a thread?

--> A thread in an operating system is the smallest unit of processing that can be scheduled
and executed by the CPU. It is a lightweight process that shares the same memory space
and resources of its parent process but operates independently. Threads allow multiple
tasks to run concurrently within a single application, improving responsiveness and
performance.

Q] List types of system calls.

--> 1. Process Control: Create and manage processes (e.g., fork(), exec(), wait(), exit()).

2. File Management: Create, open, read, write, and close files (e.g., open(), read(), write(), close(), unlink()).

3. Device Management: Interact with device drivers (e.g., ioctl(), read(), write() for devices).

4. Information Maintenance: Get or set process or system information (e.g., getpid(), getppid(), sysinfo()).

5. Communication: Facilitate communication between processes (e.g., pipe(), shmget(), msgget()).

6. Memory Management: Allocate and manage memory (e.g., mmap(), munmap(), brk()).

Q] State the role of the medium-term scheduler.

--> The medium-term scheduler (also known as the swapper) temporarily removes (swaps out) processes from main memory to secondary storage to reduce the degree of multiprogramming, and later swaps them back in so they can continue execution. It plays a crucial role in systems that employ multilevel queue scheduling or time-sharing.

Q] What is the CPU-I/O burst cycle?

--> The CPU-I/O burst cycle refers to the alternating pattern of execution time that a
process experiences while using the CPU and performing I/O (Input/Output) operations.
This cycle is a key concept in understanding process behavior in operating systems and
how resources are utilized effectively.

Q] What is a race condition?

--> A race condition is a situation in concurrent programming where two or more processes
or threads attempt to change shared data at the same time, leading to unpredictable or
erroneous results. The outcome of the execution depends on the relative timing of the
processes or threads, which can vary, making it difficult to reproduce the issue
consistently.

Q] Define response time?

--> Response time refers to the total time taken from when a user makes a request to when
the system responds to that request. In the context of computer systems, it is often
associated with the time taken by a system to process a user's input and produce the
corresponding output.

Q] What is page table?

--> A page table is a data structure used by the operating system to manage the mapping
between virtual addresses and physical addresses in a system that employs paging for
memory management. It is an essential component of the virtual memory system, allowing
processes to use virtual memory addresses that are translated to actual physical
addresses in RAM.

Q] What is segmentation?

--> Segmentation is a memory management technique used in operating systems that divides a process's memory into variable-sized segments based on the logical divisions of the program. Unlike paging, which divides memory into fixed-size blocks (pages), segmentation allows the division of memory into segments of different sizes, reflecting the program's structure and logical units.

Q] Define bootstrapping.
--> Bootstrapping is the process of starting a computer and loading the operating system
into memory from a powered-off state. The term is derived from the phrase "pulling oneself
up by one's bootstraps," reflecting the idea of starting from a minimal set of resources to
achieve a fully operational state.

Q] Explain POSIX Pthreads.

--> POSIX Pthreads (Portable Operating System Interface for Unix Threads) is a standard for
multithreading programming in UNIX-like operating systems. It provides a set of C
programming language types and procedure calls for creating and managing threads,
enabling developers to write applications that can perform multiple tasks concurrently.

Q] List the solutions to critical section problem.

--> The critical section problem arises in concurrent programming when multiple processes or threads need to access shared resources. To solve it, synchronization mechanisms ensure that only one process or thread can enter its critical section at a time, preventing race conditions. Common solutions include:

1. Peterson's algorithm (a software solution for two processes)

2. Mutex locks

3. Semaphores

4. Monitors

5. Hardware instructions such as test-and-set and compare-and-swap

Q] What do you mean by page hit?

--> A page hit refers to an event in a computer system's memory management where a
requested page is found in the page table and is currently loaded in RAM (main memory).
When a process tries to access data that is already present in physical memory, it results
in a page hit, which means that the system can quickly retrieve the required information
without needing to access slower storage devices like a hard disk or SSD.

Q] What is a kernel?

--> The kernel is the core component of an operating system that manages system
resources and facilitates communication between hardware and software components. It
acts as a bridge between applications and the underlying hardware, providing essential
services and ensuring that different parts of the system can operate smoothly.

Q] What is a ready queue?

--> The ready queue is a data structure used in operating systems to hold processes that
are ready to be executed by the CPU but are currently waiting for CPU time. It is a crucial
part of process scheduling and helps the operating system manage the execution of
multiple processes efficiently.

Q] What is virtual memory?

--> Virtual memory is a memory management technique used by operating systems that
allows a computer to use more memory than is physically available in RAM. It provides an
abstraction of a large logical memory space, enabling applications to access a larger
address space while using the actual physical memory more efficiently.

Q] Explain system call related to device manipulation.

--> System calls related to device manipulation are functions provided by the operating
system that allow applications to interact with hardware devices, such as disks, printers,
network interfaces, and other peripherals. These system calls enable programs to perform
various operations, such as reading from and writing to devices, controlling device settings,
and managing device states.

Q] Write short note on multilevel queue scheduling.

--> Multilevel Queue Scheduling is a CPU scheduling algorithm that divides the ready
queue into multiple separate queues, each with its own scheduling algorithm. This
approach allows the operating system to prioritize different types of processes based on
their characteristics or requirements. Each queue may have its own scheduling policies,
making the system more efficient in handling diverse workloads.

Q] Explain the producer-consumer problem.

--> The Producer-Consumer Problem is a classic synchronization problem in computer science that involves coordinating processes that produce and consume data using a shared buffer. It illustrates the challenge of ensuring that producers do not overflow the buffer and consumers do not consume items that are not available, while maintaining data consistency in a concurrent environment.

Q] Explain paging in brief.

--> Paging is a memory management technique used by operating systems to manage how
processes are stored and retrieved from physical memory. It allows for efficient use of
memory by dividing both physical memory and a process’s virtual memory into fixed-size
blocks called pages and frames.

Q] Write the difference between preemptive and non-preemptive scheduling.

-->

1. Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.

2. Interrupt: In preemptive scheduling, a process can be interrupted in between. In non-preemptive scheduling, a process cannot be interrupted until it terminates itself or its time is up.

3. Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a shorter burst time may starve.

4. Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not.

5. Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.

6. Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling has none.

Q] What is operating system? List objectives of operating system.

--> An operating system (OS) is system software that acts as an intermediary between
computer hardware and application software. It manages hardware resources, provides
services for applications, and enables users to interact with the system. The operating
system controls all the basic functions of a computer, such as managing memory,
processing tasks, handling input and output devices, and providing a user interface.

# Objectives :

1. Process Management

2. Memory Management

3. Device Management

4. File System Management

5. Security and Access Control

6. Network Management

7. User Interface

8. System Performance

9. Error Detection and Handling


Q] Compare LFU and MFU.

-->

1. MFU (Most Frequently Used) replaces the page that has been accessed the maximum number of times, while LFU (Least Frequently Used) replaces the page that has been accessed the minimum number of times.

2. Since the most frequently used page is replaced, MFU tends to increase the number of page faults, because that page has a higher chance of being accessed again in the future. Since the least frequently used page is replaced, LFU tends to decrease the number of page faults, because that page has a lower chance of being accessed again.

Q] What is purpose of scheduling algorithm.

--> The purpose of a scheduling algorithm in an operating system is to determine the order
in which processes are assigned to the CPU for execution. Since multiple processes may
compete for CPU time, the scheduling algorithm is responsible for optimizing the
allocation of CPU resources, ensuring efficient system performance, responsiveness, and
fairness.

Q] Write advantages of distributed operating systems.

--> Distributed operating systems manage a collection of independent computers that work together as a single unified system. These systems share resources, tasks, and processes across multiple nodes (computers) to achieve better efficiency and reliability. Here are the main advantages of distributed operating systems:

1. Resource Sharing

2. Scalability

3. Reliability and Fault Tolerance

4. Performance Improvement

5. Modularity
6. Increased Availability

7. Flexibility

8. Cost-Effectiveness

9. Geographic Distribution

10. Better Utilization of Resources

Q] List out functions of memory management.

--> Memory management is a critical function of an operating system that handles the
allocation, organization, and management of memory resources for processes running on
a computer. Here are the key functions of memory management:

1. Memory Allocation:

The OS allocates memory to processes when they are created and deallocates it when
they finish execution.

2. Memory Deallocation:

When a process terminates or no longer needs the allocated memory, the OS frees up that
memory space, making it available for other processes.

3. Tracking Memory Usage:

The OS keeps track of which parts of memory are in use, which are free, and which are
reserved.

4. Address Mapping:

Memory management involves translating logical addresses (used by programs) into physical addresses (used by the hardware).

5. Segmentation and Paging:

The OS may implement segmentation or paging techniques to manage memory more efficiently.

6. Swapping:
The OS can move processes between main memory and secondary storage (disk) to
optimize the use of physical memory.

7. Memory Protection:

The OS provides mechanisms to ensure that one process cannot access the memory
space of another process.

8. Fragmentation Management:

Memory management includes techniques to minimize and manage fragmentation.

9. Virtual Memory Management:

The OS manages virtual memory, allowing processes to use more memory than is
physically available.

10. Garbage Collection:

Some operating systems implement garbage collection to automatically reclaim memory that is no longer in use by the application.

Q] Differentiate between independent and dependent processes.

-->

Independent Process: An independent process is one whose task does not depend on, and is not affected by, any other process.

Dependent (Cooperating) Process: A dependent process is one that depends on, or can be affected by, other processes. Such processes work together to achieve a common task in an operating system.

Q] With the help of diagram describe process states.

--> In an operating system, a process transitions through five primary states: New, Ready,
Running, Blocked/Waiting, and Terminated. The Ready state holds processes waiting for
CPU allocation, while the Running state contains the active process. A process may enter
the Blocked state when waiting for I/O operations. Upon completion, it moves back to
Ready or terminates.
Q] What is fragmentation? Explain with all its types.

--> Fragmentation occurs in memory management when free memory space is scattered
and not contiguous, making it difficult to allocate large blocks of memory, even though
enough space exists.

#Types of Fragmentation:

1. Internal Fragmentation:

Occurs when fixed-size memory blocks are allocated, and the allocated memory is larger than required, leaving unused space within the allocated block.

2. External Fragmentation:

Occurs when free memory is divided into small non-contiguous blocks, making it impossible to allocate large memory requests despite having sufficient total free space.

# Techniques like compaction and paging are used to minimize fragmentation.

Q] What is a thread? Explain any 2 multithreading models in brief with diagram.

--> A thread is the smallest unit of a process that can be scheduled and executed by the
CPU. It represents a single sequence of instructions within a process.

Multithreading is the ability of a CPU or an operating system to execute multiple threads concurrently within a single process. Each thread runs independently, sharing the process's resources (like memory), allowing for parallel execution, improved responsiveness, and efficient CPU utilization.

Two common multithreading models are:

1. Many-to-One Model: Maps multiple user-level threads to a single kernel thread.

2. One-to-One Model: Maps each user-level thread to a separate kernel thread.

Q] Write short note on logical address and physical address binding?

--> A logical address is the address generated by the CPU during program execution, which refers to a location in the program's virtual memory. It is also known as a virtual address. A physical address, on the other hand, refers to the actual location in the computer's physical memory (RAM).

Address binding is the process of mapping a logical address to a physical address. This mapping occurs either at compile time, load time, or execution time, depending on how and when the program is loaded into memory.

Q] Explain process state in brief.

--> The process state diagram represents the lifecycle of a process in an operating system.
Processes transition between these states based on scheduling and system events. A
process transitions between five key states:

1. New: Process is being created.

2. Ready: Process is waiting to be assigned to the CPU.

3. Running: Process is executing.

4. Waiting/Blocked: Process is waiting for an event (e.g., I/O).

5. Terminated: Process has finished execution.

Q] Explain reader-writer problem in brief.

--> The Reader-Writer problem addresses process synchronization when multiple readers
and writers access a shared resource (like a file or database). The challenge is to allow
multiple readers to access the resource simultaneously but ensure exclusive access for a
writer. Solutions aim to avoid race conditions, ensuring data integrity by preventing
simultaneous writing while still maximizing concurrency for readers.
Q] Describe PCB with all its fields.

--> A Process Control Block (PCB) is a data structure used by the operating system to store
all the information about a process. Key fields in a PCB include:

1. Process ID (PID): Unique identifier for the process.

2. Process State: Current state (e.g., ready, running, waiting).

3. Program Counter: Address of the next instruction to execute.

4. CPU Registers: Register values for process execution.

5. Memory Management Information: Details about memory allocation.

6. I/O Status Information: Status of I/O devices and operations.

7. Scheduling Information: Priority and scheduling parameters.

Q] Define PCB.

--> A Process Control Block (PCB) is a data structure maintained by the operating system
to store crucial information about a process. It includes fields like the process ID, state,
program counter, CPU registers, memory management details, I/O status, and scheduling
information, facilitating efficient process management and scheduling.

Q] Which three requirements must be satisfied while designing a solution to the critical section problem? Explain each in detail.

--> 1. Mutual Exclusion: Only one process can be in its critical section at any time, preventing simultaneous access to shared resources.

2. Progress: If no process is in its critical section and some processes wish to enter, only those processes not in their remainder sections may participate in deciding which enters next, and this decision cannot be postponed indefinitely.

3. Bounded Waiting: There must be a limit on the number of times other processes can enter their critical sections after a process has requested entry and before that request is granted.

Q] Explain the bounded buffer problem in detail.

--> The bounded buffer problem, also known as the producer-consumer problem, involves
a fixed-size buffer shared between a producer and a consumer. The producer adds items to
the buffer, while the consumer removes them. The challenge lies in synchronizing access:
the producer must wait if the buffer is full, and the consumer must wait if it is empty,
ensuring proper coordination and preventing data loss or corruption.

Q] Differentiate between client server and peer to peer computing environments?

-->

1. In a client-server network, clients and servers are differentiated; specific servers and clients are present. In a peer-to-peer network, clients and servers are not differentiated.

2. A client-server network focuses on information sharing, while a peer-to-peer network focuses on connectivity.

3. In a client-server network, a centralized server is used to store the data, while in a peer-to-peer network, each peer has its own data.

4. In a client-server network, the server responds to the services requested by the client, while in a peer-to-peer network, each and every node can both request and respond to services.

5. Client-server networks are costlier than peer-to-peer networks.

6. Client-server networks are more stable than peer-to-peer networks, which become less stable as the number of peers increases.
Q] With the help of diagram describe swapping.

--> Swapping is a memory management technique where processes are moved between
main memory and disk storage to optimize memory usage. When memory is full, a process
can be swapped out to the disk, freeing up space for another process. When needed, it can
be swapped back into memory. This allows multiple processes to share limited memory
resources effectively.

Q] Explain the following memory allocation algorithms.

1. First fit:

--> The First Fit algorithm allocates memory by searching from the beginning of the free
memory list and assigning the first block that is large enough for the requested size. This
approach offers fast allocation but may lead to fragmentation over time as small unused
blocks accumulate.

2. Best fit:

--> The Best Fit algorithm allocates the smallest available memory block that meets a
process's request, minimizing wasted space. It scans the entire list of free blocks to find
the most suitable fit. While it reduces fragmentation, it can be slower due to the exhaustive
search required.

3. Worst fit:

--> The Worst Fit algorithm allocates the largest available memory block to satisfy a
request. By searching the entire list of free blocks, it aims to leave the largest possible
leftover space for future allocations. While it can reduce fragmentation in some cases, it
may lead to inefficient memory use.

4. Next fit:

--> The Next Fit algorithm allocates memory by searching for the first available block that
fits a request, starting from the last allocated position. If it reaches the end of the memory,
it wraps around to the beginning. This approach improves efficiency by reducing the search
time for free blocks.
------X------
