Assignment set 1
SEMESTER - 1
Question 1a) Describe the six-layered approach of the operating system pointwise, in brief.
Question 2a) Describe the advantages and disadvantages of threads over multiple processes in brief.
Advantages and Disadvantages of Threads over Multi processes
Advantages of Threads:
Less Overhead: Threads incur less overhead than processes because they share the memory and
resources of their parent process, which makes creating them, communicating between them, and
switching between tasks cheaper.
Switching Tasks Faster: Moving from one thread to another is usually quicker than moving from
one process to another, thanks to their shared memory and resources, which makes it easier to
manage and reload data.
Easy Sharing of Resources: Threads in the same process can easily access common resources
like memory and file handles, making communication more straightforward and reducing the
need for duplicate resources.
Better Response: Applications that use multiple threads can tackle different tasks at the same
time. For instance, one thread can be handling user requests while another takes care of
background processes, which makes the app more responsive.
Disadvantages of Threads:
Lack of Isolation: Because threads share one address space, an error in a single thread can corrupt
shared data or crash the entire process, whereas separate processes are protected from each other.
Harder Debugging: Shared state invites race conditions and deadlocks, which make multithreaded
programs more difficult to test and debug than independent processes.
Enhanced Task Handling (hybrid model): The hybrid (many-to-many) threading model maps many
user-level threads onto a smaller set of kernel-level threads, increasing the system's capacity to
run parallel tasks and improving application performance. The goal of this strategy is to combine
the best features of user-level and kernel-level threads, balancing flexibility and efficiency in
applications that handle multiple threads.
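As a sketch of the shared-memory point above, two Python threads can append to one shared list with only a lightweight lock between them (the names `worker` and `results` are illustrative, not from the text):

```python
import threading

# Shared state: both threads read and write this list without any copying,
# illustrating the "easy sharing of resources" advantage.
results = []
lock = threading.Lock()  # protects the shared list

def worker(name, items):
    for item in items:
        with lock:                  # cheap synchronization via shared memory
            results.append((name, item * 2))

t1 = threading.Thread(target=worker, args=("t1", [1, 2, 3]))
t2 = threading.Thread(target=worker, args=("t2", [4, 5, 6]))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(results))  # 6 items, produced cooperatively in one address space
```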
Question 3b) Please describe the Multi-level Queue Scheduling & Multi-level Feedback Queue
Scheduling.
Multi-Level Queue Scheduling:
In Multi-Level Queue Scheduling, the ready queue is segmented into multiple separate queues,
each operating under its unique scheduling strategy. Jobs are allocated to different queues based
on their characteristics, like priority or type (for instance, interactive or batch). Each queue uses
its specific scheduling method, such as First-Come-First-Served (FCFS) or Round Robin. The
system organizes tasks by picking from these queues according to a set priority order. Tasks with
higher priority are dealt with first, ensuring that pressing jobs are completed swiftly. However,
this can lead to delays for lower-priority tasks if the higher-priority queues are frequently
occupied.
Multi-Level Feedback Queue Scheduling:
Multi-Level Feedback Queue Scheduling builds on the fundamental Multi-Level Queue
Scheduling by permitting processes to transition between various priority queues. Initiated in a
high-priority queue, processes move down to a lower one if their CPU requirements exceed the
original allocation. This adjustment feature aids in optimizing CPU usage between quick,
interactive tasks and more demanding, longer processes. The feedback system guarantees that
processes aren't stuck in lower priority for extended periods and can move back up if they spend
enough time in the system. This approach is designed to boost responsiveness and equity by
adjusting to the fluctuating needs of processes.
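The demotion behaviour described above can be sketched as a toy simulation; the three quanta (2, 4, and 8 time units) are assumed values for illustration, not a standard:

```python
from collections import deque

def mlfq(jobs, quanta=(2, 4, 8)):
    """Toy multi-level feedback queue: jobs start in the top queue and are
    demoted one level each time they exhaust their quantum. Returns the
    order of (job, queue_level, time_run) slices. `jobs` maps name -> burst."""
    queues = [deque() for _ in quanta]
    for name, burst in jobs.items():
        queues[0].append([name, burst])
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(remaining, quanta[level])
        trace.append((name, level, run))
        remaining -= run
        if remaining > 0:  # used the full quantum: demote one level
            queues[min(level + 1, len(queues) - 1)].append([name, remaining])
    return trace

print(mlfq({"A": 3, "B": 6}))
```

The short job A finishes early in the upper queues, while the longer job B sinks and takes its remaining CPU time at a lower level, matching the behaviour described above.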
Assignment set 2
Question 1a) Please Describe three requirements to satisfy the Critical Section Problem.
Three Key Elements for Addressing the Critical Section Problem
The Critical Section Problem deals with the challenge of managing access to shared resources
among processes or threads to avoid conflicts. To successfully tackle this issue, the following
three elements must be in place:
Mutual Exclusion: This element ensures that there's only one process or thread present in the
critical section at any given time. By preventing multiple processes from accessing shared
resources concurrently, mutual exclusion is crucial in avoiding data corruption or
inconsistencies.
Progress: The concept of progress ensures that, if no process is currently active in the critical
section and others wish to join, the system eventually decides which one will proceed. This
ensures that no process is left waiting indefinitely for access to the critical section.
Bounded Waiting: Bounded waiting ensures that the waiting time for a process requesting entry
to the critical section has a set limit. Specifically, once a request is made to enter the critical
section, it should be ensured that access is guaranteed within a certain time frame, preventing
any process from being held up indefinitely.
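Peterson's algorithm is a classic two-thread solution that satisfies all three requirements. The sketch below relies on CPython's effectively sequential interleaving; real implementations need hardware atomic instructions or memory barriers:

```python
import threading
import time

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to yield
counter = 0             # the shared resource

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True
    turn = other                          # politely let the other go first
    while flag[other] and turn == other:  # bounded wait: other resets its flag
        time.sleep(0)                     # yield while busy-waiting

def leave(i):
    flag[i] = False

def worker(i):
    global counter
    for _ in range(2000):
        enter(i)
        counter += 1                      # critical section: only one thread here
        leave(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: mutual exclusion held across all increments
```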
Question 1b) What are the attractive properties of Semaphore?
The Advantages of Semaphores in Concurrent Programming
Semaphores are a versatile and common synchronization tool in concurrent programming,
known for their benefits such as:
Versatility: Semaphores are able to handle a variety of synchronization tasks, including mutual
exclusion and signaling between processes. Their adaptability makes them ideal for different
synchronization needs.
Ease of Use: Semaphores offer a simple way to synchronize processes through two basic
operations: wait and signal. This simplicity makes them easy to integrate and manage within
concurrent systems.
Deadlock Prevention: When used properly, semaphores can help prevent deadlocks by regulating
access to shared resources, which is key in avoiding the cyclic waiting situation that results in
deadlocks.
Efficient Resource Management: Semaphores let a process block until a resource becomes
available instead of busy-waiting, which improves resource utilization and overall system
performance. They also help ensure that resources are shared fairly among competing processes.
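A minimal sketch of semaphore-based resource management using Python's `threading.Semaphore`, where `acquire` plays the role of wait and `release` of signal (the pool size of 2 is an arbitrary example):

```python
import threading
import time

pool = threading.Semaphore(2)        # at most 2 threads inside at once
active, peak = 0, 0
guard = threading.Lock()             # protects the two counters

def use_resource():
    global active, peak
    with pool:                       # wait (P): blocks while 2 threads are inside
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)             # pretend to use the resource
        with guard:
            active -= 1              # leaving; `with pool` then signals (V)

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print("peak concurrent users:", peak)   # never exceeds the semaphore count of 2
```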
Question 1c) Brief description of the two options for breaking deadlocks.
Approaches to Resolving Deadlocks
Deadlocks happen when processes are in a perpetual wait for resources possessed by others. To
address deadlocks, there are two primary methods:
Preemption: This strategy takes resources away from one or more processes and reallocates them
to others, breaking the circular dependency. The system periodically checks for deadlocks and
intervenes by reallocating resources. This is effective, but it can be difficult to implement and
may lead to starvation if resources are repeatedly taken from the same processes.
Process Termination: This method involves either terminating one or more of the processes
involved in the deadlock or selectively ending processes until the deadlock is eliminated. While
it resolves the deadlock, it could result in losing work and requires careful planning to mitigate
negative impacts on system operations and process recovery.
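Both recovery options presuppose that a deadlock has been detected. A common check is looking for a cycle in a wait-for graph, sketched here with hypothetical process names:

```python
# Detecting a deadlock as a cycle in a wait-for graph, the check that precedes
# either recovery option. An edge P -> Q means "process P waits for process Q".
def has_cycle(graph):
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                       # back edge: cycle => deadlock
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph))

print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True: P1 and P2 wait on each other
print(has_cycle({"P1": ["P2"], "P2": []}))      # False: no circular wait
```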
Question 2a) Please describe the two page table implementation concepts in brief.
The Two Page Table Implementation Concepts
Page tables map virtual addresses to physical memory addresses; two common implementations are:
Single-Level Page Table: This strategy utilizes a single, uniform array where each entry directly
corresponds to a virtual page being mapped to a physical frame. Although straightforward, this
approach may lead to inefficiency in handling expansive address spaces due to the increasing
table size that demands considerable memory.
Multi-Level Page Table: This technique employs
a layered system to economize memory usage. Virtual addresses are divided into segments, each
pointing to various levels of page tables. This structured method aids in the efficient
management of vast address spaces by reserving table space only for the necessary sections of
the address space. By segmenting the address mapping into multiple levels, it reduces memory
overhead in comparison to a single large table.
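A toy two-level translation, assuming a made-up 20-bit virtual address split into a 5-bit outer index, a 5-bit inner index, and a 10-bit page offset; only the regions actually in use get inner tables, which is where the memory saving comes from:

```python
OUTER_BITS, INNER_BITS, OFFSET_BITS = 5, 5, 10  # assumed toy address layout

def translate(outer_table, vaddr):
    """Walk a two-level page table: outer index -> inner table -> frame."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    inner_idx = (vaddr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer_idx = vaddr >> (OFFSET_BITS + INNER_BITS)
    inner_table = outer_table.get(outer_idx)    # absent => unmapped region
    if inner_table is None or inner_idx not in inner_table:
        raise LookupError("page fault")
    frame = inner_table[inner_idx]
    return (frame << OFFSET_BITS) | offset      # physical address

# Only outer index 0 is in use, so only one inner table exists:
page_table = {0: {0: 7, 1: 3}}                  # virtual page -> physical frame
print(hex(translate(page_table, 0x0403)))       # inner idx 1, offset 3
```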
Question 2b) Describe the LRU page replacement algorithm in brief.
LRU Approach for Page Replacement
The Least Recently Used (LRU) approach selects which page to evict from memory when a page
fault occurs. LRU operates on the assumption that the page unused for the longest time is the
least likely to be needed soon, so it is replaced. The system tracks the usage of pages through
various means, such as counters or lists. When a new page needs to be inserted, LRU identifies
and deletes the page with the longest tenure of inactivity. This strategy prioritizes keeping pages
that are frequently accessed in memory but comes with substantial overhead for monitoring
usage. Implementations can leverage hardware support for counters or use software elements like
stacks and linked lists for organizing the order of page access.
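A minimal LRU simulation using an ordered dictionary as the usage list (the reference string below is an arbitrary example):

```python
from collections import OrderedDict

def lru_faults(reference_string, frames):
    """Simulate LRU replacement; returns the number of page faults.
    The OrderedDict keeps pages ordered from least to most recently used."""
    memory = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)          # touched: now most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], frames=3))  # 6
```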
Question 2c) Brief discussion over the Process of Encryption and the two methods.
Methods and Steps for Encryption
Encryption safeguards data by transforming it into an unreadable format through an algorithm
and a key. The process includes:
1) Encryption Algorithm: a mathematical method that converts plaintext into ciphertext.
2) Key: the information necessary for both encoding and decoding the data, essential for security.
3) Decryption: reversing the encryption to bring back the original data using the correct key.
There are two primary methods. Symmetric Encryption employs the same key for both encryption
and decryption, ensuring efficiency at the cost of secure key exchange (for example, AES).
Asymmetric Encryption utilizes a pair of keys (public and private) to augment security,
particularly for communication over untrusted networks (for instance, RSA). Each approach
strikes a balance between efficiency and security requirements.
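To illustrate only the "same key both ways" property of symmetric encryption, here is a toy XOR cipher; it is deliberately insecure and merely stands in for a real algorithm such as AES:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the key (NOT secure)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                      # must be shared secretly: the hard part
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)   # encryption
recovered = xor_cipher(ciphertext, key)   # decryption uses the SAME key
print(recovered == plaintext)  # True
```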
Question 3a) Please describe the two forms of Encryption in a distributed environment.
Types of Encryption in a Distributed Setup
In distributed environments, protecting data is crucial, and encryption offers two primary
approaches:
Symmetric Encryption: This approach utilizes a single key for both encryption and decryption.
Before communicating, both the sender and recipient must exchange this key securely.
Symmetric encryption is fast and efficient, making it an excellent choice for handling large
volumes of data. However, the key distribution and management pose challenges. If the key falls
into the wrong hands, all encrypted data is vulnerable. Well-known symmetric encryption
algorithms are AES (Advanced Encryption Standard) and DES (Data Encryption Standard),
praised for their effectiveness and security.
Asymmetric Encryption, also known as public-key encryption, employs a pair of keys: a public
key for encryption and a private key for decryption. The public key can be freely distributed,
while the private key is kept secret. This technique is particularly beneficial for secure
communications across untrusted networks, as it ensures that data can only be decrypted with the
private key, even if the public key is known. Well-recognized asymmetric encryption methods
include RSA (Rivest-Shamir-Adleman) and ECC (Elliptic Curve Cryptography), providing
strong security but at the cost of slightly slower performance compared to symmetric methods.
Question 3b) On multiprocessor classification:
I. How did Flynn classify computer systems?
II. What are the classifications of multiprocessor systems based on memory and access delays?
I. Flynn's Classification of Computer Systems
Flynn's taxonomy organizes computer systems based on their processing instructions and data
flows:
SISD (Single Instruction Stream Single Data Stream): Traditional single-processor systems that
execute a single instruction on a single data item at a time. This represents the foundational
computing model and is observed in early microprocessors.
SIMD (Single Instruction Stream Multiple Data Stream): A single instruction executes on
multiple data items simultaneously. It's effective for tasks involving data parallelism, such as
vector processors and certain graphics processing units (GPUs), which excel at tasks like image
and signal processing.
MISD (Multiple Instruction Stream Single Data Stream): Multiple instructions are applied to a
single data stream. Although less common, it finds application in fault-tolerant systems where
redundancy guarantees reliability, like certain high-availability computing environments.
MIMD (Multiple Instruction Stream Multiple Data Stream): Multiple instructions operate on
multiple data streams at the same time. This model is characteristic of contemporary
multiprocessor and distributed systems, allowing for diverse tasks to be processed concurrently
and thereby enhancing computational efficiency.
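As a loose software analogy for MIMD (the taxonomy properly describes hardware), two different instruction streams can run on different data concurrently with Python threads; the function and data names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Loose MIMD analogy: two distinct instruction streams (different functions)
# operating on distinct data sets at the same time.
def stream_a(xs):
    return [x * 2 for x in xs]      # one instruction stream: doubling

def stream_b(xs):
    return sum(xs)                  # a different stream: summation

with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(stream_a, [1, 2, 3])
    fb = pool.submit(stream_b, [10, 20])

print(fa.result(), fb.result())     # [2, 4, 6] 30
```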