
SIKKIM MANIPAL UNIVERSITY – ONLINE

NAME: PRAKHAR SHARMA

ROLL NUMBER: 2419100419

PROGRAM: MASTER OF COMPUTER APPLICATION (MCA)

SEMESTER - 1

COURSE NAME: Operating System

CODE NAME: OMCA103


Assignment set 1

Question 1a) Describe the six-layered approach of the operating system pointwise, in brief.

Multilayered Structure in Operating Systems


Operating systems can be organized as a six-layered structure, which provides a clear framework for
understanding the different parts and how they work together. Each layer has a specific role in
managing system resources and providing services:
Hardware Layer:
This lowest layer comprises the physical components of a computer system, such as the CPU,
RAM, disk drives, and peripheral devices. It carries out the raw computation and storage on
which every higher layer depends.
Kernel Layer:
At the heart of the operating system lies the kernel. It's responsible for managing the computer's
hardware resources and overseeing system activities. This includes important tasks such as
organizing processes, managing memory, and controlling devices to ensure the system runs
smoothly and reliably.
Device Driver Layer:
Device drivers act as intermediaries between the operating system and hardware devices.
They translate the operating system's commands into instructions the hardware understands,
ensuring that peripheral devices such as printers and graphics cards work properly.
System Library Layer:
This layer holds a variety of system libraries that supply key functions and tools for applications.
It allows software to perform certain tasks by interacting with the operating system and hardware
through established calls.
User Interface Layer:
The user interface (UI) layer is the interface through which users engage with the system. It
includes both graphical user interfaces (GUIs) and command-line interfaces (CLIs) for managing
user input and output, making it possible for users to run programs and commands.
Application Layer:
The application layer is where the programs used directly by the user run. Applications,
such as word processors and web browsers, depend on system libraries and OS services to do
their work and provide functionality to the user.
Question 1b) Write a short note on Spooling and BIOS.

Note on Spooling and Basic Input/Output System (BIOS)


Spooling:
Spooling, short for "Simultaneous Peripheral Operations On-Line," is a technique for managing
the exchange of data between the central processing unit (CPU) and slow peripheral devices
such as printers. Data is temporarily held in a queue on disk before being forwarded to the
output device, which allows the CPU to continue executing other tasks while the printer or
other peripheral works through the queued data. By letting the CPU hand off multiple print
jobs in sequence without waiting on the device, spooling boosts the efficiency of the system.
It ensures that the system runs smoothly and that peripheral devices consume data as soon as
they are ready, which in turn improves the overall throughput of the system.
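A minimal Python sketch of the idea follows; the worker function, job names, and timing are invented for illustration. A queue decouples the fast producer (the CPU submitting jobs) from the slow consumer (the printer draining them):

import queue
import threading
import time

# Jobs are queued on behalf of the CPU; a separate worker drains the
# queue at the slow device's own pace.
spool = queue.Queue()

def printer_worker():
    while True:
        job = spool.get()
        if job is None:          # sentinel: no more jobs
            break
        time.sleep(0.1)          # simulate the slow peripheral
        print(f"printed: {job}")

worker = threading.Thread(target=printer_worker)
worker.start()

# The "CPU" submits jobs instantly and moves on to other work.
for doc in ["report.pdf", "invoice.txt", "photo.png"]:
    spool.put(doc)
print("CPU is free to continue while the spooler prints")

spool.put(None)                  # tell the worker to stop
worker.join()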

Basic Input/Output System (BIOS):


BIOS is a software component that is in charge of initializing and testing the hardware during the
start-up phase of a computer. It conducts a Power-On Self-Test (POST) to verify the
functionality of the system’s hardware components, including the CPU, memory, and storage
devices, to ensure they are working properly. After the POST is finished, BIOS identifies and
loads the boot loader from the storage device, which then initiates the operating system. BIOS
acts as a fundamental link between the hardware and the operating system, facilitating the
configuration of hardware settings through a setup utility. Stored in non-volatile memory, BIOS
settings are preserved even when the computer is switched off, and can be retrieved for hardware
configuration and troubleshooting.

Question 2a) Describe the advantages and disadvantages of threads over multiple processes in brief.
Advantages and Disadvantages of Threads over Multiple Processes
Advantages of Threads:
Lower Overhead: Threads incur less overhead than processes because they share the memory and
resources of their parent process, which makes creating them, communicating between them, and
switching among them cheaper.
Faster Context Switching: Switching from one thread to another is usually quicker than
switching between processes, because threads share an address space and far less state has to
be saved and restored.

Easy Sharing of Resources: Threads in the same process can easily access common resources
like memory and file handles, making communication more straightforward and reducing the
need for duplicate resources.
Better Response: Applications that use multiple threads can tackle different tasks at the same
time. For instance, one thread can be handling user requests while another takes care of
background processes, which makes the app more responsive.

Disadvantages of Threads:


Synchronization Issues: Because threads share memory, concurrent access to shared data can
cause race conditions, where the outcome depends on the unpredictable interleaving of threads;
this makes coding more complex (see the sketch after this list).
Challenges in Debugging: Debugging multi-threaded programs is often more difficult than
single-threaded ones because issues like data races and deadlocks can be more common.
Competition for Resources: When threads fight over resources, it can slow down the system due
to conflicts and delays.
Security Risks: The shared memory in threads means a vulnerability in one can affect the
operations and data of all threads in the same process.
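To make the synchronization issue concrete, here is a small Python sketch; the counter and iteration counts are invented. Two threads perform an unprotected read-modify-write on shared memory; because counter += 1 is not atomic, updates can be lost, and wrapping the update in a lock removes the race. Whether lost updates actually appear on a given run depends on the interpreter and timing:

import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1             # load, add, store: another thread can
                                 # interleave between these three steps

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:               # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(200_000,))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # may print less than 400000: updates lost to the race

Substituting safe_increment for unsafe_increment always yields 400000, at the cost of the locking overhead discussed above.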
Question 2b) Describe the points of Combined ULT/KLT Approaches.
Integration of User-Level Threads (ULT) and Kernel-Level Threads (KLT) Strategies
This strategy combines user-space libraries managing User-Level Threads (ULTs) with
Kernel-Level Threads (KLTs) managed by the kernel, to gain both performance and
flexibility:
Advanced Adaptability: User-space libraries for ULTs enable seamless customization of thread
operations without interference from the kernel, thus offering greater adaptability in thread
management.
Better Performance Utilization: The kernel-space control of KLTs optimizes the performance of
multi-core processors through superior scheduling and thread management across the available
cores.
Combined Scheduling Advantages: This strategy lets ULTs be mapped onto KLTs, combining the
strengths of kernel-based scheduling with the flexibility of user-level thread management to
boost overall performance.
Improved Resource Allocation: By multiplexing many ULTs onto a smaller number of KLTs, the
system lowers the burden of kernel-level thread management while still benefiting from
operating-system-managed thread execution.

Enhanced Task Handling: The hybrid model allows for the execution of numerous user-space
threads on kernel-space threads, thereby augmenting the system’s capacity to manage parallel
tasks and enhance the performance of applications.
The ultimate goal of this threading strategy is to blend the best features of both user-level and
kernel-level threads to achieve an equilibrium between adaptability and efficiency in applications
that handle multiple threads.

Question 3a) Describe the Common Scheduling criteria.


Common Scheduling Criteria
Scheduling criteria evaluate how effectively an operating system's CPU scheduler manages the
execution of processes. These metrics are essential for assessing the system's ability to
handle and process work efficiently. The key criteria include:
CPU Utilization: This metric tracks the percentage of time the CPU is busy executing work.
High utilization, ideally approaching 100%, indicates efficient use of the CPU's capacity.
Throughput: Throughput is the number of processes completed per unit of time. A higher
throughput indicates a more effective scheduler, capable of completing more work within a
given timeframe.
Turnaround Time: Turnaround time is the total duration from the submission of a process until
it finishes. This includes time waiting in the ready queue, execution on the CPU, and I/O.
Shorter turnaround times reflect faster processing.
Waiting Time: This is the total time a process spends in the ready queue before being granted
the CPU. Shorter waiting times make the system more responsive, particularly for applications
that need immediate attention.
Response Time: Response time is the interval from the submission of a request until the
system produces its first response. It is vital for interactive systems, where quick
responses keep users satisfied.
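As a worked example, the sketch below computes turnaround and waiting times for three hypothetical processes scheduled First-Come-First-Served; the arrival and burst times are assumed for illustration:

# turnaround = completion - arrival; waiting = turnaround - burst
processes = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]  # (name, arrival, burst)

clock = 0
for name, arrival, burst in processes:      # FCFS: run in arrival order
    clock = max(clock, arrival) + burst     # completion time of this job
    turnaround = clock - arrival
    waiting = turnaround - burst
    print(f"{name}: completion={clock}, turnaround={turnaround}, waiting={waiting}")

# P1: completion=8,  turnaround=8,  waiting=0
# P2: completion=12, turnaround=11, waiting=7
# P3: completion=14, turnaround=12, waiting=10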

Question 3b) Please describe the Multi-level Queue Scheduling & Multi-level Feedback Queue
Scheduling.
Multi-Level Queue Scheduling:
In Multi-Level Queue Scheduling, the ready queue is segmented into multiple separate queues,
each operating under its unique scheduling strategy. Jobs are allocated to different queues based
on their characteristics, like priority or type (for instance, interactive or batch). Each queue uses
its specific scheduling method, such as First-Come-First-Served (FCFS) or Round Robin. The
system organizes tasks by picking from these queues according to a set priority order. Tasks with
higher priority are dealt with first, ensuring that pressing jobs are completed swiftly. However,
this can lead to delays for lower-priority tasks if the higher-priority queues are frequently
occupied.
Multi-Level Feedback Queue Scheduling:
Multi-Level Feedback Queue Scheduling builds on the fundamental Multi-Level Queue
Scheduling by permitting processes to transition between various priority queues. Initiated in a
high-priority queue, processes move down to a lower one if their CPU requirements exceed the
original allocation. This adjustment feature aids in optimizing CPU usage between quick,
interactive tasks and longer, more demanding processes. The feedback mechanism also guards
against starvation: through aging, a process that has waited in a lower-priority queue for too
long can be promoted back up. This approach improves responsiveness and fairness by adapting
to the changing behavior of processes.
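The following is a deliberately simplified Python sketch of the feedback idea, with two queues and invented job names, burst times, and quanta (aging is omitted). Jobs start at the high-priority level with a small quantum; a job that exhausts its quantum is demoted:

from collections import deque

high = deque([("editor", 3), ("compile", 12), ("shell", 2)])
low = deque()
QUANTA = {0: 4, 1: 8}                # time quantum per priority level

while high or low:
    queue_, level = (high, 0) if high else (low, 1)
    name, remaining = queue_.popleft()
    run = min(remaining, QUANTA[level])
    remaining -= run
    print(f"ran {name} for {run} ticks at level {level}")
    if remaining > 0:                # used its full quantum: demote
        low.append((name, remaining))

Short interactive jobs ("editor", "shell") finish at the high-priority level, while the long "compile" job is demoted and completes later, exactly the division of labor described above.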

Assignment set 2

Question 1a) Describe the three requirements that a solution to the Critical Section Problem must satisfy.
Three Key Elements for Addressing the Critical Section Problem
The Critical Section Problem deals with the challenge of managing access to shared resources
among processes or threads to avoid conflicts. To successfully tackle this issue, the following
three elements must be in place:
Mutual Exclusion: This element ensures that there's only one process or thread present in the
critical section at any given time. By preventing multiple processes from accessing shared
resources concurrently, mutual exclusion is crucial in avoiding data corruption or
inconsistencies.
Progress: If no process is executing in the critical section and some processes wish to enter,
the decision about which one enters next cannot be postponed indefinitely. The critical
section must not sit idle while processes that want it are kept waiting by the selection mechanism.
Bounded Waiting: Bounded waiting places a limit on how long a process can be kept waiting to
enter the critical section. Specifically, after a process requests entry, there is a bound on
the number of times other processes may enter before that request is granted, preventing any
process from being postponed indefinitely.
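A minimal sketch in Python shows the classic entry/exit structure around a critical section; the bank-balance scenario is invented. threading.Lock enforces mutual exclusion; progress and bounded waiting depend on the lock implementation and the scheduler, so they are assumptions here rather than guarantees of this code:

import threading

lock = threading.Lock()
balance = 100

def withdraw(amount):
    global balance
    lock.acquire()               # entry section
    try:
        if balance >= amount:    # critical section: shared state
            balance -= amount
    finally:
        lock.release()           # exit section: lets a waiter proceed

threads = [threading.Thread(target=withdraw, args=(30,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                   # 10: exactly three withdrawals applied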
Question 1b) What are the attractive properties of Semaphore?
The Advantages of Semaphores in Concurrent Programming
Semaphores are a versatile and common synchronization tool in concurrent programming,
known for their benefits such as:
Versatility: Semaphores are able to handle a variety of synchronization tasks, including mutual
exclusion and signaling between processes. Their adaptability makes them ideal for different
synchronization needs.
Ease of Use: Semaphores offer a simple way to synchronize processes through two basic
operations: wait and signal. This simplicity makes them easy to integrate and manage within
concurrent systems.
Deadlock Prevention: When used properly, semaphores can help prevent deadlocks by regulating
access to shared resources, which is key in avoiding the cyclic waiting situation that results in
deadlocks.
Efficient Resource Management: Semaphores let processes block until a resource becomes
available instead of busy-waiting, which improves resource utilization and overall system
performance. They also help ensure that resources are shared fairly and efficiently among competing processes.
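As an illustration, the Python sketch below uses a counting semaphore to let at most two of five workers use a resource at once; the worker names and sleep time are invented. acquire() plays the role of the classical wait operation and release() the role of signal:

import threading
import time

slots = threading.Semaphore(2)   # two units of the resource available

def worker(name):
    slots.acquire()              # wait: blocks if both slots are taken
    try:
        print(f"{name} using the resource")
        time.sleep(0.1)          # simulate work
    finally:
        slots.release()          # signal: wakes one blocked waiter

threads = [threading.Thread(target=worker, args=(f"W{i}",)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()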
Question 1c) Briefly describe the two options for breaking deadlocks.
Approaches to Resolving Deadlocks
Deadlocks happen when processes are in a perpetual wait for resources possessed by others. To
address deadlocks, there are two primary methods:
Preemption: This strategy takes resources away from one or more processes and reallocates them
to others, breaking the circular dependency. The system periodically checks for deadlocks and
intervenes by reallocating resources; this is effective but can be difficult to implement, and
it may lead to starvation if resources are repeatedly taken from the same processes.
Process Termination: This method involves either terminating one or more of the processes
involved in the deadlock or selectively ending processes until the deadlock is eliminated. While
it resolves the deadlock, it could result in losing work and requires careful planning to mitigate
negative impacts on system operations and process recovery.
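The sketch below illustrates the preemption idea in Python under invented names: two threads take two locks in opposite orders, which can deadlock, and a timed acquisition acts as a stand-in for deadlock detection. On timeout, a thread releases what it holds (preempting its own resources) and retries after a randomized backoff, breaking the circular wait:

import random
import threading
import time

a, b = threading.Lock(), threading.Lock()

def task(first, second, name):
    while True:
        first.acquire()
        if second.acquire(timeout=0.1):  # timed wait instead of blocking forever
            print(f"{name} got both locks")
            second.release()
            first.release()
            return
        first.release()                  # preempt our own resource
        time.sleep(random.uniform(0, 0.1))  # randomized backoff avoids livelock

t1 = threading.Thread(target=task, args=(a, b, "T1"))
t2 = threading.Thread(target=task, args=(b, a, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()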

Question 2a) Describe the two page table implementation concepts in brief.
Two Page Table Implementation Concepts
Page tables map virtual addresses to physical memory addresses; two common implementations are:
Single-Level Page Table: This strategy uses a single flat array in which each entry directly
maps a virtual page to a physical frame. Although straightforward, it becomes inefficient for
large address spaces, because the table grows with the address space and demands considerable memory.
Multi-Level Page Table: This technique uses a hierarchy of tables to economize on memory. A
virtual address is divided into fields, each indexing a different level of page table. This
structure manages vast address spaces efficiently by allocating table space only for the
portions of the address space actually in use, reducing memory overhead compared with a single large table.
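As a concrete sketch, the Python snippet below performs a two-level translation for a 32-bit virtual address split as 10 outer bits, 10 inner bits, and a 12-bit page offset (the split used by classic 32-bit x86 paging); the tiny dictionary tables and the mapping are invented:

PAGE_BITS = 12                       # 4 KiB pages
INNER_BITS = 10

inner_table = {0x001: 0x3A}          # virtual page 1 -> physical frame 0x3A
outer_table = {0x000: inner_table}   # outer entry 0 -> one inner table

def translate(vaddr):
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    inner = (vaddr >> PAGE_BITS) & ((1 << INNER_BITS) - 1)
    outer = vaddr >> (PAGE_BITS + INNER_BITS)
    frame = outer_table[outer][inner]        # two lookups instead of one
    return (frame << PAGE_BITS) | offset

print(hex(translate(0x0000_1ABC)))   # 0x3aabc: frame 0x3A, offset 0xABC

Only the inner tables that are actually referenced need to exist, which is exactly how the multi-level scheme saves memory over one flat array.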
Question 2b) Describe the LRU page replacement algorithm in brief.
LRU Approach for Page Replacement
The Least Recently Used (LRU) algorithm selects which page to evict from memory when a page
fault occurs. It rests on the observation that pages unused for the longest time are the least
likely to be needed soon, so the least recently used page is the one replaced. The system
tracks page usage through mechanisms such as counters or lists: when a new page must be
brought in, LRU evicts the page that has gone unused the longest. This strategy keeps
frequently accessed pages in memory but carries substantial bookkeeping overhead.
Implementations can rely on hardware support such as reference counters, or on software
structures such as stacks and linked lists, to maintain the order of page accesses.
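A minimal software implementation in Python follows; the reference string and frame count are invented. An OrderedDict keeps pages in access order, so the least recently used page always sits at the front:

from collections import OrderedDict

def lru_faults(reference_string, frames):
    memory = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)    # evict least recently used
            memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], frames=3))  # 6 page faults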
Question 2c) Briefly discuss the process of encryption and the two methods.
Methods and Steps for Encryption
Encryption safeguards data by transforming it into an unreadable format using an algorithm and
a key. The process involves:
Encryption Algorithm: A mathematical procedure that converts plaintext into ciphertext.
Key: The secret information required for encoding and decoding the data, essential for security.
Decryption: Reversing the encryption with the correct key to recover the original data.
There are two primary methods. Symmetric Encryption employs the same key for both encryption
and decryption, which is efficient but requires a secure key exchange (for example, AES).
Asymmetric Encryption uses a pair of keys (public and private) for stronger security,
particularly for communication over untrusted networks (for example, RSA). Each approach
balances efficiency against security requirements.
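To make the three elements concrete, here is a toy Python sketch of the symmetric idea: one shared key drives both directions. XOR with a random key of equal length is a teaching device, not a production cipher (real systems use vetted algorithms such as AES through a maintained library):

import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # The "algorithm": XOR each byte with the corresponding key byte.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = os.urandom(len(plaintext))            # the shared secret key

ciphertext = xor_cipher(plaintext, key)     # encryption
recovered = xor_cipher(ciphertext, key)     # decryption with the same key
assert recovered == plaintext
print(ciphertext.hex())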

Question 3a) Please describe the two forms of Encryption in a distributed environment.
Types of Encryption in a Distributed Setup
In distributed environments, protecting data is crucial, and encryption offers two primary
approaches:
Symmetric Encryption: This approach utilizes a single key for both encryption and decryption.
Before communicating, both the sender and recipient must exchange this key securely.
Symmetric encryption is fast and efficient, making it an excellent choice for handling large
volumes of data. However, the key distribution and management pose challenges. If the key falls
into the wrong hands, all encrypted data is vulnerable. Well-known symmetric encryption
algorithms are AES (Advanced Encryption Standard) and DES (Data Encryption Standard),
praised for their effectiveness and security.
Asymmetric Encryption: Also known as public-key encryption, this method employs a pair of keys: a public
key for encryption and a private key for decryption. The public key can be freely distributed,
while the private key is kept secret. This technique is particularly beneficial for secure
communications across untrusted networks, as it ensures that data can only be decrypted with the
private key, even if the public key is known. Well-recognized asymmetric encryption methods
include RSA (Rivest-Shamir-Adleman) and ECC (Elliptic Curve Cryptography), providing
strong security but at the cost of slightly slower performance compared to symmetric methods.
Question 3b) In Multiprocessor Classification:
I. What are the computer systems classified by Flynn?
II. What are the classifications of multiprocessor systems based on memory and access delays?
I. Flynn's Classification of Computer Systems
Flynn's taxonomy organizes computer systems based on their processing instructions and data
flows:
SISD (Single Instruction Stream Single Data Stream): Traditional single-processor systems that
execute a single instruction on a single data item at a time. This represents the foundational
computing model and is observed in early microprocessors.
SIMD (Single Instruction Stream Multiple Data Stream): A single instruction executes on
multiple data items simultaneously. It's effective for tasks involving data parallelism, such as
vector processors and certain graphics processing units (GPUs), which excel at tasks like image
and signal processing.
MISD (Multiple Instruction Stream Single Data Stream): Multiple instructions are applied to a
single data stream. Although less common, it finds application in fault-tolerant systems where
redundancy guarantees reliability, like certain high-availability computing environments.
MIMD (Multiple Instruction Stream Multiple Data Stream): Multiple instructions operate on
multiple data streams at the same time. This model is characteristic of contemporary
multiprocessor and distributed systems, allowing for diverse tasks to be processed concurrently
and thereby enhancing computational efficiency.

II. Classifications Based on Memory and Access Delays


Multiprocessor systems are further categorized by their memory access and communication
methods:
Shared Memory Systems: All processors share a common memory space. This simplifies
communication between processes but can lead to contention and delays due to simultaneous
access. An example of this is symmetric multiprocessing (SMP) systems, where processors
collectively share a unified memory pool.
Distributed Memory Systems: Each processor has its own local memory and communicates with
others over a network. This reduces contention but introduces latency due to network delays.
Clusters of computers operate under this paradigm, with each node possessing its own memory
and interacting through the network.
These classifications underscore how different architectures affect the performance and
communication within multiprocessor settings.
