
UNIT 1

Operating System as Resource Manager

Let us understand how the operating system works as a Resource Manager.

• Nowadays, all modern computers consist of processors, memories, timers, network interfaces, printers, and many other devices.
• In the bottom-up view, the operating system provides an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs.
• Operating system allows multiple programs to be in memory and run at
the same time.
• Resource management includes multiplexing or sharing resources in
two different ways: in time and in space.
• In time multiplexing, different programs take turns using the CPU: the first one uses the resource, then the next one that is ready in the queue, and so on. For example: sharing the printer one job after another.
• In space multiplexing, instead of the programs taking turns, each one gets a part of the resource. For example: main memory is divided among several running programs, so each one can be resident at the same time.

Classification of Operating System:

An operating system is software that works as an interface between the user's programs and the hardware. There are several types of operating systems, many of which are described below.

1. Multiprogramming Operating System

This section describes the multiprogramming operating system, its working, advantages, and disadvantages.

What is the Multiprogramming Operating System?

A multiprogramming operating system may run many programs on a single-processor computer. If one program must wait for an input/output transfer, the other programs are ready to use the CPU. As a result, various jobs may share CPU time, though their jobs are not executed at exactly the same instant.
When a program is being executed, it is known as a "task", "process", or "job". Concurrent program execution improves system resource utilization and throughput as compared to serial and batch processing systems.

The primary goal of multiprogramming is to manage the entire system's resources. The key components of a multiprogramming system are the file system, command processor, transient area, and I/O control system. Multiprogramming operating systems are therefore designed to store different programs in sub-segments of the transient area, and the resource management routines are linked with the operating system's core functions.

Types of the Multiprogramming Operating System

There are mainly two types of multiprogramming operating systems. These


are as follows:

1. Multitasking Operating System


2. Multiuser Operating System

Multitasking Operating System

A multitasking operating system enables the execution of two or more


programs at the same time. The operating system accomplishes this by
shifting each program into and out of memory one at a time. When a program
is switched out of memory, it is temporarily saved on disk until it is required
again.

Multiuser Operating System

A multiuser operating system allows many users to share processing time on a powerful central computer from different terminals. The operating system accomplishes this by rapidly switching between terminals, each of which receives a limited amount of processor time on the central computer. The operating system switches among terminals so quickly that each user seems to have continuous access to the central computer. With many users on such a system, however, the time the central computer takes to respond can become more noticeable.

Working of the Multiprogramming Operating System

Multiple users can accomplish their jobs simultaneously in a multiprogramming system because several programs can be stored in main memory at once. When one program is engaged in I/O operations, the CPU, rather than sitting idle, can devote time to other programs.

When one application is waiting for an I/O transfer, another is always ready to use the processor, so numerous programs may share CPU time. All jobs do not run simultaneously; rather, parts of different processes are executed in turn, one segment, then another, and so on. The overall goal of a multiprogramming system is thus to keep the CPU busy as long as tasks are available in the job pool. In this way numerous programs can run on a single-processor computer, and the CPU is never idle.

Examples of Multiprogramming Operating System

Examples of multiprogramming include running a download manager, a data transfer, MS Excel, Google Chrome, and the Firefox browser at the same time. Other examples are the Windows and UNIX operating systems, and microcomputer operating systems such as XENIX, MP/M, and DESQview.

Advantages and Disadvantages of Multiprogramming Operating System

There are various advantages and disadvantages of the multiprogramming


operating system. Some of the advantages and disadvantages are as follows:

Advantages

There are various advantages of the multiprogramming operating system.


Some of the advantages are as follows:

1. It provides shorter response times.
2. It may help to run various jobs simultaneously.
3. It helps to optimize the total job throughput of the computer.
4. Various users may use the multiprogramming system at once.
5. Short jobs complete quickly in comparison to long jobs.
6. It may help to improve turnaround time for short tasks.
7. It improves CPU utilization; the CPU never sits idle.
8. The resources are utilized smartly.
Disadvantages

There are various disadvantages of the multiprogramming operating system.


Some of the disadvantages are as follows:

1. It is highly complicated and sophisticated.
2. CPU scheduling is required.
3. Memory management is needed because all types of tasks are stored in main memory.
4. Handling all the processes and tasks is harder.
5. If there is a large number of jobs, long-running jobs may have to wait a long time.

2. Time Sharing Operating System

Multiprogrammed, batched systems provide an environment where various system resources are used effectively, but they do not provide for user interaction with the computer system. Time-sharing is a logical extension of multiprogramming: the CPU switches among tasks so frequently that the user can interact with each program while it is running. A time-shared operating system allows multiple users to share a computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system rapidly switches from one user to another, each user is given the impression that the entire computer system is dedicated to their use, even though it is being shared among multiple users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a shared computer. Each user has at least one separate program in memory. A program loaded into memory executes for only a short period of time before it either finishes or needs to perform I/O. This short period during which the user gets the attention of the CPU is known as a time slice, time slot, or quantum; it is typically on the order of 10 to 100 milliseconds. Time-shared operating systems are more complex than multiprogrammed operating systems. In both, multiple jobs must be kept in memory simultaneously, so the system must provide memory management and protection. To achieve a good response time, jobs may have to be swapped in and out of the disk, which then serves as a backing store for main memory. A common method of achieving this goal is virtual memory, a technique that allows the execution of a job that may not be completely in memory.
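As a quick illustration (example numbers, not from any particular system): with a quantum of 100 milliseconds and ten active users, the system cycles through all users in about 10 x 100 ms = 1 second, so each user is serviced roughly once per second, which is fast enough to feel interactive.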
For example, suppose user 5 is in the active state, users 1, 2, 3, and 4 are in the waiting state, and user 6 is in the ready state.
1. Active State – The user’s program is under the control of the CPU.
Only one program is available in this state.
2. Ready State – The user program is ready to execute but it is waiting
for its turn to get the CPU. More than one user can be in a ready
state at a time.
3. Waiting State – The user’s program is waiting for some input/output
operation. More than one user can be in a waiting state at a time.
Requirements of a Time-Sharing Operating System: an alarm clock mechanism to send an interrupt signal to the CPU after every time slice, and a memory protection mechanism to prevent one job's instructions and data from interfering with other jobs.
What are some of the key features of a Time-Sharing Operating System?
A Time-Sharing Operating System’s key characteristics include the capacity to
support multiple concurrent users and the capacity to reduce response times
for all users. Additionally, because they permit multiple users to use the
system without needing to purchase individual licenses, time-
sharing operating systems may be more cost-effective for businesses.

What are some benefits of the Time-sharing operating System?

The ability for multiple users to use the system at various terminals
simultaneously is one advantage of using a time-sharing operating system. All
users’ response times can be cut down, and the system’s resources can be
used more effectively.
Additionally, because they permit multiple users to use the system without
needing to purchase individual licenses, time-sharing operating systems may
be more cost-effective for businesses.
Advantages
1. Each task gets an equal opportunity.
2. Fewer chances of duplication of software.
3. CPU idle time can be reduced.
Disadvantages
1. Reliability problem.
2. One must take care of the security and integrity of user programs and data.
3. Data communication problem.

3. Real-Time Operating System (RTOS)

Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, processing time is measured in tenths of seconds or less. The system is time-bound and has fixed deadlines; processing must occur within the specified constraints, otherwise the system fails.
Examples of real-time operating systems are airline traffic control systems, command control systems, airline reservation systems, heart pacemakers, network multimedia systems, robots, etc.
Real-time operating systems can be of the following types:
1. Hard Real-Time Operating System: These operating systems guarantee that critical tasks are completed within a strict range of time.

For example, consider a robot hired to weld a car body: if the robot welds too early or too late, the car cannot be sold, so the welding must be completed exactly on time; this is a hard real-time system. Other examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.

2. Soft real-time operating system: This operating system provides


some relaxation in the time limit.

For example – Multimedia systems, digital audio systems, etc.


Explicit, programmer-defined, and controlled processes are encountered in real-time systems. A separate process is charged with handling a single external event; the process is activated upon the occurrence of the related event, signaled by an interrupt.

Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a level of priority that corresponds to the relative importance of the event it services, and the processor is allocated to the highest-priority process. This type of scheduling, called priority-based preemptive scheduling, is used by real-time systems.
3. Firm Real-Time Operating System: An RTOS of this type also has to follow deadlines; missing a deadline may not cause total failure, but it can have unintended consequences, including a reduction in the quality of the product. Example: multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the key in this type of real-time operating system. It ensures that all tasks and processes execute with predictable timing all the time, which makes it suitable for applications in which timing accuracy is very important. Examples: INTEGRITY, PikeOS.

Advantages:
The advantages of real-time operating systems are as follows-
1. Maximum utilization: Maximum utilization of devices and systems, and thus more output from all the resources.

2. Task shifting: The time taken to shift from one task to another is very small; for example, older systems take about 10 microseconds, while the latest systems take about 3 microseconds.

3. Focus On Application: Focus on running applications and less


importance to applications that are in the queue.

4. Real-Time Operating System In Embedded System: Since the


size of programs is small, RTOS can also be embedded systems like
in transport and others.

5. Error free: These types of systems are designed to be largely error-free.

6. Memory Allocation: Memory allocation is best managed in these


types of systems.

Disadvantages:
The disadvantages of real-time operating systems are as follows-

1. Limited tasks: Very few tasks run simultaneously; the system concentrates on a few applications to avoid errors.
2. Heavy use of system resources: The system resources required are substantial and expensive.

3. Complex algorithms: The algorithms are very complex and difficult for the designer to write.

4. Device drivers and interrupt signals: It needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.

5. Thread priority: Setting thread priorities is difficult, as these systems are not well suited to frequent task switching.

6. Minimum Switching: RTOS performs minimal task switching.


Comparison of Regular and Real-Time operating systems:

Regular OS                 Real-Time OS (RTOS)
Complex                    Simple
Best effort                Guaranteed response
Fairness                   Strict timing constraints
Average bandwidth          Minimum and maximum limits
Unknown components         Components are known
Unpredictable behavior     Predictable behavior
Plug and play              RTOS is upgradeable


4. Multiprocessing Operating System

In operating systems, more than one CPU can be used within one computer system to improve performance; such a system is managed by a multiprocessor (multiprocessing) operating system.

Multiple CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, results from all CPUs are collected and compiled to give the final output. Jobs need to share main memory, and they may also share other system resources among themselves. Multiple CPUs can also be used to run multiple jobs simultaneously.

For Example: UNIX Operating system is one of the most widely used
multiprocessing systems.


To employ a multiprocessing operating system effectively, the computer system must have the following:

o A motherboard capable of handling multiple processors.
o Processors capable of operating in a multiprocessing configuration.

Advantages of multiprocessing operating system are:


o Increased reliability: Processing tasks can be distributed among several processors. This increases reliability: if one processor fails, the task can be given to another processor for completion.
o Increased throughput: As the number of processors increases, more work can be done in less time.
o Economy of scale: As multiprocessor systems share peripherals, secondary storage devices, and power supplies, they are relatively cheaper than the equivalent set of single-processor systems.

Disadvantages of Multiprocessing operating System


o A multiprocessing operating system is more complex and sophisticated, as it manages multiple CPUs at the same time.

Types of multiprocessing systems


o Symmetrical multiprocessing operating system
o Asymmetric multiprocessing operating system

Symmetrical multiprocessing operating system:

In a symmetrical multiprocessing system, each processor executes the same copy of the operating system, makes its own decisions, and cooperates with the other processors to keep the entire system running smoothly.
The CPU scheduling policies are very simple: any new job submitted by a user can be assigned to whichever processor is least burdened, which also keeps all processors roughly equally loaded at any time.

The symmetric multiprocessing operating system is also known as a "shared everything" system, because the processors share memory and the input/output bus or data path. Such systems do not usually have more than 16 processors.
Characteristics of Symmetrical multiprocessing operating system:

o In this system, any processor can run any job or process.

o Any processor can initiate an input/output operation.

Advantages of Symmetrical multiprocessing operating system:

o These systems are fault-tolerant. Failure of a few processors does not


bring the entire system to a halt.

Disadvantages of Symmetrical multiprocessing operating system:

o It is very difficult to balance the workload among processors rationally.


o Specialized synchronization schemes are necessary for managing
multiple processors.

Asymmetric multiprocessing operating system

In an asymmetric multiprocessing system, there is a master-slave relationship between the processors: one processor acts as the master (supervisor) processor and controls the other processors, which are treated as slaves.

In this type of system, each processor is assigned a specific task, and a designated master processor controls the activities of the other processors.

For example, a math co-processor can handle mathematical jobs better than the main CPU, an MMX processor is built to handle multimedia-related jobs, and a graphics processor handles graphics-related jobs better than the main processor. When a user submits a new job, the OS decides which processor can perform it best, and that processor is assigned the newly arrived job. The master processor controls the system; all other processors either look to the master for instructions or have predefined tasks. It is the responsibility of the master to allocate work to the other processors.

Operating System Services:


An operating system is software that acts as an intermediary between the user and the computer hardware. It is the program with whose help we are able to run various applications, and it is the one program running all the time. Every computer must have an operating system to execute other programs smoothly. The OS coordinates the use of the hardware and application programs for various users and provides a platform on which other application programs can work. The operating system is a set of special programs that run on a computer system and allow it to work properly. It controls input-output devices, execution of programs, managing files, etc.
Services of Operating System
1. Program execution
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management

Program Execution
It is the operating system that manages how a program is going to be executed. It loads the program into memory, after which the program is executed. The order of execution depends on the CPU scheduling algorithm in use (FCFS, SJF, etc.). While programs are executing, the operating system also handles deadlocks, making sure processes do not end up blocking one another indefinitely while competing for resources. The operating system is responsible for the smooth execution of both user and system programs, and it uses the various resources available for the efficient running of all types of functionalities.

Input Output Operations


The operating system manages input-output operations and establishes communication between the user and the device drivers. Device drivers are software associated with hardware; the OS manages them so that the devices stay properly synchronized. The OS also provides a program with access to input-output devices when needed.
Communication between Processes
The Operating system manages the communication between processes.
Communication between processes includes data transfer among them. If the
processes are not on the same computer but connected through a computer
network, then also their communication is managed by the Operating System
itself.
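To make this concrete, here is a minimal sketch of inter-process communication using a POSIX pipe on a Unix-like system (the text does not name a specific mechanism, so the choice of a pipe is an assumption; other options include shared memory and message queues):

/* Parent writes a message into a pipe; child reads it.
   Minimal sketch, error handling kept to the essentials. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                        /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) return 1;     /* create the communication channel */

    if (fork() == 0) {                /* child process: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                     /* parent process: writes to the pipe */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);                       /* reap the child */
    return 0;
}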
File Management
The operating system also helps in managing files. If a program needs access to a file, it is the operating system that grants access, with permissions such as read-only, read-write, etc. It also provides a way for the user to create and delete files. The operating system is responsible for deciding where all types of data or files are stored (floppy disk, hard disk, pen drive, etc.) and how the data should be manipulated and stored.
Memory Management
Let's understand memory management by the OS in a simple way. Imagine a cricket team with a limited number of players. The team manager (OS) decides whether an upcoming player will be in the playing 11, the playing 15, or not included in the team at all, based on his performance. In the same way, the OS first checks whether an upcoming program fulfils all the requirements to get memory space; if so, it checks how much memory space will be sufficient for the program and then loads the program into memory at a certain location. In this way it prevents programs from using unnecessary memory.
Process Management
Let's understand process management in a similar way. Imagine our kitchen stove as the CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook many dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) schedules time for all the dishes (programs) so that the kitchen (system) runs smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
Security and Privacy
• Security: The OS keeps our computer safe from unauthorized users by adding a security layer to it. Security is essentially a layer of protection that defends the computer against threats such as viruses and hackers. The OS provides defenses like firewalls and anti-virus software and ensures the safety of the computer and of personal information.
• Privacy: The OS gives us the facility to keep our essential information hidden, like having a lock on our door where only we can enter. It respects our secrets and provides the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating
system that manages resource sharing. It also manages the CPU time among
processes using CPU Scheduling Algorithms. It also helps in the memory
management of the system. It also controls input-output devices. The OS also
ensures the proper use of all the resources available by deciding which
resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the operating system through either a command-line interface (CLI) or a graphical user interface (GUI). The command interpreter executes the next user-specified command, while a GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as
connecting to the internet, sending and receiving data packets, and managing
network connections.
Error Handling
The operating system also handles errors occurring in the CPU, in input-output devices, etc. It works to ensure that errors do not occur frequently, fixes the errors that do occur, prevents processes from coming to a deadlock, and watches for any type of error or bug that can occur during any task. A well-secured OS can also act as a countermeasure against breaches of the computer system from external sources and can handle them.
Time Management
Imagine a traffic light as the OS, which signals to all the cars (programs) whether they should stop (red: waiting queue), get ready (yellow: ready queue), or move (green: under execution). The light (control) changes after a certain interval of time on each side of the road (computer system), so that the cars (programs) from all sides of the road move smoothly and without congestion.

File System:
A computer file is defined as a medium used for saving and managing data in
the computer system. The data stored in the computer system is completely in
digital format, although there can be various types of files that help us to store
the data.
What is a File System?
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device. Some common types of file
systems include:
1. FAT (File Allocation Table): An older file system used by older
versions of Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used
by Windows. It supports features such as file and folder permissions,
compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux
and Unix-based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple
for their Macs and iOS devices.
A file is a collection of related information that is recorded on secondary storage; equivalently, a file is a collection of logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary storage.
The name of the file is divided into two parts as shown below:
• name
• extension, separated by a period.
Issues Handled By File System
We've seen a variety of data structures in which a file could be kept; the file system's job is to keep the files organized in the best way possible.
Whenever a file is deleted from the hard drive, free space is created on it, and many such spaces may need to be reclaimed so they can be reallocated to other files. The main issue with files is choosing where on the hard disk to store them: a file may not fit in a single block and may be kept in non-contiguous blocks of the disk, so we must keep track of all the blocks where the parts of a file are located.
Files Attributes And Their Operations
Attributes        Types    Operations

Name              Doc      Create
Type              Exe      Open
Size              Jpg      Read
Creation Date     Xls      Write
Author            C        Append
Last Modified     Java     Truncate
Protection        Class    Delete
                           Close

File type        Usual extension        Function

Executable       exe, com, bin          Ready-to-run machine-language program
Object           obj, o                 Compiled, machine language, not linked
Source code      c, java, pas, asm, a   Source code in various languages
Batch            bat, sh                Commands to the command interpreter
Text             txt, doc               Textual data, documents
Word processor   wp, tex, rrf, doc      Various word-processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
Multimedia       mpeg, mov, rm          For containing audio/video information
Markup           xml, html, tex         Textual data and documents
Library          lib, a, so, dll        Libraries of routines for programmers
Print or view    gif, pdf, jpg          Format for printing or viewing an ASCII or binary file
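The operations listed in the first table (create, open, read, write, close) can be seen concretely through POSIX system calls. A minimal sketch on a Unix-like system follows; "notes.txt" is a hypothetical file name chosen for illustration:

/* Create/append to a file, then reopen it for reading. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* create (if needed) and open for appending */
    int fd = open("notes.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return 1;
    write(fd, "log entry\n", 10);   /* write 10 bytes */
    close(fd);

    /* reopen for reading */
    char buf[128];
    fd = open("notes.txt", O_RDONLY);
    if (fd < 0) return 1;
    ssize_t n = read(fd, buf, sizeof(buf));
    close(fd);
    return n < 0;
}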
File Directories
The collection of files is a file directory. The directory contains information
about the files, including attributes, location, and ownership. Much of this
information, especially that is concerned with storage, is managed by the
operating system. The directory is itself a file, accessible by various file
management routines.

Below are information contained in a device directory.


• Name
• Type
• Address
• Current length
• Maximum length
• Date last accessed
• Date last updated
• Owner id
• Protection information

The operation performed on the directory are:


• Search for a file
• Create a file
• Delete a file
• List a directory
• Rename a file
• Traverse the file system

Advantages of Maintaining Directories


• Efficiency: A file can be located more quickly.
• Naming: It becomes convenient for users, as two users can use the same name for different files, or different names for the same file.
• Grouping: Logical grouping of files can be done by properties e.g.
all java programs, all games etc.
Single-Level Directory
In this, a single directory is maintained for all the users.
• Naming problem: Users cannot have the same name for two files.
• Grouping problem: Users cannot group files according to their
needs.
Two-Level Directory
In this, a separate directory is maintained for each user.
• Path name: Due to two levels there is a path name for every file to
locate that file.
• Now, we can have the same file name for different users.
• Searching is efficient in this method.

Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and
also there is grouping capability. We have absolute or relative path name for a
file.
File Allocation Methods
There are several types of file allocation methods. These are mentioned
below.
• Continuous Allocation
• Linked Allocation(Non-contiguous allocation)
• Indexed Allocation
Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file
creation. Thus, this is a pre-allocation strategy, using variable size portions.
The file allocation table needs just a single entry for each file, showing the
starting block and the length of the file. This method is best from the point of
view of the individual sequential file. Multiple blocks can be read in at a time to
improve I/O performance for sequential processing. It is also easy to retrieve a
single block. For example, if a file starts at block b, and the ith block of the file
is wanted, its location on secondary storage is simply b+i-1.
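A minimal sketch of the calculation just described, assuming the file's blocks are numbered starting from 1:

/* Location on secondary storage of the i-th block of a
   contiguously allocated file that starts at block b. */
long block_address(long b, long i) {
    return b + i - 1;
}

For instance, with b = 100 and i = 4, the fourth block of the file is at disk block 100 + 4 - 1 = 103.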
Disadvantages of Continuous Allocation
• External fragmentation will occur, making it difficult to find contiguous
blocks of space of sufficient length. A compaction algorithm will be
necessary to free up additional space on the disk.
• Also, with pre-allocation, it is necessary to declare the size of the file
at the time of creation.
Linked Allocation (Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. Again, the file table needs just a single entry for each file, showing the starting block and the length of the file. Although pre-allocation is possible, it is more common simply to allocate blocks as needed. Any free block can be added to the chain, and the blocks need not be contiguous. An increase in file size is always possible if a free disk block is available. There is no external fragmentation, because only one block at a time is needed; there can be internal fragmentation, but only in the last disk block of the file.

Disadvantages of Linked Allocation (Non-Contiguous Allocation)


• Internal fragmentation exists in the last disk block of the file.
• There is an overhead of maintaining the pointer in every disk block.
• If the pointer of any disk block is lost, the file will be truncated.
• It supports only the sequential access of files.
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In
this case, the file allocation table contains a separate one-level index for each
file: The index has one entry for each block allocated to the file. The allocation
may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by
blocks eliminates external fragmentation, whereas allocation by variable-size
blocks improves locality. This allocation technique supports both sequential
and direct access to the file and thus is the most popular form of file
allocation.
Disk Free Space Management
Just as the space that is allocated to files must be managed, so the space that
is not currently allocated to any file must be managed. To perform any of the
file allocation techniques, it is necessary to know what blocks on the disk are
available. Thus we need a disk allocation table in addition to a file allocation
table. The following are the approaches used for free space management.
1. Bit Tables: This method uses a vector containing one bit for each block on the disk. Each 0 entry corresponds to a free block and each 1 corresponds to a block in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block; 0 means that particular block is free and 1 means the block is already occupied. A bit table has the advantage that it is relatively easy to find one free block or a contiguous group of free blocks, so a bit table works well with any of the file allocation methods. Another advantage is that it is as small as possible (see the sketch after this list).
2. Free Block List: In this method, each block is assigned a number
sequentially and the list of the numbers of all free blocks is
maintained in a reserved block of the disk.
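Following the bit-table convention above (0 = free, 1 = in use), here is a minimal sketch of scanning for a free block. For clarity the bitmap is a byte-per-block array; real systems pack eight blocks per byte:

#include <stddef.h>

/* Returns the index of the first free block, or -1 if none. */
long first_free_block(const unsigned char *bitmap, size_t nblocks) {
    for (size_t i = 0; i < nblocks; i++)
        if (bitmap[i] == 0)     /* 0 marks a free block */
            return (long)i;
    return -1;                  /* disk is full */
}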

Advantages of File System


• Organization: A file system allows files to be organized into
directories and subdirectories, making it easier to manage and locate
files.
• Data protection: File systems often include features such as file
and folder permissions, backup and restore, and error detection and
correction, to protect data from loss or corruption.
• Improved performance: A well-designed file system can improve
the performance of reading and writing data by organizing it
efficiently on disk.
Disadvantages of File System
• Compatibility issues: Different file systems may not be compatible
with each other, making it difficult to transfer data between different
operating systems.
• Disk space overhead: File systems may use some disk space to
store metadata and other overhead information, reducing the amount
of space available for user data.
• Vulnerability: File systems can be vulnerable to data
corruption, malware, and other security threats, which can
compromise the stability and security of the system.

File Protection:
In computer systems a lot of user information is stored, and the objective of the operating system is to keep the user's data safe from improper access to the system. Protection can be provided in a number of ways. For a single laptop system, we might provide protection by locking the computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are used for protection.
Types of Access :
Files that are directly accessible to users need protection; files that are not accessible to other users do not require any kind of protection. A protection mechanism provides controlled access by limiting the types of access that can be made to a file. Whether access is granted to a particular user depends on several factors, one of which is the type of access required. Several different types of operations can be controlled:
• Read – Reading from a file.
• Write – Writing or rewriting the file.
• Execute – Loading the file and after loading the execution process
starts.
• Append – Writing new information at the end of an already existing file; editing is restricted to the end of the file.
• Delete – Deleting a file which is of no use, and reusing its space for other data.
• List – List the name and attributes of the file.
Operations like renaming, editing an existing file, and copying can also be controlled. There are many protection mechanisms; each has different advantages and disadvantages, and each must be appropriate for its intended application.
Access Control :
Different users may access a file in different ways. The most general way to protect files is to associate identity-dependent access with all files and directories through a list called an access-control list (ACL), which specifies the names of users and the types of access allowed for each. The main problem with access lists is their length: if we want to allow everyone to read a file, we must list all users with read access. This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if we do not know the list of users in the system in advance.
Directory entries, previously of fixed size, now become variable-sized, which complicates space management.
These problems can be resolved by using a condensed version of the access list. To condense the length of the access-control list, many systems recognize three classifications of users in connection with each file:
• Owner – The owner is the user who created the file.
• Group – A group is a set of members who have similar needs and share the same file.
• Universe – All other users in the system form the universe.
The most common recent approach is to combine access-control lists with the
normal general owner, group, and universe access control scheme. For
example: Solaris uses the three categories of access by default but allows
access-control lists to be added to specific files and directories when more
fine-grained access control is desired.
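Unix-style permission bits are one concrete realization of the owner/group/universe scheme. A minimal sketch on a POSIX system, decoding a mode value into the familiar rwx string:

#include <stdio.h>
#include <sys/stat.h>

/* Print a mode as three rwx triples: owner, group, universe. */
void print_mode(mode_t m) {
    printf("%c%c%c%c%c%c%c%c%c\n",
           (m & S_IRUSR) ? 'r' : '-',   /* owner   */
           (m & S_IWUSR) ? 'w' : '-',
           (m & S_IXUSR) ? 'x' : '-',
           (m & S_IRGRP) ? 'r' : '-',   /* group   */
           (m & S_IWGRP) ? 'w' : '-',
           (m & S_IXGRP) ? 'x' : '-',
           (m & S_IROTH) ? 'r' : '-',   /* universe (others) */
           (m & S_IWOTH) ? 'w' : '-',
           (m & S_IXOTH) ? 'x' : '-');
}

int main(void) {
    print_mode(0754);   /* prints rwxr-xr-- : owner rwx, group r-x, others r-- */
    return 0;
}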
Other Protection Approaches:
Access to a system can also be controlled by passwords. If passwords are chosen randomly and changed often, they can effectively limit access to a file.
The use of passwords has a few disadvantages:
• The number of passwords can become very large, making them difficult to remember.
• If one password is used for all files, then once it is discovered, all files are accessible; protection is on an all-or-none basis.
Unit II:

CPU Scheduling:-

CPU Scheduling in Operating Systems


Scheduling of processes/work is done to finish the work on time. CPU scheduling is a process that allows one process to use the CPU while another process is delayed (on standby) due to the unavailability of some resource such as I/O, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is done by the short-term (CPU) scheduler, which chooses among the processes in memory that are ready to execute and assigns the CPU to one of them.

CPU Scheduling

In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to complete, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. In multiprogramming systems, however, the CPU does not remain idle during the waiting time of a process; it starts executing other processes. The operating system has to decide which process the CPU will be given to.

In Multiprogramming systems, the Operating system schedules the


processes on the CPU to have the maximum utilization of it and this
procedure is called CPU scheduling. The Operating System uses various
scheduling algorithm to schedule the processes.

It is the task of the short-term scheduler to schedule the CPU among the processes present in the job pool. Whenever a running process requests some I/O operation, the short-term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting. While the process is in the waiting state, the short-term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.

What is saved in the Process Control Block?


The operating system maintains a process control block (PCB) throughout the lifetime of a process; the PCB is deleted when the process terminates or is killed. The PCB holds information that changes with the state of the process, such as the process state, program counter, CPU registers, CPU-scheduling information, memory-management information, and I/O status information.
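An illustrative sketch of such a structure in C; real kernels use their own layouts, so these field names are assumptions chosen to mirror the items just listed:

/* A sketch of a process control block (PCB); not any real
   kernel's definition. */
struct pcb {
    int           pid;             /* process identifier          */
    int           state;           /* new, ready, running, ...    */
    unsigned long program_counter; /* address of next instruction */
    unsigned long registers[16];   /* saved CPU registers         */
    int           priority;        /* CPU-scheduling information  */
    void         *page_table;      /* memory-management info      */
    int           open_files[16];  /* I/O status information      */
};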

Why do we need Scheduling?

In multiprogramming, if the long-term scheduler picks too many I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.

If most running processes change their state from running to waiting, there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.

Process Management in OS

A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
More than one process may exist in the system and may require the same resource at the same time. Therefore, the operating system has to manage all the processes and resources in a convenient and efficient way.

Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.

The operating system is responsible for the following activities in connection


with Process Management

1. Scheduling processes and threads on the CPUs.


2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

Process States

State Diagram
The process, from its creation to completion, passes through various states.
The minimum number of states is five.

The names of the states are not standardized although the process may be in
one of the following states during execution.

1. New

A program which is going to be picked up by the OS into the main memory is


called a new process.

2. Ready

Whenever a process is created, it directly enters the ready state, where it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them all in main memory.

The processes which are ready for the execution and reside in the main
memory are called ready state processes. There can be many processes
present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one CPU in
our system, the number of running processes for a particular time will always
be one. If we have n processors in the system then we can have n processes
running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait
state depending upon the scheduling algorithm or the intrinsic behavior of the
process.

When a process waits for a certain resource to be assigned, or for input from the user, the OS moves the process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination

When a process finishes its execution, it enters the termination state. The entire context of the process (its process control block) is deleted, and the process is terminated by the operating system.

6. Suspend ready

A process in the ready state that is moved from main memory to secondary memory due to a lack of resources (mainly primary memory) is said to be in the suspend-ready state.

If main memory is full and a higher-priority process arrives for execution, the OS has to make room for it in main memory by moving a lower-priority process out to secondary memory. Suspend-ready processes remain in secondary memory until main memory becomes available.

7. Suspend wait

Instead of removing a process from the ready queue, it is better to move out a blocked process that is waiting for some resource in main memory. Since it is already waiting for a resource to become available, it may as well wait in secondary memory and make room for a higher-priority process. These processes complete their execution once main memory becomes available and their wait is finished.
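The seven states above can be summarized as a simple enumeration (a sketch; real systems use their own names and encodings):

enum process_state {
    P_NEW,            /* being brought into main memory               */
    P_READY,          /* in main memory, waiting for the CPU          */
    P_RUNNING,        /* currently executing on a CPU                 */
    P_BLOCKED,        /* waiting for I/O or another resource          */
    P_TERMINATED,     /* finished; PCB about to be deleted            */
    P_SUSPEND_READY,  /* ready, but swapped out to secondary memory   */
    P_SUSPEND_WAIT    /* blocked, and swapped out to secondary memory */
};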

Operations on the Process

1. Creation

Once the process is created, it will be ready and come into the ready queue
(main memory) and will be ready for the execution.

2. Scheduling

Out of the many processes present in the ready queue, the operating system chooses one process and starts executing it. Selecting the process to be executed next is known as scheduling.
3. Execution

Once the process is scheduled for execution, the processor starts executing it. The process may move to the blocked or wait state during execution, in which case the processor starts executing other processes.

4. Deletion/killing

Once the purpose of the process is served, the OS kills the process. The context of the process (its PCB) is deleted, and the process is terminated by the operating system.

Scheduling Algorithms in OS (Operating System)

There are various algorithms which are used by the Operating System to
schedule the processes on the processor in an efficient way.

The Purpose of a Scheduling algorithm

1. Maximum CPU utilization


2. Fair allocation of CPU
3. Maximum throughput
4. Minimum turnaround time
5. Minimum waiting time
6. Minimum response time
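For reference, the usual definitions of these quantities (standard formulas, stated here for clarity) are:

Turnaround time = completion time - arrival time
Waiting time    = turnaround time - burst time
Response time   = time of first CPU allocation - arrival time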

There are the following algorithms which can be used to schedule the jobs.

1. First Come First Serve

It is the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
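As a worked illustration (example numbers only): suppose three processes arrive at time 0 in the order P1, P2, P3 with burst times of 24, 3, and 3 ms. Under FCFS the waiting times are 0, 24, and 27 ms, giving an average of (0 + 24 + 27)/3 = 17 ms. Had the two short jobs arrived first, the waiting times would be 0, 3, and 6 ms, for an average of only 3 ms. This sensitivity to arrival order is the main weakness of FCFS.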

2. Round Robin

In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes get executed in a cyclic way: each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
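A minimal sketch of the round-robin cycle in C (the burst times and the 4 ms quantum are arbitrary example values, not from the text):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* remaining CPU time per process */
    int n = 3, quantum = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;          /* already finished      */
            int run = burst[i] < quantum ? burst[i] : quantum;
            printf("t=%2d: P%d runs %d ms\n", t, i + 1, run);
            t += run;
            burst[i] -= run;                      /* consume its quantum   */
            if (burst[i] == 0) done++;            /* process completed     */
        }
    }
    return 0;
}
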
3. Shortest Job First

The job with the shortest burst time gets the CPU first: the shorter the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.

4. Shortest remaining time first

It is the preemptive form of SJF. In this algorithm, the OS schedules the Job
according to the remaining time of the execution.

5. Priority based scheduling

In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If the priorities of two processes are the same, they are scheduled according to their arrival time.

6. Highest Response Ratio Next

In this scheduling algorithm, the process with the highest response ratio is scheduled next. This reduces starvation in the system.
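The response ratio is the standard HRRN quantity (waiting time + burst time) / burst time. A minimal sketch of the selection step; the function name and array layout are illustrative, and the arrays are assumed to hold only jobs still waiting:

/* Pick the waiting job whose response ratio (W + S) / S is
   largest, where W is time waited and S is service (burst) time. */
int pick_next(const double wait[], const double service[], int n) {
    int best = 0;
    double best_ratio = 0.0;
    for (int i = 0; i < n; i++) {
        double ratio = (wait[i] + service[i]) / service[i];
        if (ratio > best_ratio) { best_ratio = ratio; best = i; }
    }
    return best;   /* index of the job to schedule next */
}

Because W grows while a job waits, a long-waiting job's ratio keeps rising until it is eventually selected, which is why starvation is reduced.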

Multiple-Processor Scheduling in Operating System:


In multiple-processor scheduling, multiple CPUs are available, and hence load sharing becomes possible; however, multiple-processor scheduling is more complex than single-processor scheduling. When the processors are identical, i.e. homogeneous in terms of their functionality, any available processor can be used to run any process in the queue.
Why is multiple-processor scheduling important?
Multiple-processor scheduling is important because it enables a computer
system to perform multiple tasks simultaneously, which can greatly improve
overall system performance and efficiency.
How does multiple-processor scheduling work?
Multiple-processor scheduling works by dividing tasks among multiple
processors in a computer system, which allows tasks to be processed
simultaneously and reduces the overall time needed to complete them.
Approaches to Multiple-Processor Scheduling –

One approach is for all scheduling decisions and I/O processing to be handled by a single processor, called the master server, while the other processors execute only user code. This is simple and reduces the need for data sharing; the scenario is called asymmetric multiprocessing. A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute.

Processor Affinity –

Processor affinity means a process has an affinity for the processor on which it is currently running. When a process runs on a specific processor, there are certain effects on the cache memory: the data most recently accessed by the process populates that processor's cache, so successive memory accesses by the process are often satisfied from cache memory. If the process migrates to another processor, the contents of the first processor's cache must be invalidated and the second processor's cache must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP (symmetric multiprocessing) systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as processor affinity. There are two types of processor affinity:
1. Soft Affinity – When an operating system has a policy of attempting
to keep a process running on the same processor but not
guaranteeing it will do so, this situation is called soft affinity.
2. Hard Affinity – Hard affinity allows a process to specify a subset of processors on which it may run. Some systems, such as Linux, implement soft affinity but also provide system calls like sched_setaffinity() that support hard affinity (see the sketch after this list).
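A minimal sketch of hard affinity on Linux using sched_setaffinity(), pinning the calling process to CPU 0 (Linux-specific; requires _GNU_SOURCE):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);            /* start with an empty CPU set */
    CPU_SET(0, &mask);          /* allow only CPU 0            */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}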

Load Balancing –

Load balancing is the practice of keeping the workload evenly distributed across all processors in an SMP system. Load balancing is necessary only on systems where each processor has its own private queue of processes eligible to execute; on systems with a common run queue, load balancing is unnecessary, because once a processor becomes idle it immediately extracts a runnable process from the common run queue. On SMP (symmetric multiprocessing) systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor; otherwise one or more processors may sit idle while other processors carry high workloads along with lists of processes awaiting the CPU. There are two general approaches to load balancing:
1. Push Migration – In push migration, a task routinely checks the load on each processor; if it finds an imbalance, it evenly distributes the load by moving processes from overloaded processors to idle or less busy ones.
2. Pull Migration – Pull Migration occurs when an idle processor pulls
a waiting task from a busy processor for its execution.

Multicore Processors –

In multicore processors, multiple processor cores are placed on the same physical chip. Each core has a register set to maintain its architectural state and thus appears to the operating system as a separate physical processor. SMP systems that use multicore processors are faster and consume less power than systems in which each processor has its own physical chip. However, multicore processors may complicate the scheduling problem. When a processor accesses memory, it can spend a significant amount of time waiting for the data to become available; this situation is called a memory stall. It occurs for various reasons, such as a cache miss (accessing data that is not in the cache memory). In such cases the processor can spend up to fifty percent of its time waiting for data to become available from memory. To solve this problem, recent hardware designs have implemented multithreaded processor cores, in which two or more hardware threads are assigned to each core; if one thread stalls while waiting for memory, the core can switch to another thread. There are two ways to multithread a processor:
two ways to multithread a processor :
1. Coarse-Grained Multithreading – In coarse-grained multithreading, a thread executes on a processor until a long-latency event, such as a memory stall, occurs. Because of the delay caused by the long-latency event, the processor switches to another thread to begin execution. The cost of switching between threads is high, as the instruction pipeline must be flushed before the other thread can begin execution on the processor core; once the new thread begins execution, it starts filling the pipeline with its instructions.
2. Fine-Grained Multithreading – This form of multithreading switches between threads at a much finer level, mainly at the boundary of an instruction cycle. The architectural design of fine-grained systems includes logic for thread switching, so the cost of switching between threads is small.

Virtualization and Threading –

With virtualization, even a single-CPU system can act like a multiple-processor system. The virtualization layer presents one or more virtual CPUs to each of the virtual machines running on the system and then schedules the use of the physical CPUs among the virtual machines. Most virtualized environments have one host operating system and many guest operating systems. The host operating system creates and manages the virtual machines; each virtual machine has a guest operating system installed, and applications run within that guest. Each guest operating system may be assigned to specific use cases, applications, or users, including time sharing or even real-time operation. Any guest operating-system scheduling algorithm that assumes a certain amount of progress in a given amount of time will be negatively impacted by virtualization. A time-sharing operating system tries to allot 100 milliseconds to each time slice to give users a reasonable response time, but a given 100-millisecond time slice may take much more than 100 milliseconds of virtual CPU time. Depending on how busy the system is, the time slice may take a second or more, resulting in very poor response times for users logged into that virtual machine. The net effect of such scheduling layering is that individual virtualized operating systems receive only a portion of the available CPU cycles, even though they believe they are receiving all the cycles and are scheduling all of them. Commonly, the time-of-day clocks in virtual machines are incorrect, because timers take longer to trigger than they would on dedicated CPUs. Virtualization can thus undo the good scheduling-algorithm efforts of the operating systems within virtual machines.
Unit III:
Memory Management:-
Memory Management in Operating System
The term memory can be defined as a collection of data in a specific format. It
is used to store instructions and process data. The memory comprises a large
array or group of words or bytes, each with its own location. The primary
purpose of a computer system is to execute programs. These programs, along
with the information they access, should be in the main memory during
execution. The CPU fetches instructions from memory according to the value
of the program counter.
To achieve a degree of multiprogramming and proper utilization of
memory, memory management is important. Many memory management
methods exist, reflecting various approaches, and the effectiveness of each
algorithm depends on the situation.

Bare Machine and Resident Monitor


In this article, we are going to talk about two important parts of the
computer system: the bare machine and the resident monitor. Let us first study
why they are important for operating systems.
The bare machine and the resident monitor are not directly part of the
operating system, but when we study memory management these components are
really important, so let us study them one by one and then look at how they
work.
Bare Machine:
A bare machine is the logical hardware that executes programs on the
processor without using an operating system. So far we have studied that we
cannot execute any process without an operating system, but with a bare
machine we can do exactly that.
Initially, before operating systems were developed, instructions were
executed directly on the hardware without any intervening software. The
drawback was that bare machines accepted instructions only in machine
language, so only people with sufficient knowledge of the computer field were
able to operate a computer. After the development of operating systems, the
bare machine came to be regarded as inefficient.
Resident Monitor:
If we ask how code runs on a bare machine, this is the component that makes
it possible: the resident monitor is the code that runs on the bare machine.
The resident monitor works like an operating system that controls the
instructions and performs all necessary functions. It also works like a job
sequencer, because it sequences the jobs and sends them to the processor.
After scheduling the jobs, the resident monitor loads the programs one by one
into the main memory according to their sequence. One important property of
the resident monitor is that there is no gap between program executions, so
processing is faster.
The Resident monitors are divided into 4 parts as:
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing

These are explained below.


1. Control Language Interpreter:
The first part of the resident monitor is the control language
interpreter, which is used to read and carry out instructions from
one level to the next.
2. Loader:
The second part, and the main part, of the resident monitor is the
loader, which loads all the necessary system and application
programs into the main memory.
3. Device Driver:
The third part of the resident monitor is the device driver, which
manages the input-output devices connected to the system. It acts
as the interface between a request and its response: for each
request a user makes, the device driver produces the system's
response to fulfill it.
4. Interrupt Processing:
The fourth part, as the name suggests, processes all interrupts
that occur in the system.

Partitioning in Operating System

Fixed partitioning, also known as static partitioning, is a memory allocation
technique used in operating systems to divide the physical memory into fixed-
size partitions or regions, each assigned to a specific process or user. Each
partition is typically allocated at system boot time and remains dedicated to a
specific process until it terminates or releases the partition.
1. In fixed partitioning, the memory is divided into fixed-size chunks,
with each chunk being reserved for a specific process. When a
process requests memory, the operating system assigns it to the
appropriate partition. Each partition is of the same size, and the
memory allocation is done at system boot time.
2. Fixed partitioning has several advantages over other memory
allocation techniques. First, it is simple and easy to implement.
Second, it is predictable, meaning the operating system can ensure
a minimum amount of memory for each process. Third, it can prevent
processes from interfering with each other’s memory space,
improving the security and stability of the system.
3. However, fixed partitioning also has some disadvantages. It can lead
to internal fragmentation, where memory in a partition remains
unused. This can happen when the process’s memory requirements
are smaller than the partition size, leaving some memory unused.
Additionally, fixed partitioning limits the number of processes that can
run concurrently, as each process requires a dedicated partition.
There are two Memory Management Techniques:
1. Contiguous
2. Non-Contiguous
In Contiguous Technique, executing process must be loaded entirely in the
main memory.
Contiguous Technique can be divided into:
• Fixed (or static) partitioning
• Variable (or dynamic) partitioning
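
As an illustration of fixed partitioning, here is a minimal Python sketch;
the partition size, partition count, and process sizes are invented for the
example, not taken from the text. It shows both internal fragmentation and
the limit on the number of concurrent processes:

# A minimal sketch of fixed (static) partitioning with equal-size
# partitions created at "boot" time. All sizes are illustrative.

PARTITION_SIZE = 100  # KB, fixed at system boot
NUM_PARTITIONS = 4

# None marks a free partition; otherwise store (process_name, process_size).
partitions = [None] * NUM_PARTITIONS

def allocate(process, size):
    """Place a process in the first free partition that can hold it."""
    if size > PARTITION_SIZE:
        return f"{process}: too large for any partition ({size} KB)"
    for i, slot in enumerate(partitions):
        if slot is None:
            partitions[i] = (process, size)
            waste = PARTITION_SIZE - size  # internal fragmentation
            return f"{process} -> partition {i}, internal fragmentation {waste} KB"
    return f"{process}: no free partition (must wait)"

for name, size in [("P1", 60), ("P2", 100), ("P3", 120), ("P4", 30)]:
    print(allocate(name, size))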

Paging and Segmentation:


Paging:
Paging is a technique used for non-contiguous memory allocation. It is a
fixed-size partitioning scheme. In paging, both main memory and secondary
memory are divided into equal fixed-size partitions. The partitions of
secondary memory and of main memory are known as pages and frames,
respectively.
Paging is a memory management method used to fetch processes from secondary
memory into main memory in the form of pages. In paging, each process is
split into parts, where the size of each part is the same as the page size.
The size of the last part may be less than the page size. The pages of a
process are stored in the frames of main memory, depending on their
availability.
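
As an illustration, here is a minimal Python sketch of paging address
translation; the page size and the page table contents are invented for the
example:

# A minimal sketch of paging address translation, assuming a 1 KB page
# size and an illustrative page table (page number -> frame number).

PAGE_SIZE = 1024  # bytes

page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # hypothetical mapping

def translate(logical_address):
    # split the logical address into a page number and a page offset
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        raise KeyError(f"page fault: page {page} not in memory")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset  # physical address

print(translate(2100))  # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220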
Segmentation:
Segmentation is another non-contiguous memory allocation scheme, like paging.
Unlike paging, in segmentation the process is not divided indiscriminately
into fixed-size pages: it is a variable-size partitioning scheme. Unlike
paging, in segmentation secondary and main memory are not divided into
partitions of equal size. The partitions of secondary memory are known as
segments. The details concerning every segment are held in a table known as a
segment table. The segment table contains two main pieces of data about a
segment: the Base, which is the base address of the segment, and the Limit,
which is the length of the segment.

In segmentation, the CPU generates a logical address that contains the
segment number and the segment offset. If the segment offset is less than the
limit, the address is valid; otherwise an error is raised because the address
is invalid.

The figure (not reproduced here) shows the translation of a logical address
to a physical address.
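
As an illustration of that translation, here is a minimal Python sketch of
the base/limit check just described; the segment table values are invented
for the example:

# A minimal sketch of segmentation address translation using the Base and
# Limit fields described above. Segment table values are illustrative.

segment_table = {
    0: {"base": 1400, "limit": 1000},
    1: {"base": 6300, "limit": 400},
    2: {"base": 4300, "limit": 1100},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:       # offset must be less than the limit
        raise ValueError("invalid address: offset exceeds segment limit")
    return entry["base"] + offset      # valid: physical = base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
# translate(0, 1222) would raise, since 1222 >= limit 1000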

The differences between paging and segmentation are as follows:
1. In paging, the program is divided into fixed-size pages. In
segmentation, the program is divided into variable-size sections.
2. For paging, the operating system is accountable. For segmentation,
the compiler is accountable.
3. Page size is determined by the hardware. Section size is given by
the user.
4. Paging is faster in comparison to segmentation. Segmentation is
slower.
5. Paging can result in internal fragmentation. Segmentation can
result in external fragmentation.
6. In paging, the logical address is split into a page number and a
page offset. In segmentation, the logical address is split into a
segment number and a segment offset.
7. Paging uses a page table that holds the frame base address of every
page. Segmentation uses a segment table that holds the base address
and limit of every segment.
8. The page table is employed to keep the page data. The segment table
maintains the segment data.
9. In paging, the operating system must maintain a free-frame list. In
segmentation, the operating system maintains a list of holes in
main memory.
10. Paging is invisible to the user. Segmentation is visible to the
user.
11. In paging, the processor uses the page number and offset to
calculate the absolute address. In segmentation, the processor uses
the segment number and offset to calculate the full address.
12. Paging makes it hard to share procedures between processes.
Segmentation facilitates sharing of procedures between processes.
13. In paging, a programmer cannot efficiently handle data structures.
Segmentation can handle data structures efficiently.
14. Protection is hard to apply in paging. Protection is easy to apply
in segmentation.
15. The size of a page must always be equal to the size of a frame.
There is no constraint on the size of segments.
16. A page is referred to as a physical unit of information. A segment
is referred to as a logical unit of information.
17. Paging results in a less efficient system. Segmentation results in
a more efficient system.
Virtual Demand Paging in Operating System:

The concept of demand paging in the operating system says that we should not
load any pages into main memory until we need them; pages are kept in
secondary memory until they are required.
What is Demand Paging?
Demand paging can be described as a memory management technique that is
used in operating systems to improve memory usage and system
performance. Demand paging is a technique used in virtual memory systems
where pages enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages of a
program into memory at runtime, instead of loading the entire program into
memory at the start. A page fault occurs when the program needs to access a
page that is not currently in memory. The operating system then loads the
required page from the disk into memory and updates the page tables
accordingly. This process is transparent to the running program, which
continues to run as if the page had always been in memory.
What is Page Fault?
The term “page miss” or “page fault” refers to a situation where a referenced
page is not found in the main memory.
When a program tries to access a page, or fixed-size block of memory, that
isn't currently loaded in physical memory (RAM), an exception known as a
page fault happens. Before the program can access the required page, the
operating system must bring it into memory from secondary storage (such as a
hard drive) in order to handle the page fault.
In modern operating systems, page faults are a common component of virtual
memory management. By enabling programs to operate with more data than
can fit in physical memory at once, they enable the efficient use of physical
memory. The operating system is responsible for coordinating the transfer of
data between physical memory and secondary storage as needed.
What is Thrashing?
Thrashing is the term used to describe a state in which excessive paging
activity takes place in computer systems, especially in operating systems that
use virtual memory, severely impairing system performance. Thrashing occurs
when a system's high memory demand and low physical memory capacity
cause it to spend a large amount of time swapping pages between main
memory (RAM) and secondary storage, which is typically a hard disk.

It is caused by insufficient physical memory, overloading, and poor memory
management. The operating system may use a variety of techniques to lessen
thrashing, including lowering the number of running processes, adjusting
paging parameters, and improving memory allocation algorithms. Increasing the
system's physical memory (RAM) capacity can also lessen thrashing by lowering
the frequency of page swaps between RAM and the disk.
Pure Demand Paging
Pure demand paging is a specific implementation of demand paging in which
the operating system loads pages into memory only when the program needs
them. In pure demand paging, no pages are loaded into memory when the
program starts, and all pages are initially marked as being on disk.
Operating systems that use pure demand paging as a memory management
strategy do so without preloading any pages into physical memory before a
task begins. A process's address space is brought into memory one step at a
time, fetching from disk just the parts of the process that are actively
being used.
It is useful for executing huge programs that might not fit entirely in
memory, or for computers with limited physical memory. However, if the
program accesses many pages that are not currently in memory, it can result
in a rise in page faults and performance overhead. Operating systems
frequently use caching techniques and improved page replacement algorithms
to lessen the negative effects of page faults on overall system performance.
Working Process of Demand Paging
Let us understand this with the help of an example. Suppose we want to run a
process P which has four pages: P0, P1, P2, and P3. Currently, the page
table holds pages P1 and P3.
The operating system's demand paging mechanism therefore follows a few steps
in its operation, simulated in the sketch after this list.
• Program Execution: Upon launching a program, the operating
system allocates a certain amount of memory to the program and
establishes a process for it.
• Creating page tables: To keep track of which program pages are
currently in memory and which are on disk, the operating system
makes page tables for each process.
• Handling Page Fault: When a program tries to access a page that
isn’t in memory at the moment, a page fault happens. In order to
determine whether the necessary page is on disk, the operating
system pauses the application and consults the page tables.
• Page Fetch: The operating system loads the necessary page into
memory by retrieving it from the disk. The page's new location in
memory is then reflected in the page table.
• Resuming the program: The operating system resumes the program
where it left off once the necessary page is loaded into memory.
• Page replacement: If there is not enough free memory to hold all
the pages a program needs, the operating system may need to
replace one or more pages currently in memory with pages on the
disk. The page replacement algorithm used by the operating system
determines which pages are selected for replacement.
• Page cleanup: When a process terminates, the operating system
frees the memory allocated to the process and cleans up the
corresponding entries in the page tables.
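
Here is a minimal Python simulation of this flow, assuming a hypothetical
three-frame memory and FIFO page replacement (one of several possible
replacement algorithms); the reference string is invented:

# A minimal simulation of the demand-paging flow above: pages start on
# "disk" and are loaded into a 3-frame memory on page faults, with FIFO
# replacement when memory is full.

from collections import deque

FRAMES = 3
in_memory = deque()   # FIFO order of resident pages
page_faults = 0

def access(page):
    """Reference a page, loading it on a page fault."""
    global page_faults
    if page in in_memory:
        return f"page {page}: hit"
    page_faults += 1                   # page fault: page not in memory
    if len(in_memory) == FRAMES:
        victim = in_memory.popleft()   # page replacement (FIFO victim)
        # a real OS would write the victim back to disk if it is dirty
    in_memory.append(page)             # page fetch from disk
    return f"page {page}: fault, resident set now {list(in_memory)}"

for p in [0, 1, 2, 0, 3, 1]:
    print(access(p))
print("total page faults:", page_faults)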
Advantages of Demand Paging
The demand paging technique provides several benefits that improve the
efficiency of the operating system.
• Efficient use of physical memory: Demand paging allows for more
efficient use of memory because only the necessary pages are loaded
into memory at any given time.
• Support for larger programs: Programs can be larger than the
physical memory available on the system because only the
necessary pages are loaded into memory.
• Faster program start: Because only part of a program is initially
loaded into memory, programs can start faster than if the entire
program were loaded at once.
• Reduced memory usage: Demand paging can help reduce the amount
of memory a program needs, which can improve system
performance by reducing the amount of disk I/O required.
Disadvantages of Demand Paging
• Page fault overhead: The process of swapping pages between
memory and disk can cause a performance overhead, especially if
the program frequently accesses pages that are not currently in
memory.
• Degraded performance: If a program frequently accesses pages
that are not currently in memory, the system spends a lot of time
swapping pages in and out, which degrades performance.
• Fragmentation: Demand paging can cause physical
memory fragmentation, degrading system performance over time.
• Complexity: Implementing demand paging in an operating system can
be complex, requiring sophisticated algorithms and data structures to
manage page tables and swap space.
Deadlocks in Operating System:

A process in an operating system uses resources in the following way:
1. Requests the resource
2. Uses the resource
3. Releases the resource
A deadlock is a situation where a set of processes is blocked because each
process is holding a resource and waiting for another resource acquired by
some other process.
Consider an example where two trains are coming toward each other on the
same track and there is only one track: neither train can move once they are
in front of each other. A similar situation occurs in operating systems when
two or more processes hold some resources and wait for resources held by the
other(s). For example, suppose Process 1 is holding Resource 1 and waiting
for Resource 2, which is acquired by Process 2, while Process 2 is waiting
for Resource 1.

Examples Of Deadlock
1. The system has 2 tape drives. P0 and P1 each hold one tape drive
and each needs another one.
2. Semaphores A and B, initialized to 1; P0 and P1 deadlock as
follows:
• P0 executes wait(A) and is then preempted.
• P1 executes wait(B).
• Now P0 and P1 are in deadlock.

P0:  wait(A);  wait(B);
P1:  wait(B);  wait(A);
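
The same deadlock can be reproduced with Python threads, using two locks in
place of semaphores A and B; running this sketch will normally leave both
threads blocked forever (the sleep only makes the interleaving reliable):

# A minimal sketch of the P0/P1 deadlock above. Each thread acquires one
# lock, then blocks forever waiting for the lock the other thread holds.

import threading, time

A, B = threading.Lock(), threading.Lock()

def p0():
    with A:                 # wait(A)
        time.sleep(0.1)     # give P1 time to take B
        with B:             # wait(B) -- blocks: P1 holds B
            pass

def p1():
    with B:                 # wait(B)
        time.sleep(0.1)
        with A:             # wait(A) -- blocks: P0 holds A
            pass

t0 = threading.Thread(target=p0, daemon=True)
t1 = threading.Thread(target=p1, daemon=True)
t0.start(); t1.start()
t0.join(timeout=2); t1.join(timeout=2)
print("deadlocked:", t0.is_alive() and t1.is_alive())  # True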

3. Assume 200 KB of space is available for allocation, and the
following sequence of events occurs:

P0:  Request 80 KB;  Request 60 KB;
P1:  Request 70 KB;  Request 80 KB;

Deadlock occurs if both processes progress to their second request: after
the first requests, 80 + 70 = 150 KB is allocated, so only 50 KB remains,
which can satisfy neither the 60 KB nor the 80 KB request.


Deadlock can arise if the following four conditions hold simultaneously
(Necessary Conditions)
Mutual Exclusion: One or more resources are non-shareable (only one
process can use a resource at a time).
Hold and Wait: A process is holding at least one resource and waiting to
acquire additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the
process releases the resource.
Circular Wait: A set of processes wait for each other in circular form.

Methods for handling deadlock


There are three ways to handle deadlock
1) Deadlock prevention or avoidance:
Prevention:
The idea is to not let the system enter a deadlock state. The system makes
sure that at least one of the four necessary conditions mentioned above
cannot arise. These techniques are very costly, so we use them in cases
where our priority is making the system deadlock-free.
Zooming into each category individually, prevention is done by negating one
of the above-mentioned necessary conditions for deadlock. Prevention can be
done in four different ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Solve circular wait
Avoidance:
Avoidance is forward-looking: to use the strategy of avoidance, we have to
make an assumption. We need to ensure that all information about the
resources a process will need is known to us before the execution of the
process. We use the Banker's algorithm (a gift from Dijkstra) to avoid
deadlock; a minimal sketch of its safety test follows.
In prevention and avoidance, we get correctness of data, but performance
decreases.
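
As an illustration, here is a minimal Python sketch of the safety test at the
heart of the Banker's algorithm; the Allocation, Max, and Available values
are the classic textbook example, not data from this text:

# A minimal sketch of the Banker's algorithm safety test: Allocation is
# what each process holds, Max is its declared maximum, Available is what
# the system has free, and Need = Max - Allocation.

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]

need = [[m - a for m, a in zip(maximum[i], allocation[i])]
        for i in range(len(allocation))]

def is_safe():
    work, finished = available[:], [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion, then release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(f"P{i}")
                break
        else:
            return False, sequence  # no runnable process: unsafe state
    return True, sequence

print(is_safe())  # (True, ['P1', 'P3', 'P0', 'P2', 'P4'])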

2) Deadlock detection and recovery: If deadlock prevention or avoidance is
not applied, we can handle deadlock by detection and recovery, which
consists of two phases:
1. In the first phase, we examine the state of the processes and check
whether there is a deadlock in the system.
2. If a deadlock is found in the first phase, we apply an algorithm to
recover from it.
In deadlock detection and recovery, we get correctness of data, but
performance decreases.
Recovery from Deadlock
1. Manual Intervention:
When a deadlock is detected, one option is to inform the operator and let them
handle the situation manually. While this approach allows for human judgment
and decision-making, it can be time-consuming and may not be feasible in
large-scale systems.
2. Automatic Recovery:
An alternative approach is to enable the system to recover from deadlock
automatically. This method involves breaking the deadlock cycle by either
aborting processes or preempting resources. Let’s delve into these strategies
in more detail.

Recovery from Deadlock: Process Termination:


1. Abort all deadlocked processes:
This approach breaks the deadlock cycle, but it comes at a significant cost.
The processes that were aborted may have executed for a considerable
amount of time, resulting in the loss of partial computations. These
computations may need to be recomputed later.
2. Abort one process at a time:
Instead of aborting all deadlocked processes simultaneously, this strategy
involves selectively aborting one process at a time until the deadlock cycle is
eliminated. However, this incurs overhead as a deadlock-detection algorithm
must be invoked after each process termination to determine if any processes
are still deadlocked.

Factors for choosing the termination order:


– The process’s priority
– Completion time and the progress made so far
– Resources consumed by the process
– Resources required to complete the process
– Number of processes to be terminated
– Process type (interactive or batch)

Recovery from Deadlock: Resource Preemption:


1. Selecting a victim:
Resource preemption involves choosing which resources and processes
should be preempted to break the deadlock. The selection order aims to
minimize the overall cost of recovery. Factors considered for victim selection
may include the number of resources held by a deadlocked process and the
amount of time the process has consumed.
2. Rollback:
If a resource is preempted from a process, the process cannot continue its
normal execution as it lacks the required resource. Rolling back the process to
a safe state and restarting it is a common approach. Determining a safe state
can be challenging, leading to the use of total rollback, where the process is
aborted and restarted from scratch.
3. Starvation prevention:
To prevent resource starvation, it is essential to ensure that the same process
is not always chosen as a victim. If victim selection is solely based on cost
factors, one process might repeatedly lose its resources and never complete
its designated task. To address this, it is advisable to limit the number of times
a process can be chosen as a victim, including the number of rollbacks in the
cost factor.
3) Deadlock ignorance: If deadlock is very rare, then let it happen and
reboot the system. This is the approach that both Windows and UNIX take; it
is known as the ostrich algorithm.
With deadlock ignorance, performance is better than with the above two
methods, but correctness of data is not guaranteed.
Safe State:
A safe state can be defined as a state in which there is no deadlock. The
system is in a safe state if:
• Whenever a process needs an unavailable resource, it can wait until
the resource has been released by a process to which it has already
been allocated; if no such sequence of releases exists, the state is
unsafe.
• All the resources a process requests can eventually be allocated
to it.
Unit IV:
Resource Protection:-
Resource Protection in Operating System

Protection is especially important in a multiuser environment, where multiple
users use computer resources such as the CPU, memory, etc. It is the operating
system's responsibility to offer a mechanism that protects each process from
other processes. In a multiuser environment, all assets that require protection
are classified as objects, and those that wish to access these objects are
referred to as subjects. The operating system grants different 'access rights' to
different subjects.

System protection in an operating system refers to the mechanisms
implemented by the operating system to ensure the security and integrity of
the system. System protection involves various techniques to prevent
unauthorized access, misuse, or modification of the operating system and its
resources.
There are several ways in which an operating system can provide system
protection:
User authentication: The operating system requires users to authenticate
themselves before accessing the system. Usernames and passwords are
commonly used for this purpose.
Access control: The operating system uses access control lists (ACLs) to
determine which users or processes have permission to access specific
resources or perform specific actions.
Encryption: The operating system can use encryption to protect sensitive
data and prevent unauthorized access.
Firewall: A firewall is a software program that monitors and controls
incoming and outgoing network traffic based on predefined security rules.
Antivirus software: Antivirus software is used to protect the system from
viruses, malware, and other malicious software.
System updates and patches: The operating system must be kept up-to-
date with the latest security patches and updates to prevent known
vulnerabilities from being exploited.
By implementing these protection mechanisms, the operating system can
prevent unauthorized access to the system, protect sensitive data, and
ensure the overall security and integrity of the system.
What is Protection?

Protection refers to a mechanism that controls the access of programs,
processes, or users to the resources defined by a computer system. We can
think of protection as a helper to the multiprogramming operating system, so
that many users may safely share a common logical namespace such as a
directory or files.
Need for Protection:
• To prevent access by unauthorized users
• To ensure that each active program or process in the system
uses resources only according to the stated policy
• To improve reliability by detecting latent errors
Role of Protection:
The role of protection is to provide a mechanism that implements policies
defining the use of resources in the computer system. Some policies are
defined at the time the system is designed, some are set by the management
of the system, and some are defined by the users of the system to protect
their own files and programs. Every application has different policies for
use of resources, and they may change over time, so protection of the system
is not only the concern of the designer of the operating system: the
application programmer should also design protection mechanisms to protect
their system against misuse. Policy is different from mechanism: mechanisms
determine how something will be done, and policies determine what will be
done. Policies change over time and from place to place. The separation of
mechanism and policy is important for the flexibility of the system.

Advantages of system protection in an operating system:

1. Ensures the security and integrity of the system
2. Prevents unauthorized access, misuse, or modification of the
operating system and its resources
3. Protects sensitive data
4. Provides a secure environment for users and applications
5. Prevents malware and other security threats from infecting the
system
6. Allows for safe sharing of resources and data among users and
applications
7. Helps maintain compliance with security regulations and standards
Disadvantages of system protection in an operating system:

1. Can be complex and difficult to implement and manage
2. May slow down system performance due to increased security
measures
3. Can cause compatibility issues with some applications or hardware
4. Can create a false sense of security if users are not properly
educated on safe computing practices
5. Can create additional costs for implementing and maintaining
security measures.

Resource Access matrix and Implementation:

In this article, you will learn about implementing the access matrix in the
operating system. But before discussing the implementation of the access
matrix, you must know about the access matrix in the operating system.

What is Access Matrix in Operating System?

The Access Matrix is a security model of a computer system's protection
state. It is described as a matrix. An access matrix is used to specify the
permissions of each process running in a domain for each object. The rows
of the matrix represent domains, whereas the columns represent objects.
Every matrix cell reflects a set of access rights granted to the processes
of a domain, i.e., each entry (i, j) describes the set of operations that a
process in domain Di may invoke on object Oj.
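
As an illustration, an access matrix can be sketched in Python as a nested
mapping from domains to objects to rights; the entries below anticipate the
four-domain example discussed later in this section and are otherwise
illustrative:

# A minimal sketch of an access matrix: rows are domains, columns are
# objects, and each cell is the set of granted rights.

access_matrix = {
    "D1": {"F1": {"read"}, "F3": {"read"}},
    "D2": {"printer": {"print"}},
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}},
}

def allowed(domain, obj, operation):
    """Check whether entry (domain, obj) contains the requested right."""
    return operation in access_matrix.get(domain, {}).get(obj, set())

print(allowed("D1", "F1", "read"))        # True
print(allowed("D1", "F1", "write"))       # False: D1 may only read
print(allowed("D2", "printer", "print"))  # True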

There are various methods of implementing the access matrix in the operating
system. These methods are as follows:

1. Global Table
2. Access Lists for Objects
3. Capability Lists for Domains
4. Lock-Key Mechanism

Global Table

It is the most basic access matrix implementation. A set of ordered
triples <domain, object, rights-set> is maintained in a file. When an
operation M is to be performed on an object Oj within domain Di, the table is
searched for a triple <Di, Oj, Rk> with M in Rk. If this triple is found, the
operation can proceed; otherwise, an exception (or error) condition is
raised. This implementation has various drawbacks. The table is generally
large and cannot be stored in main memory, so additional input and output
are required.

Access Lists for Objects

Every column of the access matrix may be used as the access list for a
single object. The blank entries can be deleted. For each object, the
resulting list contains ordered pairs <domain, rights-set> that define all
domains with a nonempty set of access rights for that object.

When an operation M is attempted on an object in a domain, we search the
object's access list first. If an entry granting M is found, we allow the
operation; if it isn't found, we check the default set. If M is in the
default set, we grant access. Otherwise, access is denied and an exception
condition arises.

Capability Lists for Domains

A domain's capability list is a collection of objects together with the
operations that can be performed on them. A capability is a name or address
that is used to refer to an object. To perform operation M on object Oj, the
process executes operation M, specifying the capability for object Oj.
Simple possession of the capability implies that access is allowed.

In most cases, capabilities are separated from other data in one of two
ways. Either every object has a tag to indicate whether it is a capability
or ordinary data, or a program's address space is divided into two parts:
one part, accessible to the program, holds the program's normal instructions
and data; the other part holds the capability list and is accessible only to
the operating system.

Lock-Key Mechanism

It is a compromise between access lists and capability lists. Each object
has a list of locks, which are unique bit patterns. Correspondingly, each
domain has a set of keys, which are also unique bit patterns. A process
executing in a domain can access an object only if the domain has a key that
matches one of the object's locks. Processes are not allowed to modify their
own keys.

Now, let's take an example to understand the implementation of an access
matrix in the operating system.
Example:

In this example, there are 4 domains and 4 objects in the matrix: 3 files
(F1, F2, and F3) and one printer. Files F1 and F3 can be read by a process
running in D1. A process running in domain D4 has the same rights as D1, but
it may also write to the files. Only a process running in domain D2 has
access to the printer. The access matrix mechanism is made up of various
policies and semantic features. Specifically, we must ensure that a process
running in domain Di may access only the objects listed in row i.

The protection policies in the access matrix determine which rights should
be included in the (i, j)th entry. We must also decide the domain in which
each process runs; this policy is usually decided by the operating system.
The users determine the contents of the access-matrix entries.

The relationship between a domain and a process might be static or dynamic.
The access matrix provides a mechanism for defining the control for this
domain-process association. We perform a switch operation on an object when
we switch a process from one domain to another. We may regulate domain
switching by including domains among the objects of the access matrix: a
process can switch from domain Di to domain Dj only if it has switch rights
for Dj.
According to the matrix, a process running in domain D2 can transition to
domains D3 and D4. A process in domain D4 may change to domain D1, and
a process in domain D1 may change to domain D2.
Unit V:
Windows NT:-

Installing Windows NT 4.0


1.- Boot the computer with a DOS floppy that has at least FDISK, FORMAT,
and CD-Rom support.

Use FDISK to create a 2 GB partition on the fixed disk and set the partition
active. When exiting FDISK, the computer will reboot to save the partition
information.

After restarting, Format C:

2.- Insert Windows NT 4.0 Server CD-Rom and type "D:\i386\winnt /b". This
starts the Installation process.

Press enter at the Windows NT setup screen.

After files have copied you will be asked if you want to reboot. Press Enter

3.- After reboot. Windows NT setup screen. Press enter.

Press enter.

Page down, page down. Press F8 to accept the license agreement.

When setup lists the computer information press enter to accept it.

4.- Choose the highlighted partition and press enter.

Convert the partition to NTFS and press enter.

Press "C" to convert the file system.

Verify the path to install files (D:\WINNT) and press enter.

5.- Let the system do its oh-so-exhaustive examination of your computer.


Press enter.

Remove all disks and press enter to restart.

Insert Windows NT 4.0 Server CD-Rom and click [OK] when prompted.
6.- Click [Next] at Windows NT Server version 4.0 setup.

At the name and organization window, enter what you want. Be Imaginative.
Click [Next].

Enter the CD key (040-0048126) and click [Next].

7.- Select "Per server" and increase to 10 connections. Click [Next].

Enter "NT2" for Computer name (NetBIOS name) and specify the Server type
"BDC" and click [Next].

Enter the administrator account password and confirm, then click [Next].

Select "no" when asked if you would like to create an emergency repair disk.
Click [Next].

8.- Select components you wish to install. Click [Next].

When the Windows NT setup screen appears again, click [Next].

Select "Wired to the network" When asked for a connection option and click
[Next].

Uncheck the "Install the Microsoft Internet Information Server (IIS)" and click
[Next].

9.- When asked for NIC drivers, insert driver disk #1 in floppy A: and click
[Have Disk].

Select "3com Fast Etherlink/ Etherlink XL PCI Bus Master NIC (3C905B-TX)
and click [OK].

Select the Protocols to be installed (NetBEUI, IPX/SPX, TCP/IP) and click


[Next].

10.- Insert disk #2 when prompted and click [OK].

Select "No" when asked if a DHCP server will be used.

11.- When the TCP/IP properties window appears provide the following:

a. IP address = 172.16.102.3
b. Subnet Mask = 255.255.254.0

c. Default gateway = 172.16.102.1

(NOTE YOUR IP ADDRESS WILL MOST LIKELY BE DIFFERENT)

12.- Click [OK].

Click [Next] to start the network.

Enter the domain (TM2) and click [Next].

Click [Finish]

13.- Select time zone and date and click [OK]

Click [OK] to accept display adapter.

Click [OK] to save settings.

Click [OK]

14.- When the "Installation Complete" screen appears. Remove all disks and
click [Restart Computer].

When prompted, press ctrl+alt+del to login.

PDCs and BDCs in Windows NT:

Windows NT Server organizes groups of computers into domains so that all
the machines in a domain can share a common database and security policy.
Domain controllers are systems that run NT Server and share the centralized
directory database that stores user account and security information for one
domain. When users log on to a domain account, the domain controllers
authenticate the users' username and password against the information in the
directory database. (You might know the directory database as the security
domain database or as the SAM database.)

During NT Server installation, you must designate the role that servers will
play in a domain. NT gives you three choices for this role: PDC, BDC, and
member server (i.e., a standalone server). You create a domain when you
designate a PDC. PDCs and BDCs are crucial elements in domain theory and
practice. To maintain control of and get the most out of the domains you
establish in your NT network, you need to understand what PDCs and
BDCs are, how to synchronize the directory database from a PDC to the
BDCs in its domain, how to promote a BDC to a PDC when the PDC is offline,
how to determine the optimum number of BDCs for a domain, and how to
manage trust relationships between the PDCs of separate domains.

PDCs and BDCs: What's the Difference?


Although NT 4.0 and NT 3.51 domains can contain multiple servers, only one
server in a domain can be a PDC. The PDC stores domain accounts and
security information in the master copy of the directory database, which the
PDC maintains. When you make changes to user accounts or security
information, the PDC records the changes on the directory database master
copy. A PDC is the only domain server that directly receives these changes. In
other words, PDCs store a read-write copy of the directory database.

A domain can have multiple BDCs. Each BDC in a domain maintains a read-
only copy of the PDC's master directory database. You can't make changes to
a BDC's copy of the directory database. Because directory database
duplication occurs between the PDC's master directory database and the
BDCs' directory database copies, you can promote any BDC in a domain to
the PDC if the original PDC fails or you must shut it down for maintenance.
BDCs also help share the load of authenticating network logons.

Having at least one BDC in a domain is crucial. If the PDC fails, you can keep
the domain functioning by promoting the BDC to PDC. Promoting the BDC
ensures that you can make changes to the directory database and propagate
those changes throughout the network. BDC promotion also guarantees
access to network resources and keeps the directory database accessible to
the domain. If the directory database isn't accessible to the domain, users
can't log on and become authenticated to the domain. Computers can't
identify themselves to the domain and therefore can't create the secure
channel necessary for communication between machines in the domain.
Group accounts won't have access to resources in the domain. In short,
without a BDC to promote to PDC, you'll have a lot of explaining to do when
your network comes to a halt.
Standalone Server

A standalone server is a server that runs alone and is not a part of a group. In
fact, in the context of Microsoft Windows networks, a standalone server is one
that does not belong to or is not governed by a Windows domain. This kind of
server is not a domain member and functions more as a workgroup server, so
its use makes more sense in local settings where complex security and
authentication may not be required.

Features and Benefits

Standalone servers can be as secure or as insecure as needs dictate. They
can have simple or complex configurations. Above all, despite the hoopla
about domain security, they remain a common installation.

If all that is needed is a server for read-only files, or for printers alone, it may
not make sense to effect a complex installation. For example, a drafting office
needs to store old drawings and reference standards. Nobody can write files
to the server because it is legislatively important that all documents remain
unaltered. A share-mode read-only standalone server is an ideal solution.

Another situation that warrants simplicity is an office that has many printers
that are queued off a single central server. Everyone needs to be able to print
to the printers, there is no need to effect any access controls, and no files will
be served from the print server. Again, a share-mode standalone server
makes a great solution.

Background

The term standalone server means that it will provide local authentication and
access control for all resources that are available from it. In general this
means that there will be a local user database. In more technical terms, it
means resources on the machine will be made available in either share mode
or in user mode.

No special action is needed other than to create user accounts. Standalone
servers do not provide network logon services. This means that machines that
use this server do not perform a domain logon to it. Whatever logon facility
the workstations are subject to is independent of this machine. It is,
however, necessary to accommodate any network user, so the logon name he or
she uses will be translated (mapped) locally on the standalone server to a
locally known user name. There are several ways this can be done.

Samba tends to blur the distinction a little in defining a standalone server. This
is because the authentication database may be local or on a remote server,
even if from the SMB protocol perspective the Samba server is not a member
of a domain security context.

Through the use of Pluggable Authentication Modules (PAM) (see the chapter
on PAM) and the name service switcher (NSS), which maintains the UNIX-
user database, the source of authentication may reside on another server. We
would be inclined to call this the authentication server. This means that the
Samba server may use the local UNIX/Linux system password database
(/etc/passwd or /etc/shadow), may use a local smbpasswd file, or may use an
LDAP backend, or even via PAM and Winbind another CIFS/SMB server for
authentication.

Windows NT User Accounts

Windows NT requires users to log on with a valid username and password.


NT compares the username and password the user enters with those in the
user accounts database. If the names and passwords match, NT lets the user
log on. NT can store the user accounts database locally, on the user's
computer. These locally validated accounts are called workgroup accounts,
because you can use the accounts to set up multiple NT computers in a
workgroup, or peer-to-peer, relationship. In this case, users log on to their
workgroup computers. Alternatively, NT can check usernames and passwords
against an accounts database on a central domain controller. For NT to use a
central domain controller, you must first implement the NT domain model. In
this model, NT manages the accounts database from a central point, the
Primary Domain Controller (PDC). In the domain model, the accounts are
called domain accounts, and users log on to the domain.

Windows NT System Policies

NT system policies are useful for managing user and machine Registry
changes in the enterprise. They help systems administrators centralize
configuration control in large and small NT environments. They also ease
problems associated with desktop configuration management, such as
delivering icons to your users' desktops. However, NT system policies can be
difficult to configure, can cause widespread damage, and can become
unmanageable if you're not careful.

The Privileges of Policy


NT system policies let you deliver user- and machine-specific Registry
changes each time a user logs on. You can use the System Policy Editor
(SPE) and templates that define which Registry keys your policies affect to
create policy files that perform various functions. The default implementation
for a system policy is to use the SPE to create an ntconfig.pol file, and copy
the file to the replication directory in your domain controller infrastructure. The
replicator service then replicates this directory to all other domain controllers,
and makes it available via a Netlogon share. If you implement a single or
multiple master domain model, you must replicate ntconfig.pol in the master
account, or authenticating domain. The ntconfig.pol file has no effect in the
resource domain. Even if the computer is registered in the resource domain,
the authentication or master account domain delivers policies when a user
logs on. You can change the policy file's name and the location where NT
workstations look for policy files. You can also have multiple policy files that
various NT workstations in a domain can use.

Web Server

A web server is software and hardware that uses HTTP (Hypertext Transfer
Protocol) and other protocols to respond to client requests made over the
World Wide Web. The main job of a web server is to display website content
through storing, processing and delivering webpages to users. Besides HTTP,
web servers also support SMTP (Simple Mail Transfer Protocol) and FTP (File
Transfer Protocol), used for email, file transfer and storage.

Web server hardware is connected to the internet and allows data to be
exchanged with other connected devices, while web server software controls
how a user accesses hosted files. The web server process is an example of
the client/server model. All computers that host websites must have web
server software.

Web servers are used in web hosting, or the hosting of data for websites and
web-based applications -- or web applications.

How do web servers work?


Web server software is accessed through the domain names of websites and
ensures the delivery of the site's content to the requesting user. The
software side comprises several components, with at least an HTTP server.
The HTTP server is able to understand HTTP and URLs. As hardware, a web
server is a computer that stores web server software and other files related
to a website, such as HTML documents, images and JavaScript files.

When a web browser, like Google Chrome or Firefox, needs a file that's
hosted on a web server, the browser will request the file by HTTP. When the
request is received by the web server, the HTTP server will accept the
request, find the content and send it back to the browser through HTTP.

More specifically, when a browser requests a page from a web server, the
process will follow a series of steps. First, a person will specify a URL in a web
browser's address bar. The web browser will then obtain the IP address of the
domain name -- either translating the URL through DNS (Domain Name
System) or by searching in its cache. This will bring the browser to a web
server. The browser will then request the specific file from the web server by
an HTTP request. The web server will respond, sending the browser the
requested page, again, through HTTP. If the requested page does not exist or
if something goes wrong, the web server will respond with an error message.
The browser will then be able to display the webpage.
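
As an illustration of this request/response cycle, here is a minimal web
server sketch using Python's built-in http.server module; the port and page
content are arbitrary choices for the example:

# A minimal sketch of the cycle described above: a browser visiting
# http://localhost:8000/ sends an HTTP request, and the handler finds the
# content and sends it back over HTTP.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = b"<html><body><h1>Hello from a tiny web server</h1></body></html>"
            self.send_response(200)                 # request succeeded
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                  # deliver the page
        else:
            self.send_error(404, "page not found")  # error response

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()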

Multiple domains also can be hosted on one web server.

Examples of web server uses


Web servers often come as part of a larger package of internet- and intranet-
related programs that are used for:

• sending and receiving emails;


• downloading requests for File Transfer Protocol (FTP) files; and
• building and publishing webpages.

Many basic web servers will also support server-side scripting, which is used
to employ scripts on a web server that can customize the response to the
client. Server-side scripting runs on the server machine and typically has a
broad feature set, which includes database access. The server-side scripting
process will also use Active Server Pages (ASP), Hypertext Preprocessor
(PHP) and other scripting languages. This process also allows HTML
documents to be created dynamically.

Dynamic vs. static web servers


A web server can be used to serve either static or dynamic content. Static
refers to content that is shown as is, while dynamic content can be updated
and changed. A static web server consists of a computer and HTTP
software. It is considered static because the server sends hosted files as
is to a browser.

Dynamic web servers consist of a web server and other software, such as an
application server and a database. They are considered dynamic because the
application server can be used to update any hosted files before they are
sent to a browser, and the web server can generate content on request from
the database. Though this process is more flexible, it is also more
complicated.

Leading web servers include Apache, Microsoft's Internet Information Services
(IIS) and Nginx -- pronounced "engine X". Other web servers include Novell's
NetWare server, Google Web Server (GWS) and IBM's family of Domino
servers.

Considerations in choosing a web server include how well it works with the
operating system and other servers; its ability to handle server-side
programming; security characteristics; and the publishing, search engine and
site-building tools that come with it. Web servers may also have different
configurations and set default values. For high performance, a web server
needs high throughput and low latency.

Web server security practices


There are plenty of security practices individuals can set around web server
use that can make for a safer experience. A few example security practices
can include processes like:

• a reverse proxy, which is designed to hide an internal server and act


as an intermediary for traffic originating on an internal server;
• access restriction through processes such as limiting the web host's
access to infrastructure machines or using Secure Socket Shell
(SSH);
• keeping web servers patched and up to date to help ensure the web
server isn't susceptible to vulnerabilities;
• network monitoring to make sure there isn't any unauthorized
activity; and
• using a firewall and SSL, as firewalls can monitor HTTP traffic while
a Secure Sockets Layer (SSL) can help keep data secure.
What is DNS?

The Domain Name System (DNS) is the phonebook of the Internet. Humans
access information online through domain names, like nytimes.com or
espn.com. Web browsers interact through Internet Protocol (IP) addresses.
DNS translates domain names to IP addresses so browsers can load Internet
resources.

Each device connected to the Internet has a unique IP address which other
machines use to find the device. DNS servers eliminate the need for humans
to memorize IP addresses such as 192.168.1.1 (in IPv4), or more complex
newer alphanumeric IP addresses such as 2400:cb00:2048:1::c629:d7a2 (in
IPv6).

What is the Need of DNS?


Every host is identified by an IP address, but remembering numbers is very
difficult for people, and IP addresses are not static; therefore, a mapping
is required from domain names to IP addresses. DNS is used to convert the
domain name of a website to its numerical IP address, as the lookup sketch
below illustrates.
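
As an illustration, the forward lookup can be performed with the resolver
built into Python's socket module; this sketch needs network access and uses
the system's configured DNS servers:

# A minimal sketch of the name-to-address mapping DNS performs.

import socket

for name in ["nytimes.com", "espn.com"]:
    try:
        print(name, "->", socket.gethostbyname(name))  # forward lookup
    except socket.gaierror as e:
        print(name, "-> resolution failed:", e)

# A reverse (inverse) lookup maps an IP address back to a host name:
# socket.gethostbyaddr("<some IP address>")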
Types of Domain
There are various kinds of domain:
1. Generic domains: .com(commercial), .edu(educational),
.mil(military), .org(nonprofit organization), .net(similar to
commercial) all these are generic domains.
2. Country domains: .in (India), .us, .uk
3. Inverse domain: used when we want to find the domain name
associated with an IP address (IP-to-domain-name mapping). DNS can
thus provide both forward and inverse mappings, for example
resolving geeksforgeeks.org to its IP addresses and back.
Organization of Domain
It is very difficult to find the IP address associated with a website
because there are millions of websites, and we should be able to obtain the
IP address immediately, without long delays. For that to happen, the
organization of the DNS database is very important.

Root DNS Server


• DNS record: the domain name, the IP address, the validity, the
time to live, and all the other information related to that domain
name. These records are stored in a tree-like structure.
• Namespace: Set of possible names, flat or hierarchical. The
naming system maintains a collection of bindings of names to
values – given a name, a resolution mechanism returns the
corresponding value.
• Name server: It is an implementation of the resolution mechanism.
DNS = Name service in Internet – A zone is an administrative unit, and a
domain is a subtree.
Dynamic Host Configuration Protocol (DHCP):
DHCP stands for Dynamic Host Configuration Protocol. It is a critical feature
through which the users of an enterprise network communicate. DHCP helps
enterprises smoothly manage the allocation of IP addresses to end-user
client devices such as desktops, laptops, cellphones, etc. It is an application
layer protocol that is used to provide:
Subnet Mask (Option 1 - e.g., 255.255.255.0)
Router Address (Option 3 - e.g., 192.168.1.1)
DNS Address (Option 6 - e.g., 8.8.8.8)
Vendor Class Identifier (Option 43 - e.g.,
'unifi' = 192.168.1.9 ## where unifi = controller)
DHCP is based on a client-server model and on four messages: discover,
offer, request, and acknowledgment (ACK).
Why Use DHCP?
DHCP helps in managing the entire process automatically and centrally.
DHCP helps in maintaining a unique IP Address for a host using the server.
DHCP servers maintain information on TCP/IP configuration and provide
configuration of address to DHCP-enabled clients in the form of a lease offer.
Components of DHCP
The main components of DHCP include:
• DHCP Server: DHCP Server is basically a server that holds IP
Addresses and other information related to configuration.
• DHCP Client: It is basically a device that receives configuration
information from the server. It can be a mobile, laptop, computer, or
any other electronic device that requires a connection.
• DHCP Relay: DHCP relays basically work as a communication
channel between DHCP Client and Server.
• IP Address Pool: It is the pool or container of IP Addresses
possessed by the DHCP Server. It has a range of addresses that can
be allocated to devices.
• Subnets: Subnets are smaller portions of the IP network partitioned
to keep networks under control.
• Lease: It is simply the length of time for which the information
received from the server is valid. When the lease expires, the
client must renew or re-request it.
• DNS Servers: DHCP servers can also provide DNS (Domain Name
System) server information to DHCP clients, allowing them to resolve
domain names to IP addresses.
• Default Gateway: DHCP servers can also provide information about
the default gateway, which is the device that packets are sent to
when the destination is outside the local network.
• Options: DHCP servers can provide additional configuration options
to clients, such as the subnet mask, domain name, and time server
information.
• Renewal: DHCP clients can request to renew their lease before it
expires to ensure that they continue to have a valid IP address and
configuration information.
• Failover: DHCP servers can be configured for failover, where two
servers work together to provide redundancy and ensure that clients
can always obtain an IP address and configuration information, even
if one server goes down.
• Dynamic Updates: DHCP servers can also be configured to
dynamically update DNS records with the IP address of DHCP
clients, allowing for easier management of network resources.
• Audit Logging: DHCP servers can keep audit logs of all DHCP
transactions, providing administrators with visibility into which
devices are using which IP addresses and when leases are being
assigned or renewed.
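
As an illustration, here is a minimal Python sketch of the lease bookkeeping
a DHCP server performs; it models only the pool, lease, and renewal concepts
above, not the actual discover/offer/request/ACK packets, and all addresses
and times are invented for the example:

# A minimal sketch of a DHCP-style lease table: an address pool, a lease
# time, and renewal, keyed by client MAC address.

import ipaddress, time

POOL = [str(ip) for ip in ipaddress.ip_network("192.168.1.0/28").hosts()]
LEASE_SECONDS = 3600
leases = {}  # MAC address -> (ip, expiry timestamp)

def request_lease(mac):
    """Hand out (or renew) an address for a client, like a lease offer."""
    now = time.time()
    if mac in leases:                   # renewal: keep the same address
        ip, _ = leases[mac]
        leases[mac] = (ip, now + LEASE_SECONDS)
        return ip
    in_use = {ip for ip, _ in leases.values()}
    for ip in POOL:                     # allocate the first free address
        if ip not in in_use:
            leases[mac] = (ip, now + LEASE_SECONDS)
            return ip
    raise RuntimeError("address pool exhausted")

print(request_lease("aa:bb:cc:dd:ee:01"))  # 192.168.1.1
print(request_lease("aa:bb:cc:dd:ee:02"))  # 192.168.1.2
print(request_lease("aa:bb:cc:dd:ee:01"))  # renewal: 192.168.1.1 again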

Windows Internet Naming Service

The Windows Internet Naming Service (WINS) converts NetBIOS host names
into IP addresses. It allows Windows machines on a given LAN segment to
recognize Windows machines on other LAN segments.

The service WINS resolves NetBios names into IP addresses and is therefore
an elementary component of a Windows network. Given this fact, this service
should already be installed on the Windows Server located in the network.

Router:

Routers allow devices to connect and share data over the Internet or an intranet.
A router is a gateway that passes data between one or more local area networks
(LANs). Routers use the Internet Protocol (IP) to send IP packets containing data
and IP addresses of sending and destination devices located on separate local
area networks. Routers reside between these LANs where the sending and
receiving devices are connected. Devices may be connected over multiple router
“hops” or may reside on separate LANs directly connected to the same router.

Once an IP packet from a sending device reaches a router, the router identifies
the packet’s destination and calculates the best way to forward it there. The router
maintains a set of route-forwarding tables, which are rules that identify how to
forward data to reach the destination device’s LAN. A router will determine the
best router interface (or next hop) to send the packet closer to the destination
device’s LAN. Once a device sends an IP packet, routers determine that packet’s
best route over the Internet or intranet to reach its destination most efficiently and
in accordance with quality-of-service agreements.
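
As an illustration, here is a minimal Python sketch of a forwarding-table
lookup using longest-prefix matching, which is how routers commonly pick the
most specific route; the table entries and interface names are invented:

# A minimal sketch of the forwarding decision described above: look up the
# destination IP in the forwarding table and pick the most specific
# (longest-prefix) matching route.

import ipaddress

forwarding_table = [
    (ipaddress.ip_network("172.16.102.0/23"), "eth0"),     # directly connected LAN
    (ipaddress.ip_network("172.16.0.0/16"),   "eth1"),     # wider internal route
    (ipaddress.ip_network("0.0.0.0/0"),       "gateway"),  # default route
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in forwarding_table if dest in net]
    # longest prefix wins: the most specific route to the destination
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("172.16.102.3"))  # eth0 (matches /23, /16, and /0)
print(next_hop("172.16.50.9"))   # eth1
print(next_hop("8.8.8.8"))       # gateway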

Three basic router types are deployed today:

• Access routers: An access router connects subscribers to their provider's
network so they can reach the Internet or private networks. Wireless and
wired access routers support these networks to enable compute devices to
connect to Wi-Fi and Ethernet LANs.
• Edge routers: Edge routers logically define subscriber services, apply
policy, meter services, and otherwise manage subscriber sessions. Edge
routers typically support multiple edge services, including business,
residential, video, mobile, and data center edge functionality for potentially
hundreds of thousands of subscribers.
• Core routers: Core routers forward packets across the Internet or private
network backbones to interconnect communication networks. These
routers must efficiently forward packets at high speed while preventing
bottlenecks and packet loss.

Routers provide the essential building blocks network operators need to build
robust networks. Operators can use routers to configure performance metrics with
sophisticated routing algorithms and create traffic engineering policies to alleviate
network congestion and maintain quality of service for subscribers.
