Operating System
LECTURE NOTES
ON
OPERATING SYSTEM
Compiled by
UNIT-1
INTRODUCTION
INTRODUCTION:
An operating system is system software that acts as an intermediary between the user of a computer and the computer hardware.
It is considered the brain of the computer.
It controls the internal activities of the computer hardware and provides the user interface.
This interface enables a user to utilize the hardware resources efficiently.
It is the first program that gets loaded into the computer's memory through the process called "booting".
COMPONENTS OF COMPUTER SYSTEM:
In general, we can divide a computer system into the following four components.
Hardware
Operating system
Application programs
Users
As we can see in the figure, the user interacts with the application programs.
The application programs do not access the hardware resources directly.
HARDWARE resources include I/O devices, primary memory, secondary memory (hard disk, floppy disk, etc.) and the microprocessor.
So the operating system is required to access and use these resources.
The application programs are written in such a way that they can easily communicate with these resources through the operating system.
An operating system is the first program that is loaded into the computer's main memory when the computer is switched on.
Some popular operating systems are Windows 9x (95, 98), Linux, UNIX, Windows XP, Vista, etc.
The operating system is responsible for the following functions related to process
management.
i. Process creation (loading the program from secondary storage to main memory)
ii. Process scheduling
iii. Provide mechanism for process synchronization
iv. Provide mechanism for deadlock handling
v. Process termination
5) Peripheral or I/O device Management:
Keep track of the resources (devices, channels, control units) attached to the system.
Communication between these devices and the CPU is overseen by the operating system.
An operating system will have device drivers to facilitate I/O functions involving devices like the keyboard, mouse, monitor, disk, FDD, CD-ROM, printer, etc.
Allocation and de-allocation of resources to initiate I/O operations.
Other management activities are
i. Spooling
ii. Caching
iii. Buffering
iv. Device driver interface
6) File Management:
A file is a collection of related information or record defined by the user.
The operating system is responsible for various activities of file management are
i. Creation and deletion of files
ii. Creation and deletion of directories
iii. Manipulation of files and directories
iv. Mapping files onto secondary storage
7) Secondary storage Management:
Secondary storage is a larger memory used to store huge amounts of data. Its capacity is much larger than primary memory, e.g. floppy disk, hard disk, etc.
The operating system is responsible for handling all the devices that come under secondary storage management.
The various activities are:
i. Free space management
ii. Storage allocation (allocation of storage space when new files have to be written).
iii. Disk scheduling (scheduling the request for memory access)
8) Protection/Security Management:
If a computer system has multiple processes executing concurrently, then the various processes must be protected from one another's activities.
Protection refers to a mechanism for controlling the access of programs, processes or users to the resources defined by the computer system.
9) Error detection and Recovery:
Errors may occur during execution, like divide by zero in a process, a memory access violation, deadlock, an I/O device error or a connection failure.
The operating system should detect such errors and handle them.
Single user OPERATING SYSTEM:
In a single user operating system, only one application program runs at a time.
Advantage:
The CPU has to handle only one application program at a time, so process management is easy in this environment.
Disadvantage:
As the operating system handles only one application at a time, most of the CPU time is wasted.
Multi user OPERATING SYSTEM:
In a multi-user operating system, multiple users can access different resources of a computer at the same time.
This system provides access with the help of a network. The network generally consists of various personal computers that can send and receive information to a multi-user mainframe computer system.
Hence, the mainframe computer acts as the server and the other personal computers act as clients of that server.
Ex: UNIX, Windows 2000
Batch Processing OPERATING SYSTEM:
In a batch processing operating system, interaction between the user and the processor is limited, or there is no interaction at all, during the execution of work.
Data and programs that need to be processed are bundled and collected as a 'batch'.
These jobs are submitted to the computer through punched cards, and jobs with similar needs are batched together and executed as a group.
Advantage:
It is simple to implement.
Disadvantage:
Lack of interaction between user and the program.
Multiprogramming OPERATING SYSTEM:
In a multiprogramming operating system, several users can execute multiple jobs using a single CPU at the same time.
The operating system keeps several programs or jobs in the main memory.
When a job is submitted to the system, it is kept on a magnetic disk in the job pool.
Then some of the jobs are transferred to the main memory according to the size of the main memory.
The CPU executes only one job at a time, which is selected by the operating system.
When that job requires any I/O operation, the CPU switches to the next job in the main memory, i.e. the CPU does not have to wait for the completion of the I/O operation of that job.
When the I/O operation of that job is completed, the CPU switches back to it after the execution of the current job.
E.g. UNIX, Windows 95, etc.
Advantage:
CPU utilization is high, i.e. most of the time the CPU is busy.
Disadvantage:
The user can’t directly interact with the system.
Time sharing Operating System:
This is the logical extension of multiprogramming systems.
The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is allocated to a job only if the job is in memory).
Here the CPU can execute more than one job seemingly simultaneously by switching among them.
The switching is so fast that the user can directly interact with the system during the execution of the program.
This system stores multiple jobs in the main memory and the CPU executes all the jobs in a sequence.
Generally, CPU time is divided into a number of small intervals, each known as a time slice period.
Every process executes for one time slice period; then the CPU switches over to the next process.
The switching is very fast, so it seems that several processes are executed simultaneously.
In the above figure user 5 is active, while user 1, user 2, user 3 and user 4 are in the waiting state and user 6 is in the ready state.
As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e. user 6. In this state user 2, user 3, user 4 and user 5 are in the waiting state and user 1 is in the ready state.
The process continues in the same way and so on.
Advantage:
CPU utilization is high, i.e. most of the time the CPU is busy.
Disadvantage:
The operating system is more complex due to memory management, Disk management etc.
Multiprocessor OPERATING SYSTEM:
Advantage:
Improved reliability:- As the system consists of multiple processors, failure of one processor does not halt the computer system. The other processors in the system continue the task.
Improved throughput:- Throughput is defined as the total number of jobs executed by the CPU per unit time. As this system uses multiple processors, the workload is divided among the different processors.
Economical:- In this system the different processors share the clock, bus, peripherals and memory between them. For this reason the system is more economical than multiple single-processor systems.
Real time OPERATING SYSTEM:
This is of 2 types:
Hard real time operating system
Soft real time operating system
Distributed OPERATING SYSTEM:
Advantages:
It facilitates the sharing of hardware and software resources between different processors.
It increases reliability, as failure of one node does not affect the entire network.
It increases the computational speed of the computer by distributing the workload among different nodes.
It enables different users to communicate with each other, for example using email.
Shell:
The shell reads the commands that you type at the command line, interprets them and sends a request to execute the program. That is why the shell is called a command line interpreter.
Hardware:
Computer hardware refers to the physical parts or components of a computer, such as the monitor, mouse, keyboard, computer data storage, hard disk drive (HDD) and system unit (graphics card, sound card, memory, motherboard and chips), all of which are physical objects that can be touched.
UNIT-2
PROCESS MANAGEMENT
PROCESS:
Process vs Program:
i) A process is the set of executable instructions, i.e. the machine code, whereas a program is a set of instructions written in a programming language.
ii) A process is dynamic in nature, whereas a program is static in nature.
iv) A process resides in main memory, whereas a program resides in secondary storage.
Process in Memory:-
When a process is executed, it changes its state. The current activity of a process is known as the process state. A process has different states. They may be:
New state:
When the request is made by the user, the process is created.
PROCESS SCHEDULING
The objective of multiprogramming is to have some process running at all times, to maximize
CPU utilization.
The objective of time sharing is to switch the CPU among processes so frequently that users can
interact with each program while it is running.
This purpose can be achieved by keeping the CPU busy at all times.
So, when two or more processes compete for the CPU at the same time, a choice has to be made.
This procedure of determining the next process to be executed on the CPU is called Process Scheduling.
The module of the operating system that makes this decision is called as Scheduler.
Process scheduling consists of three sub functions:
I. Scheduling Queue
II. Scheduler
III. Context Switching
I. Scheduling Queue
The operating system maintains several queues for efficient management of processes. These are as follows:
1.Job Queue:
When the process enters into the system, they are put into a job queue.
This queue consists of all processes in the system on a mass storage device such as hard disk.
2.Ready Queue:
From the job queue, the processes which are ready for execution are shifted to the main memory.
In the main memory the processes are kept in the ready queue.
In other words, the ready queue contains all those processes that are waiting for the CPU.
3.Device Queue:
The device queue is a queue that holds the list of processes waiting for a particular I/O device.
Each device has its own device queue.
When a process requires some I/O operation, it is taken out of the ready queue and kept in the device queue.
4.Suspended Queue: It stores the list of suspended processes.
Queuing Diagram:
The process could issue an I/O request and then be placed in an I/O queue.
The process could create a new subprocess and wait for its termination.
The process could be removed forcibly from the CPU as a result of an interrupt, and again put
back in the ready queue.
II. Scheduler:
The module of the operating system that makes the decision of process scheduling is known as the Scheduler.
Its main task is to select the jobs to be submitted into the system and to decide which process to run.
CPU SCHEDULING
Basic Concept:
The objective of multiprogramming is to improve the productivity of the computer. This is done by maximizing CPU utilization, i.e. having some process running at all times, which is achieved by switching the CPU among processes.
But in a uniprocessor system only one process may run at a time, and other processes must wait until the CPU is free and can be rescheduled.
Scheduling is a fundamental operating system function. Almost all computer resources are scheduled before use. The CPU is one of the primary computer resources, and thus its scheduling is central to operating system design.
CPU-I/O Burst Cycle
The success of CPU scheduling depends on an observed property of processes.
Process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states. Process execution begins with a CPU burst.
That is followed by an I/O burst, which is followed by another CPU burst, then another I/O
burst, and so on.
The final CPU burst ends with a system request to terminate execution.
Non-Preemptive Scheduling:
In this case, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state, that is, when the process has completed or requires an I/O operation.
Preemptive Scheduling:
In this case the CPU can be released forcefully. Under this scheduling a process has to leave the CPU forcefully on the basis of criteria like switching from running to ready state or from waiting state to ready state (i.e. when an interrupt occurs or when the time slice period is completed).
DISPATCHER:
Dispatcher is the module that gives control of the CPU to the process selected by the short-
term scheduler.
The time it takes for the dispatcher to stop one process and start another running is known as
the dispatch latency.
This function involves the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.
Scheduling Criteria:
There are several CPU scheduling algorithms, but we have to select one that is suitable for our system.
There are some criteria based on which a CPU scheduling algorithm selects the next process to execute.
CPU utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
Throughput: It can be defined for a system as "the number of jobs completed per unit time".
Turnaround time: The interval of time from submission of a process to the time of
completion. It is the total time spent by a process within the system.
Turnaround time= time spent in the ready queue + time spent in execution + time spent in
I/O operation.
OR
Turn Around time = Completion time – Arrival time
Waiting time: It is the sum of the periods spent waiting in the ready queue. [That means CPU scheduling affects only the amount of time that a process spends waiting in the ready queue, but does not affect the amount of time during which the process executes or does I/O.]
Waiting time = Turn Around time – Burst time
It is the amount of time during which the process is in the ready queue.
Response time: It is the amount of time a process takes to start responding (first response after submission).
Response Time = Time at which process first gets the CPU – Arrival time
SCHEDULING ALGORITHM
The scheduling algorithm decides which of the processes in the ready queue is to be allocated the CPU. There are various scheduling algorithms:
First Come First Served (FCFS) Scheduling
In FCFS scheduling, the process that requests the CPU first is allocated the CPU first. Let the processes arrive in the order P1, P2, P3, P4, P5.
Process Arrival Time CPU Burst
P1 0 20
P2 4 2
P3 6 40
P4 8 8
P5 10 4
Find out the Average Turn Around Time(ATAT) and Avg. Waiting Time(AWT).
Solution:
The result of execution shown in GANTT CHART:
P1 P2 P3 P4 P5
0 20 22 62 70 74
Waiting time:
P1 = 0
P2 = 20 - 4 = 16
P3 = 22 - 6 = 16
P4 = 62 - 8 = 54
P5 = 70 - 10 = 60
Hence the AWT (Average Waiting Time) = (0+16+16+54+60)/5 = 29.2
Turn Around Time (TAT):
P1 = 20 - 0 = 20
P2 = 22 - 4 = 18
P3 = 62 - 6 = 56
P4 = 70 - 8 = 62
P5 = 74 - 10 = 64
Hence Average TAT = (20+18+56+62+64)/5 = 44
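As an illustrative sketch (not part of the original notes), the FCFS computation above can be reproduced with a short C program. The process data are taken from the table in this example; the code assumes the processes are listed in arrival order.

#include <stdio.h>

/* FCFS scheduling sketch: reproduces the worked example above.
 * Assumes the processes are listed in arrival order. */
int main(void) {
    int arrival[] = {0, 4, 6, 8, 10};   /* P1..P5 arrival times   */
    int burst[]   = {20, 2, 40, 8, 4};  /* P1..P5 CPU burst times */
    int n = 5, time = 0;
    double total_wt = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])          /* CPU idle until the job arrives    */
            time = arrival[i];
        time += burst[i];               /* completion time of process i      */
        int tat = time - arrival[i];    /* turnaround = completion - arrival */
        int wt  = tat - burst[i];       /* waiting = turnaround - burst      */
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Average TAT = %.1f, Average WT = %.1f\n",
           total_tat / n, total_wt / n);   /* prints 44.0 and 29.2 */
    return 0;
}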
Disadvantage:
A user having a small job has to wait for a long time.
This algorithm is particularly troublesome for time-sharing systems, because each user needs to get a share of the CPU at regular time intervals.
Advantage:
FCFS scheduling is very simple to implement and understand.
Shortest Job First Scheduling(SJF)
In this type of scheduling, when the CPU is available it is assigned to the process that has the smallest next CPU burst.
If two processes have the same length of next CPU burst, FCFS scheduling is used to break the tie.
It is also known as shortest-next-CPU-burst scheduling.
The SJF algorithm may be either preemptive or non-preemptive.
The choice arises when a new process arrives at the ready queue while a previous process is executing.
The new process may have a shorter next CPU burst than the currently executing process.
A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.
P1 P3 P2 P4
0 15 20 30 38
W.T.
P1 = 0
P2 = 20 - 4 = 16
P3 = 15 - 6 = 9
P4 = 30 - 8 = 22
A.W.T = (0+16+9+22)/4 = 47/4 = 11.75
T.A.T
P1 = 15 - 0 = 15
P2 = 30 - 4 = 26
P3 = 20 - 6 = 14
P4 = 38 - 8 = 30
A.T.A.T = (15+26+14+30)/4 = 85/4 = 21.25
Priority Scheduling
In priority scheduling, a priority value is assigned to each of the processes in the ready queue. The priority value can be assigned either internally or externally.
Internal Priority:
The factors for assigning internal priority are:
Burst time
Memory Requirement
I/O devices
No. Of files
External Priority:
The factors for assigning an external priority value are:
Importance of the process
Amount of funds paid
Political pressure
Priority scheduling may be preemptive or non-preemptive. The major problem with priority scheduling is indefinite blocking, or starvation. The solution to this problem is aging. Aging is a technique which gradually increases the priority of a process that waits in the ready queue for a long time.
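The aging idea can be sketched in C as below; this is an illustrative fragment only (the field names and the choice to raise priority by one unit per scheduling tick are assumptions, not something prescribed by the notes). A smaller priority number is treated as a higher priority, as in the problem that follows.

#include <stdio.h>

/* Illustrative aging sketch: every scheduling tick, each process still
 * waiting in the ready queue has its priority number lowered by one
 * (smaller number = higher priority), so a long-waiting low-priority
 * process eventually overtakes newer arrivals and cannot starve. */
struct pcb { int pid; int priority; int waiting; };

void age_ready_queue(struct pcb table[], int n) {
    for (int i = 0; i < n; i++)
        if (table[i].waiting && table[i].priority > 0)
            table[i].priority--;          /* gradually raise its priority */
}

int main(void) {
    struct pcb rq[] = { {1, 3, 1}, {2, 1, 0}, {3, 4, 1} };
    for (int tick = 0; tick < 3; tick++)  /* three scheduling ticks pass  */
        age_ready_queue(rq, 3);
    for (int i = 0; i < 3; i++)
        printf("P%d priority is now %d\n", rq[i].pid, rq[i].priority);
    return 0;
}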
Problem:
Process B.T Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 2
Gantt Chart:
P2 P4 P1 P3
0 1 2 12 14
W.T
P1=2 P2=0 P3=12 P4=1
A.W.T= (2+0+12+1)/4=3.75
T.A.T
P1=12 P2=1 P3=14 P4=2
A.T.A.T=(12+1+14+2)/4=7.25
Round Robin Scheduling (RR)
In Round Robin scheduling each process gets the CPU for one time slice (time quantum); at the end of every time slice the CPU preempts the running process and moves to the next process in the ready queue.
In FCFS scheduling the ready queue is a FIFO queue, but in RR scheduling the ready queue is treated as a circular queue.
Round Robin scheduling is a purely preemptive scheduling algorithm, because after every time slice period the CPU switches over to the next process in the ready queue.
Process A.T. B.T.
P1 00 20
P2 10 10
P3 15 15
P4 15 10
Time quantum = 5 ms
P1 P1 P2 P3 P4 P1 P2 P3 P4 P1 P3
0 5 10 15 20 25 30 35 40 45 50 55
W.T:
P1=(25-10)+(45-30)=30
P2=30-15=15
P3=(35-20)+(50-40)=25
P4=(20-15)+(40-25)=20
A.W.T=(30+15+25+20)/4=22.5
T.A.T
P1=50
P2=(35-10)=25
P3=(55-15)=40
P4=(45-15)=30
A.T.A.T=(50+25+40+30)/4=36.25
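A minimal Round Robin simulation in C is sketched below. To keep the queue bookkeeping simple it assumes all processes arrive at time 0 (unlike the worked example above, where the arrival times differ and the exact order at ties depends on how simultaneous arrivals and the preempted process are re-queued); the burst times and the 5 ms quantum are only illustrative.

#include <stdio.h>

#define QUANTUM 5

/* Round Robin sketch: all processes are assumed to arrive at t = 0, so the
 * ready queue can be walked in a simple circular fashion. */
int main(void) {
    int burst[] = {20, 10, 15, 10};              /* illustrative burst times */
    int n = 4;
    int remaining[4], finish[4];
    int time = 0, done = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        for (int i = 0; i < n; i++) {            /* circular scan = RR order */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                       /* run for one time slice   */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                finish[i] = time;                /* completion time          */
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++) {
        int tat = finish[i];                     /* arrival time is 0        */
        int wt  = tat - burst[i];                /* waiting = TAT - burst    */
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
    }
    return 0;
}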
Multilevel Queue scheduling
This algorithm partitions the ready queue into several separate queues.
Processes are permanently assigned to one queue based on some criteria such as memory size,
process priority.
Each queue has its own scheduling algorithm.
The foreground queue may use RR and the background queue may use FCFS.
Again, there is a scheduling algorithm to select a queue from among the queues.
If priority scheduling is applied, then no process in a lower-priority queue can be executed as long as there is a process in a higher-priority queue.
If Round Robin scheduling is applied, then each queue gets CPU for a certain amount of time.
Again that time will be divided among the processes in that queue.
[Figure: multilevel queues ordered from highest priority (system processes) through interactive processes down to the lowest-priority queue, each served by the CPU]
Interprocess Communication(IPC)
Overview :
Independent process:-
It is defined as a process that does not share any data and does not communicate with other process.
In other words we can say that modification made to an independent process does not affect the
functioning of other processes.
Co-operating process:-
These processes are used for resource sharing and to speed up a computation procedure.
Interprocess Communication(IPC)
Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate with each other.
Processes are classified into 2 categories. They are:
Independent process: An independent process is not affected by the execution of other processes.
Cooperating process: a co-operating process can be affected by other executing processes.
There are several reasons for providing an environment that allows process cooperation:
Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate
processes or threads.
Convenience: Even an individual user may have many tasks on which to work at one time. For instance, a user may
be editing, printing, and compiling in parallel.
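As a concrete, purely illustrative example of an OS-provided IPC mechanism (the notes do not name a specific one), the C sketch below uses a POSIX pipe so that a parent process can send a message to a cooperating child process.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Minimal IPC sketch: the parent writes a message into a pipe and the
 * child (a cooperating process) reads it back out. */
int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: reads from the pipe          */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
    } else {                         /* parent: writes into the pipe        */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                  /* wait for the child to finish        */
    }
    return 0;
}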
UNIT-3
MEMORY MANAGEMENT
One of the major functions of the operating system is memory management. It is responsible for:
Allocation and de-allocation of physical memory.
Keeping track of which part of memory is currently used by which process.
Deciding which processes are to be loaded into memory.
Free space management.
Dynamic allocation/de-allocation of memory to executing processes, etc.
So 0 to 99 KB is the logical address space, but 240 to 339 KB is the physical address space.
Physical address = logical address + contents of the relocation register.
The mapping between logical and physical addresses is done at run time by the memory management unit (MMU).
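A tiny illustrative sketch of this mapping in C (the register value of 240 KB is taken from the example above; everything else is assumed only for illustration):

#include <stdio.h>

/* Illustrative relocation-register mapping:
 * physical address = logical address + contents of the relocation register
 * (values expressed in KB, following the example above). */
int main(void) {
    int relocation_register = 240;               /* base of the process, in KB */
    for (int logical = 0; logical <= 99; logical += 33) {
        int physical = logical + relocation_register;
        printf("logical %3d KB -> physical %3d KB\n", logical, physical);
    }
    return 0;
}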
SWAPPING:-
Swapping is the method to improve main memory utilization.
When a process is executed it must be in the main memory.
A process can be swapped out temporarily from main memory to secondary memory (hard disk or backing store) and then brought back into main memory for continued execution. This technique is known as "swapping".
The basic operation of swapping is
o Swap-out (roll-out)
o Swap-in (roll-in)
The main memory must accommodate both operating system and various user processes.
Generally, the main memory is divided into 2 partitions.
o Operating system.
o Application program/user processes.
The operating system is placed in either low memory or high memory.
Commonly the operating system is loaded in low memory.
Generally, there are two methods used for partitioning the memory allocation.
o Contiguous memory allocation
o Non-Contiguous memory
allocation Contiguous Memory Allocation:-
It is again divided into two parts.
o Single partition allocation.
o Multiple partition allocation.
Single Partition Allocation:-
In this memory allocation method, the operating system resides in low memory.
The remaining space is treated as a single partition.
This single partition is available for user space/application programs.
Only one job can be loaded into this user space, i.e. the main memory holds only one process at a time, because the user space is treated as a single partition.
Advantage:-
i. It is very simple.
ii. It does not require expertise to understand.
Disadvantage:-
i. Memory is not utilized properly.
ii. Poor utilization of the processor (waiting for I/O).
Multiple Partition Allocation:-
This method can be implemented in 3 ways. These are:
o Fixed equal multiple partition.
o Fixed variable multiple partition.
o Dynamic multiple partition.
Fixed equal multiple partition:-
i. In this memory management scheme the operating system occupies low memory and the rest of main memory is available as user space.
ii. The user space is divided into fixed partitions of equal size. The partition size depends upon the operating system.
iii. Memory wasted within an allocated partition is called "internal fragmentation", and the wastage of an entire partition is called "external fragmentation".
iv. The problem with this method is that memory utilization is not efficient, which causes internal and external fragmentation.
Advantages:-
This scheme supports multiprogramming.
Efficient utilization of CPU & I/O devices.
Simple and easy to implement.
Disadvantages:-
This scheme suffers from internal as well as external fragmentation.
Since, the size of partitions are fixed, the degree of multiprogramming is also fixed.
Fixed variable multiple partition:-
In this scheme the user space is again divided into a fixed number of partitions, but the partitions are of different sizes.
Advantage:-
i. Supports multiprogramming.
ii. Smaller memory loss (expected).
iii. Simple & easy to implement.
Disadvantage:-
i. Suffers from internal as well as external fragmentation.
Dynamic multiple partition:-
In this scheme partitions are created dynamically, so that each process is loaded into a partition of exactly its own size.
Advantage:-
1. Partitions are changed dynamically, so there is no internal fragmentation.
2. Efficient memory and CPU utilization.
Disadvantage:-
1. Suffers from external fragmentation.
Partition Selection Algorithms:-
Whenever a process arrives and there are various holes (free partitions) large enough to accommodate it, the operating system may use one of the following algorithms to select a partition for the process.
o First fit:- In this algorithm, the operating system scans the free storage list and allocates the first partition that is large enough for that process.
Advantage:-
1. This algorithm is fast because very little search is involved.
Disadvantage:-
1. Memory loss may be high.
o Best fit:- In this algorithm the operating system scans the free storage list and allocates the smallest partition that is big enough for the process.
Advantage:-
1. Memory loss will be smaller than the first fit.
Disadvantage:- Search time will be larger as compared to first fit.
o Worst fit:- In this algorithm the operating system scans the entire free storage list and allocates the largest partition to the process.
Disadvantage:- Maximum internal fragmentation.
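The three placement policies can be sketched in one short C function; the hole list, sizes and return convention below are illustrative assumptions, not part of the notes.

#include <stdio.h>

/* Partition-selection sketch.
 * holes[] holds the sizes of the free partitions; returns the index of the
 * hole chosen for a request of size req, or -1 if none is large enough. */
int select_hole(int holes[], int n, int req, char policy) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i] < req) continue;               /* too small, skip         */
        if (policy == 'f') return i;                /* first fit: take first   */
        if (chosen == -1 ||
            (policy == 'b' && holes[i] < holes[chosen]) ||  /* best: smallest  */
            (policy == 'w' && holes[i] > holes[chosen]))    /* worst: largest  */
            chosen = i;
    }
    return chosen;
}

int main(void) {
    int holes[] = {100, 500, 200, 300, 600};        /* free partition sizes    */
    printf("first fit: hole %d\n", select_hole(holes, 5, 212, 'f'));  /* 1 */
    printf("best  fit: hole %d\n", select_hole(holes, 5, 212, 'b'));  /* 3 */
    printf("worst fit: hole %d\n", select_hole(holes, 5, 212, 'w'));  /* 4 */
    return 0;
}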
Compaction:-
Compaction is a technique of collecting all the free spaces together in one block, so that other processes can use this block or partition.
There may be a large number of small chunks of free memory scattered all over physical memory, and each individual chunk may not be big enough to accommodate even a small program.
So, compaction is a technique by which the small chunks of free space are made contiguous to each other, forming a single free partition that may be big enough to accommodate some other processes.
Ex: collect all the fragments together in one block.
Internal fragmentation occurs when a fragment remains unutilized inside a larger memory partition already allocated to a program; external fragmentation occurs when free partitions are left unused outside any allocation.
Both lead to poor memory utilization.
To overcome this problem the memory is allocated in such a way that parts of a single
process may be placed in non-contiguous areas of physical memory. This type of allocation
is known as Non-contiguous allocation.
The two popular schemes in Non-contiguous allocation are paging & segmentation.
Paging
Disadvantage
The scheme may suffer “page break”.
If the number of pages is high, it is difficult to maintain the page table.
Segmentation
In the case of paging, the user's view of memory is different from the actual physical memory.
The user does not think of memory as a linear array of bytes, some containing instructions and some containing data.
Rather, the user views memory as a collection of variable-sized segments, with no necessary ordering among the segments.
Segmentation is a memory management technique that supports this user view of memory.
Example:-
PAGE FAULT
When the processor needs to execute a particular page and that page is not available in main memory, an interrupt occurs to the operating system; this is called a page fault.
When a page fault happens, page replacement may be needed. Page replacement means selecting a victim page in main memory and replacing it with the required page from the backing store or secondary memory.
STEPS FOR HANDLING PAGE FAULT
To access a page, the operating system first checks the page table to know whether the reference is valid or not.
If it is invalid, an interrupt occurs to the operating system, called a page fault.
Then the operating system searches for a free frame in memory.
Then the desired page is loaded from disk into the allocated free frame.
When the disk read is complete, the page table entry is modified by setting the valid bit.
Then the execution of the process restarts from where it was left off.
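A minimal C sketch of the valid/invalid check described above; the page-table layout, the 8 KB page size (borrowed from the demand-paging example later in this unit) and the load_page_from_disk() helper are assumptions used only for illustration.

#include <stdio.h>
#include <stdbool.h>

#define PAGE_SIZE 8192                       /* 8 KB pages, as in the example */
#define NUM_PAGES 9

struct pte { int frame; bool valid; };       /* one page map table entry      */

static int next_free_frame = 0;
/* Hypothetical helper: pretend to find a free frame and read the page in
 * from the backing store; a real OS would pick a victim if no frame is free. */
static int load_page_from_disk(int page) {
    printf("page fault on page %d, loading into frame %d\n", page, next_free_frame);
    return next_free_frame++;
}

/* Translate a logical address, servicing a page fault if the entry is invalid. */
static int translate(struct pte table[], int logical) {
    int page   = logical / PAGE_SIZE;
    int offset = logical % PAGE_SIZE;
    if (!table[page].valid) {                       /* invalid bit => page fault */
        table[page].frame = load_page_from_disk(page);
        table[page].valid = true;                   /* set the valid bit         */
    }
    return table[page].frame * PAGE_SIZE + offset;  /* restart the access        */
}

int main(void) {
    struct pte pmt[NUM_PAGES] = {{0}};              /* all entries start invalid */
    printf("physical address = %d\n", translate(pmt, 20000));  /* faults once    */
    printf("physical address = %d\n", translate(pmt, 20500));  /* same page, hit */
    return 0;
}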
Difference between Paging and Segmentation
- Paging: The main memory is partitioned into frames or blocks. Segmentation: The main memory is partitioned into segments.
- Paging: The logical address space is divided into pages by the compiler or memory management unit. Segmentation: The logical address space is divided into segments specified by the programmer.
- Paging: It may suffer from page break or internal fragmentation. Segmentation: This scheme suffers from external fragmentation.
- Paging: The operating system maintains a page map table for mapping between frames and pages. Segmentation: A segment map table is used for mapping.
- Paging: It does not support the user's view of memory. Segmentation: It supports the user's view of memory.
- Paging: The processor uses the page number and offset to calculate the absolute address. Segmentation: The processor uses the segment number and displacement to calculate the absolute address.
VIRTUAL MEMORY
Virtual memory is a technique which allows the execution of a process even if the logical address space is greater than the physical memory.
Ex: let the program size (logical address space) be 15 MB, but the available memory be only 12 MB. Then 12 MB is loaded in main memory and the remaining 3 MB is kept in secondary memory.
When that 3 MB is needed for execution, 3 MB of the currently loaded portion is swapped out from main memory to secondary memory and the required 3 MB is swapped in from secondary memory to main memory.
Advantages:
Large programs can be written, as the virtual space available is huge compared to physical memory.
More physical memory is effectively available, as programs need occupy only part of the actual physical memory at a time.
DEMAND PAGING
The criterion of this scheme is: "a page is not loaded into main memory from secondary memory until it is needed".
So a page is loaded into main memory on demand; hence this scheme is called "demand paging".
For example, assume that the logical address space is 72 KB and the page and frame size is 8 KB. So the logical address space is divided into 9 pages, numbered from 0 to 8.
The available main memory is 40 KB, i.e. 5 frames are available. The remaining 4 pages are kept in secondary storage.
Whenever those pages are required, the operating system swaps them into main memory.
In the above figure the mapping between pages and frames is done by the page map table (PMT).
In demand paging the PMT consists of 3 fields: page number, frame number and a valid/invalid bit. If a page resides in main memory its V/I bit is set to valid.
Otherwise the page resides in secondary storage and the bit is set to invalid.
Page numbers 1, 3, 4 and 6 are kept in secondary memory, so those bits are set to invalid.
The remaining pages reside in main memory, so those bits are set to valid.
The number of free frames available in main memory is 5, so 5 pages are loaded; the remaining frames are used by other processes (UBOP).
UNIT-4
DEVICE MANAGEMENT
Magnetic Disk Structure:
In modern computers, most of the secondary storage is in the form of magnetic disks. Hence, knowing
the structure of a magnetic disk is necessary to understand how the data in
the disk is accessed by the computer.
Platter:- Each disk has a flat circular shape named a platter. The diameter of a platter ranges from about 1.8 to 2.25 inches.
Cylinder:- The set of tracks that are at one arm position makes up a cylinder.
Read/write head:- This is present just above each surface of every platter.
Disk arm:- The heads are attached to a disk arm that moves all the heads as a unit.
FCFS SCHEDULING
In FCFS disk scheduling, the pending requests are serviced strictly in the order in which they arrive. The big jump from 183 to 37 could be avoided if somehow 14, 37 and 122, 124 were serviced together.
This illustrates the problem with the FCFS algorithm: large total head movement.
SSTF SCHEDULING
The main idea of the Shortest-Seek-Time-First algorithm is to service all the requests close to
the current position of the head before moving far away to service other requests.
Example:
Considering our previous sequence of disk blocks access.
Queue = 98, 183, 37, 122, 14, 124, 65, 67
There is a substantial improvement compared to FCFS algorithm. The total head movement is
as follows.
65 – 53 = 12
67 – 65 = 2
67 – 37 = 30
37 – 14 = 23
98 – 14 = 84
122 – 98 = 24
124 – 122 = 2
183 – 124 = 59
Total Head Movement = 236 cylinders.
But suppose 14 and 183 are in the queue, and a request near 14 arrives; it will be serviced first, and then another request close to 14 arrives and is again serviced first. This can lead to starvation of the request at 183 in the queue.
SSTF is an improvement over FCFS, but it is not an optimal algorithm.
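An illustrative C sketch of the SSTF computation (the request queue and the starting head position are the ones used above):

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

/* SSTF disk scheduling sketch: repeatedly service the pending request
 * closest to the current head position and add up the head movement. */
int main(void) {
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    bool done[8] = {false};
    int n = 8, head = 53, total = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++) {          /* pick nearest pending request */
            if (done[i]) continue;
            if (best == -1 || abs(req[i] - head) < abs(req[best] - head))
                best = i;
        }
        total += abs(req[best] - head);
        head = req[best];
        done[best] = true;
    }
    printf("total head movement = %d cylinders\n", total);   /* prints 236 */
    return 0;
}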
SCAN SCHEDULING
In this algorithm, the disk arm works like an elevator, starting at one end and servicing requests all the way up to the other end, and then starting from the other end in the reverse direction. To use the SCAN algorithm, we need to know two pieces of information:
1. Direction of scan
2. Starting point
Let's consider our example and suppose the disk head starts at 53 and moves in the direction of 0.
C-SCAN ALGORITHM
In this algorithm, the head moves from one end to the other servicing requests along the way; however, it does not service requests on the return trip, but instead goes back to the beginning directly, treating the cylinders as a circular queue.
Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41, 122, 14, 124, 65, 67. The LOOK scheduling algorithm is used. The head is initially at cylinder number 53, moving towards larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199.
Device management
Shared:-
These are devices that can be shared between several processes.
Considering an example like a hard disk: it is shared, but by interleaving between different processes' requests.
All conflicts for a device need to be resolved by predetermined policies that decide which request is handled first.
Virtual:-
These devices are a combination of dedicated and shared devices.
For example, a printer is a dedicated device, but using spooling (queues) it can be shared.
A print job isn't sent straight to the printer; instead it goes to the disk (spool) until it is fully prepared with all the necessary printer sequences and formatting, then it goes to the printer, ensuring that the printers (and all I/O devices) are used efficiently.
In order to answer these questions, the I/O traffic controller uses one of the following databases:
i) Unit Control Block (UCB)
ii) Control Unit Control Block (CUCB)
iii) Channel Control Block (CCB)
I/O scheduler:-
If there are more I/O requests pending than available paths, it is necessary to choose which I/O request is satisfied first. The module that applies this scheduling is known as the I/O scheduler.
Each type of I/O device has its own device handler and scheduling algorithm, like FCFS, SSTF or SCAN.
Spooling:
Spooling (simultaneous peripheral operation on-line) keeps the data for a slow dedicated device, such as a printer, in a buffer on disk, so that the device can effectively be shared by several processes, as described above.
Race condition:-
A race condition is a situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
UNIT- 5
DEAD LOCKS
System Model:
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each of which consists of a number of identical instances. A process may utilize a resource in the following sequence:
1) Request:- A process requests a resource through a system call. If the resource is not available it will wait.
Example: system calls open( ), malloc( ), new( ), and request( ).
2) Use:- After getting the resource, the process can make use of it by performing its work.
Example: printing to the printer or reading from the file.
3) Release:- After the completion of the task the resource is no longer required by that process, so it should be released.
Example: close( ), free( ), delete( ), and release( ).
Resources include the CPU, memory, I/O devices, etc. When a process requests a resource, if it is free it will be allocated to that process; but if the resource is busy with another process, the requesting process has to wait till that resource is free.
Deadlock: Deadlock is a situation where a set of processes are blocked because each process is holding a resource
and waiting for another resource acquired by some other process.
For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2 which is acquired
by process 2, and process 2 is waiting for resource 1.
A deadlock situation can arise if the following four conditions hold simultaneously in the system.
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
1) MUTUAL EXCLUSION:- At least one resource must be held in a non-sharable mode. That means only one process can use that resource at a time.
Ex: a printer is non-sharable, but an HDD is a sharable resource.
So if the resource is not free, the requesting process has to wait till the resource is released by the other process.
2) HOLD AND WAIT:- There must be a process which is already holding at least one resource and requesting (waiting) for another resource which is currently held by another waiting process.
3) NO PREEMPTION:- Resources cannot be preempted. That means a resource can't be released by a process unless and until it has completed its task, e.g. the printer will be released only when the printing work is finished.
4) CIRCULAR WAIT:- Suppose there are n processes {P0, P1, P2, ..., Pn-1}, all of them waiting, such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn-1 is waiting for a resource held by P0, forming a circular chain.
If all the above 4 conditions are satisfied in a system, then deadlock may occur; but if any one of the conditions is not satisfied, then deadlock will never occur.
A Resource Allocation Graph (RAG) is a diagrammatic representation used to determine the existence of deadlock in the system.
It is a directed graph.
The bullet symbols within a resource are known as the instances of that resource.
There are two kinds of edges:
i) Request edge
ii) Assignment edge
REQUEST EDGE:- Whenever a process requests a resource, an edge is drawn from the process to the resource; this is called a request edge.
ASSIGNMENT EDGE:- Whenever a resource is allocated to a process, the request edge is converted to an assignment edge from the instance of the resource to the process.
NOTE:-If the RAG contains NO CYCLE , then there is NO DEADLOCK in the system.
- If the RAG contains a CYCLE, then there MAY BE A DEADLOCK in the system.
- If the resources have exactly one instance then a cycle indicates a deadlock.
- If the resources have multiple instance per resource then a cycle indicates that “there may be
deadlock”.
Process wait for graph(PWFG) :- A process wait for graph can be obtained by
removing/collapsing resource symbols in the RAG.
The following example shows the resource allocation graph with a deadlock.
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P1
DEADLOCK AVOIDANCE
-> To avoid deadlock it is required to keep additional information about the processes, i.e. the operating system will be given prior information about each process, such that
-> if all the processes execute in some sequence, then the system will never enter a deadlock state.
SAFE STATE:- A state is safe if the system can allocate the available resources to each process in some order and still avoid deadlock.
Process   Maximum Need   Allocated   Remaining Need
P0        10             05          5
P1        04             02          2
P2        09             02          7
Free = 12 - 9 = 3
So, the safe sequence is <P1, P0, and P2>. If the system will always remain in safe state then deadlock will
never occur. So, when a process requests for a resource that is currently available the system must decide
whether that resource will be allocated or the process will wait. The request will be granted only if the
allocation leaves the system in a safe state.
The safe-state method above is used when the system contains one type of resource having multiple instances.
Resource Allocation Graph Algorithm:
We can use this algorithm for deadlock avoidance if the system contains different types of resources but each has only a single instance.
In this graph, besides the assignment edge and request edge, a third edge known as a "claim edge" is added.
A claim edge from process Pi to Rj indicates that the process Pi may request resource Rj in the FUTURE. A claim edge is similar to a request edge but is represented as a dashed line.
If a process requests resource Rj, then that request is granted only if converting the request edge Pi->Rj to the assignment edge Rj->Pi does not form a cycle in the graph.
If there are two claim edges for the same resource, granting that resource to one of the processes may complete a cycle, so such an allocation is avoided; if the resource were allocated to one of the processes while the other holds R1, a deadlock could arise.
The resource allocation graph algorithm is not applicable when a resource type has multiple instances.
BANKER'S ALGORITHM
This algorithm is used for systems having multiple resource types, each with multiple instances.
Banker's algorithm is less efficient than the RAG algorithm.
The name was chosen because the algorithm could be used in a banking system to ensure that the bank never allocates its available cash in such a way that it can no longer satisfy the needs of all its customers.
When a new process enters into the system
- It must declare the maximum number of instances of each resource type that it may need.
- This number should not exceed the total number of resources in the system.
Safety algorithm
The algorithm for finding out whether or not a system is in safe state. It can be described as
follow.
STEP 1:- Work is a vector of length m and Finish is a vector of length n.
Initialize Work = Available and Finish[i] = false for i = 1, 2, 3, ..., n.
STEP 2:- Find an i such that
Finish[i] = false and Need_i <= Work.
If no such i exists, go to step 4.
STEP 3:- Work = Work + Allocation_i; Finish[i] = true.
Go to step 2.
STEP 4:- If Finish[i] = true for all i, then the system is in a safe state.
The complexity of the algorithm is O(m*n^2), i.e. the algorithm may require on the order of m*n^2 operations to decide whether a state is safe.
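For illustration, here is a compact C sketch of the safety algorithm for a single resource type with multiple instances, using the Maximum/Allocated figures from the safe-state example above (so Need = Max − Allocation and Available = 3); a full banker's implementation would use vectors of length m instead of single integers.

#include <stdio.h>
#include <stdbool.h>

/* Safety-algorithm sketch for one resource type (m = 1), using the data
 * from the safe-state example above; Available = 3. */
int main(void) {
    int max[]   = {10, 4, 9};        /* maximum need of P0, P1, P2      */
    int alloc[] = {5, 2, 2};         /* currently allocated instances   */
    int n = 3, work = 3;             /* Work starts as Available        */
    bool finish[3] = {false};
    int order[3], count = 0;

    while (count < n) {
        bool progressed = false;
        for (int i = 0; i < n; i++) {
            int need = max[i] - alloc[i];
            if (!finish[i] && need <= work) {  /* step 2: Need_i <= Work     */
                work += alloc[i];              /* step 3: reclaim allocation */
                finish[i] = true;
                order[count++] = i;
                progressed = true;
            }
        }
        if (!progressed) break;      /* step 4: no further candidate       */
    }

    if (count == n) {
        printf("system is in a safe state, safe sequence:");
        for (int i = 0; i < n; i++) printf(" P%d", order[i]);
        printf("\n");                /* prints P1 P0 P2 for this data      */
    } else {
        printf("system is NOT in a safe state\n");
    }
    return 0;
}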
DEADLOCK DETECTION
If a system does not employ either a deadlock prevention or a deadlock avoidance algorithm, then a deadlock situation may occur.
In that case the system needs an algorithm that will check whether a deadlock has occurred in the system (deadlock detection).
If all the resources have only single instance, then we can detect a deadlock state by using “wait-for-
graph” (WFG).
It is similar to RAG but only difference is that here the vertices are only processes.
There is an edge from Pi to Pj if there is an edge from Pi to some resource R and also an edge from R to Pj.
A system is in a deadlock state if and only if the wait-for graph contains a cycle, so we can detect deadlocks by searching for cycles.
In this figure there are two cycles: one is P1 -> P2 -> P1 and the second is P2 -> P3 -> P2. So the system contains two deadlocks.
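When every resource has a single instance, deadlock detection therefore reduces to finding a cycle in the wait-for graph. Below is a small illustrative depth-first-search sketch in C; the adjacency matrix encodes the two cycles mentioned above (P1→P2→P1 and P2→P3→P2) and is an assumption for demonstration only.

#include <stdio.h>
#include <stdbool.h>

#define N 3   /* processes P1..P3 (indices 0..2) */

/* wfg[i][j] = true means Pi is waiting for a resource held by Pj */
bool wfg[N][N];

/* DFS over the wait-for graph; a back edge to a vertex on the current
 * recursion stack means a cycle, i.e. a deadlock. */
bool dfs(int u, bool visited[], bool onstack[]) {
    visited[u] = onstack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!wfg[u][v]) continue;
        if (onstack[v]) return true;                 /* found a cycle       */
        if (!visited[v] && dfs(v, visited, onstack)) return true;
    }
    onstack[u] = false;
    return false;
}

int main(void) {
    wfg[0][1] = true;   /* P1 waits for P2 */
    wfg[1][0] = true;   /* P2 waits for P1 */
    wfg[1][2] = true;   /* P2 waits for P3 */
    wfg[2][1] = true;   /* P3 waits for P2 */

    bool visited[N] = {false}, onstack[N] = {false}, deadlock = false;
    for (int i = 0; i < N && !deadlock; i++)
        if (!visited[i]) deadlock = dfs(i, visited, onstack);

    printf(deadlock ? "deadlock detected\n" : "no deadlock\n");
    return 0;
}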
Multiple/several instances of Resource Type:-
The wait for graph method is not applicable to several instance of a resource type.
So we need another method to resolve this problem. The algorithm used is known as the "deadlock detection algorithm".
This algorithm is similar to the banker's algorithm and it uses several data structures.
-Available: - A vector of length ‘m’ indicates the number of available resources of each type.
-Allocation: - An n*m matrix defines the no. of resources of each type currently allocated to each
process.
-Request:- An n*m matrix indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection algorithm:-
STEP 1:- Work and Finish are vectors of length m and n. Initialize Work = Available.
For i = 1, 2, 3, ..., n:
if Allocation_i != 0, then Finish[i] = false;
otherwise Finish[i] = true.
STEP 2:- Find an index i such that
Finish[i] = false and Request_i <= Work.
If no such i exists, go to step 4.
STEP 3:- Work = Work + Allocation_i; Finish[i] = true.
Go to step 2.
STEP 4:- If Finish[i] = false for some i, then the system is in a deadlock state, i.e. process Pi is deadlocked.
RECOVERY FROM DEADLOCK
When the detection algorithm detects that deadlock exists in the system then there are two
methods for breaking a deadlock.
- One solution is simply to abort processes one by one to break the circular wait.
- The second solution is to preempt some resources from one or more of the deadlocked processes.
PROCESS TERMINATION:-
This method is used to recover from deadlock. We use one of two approaches for process termination:
- Abort all deadlocked processes
- Abort processes one by one until the deadlock cycle is eliminated.
i. Abort all deadlocked processes:- This means releasing all the processes in the deadlocked state and starting the allocation again from the starting point.
- It is an expensive method.
ii. Abort one process at a time until the deadlock cycle is broken:- In this method, first abort one of the processes in the deadlocked state and allocate its resources to some other process in the deadlocked state.
- Then check whether the deadlock is broken or not.
- If YES, then the deadlock is eliminated. If NO, abort another process from the deadlocked state and check again.
- Continue this until the system recovers from the deadlock.
- This is also an expensive method, but better than the first one.
- In this method there is overhead, because the deadlock detection algorithm must be invoked after each process is aborted.
- Ex: End Task in Windows.
- There are some factors which determine which process is to be aborted:
I. Priority of the process.
II. How long the process has computed and how much longer it needs before completion.
III. How many resources the process has already used.
IV. How many more resources it needs to complete.
RESOURCE PREEMPTION:-
Selecting a victim:- Select a victim resource from the deadlocked state, and preempt that one.
Rollback:- When a resource is preempted from a process, that process naturally goes into the waiting state. So we must roll back the process to some safe state so that it can be restarted from that state rather than from the beginning, i.e. roll back the process and its resources up to some safe state, and restart it from that state.
- This method requires the system to keep more information about the state of all the running processes.
Starvation:- How do we ensure that starvation will not occur? It should be kept in mind that resources should not be preempted from the same process again and again; otherwise that process will not be completed for a long period of time.
- That is, a process (or its resources) can be picked as a victim only a finite number of times, not more than that; otherwise it creates starvation.
Unit-6
File Management
A file is a primary resource in which we can store information and from which we can retrieve the information when it is required.
There can be numeric data files, alphabetic data files, alphanumeric files and binary data files.
In general terms, a file is a sequence of bits, bytes, lines or records.
All computer applications need to store and retrieve information. As computers can store information on various storage media, the operating system provides a uniform logical view of information storage on the various secondary storage media, like magnetic disks, magnetic tapes, optical disks, etc.
This uniform logical storage unit is called a file. So a file is a collection of related information which is stored on secondary storage.
FILE ATTRIBUTES
A file has different attributes. The attributes may vary from one operating System to other.
*Name- A name is usually a string of characters. A symbolic name which is in human readable
form.
*Identifier- It is usually a number and is a unique tag that identifies the file within the file system; it is a unique identification of a file which is internal to the system.
*Type- Normally expressed as an extension to the file name. It indicates the type of the file.
*Protection- It specifies the access control information; it controls who can do reading, writing, executing and so on.
*Time and Date- It specifies the time and date of file creation.
*User identification- This is useful for protection, security and last-usage monitoring.
File System
A file system consists of two distinct parts:
A. A collection of files, each storing related data.
B. A directory structure, which organizes and provides information about all the files in the file system.
FILE ORGANIZATION
File organization refers to the manner in which the records of a file are organized on the secondary
storage.
Basically, a file is a set of logical records. It is allocated disk space in terms of physical blocks.
Sequential
Direct
Indexed
Partitioned
Sequential:- In this method, the information or records stored in a file are processed in sequence, i.e. the records are stored strictly in the same order as they occur physically in the file.
Direct:- The records are stored in any order, as suits the application.
Indexed:- In this method, an index is created for the file. This index contains pointers (physical addresses) to the various blocks or records.
FILE OPERATION
To define a file in a proper manner ,there are different operations are performed on files.
To allow storage and retrieval of information from a file different system provide different
operations.
*Read:- To read a file, first of all we search the directory for the file.
If the file is found, the system needs to keep a read pointer to the location in the file where the next read is to take place.
*Delete:- When a file is no longer required, its occupied disk space needs to be freed.
To delete a file, we search the directory for the named file; when the file is found, we release all its file space so that other files can reuse the space, and erase the directory entry.
*Truncate:- Used when the user wants to erase the contents of a file but wants to retain its attributes.
It is not necessary to delete the file and then recreate it; this can be done by a truncate operation, i.e. truncating a file removes the contents only, but the attributes remain as they are.
*Close:- When all accesses are finished, the attributes and disk addresses are no longer needed, so the file should be closed in order to release the internal table space.
*Append:- This operation adds new information at the end of an existing file.
Rename:-
It frequently happens that a user needs to change the name of an existing file.
This operation allows you to rename an existing file.
FILES TYPES
When designing a file system, we need to consider whether or not the operating system should recognize and support file types. A common technique for implementing file types is to include the type as part of the file name.
Generally, the name of the file is split into two parts: 1- name, 2- extension (which are usually separated by a period '.').
The file type depends on the extension of the file.
The following section describes different types of files with their extension and
function
FILE ACCESS METHODS
Files are used to store data. The information present in a file can be accessed by various access methods. Different systems use different access methods.
The following are the most commonly used access methods:
Sequential access
Direct access
Indexed sequential access
Sequential access:-
This method is the simplest among all methods. Information in the file is processed in order, one record after the other.
Magnetic tapes support this type of file access.
Ex: for a file consisting of 100 records, if the current position of the read/write head is the 45th record and we want to read the 75th record, then it is accessed sequentially through records 45, 46, 47, ..., 73, 74, 75.
So the read/write head traverses all the records between 45 and 75.
0 45 75 100
Sequential files are typically used in batch applications and payroll processing.
Direct access:-
Direct access is also called relative access. In this method records can
read/write randomly without any order.
The direct access method is based on a disk model of files, because a disk allows random access to any file block.
Ex: consider a disk consisting of 256 blocks. If the current position of the r/w head is the 55th block and we want the 200th block, then we can access the 200th block directly, without any restriction. Another example: suppose a CD contains 10 songs; at present we are listening to song no. 3 and we want to listen to song no. 7. We can shift from song no. 3 to song no. 7 without any restriction.
Indexed sequential access:-
The main disadvantage of a sequential file is that it takes more time to access a record. To overcome this problem, we can use this method.
In this method (indexed sequential file, ISF), the records are stored sequentially for efficient processing, but they can be accessed directly using an index or key field. Keys are pointers which contain the addresses of various blocks.
Records are organized in sequence based on a key field.
Suppose a file consists of 60,000 records; the master index divides the total index into 6 blocks.
Each block contains a pointer to a secondary index.
The secondary index divides the 10,000 records into 10 indexes.
Each index points to the original location of the records, and each index entry consists of:
1- A key field
2- A pointer field
Suppose we want to access the 55,550th record; the file management system (FMS) then accesses the master index entry covering records 50,000 to 60,000.
This block (50,000 to 60,000) contains a pointer, and this pointer points to the 6th entry of the secondary index.
This index points to the original location of the records from 55,000 to 56,000.
From there it follows the sequential method.
That is why this method is said to be an indexed sequential file; this method is neither purely sequential nor purely direct access.
Generally, indexed files are used in airline reservation systems and payroll systems.
File directories:-
The directory contains information about the files, including attributes, locations and ownership.
Sometimes directories contain sub-directories as well.
The directory is itself a file, and it is owned by the operating system.
It is accessible by various file management routines.
Directory structure:-
Sometimes the file system consists of millions of files; when the number of files increases, it becomes very difficult to manage them.
To manage these files:
First, group the files.
Then load one group of files into one partition.
Each such partition is called a "directory".
A directory structure provides a mechanism for organizing many files in thefile
system.
Different operations on the file directories:-
Search for a file:-search the directory structure for required file.
Create a file:- Whenever we create a file, we should make an entry for it in the directory.
Delete a file:- When a file is no longer needed, we remove it from the directory.
List a directory:- We can see the list of files in the directory.
Rename a file:- Whenever we want to change the name of a file, we can change it.
Traverse the file system:- If we need to access every directory and every file within the directory structure, we can traverse the file system.
There are different types of directory structures available. They are:
Single level directory
Two level directory
Tree structured directory
Acyclic directory
General graph directory
Single level directory:-
It is the simplest of all directory structures.
In this directory system there is only one directory, and it contains all the files.
All files are contained in the same directory.
[Figure: a single directory containing File1, File2, File3, ..., File n]
Disadvantage:-
This structure has some significant limitations, even for a single user, because if the number of files increases, it is difficult to keep track of them and also quite difficult to remember the names of all the files.
As these files are in the same directory, they must have unique names.
Two-level directory:-
The problem with the single-level directory is that different users may accidentally use the same name for their files.
To avoid this problem, each user needs a private directory, so that names chosen by one user do not interfere with names chosen by a different user.
The two-level structure is divided into two levels of directories:
1- master directory (root directory)
2- sub-directories (user directories)
[Figure: two-level directory — a root directory whose entries are the user directories, each containing files such as A, B, C]
Here the root directory is the first-level directory. It consists of the entries of the user directories.
The user-level directories are user1, user2 and user3, and they contain the files A, B and C.
Tree structured directory:-
In a tree-structured directory, each user directory can in turn contain sub-directories, so files can be organized into nested groups.
[Figure: tree-structured directory — a root directory with nested sub-directories containing files such as A, B, C, X]
Acyclic graph directory:-
When we add links to an existing tree-structured directory, the pure tree structure is destroyed, resulting in a simple graph structure.
The primary advantage of this structure is that traversing is easy and file sharing is also possible.
File implementation:-
Files are normally stored on disks, so the main problem is how to allocate space to these files so that disk space is utilized effectively and files can be accessed quickly.
Contiguous allocation:-
In this method each file occupies a set of contiguous blocks on the disk.
Ex: consider a disk consisting of 1 KB blocks. A 100 KB file would be allocated 100 consecutive blocks. With 2 KB blocks, it would be allocated 50 consecutive blocks.
The file 'mail' in the above figure starts from block 19 with length = 6 blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.
In this figure the right-hand side part is the file allocation table.
It contains a single entry for each file, showing the file name, the starting block of the file and the size of the file.
This method is best suited for sequential files.
Disadvantage:
It is difficult to find enough contiguous free blocks on the disk.
External fragmentation occurs (i.e. some free blocks may be left between two files).
Linked allocation:-
In this method each file is a linked list of disk blocks; the blocks may be scattered anywhere on the disk, and each block contains a pointer to the next block of the file.
Advantages:
Avoid external fragmentation.
Suited for sequential files.
Disadvantages:
The pointer itself occupies some memory within each block, so less space is available for storing information.
Accessing a block takes much time, since the chain of pointers has to be followed.
Indexed allocation:-
This method solves all the problems of the linked allocation method.
It does so by bringing all the pointers together into one location, known as the index block.
The index block holds the pointers to the other blocks.
An individual index block is provided for every file, and it contains all of that file's disk block addresses.
When a file is created, the pointers for its blocks are placed in its index block.
The figure shows the indexed allocation of disk space.
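A small illustrative C sketch of the index-block idea; the block size, the number of pointers per index block and the field names are assumptions made only for this example.

#include <stdio.h>

#define BLOCK_SIZE   1024   /* illustrative block size in bytes        */
#define PTRS_PER_IDX 16     /* block addresses held by one index block */

/* Each file gets one index block; entry i holds the disk-block number of
 * the i-th data block of the file (-1 means "not allocated"). */
struct index_block {
    int data_block[PTRS_PER_IDX];
};

/* Map a byte offset inside the file to the disk block that holds it. */
int block_of_offset(const struct index_block *idx, long offset) {
    int i = (int)(offset / BLOCK_SIZE);
    if (i >= PTRS_PER_IDX) return -1;    /* beyond what one index block covers */
    return idx->data_block[i];
}

int main(void) {
    struct index_block idx;
    for (int i = 0; i < PTRS_PER_IDX; i++) idx.data_block[i] = -1;
    idx.data_block[0] = 9;               /* first data block lives in disk block 9  */
    idx.data_block[1] = 16;              /* second data block lives in disk block 16 */
    printf("offset 1500 is in disk block %d\n", block_of_offset(&idx, 1500));
    return 0;
}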
Advantages:
Indexed allocation supports direct access and avoids external fragmentation, since any free block on the disk can satisfy a request for space.
Free space management:-
Generally the files are stored on disk, so management of the disk space is a major problem for the
designer; if we want to allocate space for a file, we have to know which blocks on the
disk are available.
Thus we need a disk allocation table in addition to the file allocation table.
To keep track of free disk space, the file system maintains a free-space list. The free-space
list records all the disk blocks which are free, i.e. not allocated to any file or directory.
To create a file, we search the free space list. When a file is deleted its disk space is added
to the free space list.
The following techniques are used for free disk space management:-
1- Bit vector (bit table)
2- Chained free blocks (linked free-space list)
3- Index block list
Bit vector (bit table):-
A bit vector is a collection of bits in which each disk block is represented by one
bit.
If the block is free, the bit is 0.
If the block is allocated, the bit is 1.
Ex:- consider a disk where blocks 4, 8, 14 and 17 are free (counting blocks from 1).
The bit vector is 111011101111101101, in which the 0 bits appear at positions 4, 8, 14 and 17.
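A minimal sketch of searching the bit vector for a free block; a real implementation would pack the bits into machine words and scan a word at a time, but an array of 0/1 flags (initialized to the example above, blocks counted from 1) keeps the idea visible.

```c
#include <stdio.h>

#define NUM_BLOCKS 18

/* Bit vector for the example above: bitmap[i] is 0 if block i+1 is free,
   1 if it is allocated (blocks are counted from 1, as in the notes). */
static const char bitmap[NUM_BLOCKS] = {
    1,1,1,0,1,1,1,0,1,1,1,1,1,0,1,1,0,1
};

/* Finding a free block is a linear scan for the first 0 bit. */
int first_free_block(void) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (bitmap[i] == 0)
            return i + 1;   /* convert array index back to block number */
    return -1;              /* no free block */
}

int main(void) {
    printf("first free block: %d\n", first_free_block());  /* prints 4 */
    return 0;
}
```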
Chained free blocks (linked free-space list):-
Another approach is to link all the free disk blocks together, keeping a pointer to the first
free block.
The first free block contains a pointer to the next free disk block, and so on.
In the example above we would keep a pointer to block 4 as the first free block; block 4 would
contain a pointer to block 8, which would point to block 14, which in turn would point to block 17,
and so on.
The chained free-block scheme is not very efficient, because traversing the list requires reading each block in turn.
Index block list:-
In this technique, the addresses of n free blocks are stored in the first
free block.
The first n-1 of these blocks are actually free.
The last one holds the disk address of another block containing the addresses of another
n free blocks.
Advantages:
The addresses of a large number of free blocks can be found quickly.
File protection:-
Any information present in the computer system must be protected from physical damage and
improper access.
Files can be damaged due to hardware problems (such as extreme temperature) and may
also be deleted accidentally.
So there is a need to protect these files. There are many methods for providing
protection to files.
File protection depends on the type of system:
- In a single-user system we can provide protection by simply removing the floppy disks
and storing them in a safe place.
- But in a multi-user system, various mechanisms are used to provide protection.
They are:-
1- Type of access
2- Access control
3- Other protection approaches (such as passwords).
Type of access:-
We can provide protection by simply prohibiting access.
Controlled access is provided by protection mechanisms, which can accept or
reject an access request depending on the type of access.
The various operations that can be controlled are listed below (a small permission-check sketch follows the list):-
Read:- allows reading from the file
Write:- allows writing or rewriting the file
Execute:- allows executing a stored file
Append:- allows writing new information at the end of the file
Delete:- allows deleting the file
List:- allows listing the name and attributes of the file
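Here is the small permission-check sketch referred to above: each controllable operation is represented as one bit of a permission mask, and the protection mechanism accepts or rejects a request by testing that bit. The flag names and values are assumptions for illustration only.

```c
#include <stdio.h>

/* Each controllable operation is one bit of a permission mask
   (illustrative flags, not from the notes). */
enum access_type {
    ACC_READ    = 1 << 0,
    ACC_WRITE   = 1 << 1,
    ACC_EXECUTE = 1 << 2,
    ACC_APPEND  = 1 << 3,
    ACC_DELETE  = 1 << 4,
    ACC_LIST    = 1 << 5
};

/* The protection mechanism accepts or rejects a request by checking
   whether the requested operation is present in the file's permissions. */
int is_allowed(unsigned permissions, enum access_type requested) {
    return (permissions & requested) != 0;
}

int main(void) {
    unsigned file_perms = ACC_READ | ACC_APPEND | ACC_LIST;
    printf("read:  %d\n", is_allowed(file_perms, ACC_READ));   /* 1 */
    printf("write: %d\n", is_allowed(file_perms, ACC_WRITE));  /* 0 */
    return 0;
}
```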
Access control:-
In this approach to protection, access depends on the identity of the user.
Different users may need different types of access to a file or directory.
The most common method is to associate an access list with each file, giving the identity of each
user and the types of access allowed to that user.
When a user requests access to a file, the operating system first checks the access list related
to that file.
If that particular user is listed for the requested access, the operating system allows the access.
If not, a protection violation occurs and the operating system denies the request.
Advantage:- it can handle complex access methodologies.
Disadvantage:- the list becomes very long when the number of users increases, so it is
difficult to construct and maintain.
In order to solve this problem, a condensed form of the access list is used: the system classifies the users
into three different categories in relation to each file (a short sketch of checking these classes follows the list).
Owner:- the user who created the file.
Group:- the set of users who share the file and need the same type of access.
Universe:- all other users of the system.
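The sketch below illustrates the owner/group/universe idea in the style of Unix mode bits (an assumed representation for illustration, not necessarily the notes' intent): the system first decides which class the requesting user falls into and then checks the requested permission against that class's mask.

```c
#include <stdio.h>

/* Three permission masks per file, one per user class (Unix-style rwx bits). */
struct file_mode {
    unsigned owner, group, universe;
};

enum { PERM_R = 4, PERM_W = 2, PERM_X = 1 };

/* Pick which of the three masks applies to the requesting user,
   then check the requested permission bit. */
int may_access(const struct file_mode *m,
               int is_owner, int in_group, unsigned requested) {
    unsigned mask = is_owner ? m->owner : in_group ? m->group : m->universe;
    return (mask & requested) == requested;
}

int main(void) {
    struct file_mode mode = { PERM_R | PERM_W, PERM_R, 0 };   /* rw- r-- --- */
    printf("owner write:   %d\n", may_access(&mode, 1, 0, PERM_W)); /* 1 */
    printf("group write:   %d\n", may_access(&mode, 0, 1, PERM_W)); /* 0 */
    printf("universe read: %d\n", may_access(&mode, 0, 0, PERM_R)); /* 0 */
    return 0;
}
```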
Disadvantages: the scheme is less flexible than a full access list, since access rights can only be specified for these three broad classes of users.
Unit-7
System programming
System programming:- system programming is the activity of programming system software.
The primary distinguishing characteristic of system programming compared with application
programming is that application programming aims to produce software which provides services to the user directly,
whereas system programming aims to produce software and software platforms which provide
services to other software.
System programming requires a greater degree of hardware awareness.
Application program:-
1. Application software is a set of one or more programs designed to carry
out operations for a specific application.
2. For example, a payroll package is designed to produce pay slips as its major product, and an
application package for processing examination results produces mark sheets as its major
product.
3. The person who prepares the application program is known as an application programmer.
4. Nowadays application packages are used for applications such as banking, administration,
insurance, publishing, manufacturing, science and engineering.
System software:-
1. System software is also known as system packages. It is a set of one or more
programs which are designed not to perform a specific application but to
operate the computer system properly.
2. System programs help or assist the user in performing several operations, such as
inputting data to and outputting data from the system.
3. They also execute the application programs.
4. They manage and monitor the activities of all hardware such as memory, printers, keyboards
etc.
5. System software is very complex to design, so it is rarely designed in-house; it is designed
by system programmers.
Assemblers:
1. A computer program which translates an assembly language program to its machine
language equivalent is known as an assembler.
2. The assembler is a system program which is supplied by the manufacturer.
3. A symbolic program written by a programmer in assembly language is called a source
program. After the source program has been converted into machine language by an
assembler, it is referred to as an object program.
4. The input to the assembler is the assembly language program and the output from the
assembler is the machine language program.
Compiler:
1. A compiler is a program that translates a high-level language into machine-level
language by reading the entire source code.
2. A program written by a programmer in a high-level language is called the source program;
after it has been converted into machine language by a compiler, it is referred to as the object
program.
3. So the input to a compiler is known as the source program and the output from a compiler is
known as the object program.
4. A single compiler cannot translate all high-level languages into machine-level
language; each high-level language needs a dedicated compiler for its compilation.
5. A compiler is a large program which resides in secondary storage. When it is required, it
is copied into main memory.
Interpreter
An interpreter is also a translator; it translates a high-level language into machine-level
language by reading the statements one by one.
Here translation and execution alternate for each statement encountered in the high-level language
program.
The interpreter translates one instruction, the control unit executes the resulting
machine code, then the next instruction is translated, and so on.
An interpreter is simple to write and requires less space in main memory. Since the lines are
translated one by one, execution is slower than with a compiler.
Compiler vs Interpreter:-
Compiler: software that translates the high-level language into machine language by reading the entire code at a time.
Interpreter: software that translates the high-level language into machine language by reading one statement at a time.
Stages of the compiler
The compiler takes as input a source program and produces as output an equivalent sequence of
machine instructions. The compiler performs this translation through a sequence of stages or phases.
Lexical analyzer/scanner:-
• Lexical analysis is the first phase of compiler which is also termed as scanning.
• The source program is scanned to read the stream of characters, and those characters are grouped to
form sequences called lexemes, which produce tokens as output.
Token: a token is a sequence of characters that represents a lexical unit matching a
pattern, such as a keyword, operator or identifier. The scanner separates the characters of the source
language into groups that logically belong together; these groups are called tokens. The usual
tokens are keywords, operators and identifiers (a small scanner sketch follows).
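The following is the small scanner sketch mentioned above: a toy lexical analyzer (an illustrative assumption, far simpler than a real one) that groups characters into lexemes and prints each as a KEYWORD, IDENTIFIER, NUMBER or OPERATOR token.

```c
#include <stdio.h>
#include <ctype.h>
#include <string.h>

/* Tiny keyword table; real scanners use the full keyword set of the language. */
int is_keyword(const char *s) {
    const char *keywords[] = { "if", "else", "while", "int", "return" };
    for (int i = 0; i < 5; i++)
        if (strcmp(s, keywords[i]) == 0)
            return 1;
    return 0;
}

/* Scan one line of source text and print a <TOKEN-CLASS, lexeme> pair per lexeme. */
void scan(const char *src) {
    size_t i = 0;
    while (src[i] != '\0') {
        if (isspace((unsigned char)src[i])) { i++; continue; }
        if (isalpha((unsigned char)src[i])) {          /* identifier or keyword */
            char lexeme[64]; size_t n = 0;
            while (isalnum((unsigned char)src[i]) && n < 63)
                lexeme[n++] = src[i++];
            lexeme[n] = '\0';
            printf("<%s, %s>\n", is_keyword(lexeme) ? "KEYWORD" : "IDENTIFIER", lexeme);
        } else if (isdigit((unsigned char)src[i])) {   /* number */
            char lexeme[64]; size_t n = 0;
            while (isdigit((unsigned char)src[i]) && n < 63)
                lexeme[n++] = src[i++];
            lexeme[n] = '\0';
            printf("<NUMBER, %s>\n", lexeme);
        } else {                                       /* single-character operator */
            printf("<OPERATOR, %c>\n", src[i]);
            i++;
        }
    }
}

int main(void) {
    scan("int count = count + 10 ;");
    return 0;
}
```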
Syntax analyzer/parser:- The output of the lexical analyzer is passed to this syntax analyzer.
The syntax analyzer checks whether each statement is valid or not; every language has its own production rules (grammar).
If a sentence follows these rules, then the sentence is valid.
To check the validity of a sentence, two techniques are used (commonly, top-down parsing and bottom-up parsing).
Intermediate code generation:- This phase uses the structure produced by the syntax analyzer to
create a stream of simple instructions. These instructions are similar to assembly language.
Code optimization:- This is an optional phase, whose job is to improve the intermediate code so
that the ultimate object program can run faster.
Code generation:- This phase produces the object code by deciding where memory space will
be allocated to the variables, literals and constants.
Table management:- This portion of the compiler keeps track of the names used by the program
and records essential information about them. The data structure used to record this information is called a
symbol table.
Error handler:- The error handler is invoked when an error in the source program is detected.
Generally such errors are detected at the syntax analysis phase.
Both the table management and error handler routines interact with all the phases of the
compiler.