Operating System Lecture 2,3
Evolution of OS:
1. Mainframe Systems
2. Batch Processing Operating System:
This type of OS accepts more than one job, and jobs with similar requirements are batched (grouped) together by computer operators. Whenever the computer becomes available, a batch of jobs is sent for execution, and the output is gradually sent back to the users.
It allows only one program to run at a time.
This OS is responsible for scheduling the jobs according to their priority and the resources they require.
3. Multiprogramming Operating System:
This type of OS keeps several jobs in memory at once so that a single processor can execute more than one job by switching among them.
It increases CPU utilization by organizing jobs so that the CPU always has one job to execute.
The concept of multiprogramming is described as follows:
All the jobs that enter the system are stored in the job pool (on disk). The operating system loads a set of jobs from the job pool into main memory and begins to execute them.
During execution, a job may have to wait for some task, such as an I/O operation, to complete. In a multiprogramming system, the operating system simply switches to another job and executes it. When that job needs to wait, the CPU is switched to yet another job, and so on. When the first job finishes waiting, it gets the CPU back.
As long as at least one job needs to execute, the CPU is never idle.
Multiprogramming operating systems use the mechanism of job scheduling and CPU
scheduling.
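The switching described above can be sketched as a small simulation. The job names and burst lists below are made up for illustration; a real OS is driven by interrupts, not Python lists:

```python
from collections import deque

# Hypothetical sketch: each job is a list of bursts; when the running job
# reaches an "io" burst, the OS switches the CPU to the next ready job
# instead of letting the processor sit idle.
def run_multiprogramming(jobs):
    ready = deque(jobs)      # names of jobs loaded into main memory
    trace = []               # every CPU burst the processor actually executes
    while ready:
        name = ready.popleft()
        burst = jobs[name].pop(0)
        if burst == "cpu":
            trace.append(name)   # the CPU executes this job
        # on "io" we assume the device finishes before the job's next turn
        if jobs[name]:
            ready.append(name)   # job not finished: it rejoins the queue
    return trace
```

With two jobs such as {"A": ["cpu", "io", "cpu"], "B": ["cpu", "cpu"]}, the trace interleaves A and B, showing that the CPU stays busy while A waits for its I/O.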
4. Time-Sharing/Multitasking Operating Systems
Time-sharing (or multitasking) OS is a logical extension of multiprogramming. It provides extra facilities such as:
Faster switching between multiple jobs to make processing faster.
Allowing multiple users to share the computer system simultaneously.
Allowing users to interact with each job while it is running.
These systems use the concept of virtual memory for effective utilization of memory space. Hence, in this OS, no jobs are discarded.
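Time slicing can be sketched as a simple round-robin loop. The job names, burst times, and quantum below are illustrative, not from the notes:

```python
from collections import deque

# Hypothetical round-robin sketch: every job runs for at most one quantum,
# then the CPU switches to the next job, giving each user a quick turn.
def round_robin(burst_times, quantum):
    queue = deque(burst_times.items())   # (job, remaining CPU time)
    order = []                           # which job ran in each time slice
    while queue:
        job, remaining = queue.popleft()
        order.append(job)
        remaining -= quantum
        if remaining > 0:
            queue.append((job, remaining))   # unfinished: back of the queue
    return order
```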
5. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such operating systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple jobs at the same time and make processing faster.
Multiprocessor systems have three main advantages:
Increased throughput: By increasing the number of processors, the system performs more
work in less time.
pg. 1
VUNA – CIT – CSC 203 OPERATING SYSTEM 1 – LECTURE NOTES 2 AND 3 2021
Economy of scale: Multiprocessor systems can cost less than an equivalent set of single-processor systems, because the processors can share peripherals, mass storage, and power supplies.
Increased reliability: If one processor fails, each of the remaining processors picks up a share of the work of the failed processor. The failure of one processor will not halt the system, only slow it down.
6. Distributed Operating Systems
In a distributed system, different machines are connected in a network, and each machine has its own processor and its own local memory.
In this system, the operating systems on all the machines work together to manage the
collective network resource.
It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Advantages of distributed systems:
Resources Sharing
Computation speed up – load sharing
Reliability
Communications
Distributed systems require a networking infrastructure, such as a local area network (LAN) or a wide area network (WAN).
7. Desktop Systems/Personal Computer Systems
The PC operating system is designed for maximizing user convenience and responsiveness.
This system is neither multi-user nor multitasking.
These systems include PCs running Microsoft Windows and the Apple Macintosh.
8. Real-Time Operating Systems (RTOS)
A real-time operating system (RTOS) is a multitasking operating system intended for
applications with fixed deadlines (real-time computing). Such applications include some small
embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial
control, and some large-scale computing systems.
The real time operating system can be classified into two categories:
1. hard real time system and 2. soft real time system.
A hard real-time system guarantees that critical tasks be completed on time. This goal
requires that all delays in the system be bounded, from the retrieval of stored data to the time
that it takes the operating system to finish any request made of it. Such time constraints dictate
the facilities that are available in hard real-time systems.
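One classical admission test for this kind of guarantee, assuming earliest-deadline-first scheduling of periodic tasks whose deadlines equal their periods (an assumption not stated in the notes), accepts a task set only when its worst-case CPU demand does not exceed capacity:

```python
# Hypothetical sketch: each task is (worst_case_exec_time, period); under
# EDF with deadline = period, the task set is schedulable exactly when
# total utilization is at most 1, i.e. the CPU is never over-committed.
def admits(tasks):
    utilization = sum(c / p for c, p in tasks)
    return utilization <= 1.0
```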
A soft real-time system is a less restrictive type of real-time system. Here, a critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems can be mixed with other types of systems. Because of their weaker guarantees, they are risky to use for industrial control and robotics.
PROCESS
A process is a program that is currently under execution, so an active program can be called a process. For example, when you want to search for something on the web, you start a browser; that running browser is a process. Another example is starting your music player to listen to some music of your choice. A process passes through several states during its lifetime:
• New State: This is the state when the process is just created. It is the first state of a
process.
• Ready State: After its creation, when the process is ready for execution, it goes into the ready state. In the ready state, the process is ready to be executed by the CPU but is waiting for its turn. There can be more than one process in the ready state.
• Ready Suspended State: There can be more than one process in the ready state, but due to memory constraints, if the memory is full then some processes from the ready state are placed in the ready suspended state.
• Running State: From the processes present in the ready state, the CPU chooses one by using some CPU scheduling algorithm. That process is then executed by the CPU and is in the running state.
• Waiting or Blocked State: During its execution, a process might require some I/O operation, like writing to a file, or a higher-priority process might arrive. In these situations, the running process goes into the waiting or blocked state and another process gets the CPU. So, in the waiting state, the process is waiting for something.
• Waiting Suspended State: When the waiting queue of the system becomes full then
some of the processes will be sent to the waiting suspended state.
• Terminated State: After the complete execution of the process, the process comes into
the terminated state and the information related to this process is deleted.
The following diagram shows the flow of a process from the new state to the terminated state.
When a process is created, it goes into the new state. From the new state, it goes into the ready state. If the ready queue is full, the process is shifted to the ready suspended state. From the ready state, the CPU chooses a process using a scheduling algorithm, and that process enters the running state. During its execution, a process may need to perform some I/O operation, so it goes into the waiting state; if the waiting queue is full, it is sent to the waiting suspended state. From the waiting state, the process can go back to the ready state after its I/O operation completes. From the waiting suspended state, the process can go to the waiting or ready suspended state. At last, after its complete execution, the process goes to the terminated state and its information is deleted.
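The transitions described above can be written down as a table of allowed moves. The state names follow the list in the text; this encoding is only an illustrative sketch:

```python
# Hypothetical sketch: each state maps to the set of states it may move to,
# exactly as described in the flow above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "ready suspended"},
    "ready suspended": {"ready"},
    "running": {"waiting", "terminated"},
    "waiting": {"ready", "waiting suspended"},
    "waiting suspended": {"waiting", "ready suspended"},
    "terminated": set(),
}

def is_valid_path(path):
    """Check that every consecutive pair of states is an allowed transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(path, path[1:]))
```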
An operating system has a number of processes present in it. Each process has some information that is needed by the CPU for its execution. So, we need some kind of data structure to store the information about a particular process.
A Process Control Block, or simply PCB, is a data structure used to store the information about a process that is needed to manage its scheduling.
Each process is given a PCB, which is a kind of identification card for the process. Every process present in the system has a PCB associated with it, and all these PCBs are connected in a linked list.
Attributes of a Process Control Block
There are various attributes of a PCB that help the CPU to execute a particular process. These attributes are:
• Process Id: A process id is a unique identity of a process. Each process is identified with
the help of the process id.
• Program Counter: The program counter points to the next instruction that is to be executed by the CPU.
• Process State: A process can be in any state out of the possible states of a process. So,
the CPU needs to know about the current state of a process, so that its execution can be
done easily.
• Priority: There is a priority associated with each process. Based on that priority the CPU
finds which process is to be executed first. Higher priority process will be executed first.
• General-purpose Registers: During its execution, a process uses and changes a number of data values. In most cases, we have to stop the execution of one process to start another, and after some time the previous process should be resumed. Since the previous process was working on some data and had changed it, the process must resume with exactly that data. These values are held in storage units called registers, whose contents are saved in the PCB.
• CPU Scheduling Information: It indicates the information about the process scheduling
algorithms that are being used by the CPU for the process.
• List of opened files: A process can deal with a number of files, so the OS should maintain a list of the files opened by a process to make sure that no other process can open the same file at the same time.
• List of I/O devices: A process may need a number of I/O devices to perform various tasks.
So, a proper list should be maintained that shows which I/O device is being used by which
process.
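The attributes above can be gathered into one record. The sketch below is a hypothetical Python rendering; a real PCB lives in kernel memory and its exact fields are OS-specific:

```python
from dataclasses import dataclass, field

# Hypothetical PCB: one field per attribute listed in the text.
@dataclass
class PCB:
    pid: int                                        # Process Id
    program_counter: int = 0                        # next instruction address
    state: str = "new"                              # current process state
    priority: int = 0                               # scheduling priority
    registers: dict = field(default_factory=dict)   # saved general-purpose registers
    open_files: list = field(default_factory=list)  # list of opened files
    io_devices: list = field(default_factory=list)  # I/O devices in use
```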
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. It is the long-term scheduler that is used when a process changes state from new to ready.
Short-Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.
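A minimal sketch of the short-term scheduler's job is picking the highest-priority ready process. The class and priority convention below are made up for illustration:

```python
import heapq

# Hypothetical short-term scheduler: the ready queue is a min-heap keyed on
# priority, so dispatch() always selects the highest-priority process
# (smallest number here) to move from the ready state to the running state.
class ShortTermScheduler:
    def __init__(self):
        self._ready = []                 # heap of (priority, pid)

    def admit(self, pid, priority):
        heapq.heappush(self._ready, (priority, pid))

    def dispatch(self):
        """Select the next process to run, or None if no process is ready."""
        if not self._ready:
            return None
        _, pid = heapq.heappop(self._ready)
        return pid
```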
Medium-Term Scheduler
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
DEADLOCK
A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on a single track: once they are in front of each other, neither train can move. A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s). For example, in the diagram below, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.
Deadlock can arise only if the following four conditions hold simultaneously (the necessary conditions):
Mutual Exclusion: At least one resource is non-shareable (only one process can use it at a time).
Hold and Wait: A process is holding at least one resource while waiting for additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases it.
Circular Wait: A set of processes are waiting for each other in a circular chain.
Deadlock Detection
A deadlock occurrence can be detected by the resource scheduler, which helps the OS keep track of all the resources allocated to different processes.
The methods used to handle the problem of deadlocks are as follows:
1. Deadlock Ignorance
According to this method, it is assumed that deadlock will never occur, so the operating system simply ignores it. Many operating systems take this approach. It can be acceptable for systems used only for browsing and other normal tasks, but ignoring deadlock is not a perfect way to remove it from the operating system.
2. Deadlock Prevention
As discussed in the section above, a deadlock occurs only when all four conditions (mutual exclusion, hold and wait, no preemption, and circular wait) hold in a system. The main aim of deadlock prevention is to violate any one of these four conditions, because if any one of them is violated, the problem of deadlock can never occur. The idea behind this method is simple, but it can be difficult to implement physically in a system.
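One concrete way to violate the circular-wait condition is a standard lock-ordering technique, sketched here with made-up lock names: every process must acquire resources in one global order.

```python
import threading

# Hypothetical sketch of deadlock prevention: if every thread acquires locks
# in sorted name order, no cycle of waiting threads can ever form, so the
# circular-wait condition is violated by construction.
locks = {name: threading.Lock() for name in ("A", "B", "C")}

def acquire_in_order(names):
    """Acquire the named locks in global (sorted) order; return that order."""
    ordered = sorted(names)
    for n in ordered:
        locks[n].acquire()
    return ordered

def release_all(names):
    for n in names:
        locks[n].release()
```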
3. Deadlock Avoidance
This method is used by the operating system to check whether the system is in a safe state or an unsafe state. It checks every step performed by the operating system: a process continues its execution as long as the system stays in a safe state, and once the system would enter an unsafe state, the operating system has to step back.
Basically, with the help of this method, the operating system keeps an eye on each allocation and makes sure that the allocation does not cause any deadlock in the system.
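The safe/unsafe check can be sketched in the style of the classical Banker's algorithm (the matrices passed in are illustrative): the state is safe if all processes can finish in some order using the currently available resources plus whatever finished processes release.

```python
# Hypothetical safe-state check: `available` is the free amount of each
# resource, `allocation[i]` is what process i holds, and `need[i]` is what
# it may still request. Safe means some completion order exists for all.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```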
4. Deadlock Detection and Recovery
With this method, the deadlock is first detected using algorithms on the resource-allocation graph, which represents the allocations of the various resources to the different processes. After the detection of a deadlock, a number of methods can be used to recover from it.
One way is preemption, by which a resource held by one process is given to another process.
The second way is rollback: the operating system keeps a record of process states, so it can roll a process back to a previous state, by which the deadlock situation can be eliminated.
The third way to overcome the deadlock situation is by killing one or more processes.
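Detection itself can be sketched as a cycle search in a wait-for graph, where an edge means "this process waits for that one". The process names below are made up:

```python
# Hypothetical sketch: wait_for maps each process to the processes it is
# waiting on. A cycle in this graph is exactly a circular wait, i.e. deadlock.
def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)
        for q in wait_for.get(p, []):
            if q in on_stack:                 # back edge: circular wait found
                return True
            if q not in visited and dfs(q):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)
```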