OS Notes Unit-1

An Operating System (OS) serves as an interface between users and computer hardware, managing essential tasks such as file, memory, and process management. Various types of operating systems exist, including batch, multiprogramming, time-sharing, and distributed systems, each with distinct functionalities and advantages. Popular examples of operating systems include Windows, Linux, macOS, Android, and iOS.

DEPARTMENT OF COMPUTER SCIENCE-MCA OPERATING SYSTEM

WHAT IS AN OPERATING SYSTEM?

An Operating System (OS) is an interface between the computer user and the computer hardware. Every
computer system must have at least one operating system to run other programs. Applications such as
browsers, MS Office, Notepad, games, etc., need an environment in which to run and perform their tasks.

An operating system is software which performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices such
as disk drives and printers.

Some popular operating systems include Linux, MS Windows, Ubuntu, macOS, Fedora, Solaris, FreeBSD,
Chrome OS, CentOS, Debian, Deepin, VMS, OS/400, AIX, z/OS, etc.

Definition: An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.

Functions of the Operating System


• Resource Management: The operating system manages and allocates memory, CPU
time, and other hardware resources among the various programs and processes running
on the computer.
• Process Management: The operating system is responsible for starting, stopping, and
managing processes and programs. It also controls the scheduling of processes and
allocates resources to them.


• Memory Management: The operating system manages the computer’s primary memory and provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as access
controls and encryption.
• Job Accounting: It keeps track of time and resources used by various jobs or users.
• File Management: The operating system is responsible for organizing and managing
the file system, including the creation, deletion, and manipulation of files and directories.
• Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and interfaces
to enable communication between the devices and the computer.
• Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols, and sharing
resources such as printers and files over a network.
• User Interface: The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), a
Command-Line Interface (CLI), or a combination of both.
• Backup and Recovery: The operating system provides mechanisms for backing up data
and recovering it in case of system failures, errors, or disasters.
• Virtualization: The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This can
enable efficient use of resources and flexibility in managing workloads.
• Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing resource
usage, and analysing system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share a computer system
and its resources simultaneously by providing time-sharing mechanisms that allocate
resources fairly and efficiently.
• System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources. System calls
provide a standardized interface between applications and the operating system, enabling
portability and compatibility across different hardware and software platforms.
• Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.


Examples of Operating Systems


• Windows (GUI-based, PC)
• GNU/Linux (personal systems, workstations, ISP servers, file and print servers, three-tier
client/server systems)
• macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
• Android (Google’s Operating System for smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
*******************

Types of Operating Systems

An operating system performs all the basic tasks like managing files, processes, and memory. It thus
acts as the manager of all the resources (the resource manager) and becomes an interface between the
user and the machine. It is one of the most essential pieces of software present in a device.

An operating system works as an interface between the system programs and the hardware. There are
several types of operating systems, many of which are mentioned below:

❖ Batch Operating System


❖ Multi-Programming System
❖ Multi-Processing System
❖ Multi-Tasking Operating System
❖ Time-Sharing Operating System
❖ Distributed Operating System
❖ Network Operating System
❖ Real-Time Operating System
❖ Embedded Operating System

Batch Operating System

A batch processing operating system is designed to manage multiple jobs in sequence without user
interaction. Jobs with similar requirements are grouped into batches and executed one after another.
Batch systems are designed to support a wide range of batch processing tasks, including data
warehousing, OLAP and data mining, big data processing, data integration, and time series analysis.

Batch processing is used in many industries to improve efficiency. A batch operating system manages
multiple tasks and processes in sequence: the operating system itself manages the jobs, so a business
can submit many tasks and have them run one after another without an operator waiting for each one
to finish.

Batch processing operating systems are designed to execute a large number of similar jobs or tasks
without user intervention. They are commonly used in business and scientific applications where a
large number of jobs need to be processed in a specific order.

Overall, batch processing operating systems are ideal for processing large volumes of similar jobs
efficiently, but they are less flexible than other types of operating systems that allow for user
interaction and dynamic job scheduling.
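To make the idea concrete, here is a minimal sketch (not taken from these notes) of the core batch idea: jobs collected into a queue are run strictly one after another with no user interaction. The job names and the run_job() helper are hypothetical.

    #include <stdio.h>

    /* Hypothetical helper: pretend to run one job to completion. */
    static void run_job(const char *job) {
        printf("Running job: %s\n", job);    /* no user interaction while it runs */
    }

    int main(void) {
        /* A batch of similar jobs collected ahead of time. */
        const char *batch[] = { "payroll", "sales-report", "backup" };
        int count = sizeof(batch) / sizeof(batch[0]);

        for (int i = 0; i < count; i++)      /* jobs execute strictly in sequence */
            run_job(batch[i]);
        return 0;
    }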

The advantages of batch processing operating systems include:


1. Efficient use of resources: Batch processing operating systems allow for the efficient use
of computing resources, as jobs are processed in batches and scheduled to run when
resources are available.
2. High throughput: Batch processing operating systems can process a large number of jobs
quickly, allowing for high throughput and fast turnaround times.
3. Reduced errors: As batch processing operating systems do not require user intervention,
they can help reduce errors that may occur during manual job processing.
4. Simplified job management: Batch processing operating systems simplify job
management by automating job submission, scheduling, and execution.
5. Cost-effective: Batch processing operating systems can be cost-effective, as they allow
for the efficient use of resources and can help reduce errors and processing time.
6. Scalability: Batch processing operating systems can easily handle a large number of jobs,
making them scalable for large organizations that require high-volume data processing.

Disadvantages: There are many disadvantages to using batch operating systems, including:

• Limited functionality: Batch systems are designed for simple tasks, not for more
complex tasks. This can make them difficult to use for certain tasks, such as managing
files or software.
• Security issues: Because batch systems are not typically used for day-to-day tasks, they
may not be as secure as more common operating systems. This can lead to security risks
if the system is used by people who should not have access to it.
• Interruptions: Batch systems can be interrupted frequently, which can lead to missed
deadlines or mistakes.
• Inefficiency: Batch systems are often slow and difficult to use, which can lead to
inefficiency in the workplace.

Multiprogramming in Operating System

As the name suggests, multi means more than one and programming means the execution of a program.
When more than one program can execute in an operating system, it is termed a multiprogramming
operating system.

Before the concept of multiprogramming, computing took place in a way that did not use the CPU
efficiently: the CPU executed only one program at a time. The problem in early computing was that
when a program entered the waiting state for an input/output operation, the CPU remained idle, which
led to underutilization of the CPU and poor performance. Multiprogramming addresses this issue.


Multiprogramming was developed in the 1950s. It was first used in mainframe computing.

The major task of multiprogramming is to maximize the utilization of resources.


Multiprogramming is broadly classified into two types namely

❖ Multi-user operating system


❖ Multitasking operating system

Multi-user and multitasking operating systems differ in many respects. A multitasking operating
system allows you to run more than one program simultaneously. The operating system does this by
moving each program in and out of memory, one at a time. When a program is moved out of memory, it
is temporarily stored on disk until it is needed again.

A multi-user operating system allows many users to share processing time on a powerful central
computer from different terminals. The operating system does this by quickly switching between
terminals, each of which receives a limited amount of CPU time on the central computer. The operating
system switches so rapidly between terminals that each user appears to have constant access to the
central computer. If there are many users on such a system, the time it takes the central computer to
respond may become more noticeable.

Features of Multiprogramming
1. Requires only a single CPU.
2. Context switching between processes.
3. Switching happens when the current process enters the waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.

Disadvantages of Multiprogramming
1. Prior knowledge of scheduling algorithms is required.
2. If there are a large number of jobs, then long jobs may have to wait a long time.
3. Memory management is needed in the operating system because all types of tasks are
stored in the main memory.
4. Using multiprogramming to a large extent can cause heat-up issues.
Time Sharing Operating System
Multiprogrammed batch systems provide an environment in which the various system resources
are used effectively, but they do not provide for user interaction with the computer system. Time-sharing
is a logical extension of multiprogramming: the CPU executes many tasks by switching among them so
frequently that users can interact with each program while it is running. A time-shared operating
system allows multiple users to share a computer simultaneously. Each action or command in a
time-shared system tends to be short, so only a little CPU time is needed for each user. As the system
rapidly switches from one user to another, each user is given the impression that the entire computer
system is dedicated to their use, even though it is being shared among multiple users.

A time-shared operating system uses CPU scheduling and multiprogramming to provide each user
with a small portion of a shared computer at once. Each user has at least one separate program in
memory. A program that is loaded into memory executes for only a short period of time before it
either completes or needs to perform I/O. This short period of time during which the user gets the
attention of the CPU is known as a time slice, time slot, or quantum; it is typically of the order of 10
to 100 milliseconds. Time-shared operating systems are more complex than multiprogrammed operating
systems. In both, multiple jobs must be kept in memory simultaneously, so the system must have
memory management and security. To achieve a good response time, jobs may have to be swapped in and
out of main memory to the disk, which then serves as a backing store for main memory. A common
method to achieve this goal is virtual memory, a technique that allows the execution of a job that may
not be completely in memory.
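As a rough illustration of the time-slice idea (a sketch added here, not part of the original notes; the quantum and the burst times are invented), the loop below hands out one quantum at a time to each user program in turn.

    #include <stdio.h>

    #define QUANTUM 10                    /* assumed time slice in milliseconds */

    int main(void) {
        int remaining[] = { 25, 7, 18 };  /* CPU time still needed by three user programs */
        int n = 3, finished = 0, clock = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;               /* this user is done */
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                remaining[i] -= slice;                         /* run for at most one quantum */
                clock += slice;
                printf("t=%3d ms: user %d ran %2d ms\n", clock, i + 1, slice);
                if (remaining[i] == 0) finished++;
            }
        }
        return 0;
    }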


As an example, suppose user 5 is in the active state, users 1, 2, 3, and 4 are in the waiting state, and
user 6 is in the ready state.


1. Active State – The user’s program is under the control of the CPU. Only one program is
available in this state.
2. Ready State – The user program is ready to execute but it is waiting for its turn to get
the CPU. More than one user can be in a ready state at a time.
3. Waiting State – The user’s program is waiting for some input/output operation. More
than one user can be in a waiting state at a time.
Requirements of a Time-Sharing Operating System: an alarm clock mechanism to send an
interrupt signal to the CPU after every time slice, and a memory protection mechanism to prevent one
job’s instructions and data from interfering with other jobs.
Advantages
1. Each task gets an equal opportunity.
2. Fewer chances of duplication of software.
3. CPU idle time can be reduced.
Disadvantages
1. Reliability problems.
2. One must take care of the security and integrity of user programs and data.
3. Data communication problems.
Distributed Operating System
A distributed operating system provides an environment in which multiple independent CPUs
or processors communicate with each other through physically separate computational nodes. Each
node contains specific software that communicates with the global aggregate operating system. With
the ease of a distributed system, the programmer or developer can easily access any operating system
and resource to execute computational tasks and achieve a common goal. It is an extension of a
network operating system and facilitates a high degree of connectivity for communicating with other
users over the network.
Types of Distributed Operating System
There are various types of Distributed Operating systems. Some of them are as follows:
❖ Client-Server Systems
❖ Peer-to-Peer Systems
❖ Middleware
❖ Three-tier
❖ N-tier
Advantages

There are various advantages of the distributed operating system. Some of them are as follows:

1. It may share all resources (CPU, disk, network interface, nodes, computers, and so on) from
one site to another, increasing data availability across the entire system.
2. It reduces the probability of data corruption because all data is replicated across all sites; if one
site fails, the user can access data from another operational site.
3. The sites operate independently of one another; as a result, if one site crashes, the
entire system does not halt.
4. It increases the speed of data exchange from one site to another site.
5. It is an open system since it may be accessed from both local and remote locations.
6. It helps in the reduction of data processing time.
7. Most distributed systems are made up of several nodes that interact to make them fault-tolerant.
If a single machine fails, the system remains operational.

Disadvantages

There are various disadvantages of the distributed operating system. Some of them are as follows:

1. The system must decide which jobs must be executed, when they must be executed, and where
they must be executed. A scheduler has limitations, which can lead to underutilized hardware
and unpredictable runtimes.
2. It is hard to implement adequate security in DOS since the nodes and connections must be
secured.

3. The database connected to a DOS is relatively complicated and hard to manage in contrast to a
single-user system.
4. The underlying software is extremely complex and is not understood very well compared to
other systems.
5. The more widely distributed a system is, the more communication latency can be expected. As
a result, teams and developers must choose between availability, consistency, and latency.
6. These systems aren't widely available because they're thought to be too expensive.
7. Gathering, processing, presenting, and monitoring hardware use metrics for big clusters can be
a real issue.

Network operating system


An operating system which includes the software and associated protocols needed to communicate with
other autonomous computers via a network, conveniently and cost-effectively, is called a network
operating system. It allows devices like disks, printers, etc., to be shared between computers. The
individual machines that are part of the network have their own operating system, and the network
operating system resides on top of the individual machines. Since the individual machines each have
their own operating system, a user who wants to access resources on another computer has to log into
that machine with the correct password. This also means there is no process migration, and processes
running on different machines cannot communicate directly. The Transmission Control Protocol (TCP) is
the common network protocol used.


Types of Network Operating System

Network operating systems can be specialized to serve as:

o Peer To Peer System


o Client-Server System

Peer To Peer Network Operating System

Peer-to-peer networks are networks in which each system has the same capabilities and
responsibilities, i.e., none of the systems in this architecture is superior to the others in terms of
functionality. There is no master-slave relationship among the systems; every node is equal in a
peer-to-peer network operating system. All the nodes in the network have an equal relationship with the
others and run similar software that supports the sharing of resources.

A peer-to-peer network operating system allows two or more computers to share their resources, such as
printers, scanners, CD-ROM drives, etc., so that they are accessible from each computer. These networks
are best suited for smaller environments with 25 or fewer workstations.

Advantages of Peer-to-Peer Network Operating System

o This type of system is less expensive to set up and maintain.


o In this, dedicated hardware is not required.
o It does not require a dedicated network administrator to set up some network policies.


o It is very easy to set up as a simple cabling scheme is used, usually a twisted pair cable.

Disadvantages of Peer To Peer Network Operating System

o Peer-to-peer networks are usually less secure because they commonly use share-level security.
o The failure of any node in the system affects the whole system.
o Performance degrades as the network grows.
o Peer-to-peer networks cannot differentiate among network users who are accessing a resource.
o In a peer-to-peer network, each shared resource you wish to control must have its own password.
These multiple passwords may be difficult to remember.
o There is a lack of central control over the network.

Client-Server Network Operating System

In client-server systems, there are two broad categories of machines:

o The server, called the back end.

o The client, called the front end.

A client-server network operating system is a server-based network in which the storage and processing
workload is shared among clients and servers.

The clients request services such as printing and document storage, and the servers satisfy those
requests. Normally, all network services like electronic mail and printing are routed through the server.

Server computer systems are commonly more powerful than client computer systems. This arrangement
requires software for both the clients and the servers. The software running on the server is known as
the network operating system, which provides a network environment for the server and its clients.

The client-server network was developed to deal with environments in which many PCs, printers, and
servers are connected via a network. The fundamental concept is to define specialized servers with
specific functionality.


Advantages of Client-Server Network Operating System

o This network is more secure than the peer-to-peer network system due to centralized data
security.
o Network traffic reduces due to the division of work among clients and the server.
o The area covered is quite large, so it is valuable to large and modern organizations because it
distributes storage and processing.
o The server can be accessed remotely and across multiple platforms in the Client-Server
Network system.

Disadvantages of Client-Server Network Operating Systems

o In Client-Server Networks, security and performance are important issues. So trained network
administrators are required for network administration.
o Implementing the Client-Server Network can be a costly issue depending upon the security,
resources, and connectivity.


Real Time Operating System (RTOS)

Real-time operating systems (RTOS) are used in environments where a large number of
events, mostly external to the computer system, must be accepted and processed in a short time or
within certain deadlines. Such applications include industrial control, telephone switching equipment,
flight control, and real-time simulations. With an RTOS, the processing time is measured in tenths of
seconds or less. This type of system is time-bound and has fixed deadlines; the processing must occur
within the specified constraints, otherwise the system fails.
Examples of real-time operating systems are air traffic control systems, command control systems,
airline reservation systems, heart pacemakers, network multimedia systems, robots, etc.
The real-time operating systems can be of the following types –

Hard Real-Time Operating System: These operating systems guarantee that critical tasks are
completed within a specified range of time.
For example, a robot hired to weld a car body must weld at exactly the right time: if it welds too early
or too late, the car cannot be sold, so this is a hard real-time system. Other examples include scientific
experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.


Soft Real-Time Operating System: This operating system provides some relaxation of the time limit.
Examples include multimedia systems, digital audio systems, etc. Explicit, programmer-defined, and
controlled processes are encountered in real-time systems. A separate process is charged with handling
a single external event; the process is activated upon the occurrence of the related event, signalled by
an interrupt.
Multitasking operation is accomplished by scheduling processes for execution independently of each
other. Each process is assigned a certain level of priority that corresponds to the relative importance
of the event that it services. The processor is allocated to the highest-priority process. This type of
scheduling, called priority-based pre-emptive scheduling, is used by real-time systems.
Firm Real-Time Operating System: An RTOS of this type also has to follow deadlines; however, missing
a deadline may have only a small impact, though it can still cause unintended consequences, such as a
reduction in the quality of the product. Example: multimedia applications.
Deterministic Real-Time Operating System: Consistency is the main key in this type of real-time
operating system. It ensures that all tasks and processes execute with predictable timing all the
time, which makes it more suitable for applications in which timing accuracy is very
important. Examples: INTEGRITY, PikeOS.
Advantages:
The advantages of real-time operating systems are as follows-
Maximum Consumption: Maximum utilization of devices and systems, and thus more output from all
the resources.
Task Shifting: The time assigned for shifting tasks in these systems is very small; for example, older
systems take about 10 microseconds to shift from one task to another, while the latest systems take
about 3 microseconds.
Focus on Applications: The focus is on running applications, and less importance is given to
applications that are waiting in the queue.
Real-Time Operating Systems in Embedded Systems: Since the size of the programs is small, an RTOS
can also be used in embedded systems such as those in transport and others.
Error Free: These types of systems are error-free.
Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages:
The disadvantages of real-time operating systems are as follows-
Limited Tasks: Very few tasks run simultaneously, and the system concentrates on only a few
applications in order to avoid errors.

15
SIR C R REDDY COLLEGE, ELURU
DEPARTMENT OF COMPUTER SCIENCE-MCA OPERATING SYSTEM

Use of Heavy System Resources: Sometimes the system resources used are not very good, and they are
expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device Drivers and Interrupt Signals: An RTOS needs specific device drivers and interrupt signals so
that it can respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priorities, as these systems very rarely switch tasks.
Minimum Switching: An RTOS performs minimal task switching.

Embedded Operating System:

An embedded operating system is a computer operating system designed for use in embedded computer
systems. It has limited features. The term "embedded operating system" is also closely associated with
"real-time operating system". The main goal of designing an embedded operating system is to perform
specified tasks for non-computer devices. It executes program code that gives devices the access they
need to complete their jobs.


An embedded operating system is a combination of software and hardware. It produces results that
humans can easily understand in many formats, such as images, text, and voice. Embedded operating
systems are developed with program code written in high-level languages such as C and C++, which is
then translated into the language of the underlying hardware.

The embedded operating system improves overall efficiency by controlling all hardware resources and
minimizing response times for specific tasks for which devices were built.

Types of Embedded Operating System

There are various types of Embedded operating systems. Some of them are as follows:

Real-Time Operating System

A real-time operating system (RTOS) is a deterministic operating system with limited functionality
that supports multi-threaded applications by delivering processed outputs within set time limits. Since
some applications are time-critical, they must be executed exactly when they are expected in order to
keep the entire system functioning.

The real-time operating system depends on clock interrupts. The system generates interrupts that are
handled by Interrupt Service Routines (ISRs). An RTOS implements a priority system for the execution
of all types of processes. The processes and the RTOS are synchronized and can communicate with one
another. The RTOS is stored on a ROM (Read-Only Memory) chip because this chip can store data for a
long time.

Multi-tasking Operating System

A multitasking operating system may execute multiple tasks at the same time; multiple tasks and
processes run concurrently. If the system contains more than one processor, it may perform a wide
range of functions in parallel.

The multitasking operating system switches between the multiple tasks. Some tasks are waiting for
events to occur, while others are receiving events and preparing to run. When using a multitasking
operating system, software development is easier, since different software components can be made
independent of each other.

Pre-emptive Operating System

A multitasking operating system that supports task pre-emption is known as a pre-emptive operating
system. A task with a higher priority is always defined and executed before a task with a lower priority.
Such multitasking operating systems improve the system's reaction to events and simplify software
development, resulting in a more dependable system. The system designer can estimate the time
required to service interrupts in the system and the time required by the scheduler to switch tasks.
Such systems can still fail to meet a deadline, with the program unaware of the missed deadline.
CPU load can be naturally measured in a pre-emptive operating system by defining a lowest-priority
process that does nothing except increment a counter.

Rate Monotonic Operating System

Some embedded systems are designed to use a specific task-scheduling method known as 'rate-monotonic
scheduling'. It is an operating system approach that ensures tasks in the system can run for a specific
amount of time and for a specific duration. It is a priority-based scheduling algorithm and is used
pre-emptively, meaning that any task can be interrupted or suspended by another task within a short
period of time. Tasks with shorter periods are generally given higher priority.
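As a small worked example (added here; the task values are invented), the classical Liu and Layland test for rate-monotonic scheduling says that n periodic tasks are schedulable if the total utilization, the sum of Ci/Ti, does not exceed n(2^(1/n) - 1):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical periodic tasks: execution time C and period T (same time unit). */
        double C[] = { 1.0, 2.0, 3.0 };
        double T[] = { 4.0, 8.0, 16.0 };
        int n = 3;

        double U = 0.0;
        for (int i = 0; i < n; i++)
            U += C[i] / T[i];                              /* total CPU utilization */

        double bound = n * (pow(2.0, 1.0 / n) - 1.0);      /* Liu-Layland bound */
        printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
               U <= bound ? "schedulable under RMS" : "test inconclusive");
        return 0;
    }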

Single System Control Loop

It is a very simple type of operating system designed to perform only one function. It is used in several
devices, including smartphones, thermostats or temperature controllers, digitally controllable equipment,
etc. Users may set any desired temperature set-point in this type of OS. Several sensors are included in
the system to determine the temperature at various points in the environment.

Advantages

There are various advantages of an embedded operating system. Some of them are as follows:

1. It is small in size and faster to load.


2. It is low cost.
3. It is easy to manage.
4. It provides better stability.
5. It provides higher reliability.
6. It provides some interconnections.
7. It has low power consumption.
8. It helps to increase the product quality.

Disadvantages

There are various disadvantages of an embedded operating system. Some of them are as follows:

1. It isn't easy to maintain.


2. The troubleshooting is harder.

3. It has limited resources for memory.


4. It isn't easy to take a backup of embedded files.
5. You can't change, improve, or upgrade an embedded system once it's been developed.
6. If any problem occurs, you need to reset the setting.
7. Its hardware is limited.

*************

OPERATING SYSTEM ARCHITECTURE


Application Interface: An operating system is a program that acts as an interface between a user of a
computer and the computer resources. The purpose of an operating system is to provide an environment
in which a user may execute programs.
Shell: A shell is an environment or a special user program which provides an interface for the user to
use operating system services. It executes programs based on the input provided by the user and allows
the user to communicate with the kernel.
Kernel: The kernel is the heart and core of an operating system; it manages the operations of the
computer and the hardware. It acts as a bridge between the user and the resources of the system by
accessing various computer resources such as the CPU, I/O devices, and other resources. The kernel's
responsibilities include managing the system's resources (the communication between hardware and
software components), and its facilities are made available to application processes through
inter-process communication mechanisms and system calls.
Hardware: The hardware consists of the memory, CPU, arithmetic-logic unit, various bulk storage
devices, I/O, peripheral devices and other physical devices.

USER MODE AND KERNEL MODE


A processor in a computer runs in two different modes: user mode and kernel mode.

The processor switches between the two modes depending on what type of code is running on the
processor. Applications run in user mode, and core operating system components run in kernel mode.
While many drivers run in kernel mode, some drivers may run in user mode.
1. User Mode: When the computer system runs user applications like file creation or any other
application program, it is in user mode. This mode does not have direct access to the computer's
hardware. The transition from user mode to kernel mode occurs when the application requests the help
of the operating system, or when an interrupt or a system call occurs. The mode bit for user mode is 1:
if the mode bit of the system's processor is 1, the system is in user mode.
2. Kernel Mode: All the low-level tasks of the operating system are performed in kernel mode.
As kernel space has direct access to the hardware of the system, kernel mode handles all the processes
which require hardware support. Apart from this, the main functionality of kernel mode is to execute
privileged instructions.
These privileged instructions are not available to user-level code, which is why they cannot be executed
in user mode. So, all the processes and instructions that the user is restricted from interfering with are
executed in kernel mode of the operating system. The mode bit for kernel mode is 0: for the system to
function in kernel mode, the mode bit of the processor must be equal to 0.
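As a simple sketch of the mode switch (an illustrative example, not from the notes): the program below runs in user mode, and the write() system call is the point at which the processor traps into kernel mode, performs the privileged I/O, and returns to user mode.

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello from user mode\n";

        size_t len = strlen(msg);          /* ordinary computation: stays in user mode */

        /* write() traps into the kernel: the CPU switches to kernel mode (mode bit 0),
           the OS performs the I/O, and control returns in user mode (mode bit 1). */
        write(STDOUT_FILENO, msg, len);
        return 0;
    }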

********************

Operating System Services

An operating system is software that acts as an intermediary between the user and the computer
hardware. It is the program with whose help we are able to run various applications, and it is the one
program that is running all the time. Every computer must have an operating system to smoothly execute
other programs. The OS coordinates the use of the hardware and application programs for various users.
It provides a platform for other application programs to work on. The operating system is a set of
special programs that run on a computer system and allow it to work properly. It controls input-output
devices, the execution of programs, managing files, etc.

Services of Operating System

• Program execution
• Input Output Operations
• Communication between Process
• File Management
• Memory Management
• Process Management
• Security and Privacy
• Resource Management
• User Interface
• Networking
• Error handling
• Time Management

Program Execution

It is the operating system that manages how a program is going to be executed. It loads the
program into memory, after which it is executed. The order in which programs are executed depends on
the CPU scheduling algorithm; a few are FCFS, SJF, etc. While programs are executing, the operating
system also handles deadlocks, i.e., situations in which processes block one another while each waits
for resources held by the other. The operating system is responsible for the smooth execution of both
user and system programs, and it utilizes the various resources available for the efficient running of
all types of functionalities.

Input Output Operations

The operating system manages the input-output operations and establishes communication between the
user and the device drivers. Device drivers are software associated with hardware that is managed by
the OS, so that the devices and the system stay properly in sync. The OS also provides a program with
access to input-output devices when needed.

Communication between Processes


The operating system manages the communication between processes, which includes transferring data
among them. If the processes are not on the same computer but are connected through a computer
network, their communication is still managed by the operating system itself.

File Management

The operating system also helps in managing files. If a program needs access to a file, it is the
operating system that grants access. These permissions include read-only, read-write, etc. It also
provides a platform for the user to create and delete files. The operating system is responsible for
making decisions regarding the storage of all types of data or files, e.g., on a floppy disk, hard disk,
or pen drive, and it decides how the data should be manipulated and stored.

Memory Management

Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a
limited number of players. The team manager (OS) decides whether an upcoming player will be in the
playing 11, in the playing 15, or not included in the team at all, based on his performance. In the same
way, the OS first checks whether an upcoming program fulfils all the requirements to get memory space;
if all is well, it checks how much memory space will be sufficient for the program and then loads the
program into memory at a certain location. It thus prevents programs from using unnecessary memory.

Process Management

Let’s understand process management in a unique way. Imagine our kitchen stove as the CPU, where
all the cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove (CPU) to
cook different dishes (programs). The chef (OS) has to cook different dishes (programs), so he ensures
that no particular dish (program) takes an unnecessarily long time and that all dishes (programs) get a
chance to be cooked (executed). The chef (OS) basically schedules time for all the dishes (programs) so
that the kitchen (the whole system) runs smoothly, and thus all the different dishes (programs) are
cooked (executed) efficiently.

Security and Privacy

Security: The OS keeps our computer safe from unauthorized users by adding a security layer to it.
Basically, security is just a layer of protection which protects the computer from threats such as
viruses and hackers. The OS provides defences like firewalls and anti-virus software and ensures good
safety of the computer and personal information.


Privacy: The OS gives us the facility to keep our essential information hidden, like having a lock on
our door where only you can enter and others are not allowed. Basically, it respects our secrets and
provides the facility to keep them safe.

Resource Management

System resources are shared between various processes. It is the operating system that manages
resource sharing. It also manages CPU time among processes using CPU scheduling algorithms, helps in
the memory management of the system, and controls input-output devices. The OS ensures the proper
use of all the available resources by deciding which resource is to be used by whom.

User Interface

A user interface is essential, and all operating systems provide one. Users interact with the
operating system through either a command-line interface (CLI) or a graphical user interface (GUI).
The command interpreter executes the next user-specified command.

A GUI offers the user a mouse-based window and menu system as an interface.

Networking

This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.

Error Handling

The operating system also handles errors occurring in the CPU, in input-output devices, etc. It
ensures that errors do not occur frequently and fixes them, and it prevents processes from ending up
in a deadlock. It also looks for any type of error or bug that can occur during any task. A well-secured
OS sometimes also acts as a countermeasure for preventing any sort of breach of the computer system
from an external source and handles such attempts.

Time Management

Imagine a traffic light as the OS, which indicates to all the cars (programs) whether they should
stop (red: simple queue), get ready (yellow: ready queue), or move (green: under execution). This light
(control) changes after a certain interval of time on each side of the road (computer system) so that the
cars (programs) from all sides of the road move smoothly without a traffic jam.

****************


System Calls
A system call is a mechanism used by programs to request services from the operating
system (OS). In simpler terms, it is a way for a program to interact with the underlying system, such
as accessing hardware resources or performing privileged operations.
A system call is initiated by the program executing a specific instruction, which triggers a switch
to kernel mode, allowing the program to request a service from the OS. The OS then handles the
request, performs the necessary operations, and returns the result back to the program.
System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each program
would need to implement its own methods for accessing hardware and system services, leading to
inconsistent and error-prone behavior.
Services Provided by System Calls
• Process creation and management
• Main memory management
• File Access, Directory, and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
• Process control: end, abort, create, terminate, allocate, and free memory.
• File management: create, open, close, delete, and read files, etc.
• Device management
• Information maintenance
• Communication
Features of System Calls
• Interface: System calls provide a well-defined interface between user programs and the
operating system. Programs make requests by calling specific functions, and the operating
system responds by executing the requested service and returning a result.
• Protection: System calls are used to access privileged operations that are not available
to normal user programs. The operating system uses this privilege to protect the system
from malicious or unauthorized access.
• Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system resources,
including hardware, memory, and other processes.


• Context Switching: A system call requires a context switch, which involves saving the
state of the current process and switching to the kernel mode to execute the requested
service. This can introduce overhead, which can impact system performance.
• Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them appropriately.
• Synchronization: System calls can be used to synchronize access to shared resources,
such as files or network connections. The operating system provides synchronization
mechanisms, such as locks or semaphores, to ensure that multiple programs can access
these resources safely.
System Calls Advantages
• Access to hardware resources: System calls allow programs to access hardware
resources such as disk drives, printers, and network devices.
• Memory management: System calls provide a way for programs to allocate and
deallocate memory, as well as access memory-mapped hardware devices.
• Process management: System calls allow programs to create and terminate processes,
as well as manage inter-process communication.
• Security: System calls provide a way for programs to access privileged resources, such
as the ability to modify system settings or perform operations that require administrative
permissions.
• Standardization: System calls provide a standardized interface for programs to interact
with the operating system, ensuring consistency and compatibility across different
hardware platforms and operating system versions.
Examples of a System Call in Windows and Unix
System calls for Windows and Unix come in many different forms. These are listed in the table below
as follows:

Category                   Windows                           Unix

Process control            CreateProcess()                   fork()
                           ExitProcess()                     exit()
                           WaitForSingleObject()             wait()

File manipulation          CreateFile()                      open()
                           ReadFile()                        read()
                           WriteFile()                       write()
                                                             close()

Device management          SetConsoleMode()                  ioctl()
                           ReadConsole()                     read()
                           WriteConsole()                    write()

Information maintenance    GetCurrentProcessID()             getpid()
                           SetTimer()                        alarm()
                           Sleep()                           sleep()

Communication              CreatePipe()                      pipe()
                           CreateFileMapping()               shmget()
                           MapViewOfFile()                   mmap()

Protection                 SetFileSecurity()                 chmod()
                           InitializeSecurityDescriptor()    umask()
                           SetSecurityDescriptorGroup()      chown()

open(): Accessing a file on a file system is possible with the open() system call. It allocates the
resources the file needs and returns a handle that the process can use. A file can be opened by multiple
processes simultaneously or by just one process; everything depends on the structure of the file system.
read(): It is used to retrieve data from a file on the file system. In general, it accepts three arguments:
1. A file descriptor.
2. A buffer in which to store the data read.
3. The number of bytes to read from the file.
Before reading, the file to be read is identified by its file descriptor, obtained by opening it
with the open() call.
wait(): In some systems, a process might need to hold off until another process has finished running
before continuing. When a parent process creates a child process, the parent can halt its own execution
until the child process is complete. The parent process is suspended using the wait() system call, and
it regains control once the child process has finished running.
write(): It writes data from a user buffer to a device such as a file. This system call is one way for a
program to produce data. Generally, it takes three arguments:
1. A file descriptor.
2. A reference to the buffer where the data is stored.
3. The number of bytes to be written from the buffer.
fork(): The fork() system call is used by a process to create a copy of itself. It is one of the most
frequently used ways of creating processes in operating systems. After fork(), both the parent and the
child exist; typically the parent calls wait() to suspend its own execution until the child process has
finished, and it regains control once the child completes.
exit(): A system call called exit() is used to terminate a program. In environments with multiple
threads, this call indicates that the thread execution is finished. After using the exit() system function,
the operating system recovers the resources used by the process.
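Putting several of these calls together, here is a minimal POSIX sketch (the file name example.txt is an assumption): the parent fork()s a child; the child open()s and read()s the file, write()s its contents to standard output, and exit()s; the parent wait()s for the child to finish.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                        /* create a copy of this process */
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {                            /* child process */
            int fd = open("example.txt", O_RDONLY);    /* assumed to exist */
            if (fd < 0) { perror("open"); exit(1); }

            char buf[256];
            ssize_t n;
            while ((n = read(fd, buf, sizeof buf)) > 0)  /* read from the file ...  */
                write(STDOUT_FILENO, buf, n);            /* ... and write to stdout */

            close(fd);
            exit(0);                               /* child terminates              */
        }

        wait(NULL);                                /* parent blocks until child ends */
        return 0;
    }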
***********************

Virtual Machines in Operating System

Virtual Machine abstracts the hardware of our personal computer such as CPU, disk drives,
memory, NIC (Network Interface Card) etc, into many different execution environments as per our
requirements, hence giving us a feel that each execution environment is a single computer. For
example, VirtualBox.
When we run different processes on an operating system, it creates an illusion that each process is
running on a different processor having its own virtual memory, with the help of CPU scheduling
and virtual-memory techniques. There are additional features of a process that cannot be provided
by the hardware alone like system calls and a file system. The virtual machine approach does not
provide these additional functionalities but it only provides an interface that is same as basic
hardware. Each process is provided with a virtual copy of the underlying computer system.

We can create a virtual machine for several reasons, all of which are fundamentally related to the
ability to share the same basic hardware yet can also support different execution environments, i.e.,
different operating systems simultaneously.

The main drawback with the virtual-machine approach involves disk systems. Let us suppose that
the physical machine has only three disk drives but wants to support seven virtual machines.
Obviously, it cannot allocate a disk drive to each virtual machine, because virtual-machine software
itself will need substantial disk space to provide virtual memory and spooling. The solution is to
provide virtual disks.

Users are thus given their own virtual machines, on which they can run any of the operating
systems or software packages that are available on the underlying machine. The virtual-machine
software is concerned with multiprogramming multiple virtual machines onto a physical machine,
but it does not need to consider any user-support software. This arrangement can provide a useful
way to divide the problem of designing a multi-user interactive system into two smaller pieces.

Advantages:
1. There are no protection problems because each virtual machine is completely isolated
from all other virtual machines.
2. Virtual machine can provide an instruction set architecture that differs from real
computers.
3. Easy maintenance, availability and convenient recovery.
Disadvantages:
1. When multiple virtual machines are simultaneously running on a host computer, one
virtual machine can be affected by other running virtual machines, depending on the
workload.
2. Virtual machines are not as efficient as real machines when accessing the hardware.
*******************
Operating System Design and Implementation
An operating system is a construct that allows user application programs to interact with the
system hardware. The operating system by itself does not perform any useful work, but it provides an
environment in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating system.
These are covered in operating system design and implementation.


Operating System Design Goals


It is quite complicated to define all the goals and specifications of an operating system while
designing it. The design changes depending on the type of operating system, i.e., whether it is a batch
system, time-shared system, single-user system, multi-user system, distributed system, etc.
There are basically two types of goals while designing an operating system. These are −
User Goals
The operating system should be convenient, easy to use, reliable, safe, and fast according to the
users. However, these specifications are not very useful, as there is no set method to achieve these goals.
System Goals
The operating system should be easy to design, implement, and maintain. These are the
specifications required by those who create, maintain, and operate the operating system. But there is
no specific method to achieve these goals either.
Operating System Mechanisms and Policies
There is no specific way to design an operating system as it is a highly creative task. However,
there are general software principles that are applicable to all operating systems.
A subtle difference between mechanism and policy is that mechanism shows how to do
something and policy shows what to do. Policies may change over time and this would lead to changes
in mechanism. So, it is better to have a general mechanism that would require few changes even when
a policy change occurs.
For example, if the mechanism and policy are independent, then few changes are required in the
mechanism when a policy changes. If a policy that favours I/O-intensive processes over CPU-intensive
processes is changed to prefer CPU-intensive processes, the mechanism does not need to change.
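One way to picture this separation (a made-up sketch, not an example from the notes): the pick_next() routine below is the mechanism, a generic way to choose the next job given some comparison rule, while each comparison function is a policy that can be swapped without touching the mechanism.

    #include <stdio.h>

    struct job { const char *name; int cpu_bound; };

    /* Mechanism: scan the job list and pick the "best" one according to a supplied rule. */
    static int pick_next(const struct job *jobs, int n,
                         int (*better)(const struct job *, const struct job *)) {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (better(&jobs[i], &jobs[best]))
                best = i;
        return best;
    }

    /* Policies: which kind of job to favour. Changing policy does not change pick_next(). */
    static int favour_io(const struct job *a, const struct job *b)  { return !a->cpu_bound &&  b->cpu_bound; }
    static int favour_cpu(const struct job *a, const struct job *b) { return  a->cpu_bound && !b->cpu_bound; }

    int main(void) {
        struct job jobs[] = { { "editor", 0 }, { "compiler", 1 } };
        printf("I/O-favouring policy picks: %s\n", jobs[pick_next(jobs, 2, favour_io)].name);
        printf("CPU-favouring policy picks: %s\n", jobs[pick_next(jobs, 2, favour_cpu)].name);
        return 0;
    }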


Operating System Implementation


The operating system needs to be implemented after it is designed. Earlier they were written in
assembly language but now higher-level languages are used. The first system not written in assembly
language was the Master Control Program (MCP) for Burroughs Computers.
Advantages of Higher-Level Language
There are multiple advantages to implementing an operating system using a higher-level language:
the code can be written faster, it is more compact, and it is easier to debug and understand. Also, the
operating system can easily be moved from one hardware platform to another if it is written in a
high-level language.
Disadvantages of Higher-Level Language
Using a high-level language for implementing an operating system leads to a loss of speed and an
increase in storage requirements. However, in modern systems only a small amount of the code is
critical to high performance, such as the CPU scheduler and the memory manager, and the bottleneck
routines in the system can be replaced by assembly-language equivalents if required.
**********************


PROCESS MANAGEMENT
Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four sections
─ stack, heap, text and data. The following image shows a simplified layout of a process inside main
memory −
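As a small illustration of these four sections, the following C sketch (hypothetical; the printed addresses vary from system to system and from run to run) shows where code, a global variable, a dynamically allocated object and a local variable typically live:

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                        /* data section (initialized global) */

void show_layout(void)
{
    int local = 42;                            /* stack */
    int *dynamic = malloc(sizeof(int));        /* heap */
    printf("text  (code)  : %p\n", (void *)show_layout);
    printf("data  (global): %p\n", (void *)&global_counter);
    printf("heap  (malloc): %p\n", (void *)dynamic);
    printf("stack (local) : %p\n", (void *)&local);
    free(dynamic);
}

int main(void)
{
    show_layout();
    return 0;
}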

***************

Process Life Cycle


When a process executes, it passes through different states. These states may differ between
operating systems, and their names are not standardized.
In general, a process is in one of the following five states at any time: New, Ready, Running, Waiting, and Terminated.


Process Control Block (PCB)


A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to
keep track of a process as listed below in the table –

S.N.   Information & Description

1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, etc.

2. Process Privileges: Required to allow or disallow access to system resources.

3. Process ID: Unique identification for each process in the operating system.

4. Pointer: A pointer to the parent process.

5. Program Counter: A pointer to the address of the next instruction to be executed for this process.

6. CPU Registers: The various CPU registers whose contents must be saved for the process when it leaves the running state.

7. CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.

8. Memory Management Information: Information such as the page table, memory limits and segment table, depending on the memory scheme used by the operating system.

9. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.

10. I/O Status Information: A list of I/O devices allocated to the process.


The structure of a PCB depends entirely on the operating system and may contain different
information in different operating systems. Here is a simplified diagram of a PCB –
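A rough idea of a PCB can also be given as a C structure. The declaration below is only a simplified, hypothetical sketch mirroring the fields in the table above; real kernels (for example, Linux's task_struct) keep far more information:

/* Simplified, illustrative PCB; field names and sizes are assumptions. */
struct pcb {
    int            pid;               /* unique process ID                 */
    int            parent_pid;        /* pointer/ID of the parent process  */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED } state;
    int            priority;          /* CPU scheduling information        */
    unsigned long  program_counter;   /* next instruction to execute       */
    unsigned long  registers[16];     /* saved CPU register contents       */
    unsigned long  page_table_base;   /* memory-management information     */
    unsigned long  cpu_time_used;     /* accounting information            */
    int            open_files[16];    /* I/O status information            */
};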

******************

INTER PROCESS COMMUNICATION


Inter-process communication is the mechanism provided by the operating system that allows processes
to communicate with each other. This communication could involve a process letting another process
know that some event has occurred or the transferring of data from one process to another.
A diagram that illustrates inter process communication is as follows –

Synchronization in Inter process Communication

Synchronization is a necessary part of inter process communication. It is either provided by the inter
process control mechanism or handled by the communicating processes. Some of the methods to
provide synchronization are as follows −


• Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes.
The two types of semaphores are binary semaphores and counting semaphores.
• Mutual Exclusion
Mutual exclusion requires that only one process or thread can be inside the critical section at a time.
This is useful for synchronization and also prevents race conditions (a small POSIX sketch appears after this list).
• Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many
parallel languages and collective routines impose barriers.
• Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if
the lock is available or not. This is known as busy waiting because the process is not doing any
useful operation even though it is active.
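As a concrete illustration of mutual exclusion, here is a minimal sketch assuming POSIX threads; the shared counter and the iteration count are illustrative. A counting semaphore (sem_wait()/sem_post() from <semaphore.h>) could protect the same critical section in the same way:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* only one thread in the critical section */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* 200000, because updates never interleave */
    return 0;
}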

Approaches to Inter process Communication

The different approaches to implement inter process communication are given as follows –

• Pipe

A pipe is a unidirectional data channel; two pipes can be used to create a two-way data
channel between two processes. It uses standard input and output methods. Pipes are available on
all POSIX systems as well as on Windows operating systems (a short C sketch appears at the end of this section).
• Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent
between processes on the same computer or data sent between different computers on the same
network. Most of the operating systems use sockets for inter process communication.
• File
A file is a data record that may be stored on a disk or acquired on demand by a file server.
Multiple processes can access a file as required. All operating systems use files for data storage.
• Signal
Signals are useful in inter process communication in a limited way. They are system messages
that are sent from one process to another. Normally, signals are not used to transfer data but are
used for remote commands between processes.
• Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes. This
is done so that the processes can communicate with each other. All POSIX systems, as well as
Windows operating systems use shared memory.
• Message Queue


Multiple processes can read and write data to the message queue without being connected to
each other. Messages are stored in the queue until their recipient retrieves them. Message
queues are quite useful for inter process communication and are used by most operating
systems.

A diagram that demonstrates message queue and shared memory methods of inter process
communication is as follows –
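To illustrate the pipe approach listed above, here is a minimal sketch assuming a POSIX system: the parent process writes a message into a pipe and its child reads it. The message text is illustrative:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                       /* parent: writes into the pipe */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}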

**************************

THREADS
What is Thread?
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers. A traditional (heavyweight) process has a single thread of control: there is one program
counter, and one sequence of instructions that can be carried out at any given time. A multithreaded
(lightweight) process provides a way to improve application performance through parallelism. Multi-
threaded applications have multiple threads within a single process, each having its own program
counter, stack and set of registers, but sharing common code, data, and certain structures such as open
files.
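A minimal sketch of such a multithreaded process, assuming POSIX threads: both threads share the global array (data section), while each has its own stack and program counter. The array contents and the split into two halves are illustrative:

#include <pthread.h>
#include <stdio.h>

static int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};   /* shared by all threads */

static void *sum_half(void *arg)
{
    int start = *(int *)arg;       /* local variables live on this thread's own stack */
    long sum = 0;
    for (int i = start; i < start + 4; i++)
        sum += data[i];
    return (void *)sum;
}

int main(void)
{
    pthread_t t1, t2;
    int lo = 0, hi = 4;
    void *s1, *s2;

    pthread_create(&t1, NULL, sum_half, &lo);
    pthread_create(&t2, NULL, sum_half, &hi);
    pthread_join(t1, &s1);
    pthread_join(t2, &s2);
    printf("total = %ld\n", (long)s1 + (long)s2);   /* 36 */
    return 0;
}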


Advantages of Thread
Responsiveness: One thread may provide a rapid response while other threads are blocked or slowed
down by intensive calculations.
Resource sharing: By default, threads share common code, data, and other resources, which allows
multiple tasks to be performed simultaneously in a single address space.
Economy: Creating and managing threads is much faster than performing the same operations for processes.
Scalability (utilization of multiprocessor architectures): A single-threaded process can run
on only one CPU, no matter how many are available, whereas the execution of a multi-threaded
application may be split among the available processors.
TYPES OF THREADS
Threads are implemented in following two ways
1. User Level Threads − User managed threads.
2. Kernel Level Threads − Operating System managed threads acting on kernel, an operating system
core.
1. User-level thread: The operating system does not recognize user-level threads. User-level threads can
be easily implemented, and they are implemented entirely by the user in user space. If a user-level thread performs a blocking
operation, the whole process is blocked. The kernel knows nothing about user-level threads.
2. Kernel-level thread: Kernel-level threads are recognized and managed by the operating system. There is a thread control
block and a process control block in the system for each thread and process. Kernel-level threads are
implemented by the operating system; the kernel knows about all the threads and manages them. The
operating system offers system calls to create and manage kernel-level threads from user space.
Implementing kernel-level threads is more difficult than implementing user-level threads.

THREADING ISSUES
There are several threading issues in a multithreading environment. We will discuss the issues
relating to the fork() and exec() system calls, thread cancellation, signal handling, thread pools and thread-specific data.
1. fork() and exec() System Calls: fork() and exec() are system calls. The fork()
call creates a duplicate of the process that invokes it; the new duplicate process is called
the child process and the process invoking fork() is called the parent process. Both the
parent process and the child process continue their execution from the instruction just after
the fork(). The exec() system call, when invoked, replaces the calling program (along with
all of its threads) with the program specified in its parameter. Typically, exec() is called
soon after fork(). In a multithreaded program, the issue is whether fork() should duplicate
all the threads of the parent or only the thread that invoked it; some systems therefore
provide two versions of fork().
2. Thread cancellation: Terminating a thread before it has completed its execution is termed
'thread cancellation'. For example, when multiple threads search through a database for some
information and one of the threads returns the desired result, the remaining threads can be
cancelled. Thread cancellation can be performed in two ways: asynchronous cancellation and
deferred cancellation (a small sketch of deferred cancellation appears after this list).
3. Signal Handling: Signal handling is simpler in a single-threaded program, as the
signal is delivered directly to the process. In a multithreaded program, however, the question
arises as to which thread of the program the signal should be delivered. A signal may be delivered to:

• All the threads of the process.
• Some specific threads in the process.
• Only the thread to which the signal applies.
• One designated thread that is assigned to receive all the signals for the process.
4. Thread Pool: To avoid wasting time creating a new thread for every request, a thread pool
creates a finite number of threads when the process starts. This collection of threads is
referred to as the thread pool. The threads stay in the pool and wait until they are assigned
a request to service.
5. Thread-Specific Data: Data associated with a particular thread is referred to
as thread-specific data.
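As a sketch of deferred cancellation (issue 2 above), assuming POSIX threads: the main thread requests cancellation, and the worker is actually cancelled the next time it reaches a cancellation point such as sleep(). The "search" loop body is only illustrative:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *searcher(void *arg)
{
    (void)arg;
    while (1) {
        /* pretend to search part of a database ... */
        sleep(1);                      /* sleep() is a cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, searcher, NULL);
    sleep(2);                          /* let the worker run for a while */
    pthread_cancel(t);                 /* request (deferred) cancellation */
    pthread_join(t, NULL);             /* wait until the thread has really terminated */
    printf("worker cancelled\n");
    return 0;
}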
***************
THREAD LIFE CYCLE
At any point in time, a thread exists in exactly one of the following states:
• New

• Runnable

• Blocked

• Waiting

• Timed Waiting

• Terminated
Life Cycle of a thread
1. New Thread: When a new thread is created, it is in the new state. The thread has not yet started to run
when the thread is in this state. When a thread lies in the new state, its code is yet to be run and hasn’t
started to execute.
2. Runnable State: A thread that is ready to run is moved to the runnable state. In this state, a thread might
actually be running, or it might be ready to run at any instant of time. It is the responsibility of the thread
scheduler to give the thread time to run.

A multi-threaded program allocates a fixed amount of time to each individual thread. Each and every
thread runs for a short while and then pauses and relinquishes the CPU to another thread so that other
threads can get a chance to run. When this happens, all such threads that are ready to run, waiting for
the CPU and the currently running thread lie in a runnable state.

3. Blocked/Waiting state: When a thread is temporarily inactive, then it’s in one of the following
states: 1. Blocked 2. Waiting


4. Timed Waiting: A thread lies in a timed waiting state when it calls a method with a time-out
parameter. A thread lies in this state until the timeout is completed or until a notification is received.
For example, when a thread calls sleep or a conditional wait, it is moved to a timed waiting state.

5. Terminated State: A thread terminates for one of the following reasons:

• It exits normally, which happens when the code of the thread has been entirely executed by the
program.
• It terminates abnormally, for example because an unhandled error or exception occurs, or because it is forcibly stopped.

********************
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the loaded
processes share the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
• Non-pre-emptive: The CPU cannot be taken away from a process until that process completes its
execution. The CPU is reassigned only when the running process terminates or moves to a waiting state.

• Pre-emptive: The OS allocates the CPU to a process for a fixed amount of time.
During execution, a process may switch from the running state to the ready state, or from the
waiting state to the ready state, because the CPU may be given to another process; typically a
higher-priority process pre-empts the currently running one.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains
a separate queue for each of the process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to
execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device constitute this
queue.
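A minimal, hypothetical sketch of how PCBs might be linked into such queues (the structure and field names are illustrative, not taken from any particular OS): changing a process's state is just a matter of unlinking its PCB from one queue and appending it to another.

#include <stdio.h>

struct pcb   { int pid; struct pcb *next; };
struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p)      /* append a PCB */
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)               /* remove from the front */
{
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void)
{
    struct queue ready = {0}, device = {0};
    struct pcb a = {1, NULL}, b = {2, NULL};
    enqueue(&ready, &a);
    enqueue(&ready, &b);
    /* process 1 blocks on I/O: move its PCB from the ready queue to a device queue */
    enqueue(&device, dequeue(&ready));
    printf("next ready process: P%d\n", ready.head->pid);  /* P2 */
    return 0;
}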

OPERATIONS ON THE PROCESS


The processes in most systems can execute concurrently, and they can be created and destroyed
dynamically. Thus, these systems must provide mechanisms for process creation and termination.
Process Creation: A process can create several new processes through create-process system calls
during its execution. The creating process is called the parent process, and the new process is its
child process.
1. When a new process is created, the operating system assigns a unique Process Identifier (PID)
to it and inserts a new entry in the primary process table.

2. Then required memory space for all the elements of the process such as program, data, and
stack is allocated including space for its Process Control Block (PCB).

3. Next, the various values in the PCB are initialized:

I. The process identification part is filled with the PID assigned in step (1), together with its parent's
PID.
II. The processor register values are mostly filled with zeroes, except for the stack pointer and the
program counter. The stack pointer is filled with the address of the stack allocated in step (2),
and the program counter is filled with the address of the program's entry point.
III. The process state information is set to 'New'.
IV. The priority is lowest by default, but the user can specify any priority during creation.
4. Then the operating system links this process into the scheduling queue, and the process state
is changed from 'New' to 'Ready'. Now the process competes for the CPU.
5. Additionally, the operating system creates some other data structures, such as log files or
accounting files, to keep track of process activity.

Process Termination: A process terminates itself when it finishes executing its last
statement; the operating system then uses the exit() system call to delete its context. All the resources
held by that process, such as physical and virtual memory, buffers and open files, are reclaimed by the
operating system. A process can also be terminated by the operating system or by its parent process.
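The following minimal sketch, assuming a POSIX system, shows both operations in code: the parent calls fork(), the child replaces its image with exec() (running ls is only an illustrative choice), and the parent waits for the child to terminate:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child process */
        printf("child  PID=%d, parent=%d\n", getpid(), getppid());
        execlp("ls", "ls", "-l", (char *)NULL);   /* replace the child's image */
        perror("exec");                 /* reached only if exec fails */
        exit(1);
    } else {                            /* parent process */
        int status;
        waitpid(pid, &status, 0);       /* wait for the child to terminate */
        printf("parent PID=%d: child %d finished\n", getpid(), (int)pid);
    }
    return 0;
}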
***********************

CPU SCHEDULING
CPU Scheduling Algorithms: CPU scheduling algorithms are an important topic in operating
systems, because they form the base and foundation of the subject. A task is a group of processes;
the operating system divides a task into processes and executes them, and its final goal is completion
of the task. The task must be finished in the quickest possible time with the limited resources the
operating system has; this is the main motive of CPU scheduling algorithms.
CPU Scheduling: CPU scheduling is the procedure by which the operating system decides which
process gets to use the CPU. A process may also have to wait because some resource it needs is
absent or unavailable. Whenever the CPU becomes idle, the operating system must choose one of the
processes in the ready queue; this selection is done by the short-term (CPU) scheduler, which picks
one of the ready, in-memory processes and allocates the CPU to it. Before looking at the types of CPU
scheduling algorithms, we first introduce the basic terminology used with them.
1. Process ID: The Process ID acts like the name of the process. It is usually represented with
numbers, or with the letter P followed by a number.
Example: P0, P1, P2, P3, ...


2. Arrival Time: The time at which the process enters the ready queue, i.e., the time when the process
becomes ready to be executed by the CPU. Arrival Time is abbreviated as AT.
3. Burst Time: The amount of CPU time the process requires to complete its execution. Burst Time is
abbreviated as BT.
4. Completion Time: The time at which the CPU finishes executing the process. Completion Time is
abbreviated as CT.
5. Turn Around Time: The total time a process spends in the system, measured from the moment it
becomes ready (enters the ready queue) until it completes. It is abbreviated as TAT and is the
difference between the completion time and the arrival time.
Formula: TAT = CT - AT
6. Waiting Time: The time a process spends waiting in the ready queue, i.e., the time during which it is
ready but not executing. It is abbreviated as WT and is the difference between the turnaround time and
the burst time (a short worked example follows this list).
Formula: WT = TAT - BT
7. Ready Queue: The queue in which processes wait until the CPU becomes free. It keeps the ready
processes in order, so the CPU always knows which process should be executed next.
8. Gantt Chart: A chart that records the order in which processes execute on the CPU and the time
intervals they occupy. It is very useful for calculating Waiting Time, Completion Time and Turn Around Time.
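As a quick worked example of these formulas (the numbers are illustrative): suppose a process has Arrival Time AT = 2, Burst Time BT = 4 and Completion Time CT = 9. Then TAT = CT - AT = 9 - 2 = 7, and WT = TAT - BT = 7 - 4 = 3; that is, the process spent 3 time units waiting in the ready queue and 4 time units executing.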

There are two approaches used by CPU scheduling algorithms: pre-emptive and non-pre-emptive.
Types of CPU Scheduling Algorithms: 1. First Come First Serve
2. Shortest Job First
3. Priority Scheduling
4. Round Robin Scheduling
1. FIRST COME FIRST SERVE(FCFS) SCHEDULING
• Simplest CPU scheduling algorithm that schedules according to arrival times of processes.
• FCFS is a non-preemptive scheduling algorithm.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the average waiting time is quite high.
Examples

Process ID Arrival Time Burst Time


P1 0 6


P2 2 2
P3 3 1
P4 4 9
P5 5 8

Solution:

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P1           0              6            6                 6                  0
P2           2              2            8                 6                  4
P3           3              1            9                 6                  5
P4           4              9            18                14                 5
P5           5              8            26                21                 13

Gantt Chart: P1 (0-6), P2 (6-8), P3 (8-9), P4 (9-18), P5 (18-26)

Average Completion Time: the sum of the completion times divided by the total number of processes.
Average Completion Time = (CT1 + CT2 + CT3 + ... + CTn) / n = (6 + 8 + 9 + 18 + 26) / 5 = 67/5 = 13.4
Average Turn Around Time: the sum of the turnaround times divided by the total number of processes.
Average Turn Around Time = (TAT1 + TAT2 + TAT3 + ... + TATn) / n = (6 + 6 + 6 + 14 + 21) / 5 = 53/5 = 10.6
Average Waiting Time: the sum of the waiting times divided by the total number of processes.
Average Waiting Time = (WT1 + WT2 + WT3 + ... + WTn) / n = (0 + 4 + 5 + 5 + 13) / 5 = 27/5 = 5.4
First Come First Serve (FCFS) Scheduling Algorithm
Aim: Write a C program to implement process scheduling mechanisms such as FCFS scheduling.
Algorithm for FCFS scheduling:
Step 1: Start the process.
Step 2: Accept the number of processes in the ready queue.
Step 3: For each process in the ready queue, assign the process ID and accept the CPU burst time.
Step 4: Set the waiting time of the first process to 0 and its burst time as its turnaround time.
Step 5: For each remaining process in the ready queue, calculate

(a) Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1)
(b) Turnaround time for process(n) = waiting time of process(n) + burst time of process(n)
Step 6: Calculate
(a) Average waiting time = total waiting time / number of processes
(b) Average turnaround time = total turnaround time / number of processes

Step 7: Stop the process


First Come First Serve(FCFS) Scheduling Program:

#include<stdio.h>
#include<conio.h>   /* clrscr() and getch() are Turbo C functions */
void main()
{
int i,n,sum,wt,tat,twt,ttat;   /* running sums of waiting and turnaround times */
int t[10];                     /* burst times */
float awt,atat;                /* average waiting and turnaround times */
clrscr();
printf("Enter number of processors:\n");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\n Enter the Burst Time of the process %d",i+1);
scanf("\n %d",&t[i]);
}
printf("\n\n FIRST COME FIRST SERVE SCHEDULING ALGORITHM \n");
printf("\n Process ID \t Waiting Time \t Turn Around Time \n");
printf("1 \t\t 0 \t\t %d \n",t[0]);
sum=0;
twt=0;
ttat=t[0];
for(i=1;i<n;i++)
{


sum+=t[i-1];
wt=sum; tat=sum+t[i]; twt=twt+wt;
ttat=ttat+tat;
printf("\n %d \t\t %d \t\t %d",i+1,wt,tat);
printf("\n\n");
}
awt=(float)twt/n; atat=(float)ttat/n;
printf("\n Average Waiting Time %4.2f",awt);
printf("\n Average Turnaround Time %4.2f",atat);
getch();
}
OUTPUT
Enter number of processors: 3
Enter the Burst Time of the process 1: 2
Enter the Burst Time of the process 2: 5
Enter the Burst Time of the process 3: 4
FIRST COME FIRST SERVE SCHEDULING ALGORITHM
Process ID   Waiting Time   Turn Around Time
1            0              2
2            2              7
3            7              11
Average Waiting Time 3.00
Average Turnaround Time 6.67


2. SHORTEST JOB FIRST (SJF) SCHEDULING


• The shortest job first (SJF) is a scheduling policy that selects the waiting process with the
smallest execution time to execute next.
• Shortest Job first has the advantage of having a minimum average waiting time among all
scheduling algorithms.
• It is a Greedy Algorithm.
• It may cause starvation if shorter processes keep coming. This problem can be solved
using the concept of ageing.
• SJF can be used in specialized environments where accurate estimates of running time are
available.
Examples

Process Arrival Burst


ID Time Time
P0 1 3
P1 2 6
P2 0 2
P3 3 7
P4 2 4
P5 5 5
Solution of SJF CPU Scheduling with Non Preemptive Approach

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P0           1              3            5                 4                                  1
P1           2              6            20                18                                 12
P2           0              2            2                 2                                  0
P3           3              7            27                24                                 17
P4           2              4            9                 7                                  3
P5           5              5            14                9                                  4

Gantt chart: P2 (0-2), P0 (2-5), P4 (5-9), P5 (9-14), P1 (14-20), P3 (20-27)

Average Completion Time = (5 + 20 + 2 + 27 + 9 + 14) / 6 = 77/6 = 12.833
Average Turn Around Time = (4 + 18 + 2 + 24 + 7 + 9) / 6 = 64/6 = 10.667
Average Waiting Time = (1 + 12 + 0 + 17 + 3 + 4) / 6 = 37/6 = 6.167

Solution of SJF CPU Scheduling with Preemptive Approach


In the pre-emptive approach, the running process is pre-empted whenever a newly arrived process has a shorter remaining burst time.

Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P0 1 3 5 4 1
P1 2 6 17 15 9
P2 0 2 2 2 0
P3 3 7 24 21 14
P4 2 4 11 9 5
P5 6 2 8 2 0

Gantt chart: P2 (0-2), P0 (2-5), P4 (5-6), P5 (6-8), P4 (8-11), P1 (11-17), P3 (17-24)

After P4 starts, P5 is executed first because of the pre-emptive condition (its remaining time is shorter), and P4 resumes afterwards.


Average Completion Time = ( 5 + 17 + 2 + 24 + 11 +8 ) / 6 = 67 / 6 = 11.166
Average Turn Around Time = ( 4 +15 + 2 + 21 + 9 + 2 ) / 6 = 53 / 6 = 8.833
Average Waiting Time = ( 1 + 9 + 0 + 14 + 5 + 0 ) /6 = 29 / 6 = 4.833
Shortest Job First (SJF) Scheduling Algorithm
Aim: Write a C program to implement process scheduling mechanisms such as SJF scheduling.

Algorithm for SJF:


Step 1: Start the process.
Step 2: Accept the number of processes in the ready queue.
Step 3: For each process in the ready queue, assign the process ID and accept the CPU burst time.
Step 4: Sort the ready queue according to burst time, from lowest to highest.
Step 5: Set the waiting time of the first process to 0 and its turnaround time to its burst time.
Step 6: For each remaining process in the ready queue, calculate
a. Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1)
b. Turnaround time for process(n) = waiting time of process(n) + burst time of process(n)
Step 7: Calculate
a. Average waiting time = total waiting time / number of processes
b. Average turnaround time = total turnaround time / number of processes
Step 8: Stop the process.
Shortest Job First (SJF) Scheduling Program
#include<stdio.h>
#include<conio.h>   /* clrscr() and getch() are Turbo C functions */
void main()
{
int i,k,n,temp;
int t[10],p[10];        /* burst times and process IDs */
int wt[10],tt[10];      /* waiting times and turnaround times */
int twt,ttat;           /* totals of waiting and turnaround times */
float awt,atat;
clrscr();
printf("Enter number of process\n");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\n Enter the Burst Time of Process %d : ",i);
scanf("%d",&t[i]);
}
for(i=0;i<n;i++)
p[i]=i;
/* sort the processes in ascending order of burst time */
for(i=0;i<n;i++)
{
for(k=i+1;k<n;k++)
{
if(t[i]>t[k])
{
temp=t[i]; t[i]=t[k]; t[k]=temp;
temp=p[i]; p[i]=p[k]; p[k]=temp;
}
}
}
/* waiting time of each process = sum of the burst times of the processes before it */
wt[0]=0;
for(i=1;i<n;i++)
wt[i]=wt[i-1]+t[i-1];
/* turnaround time = waiting time + burst time */
for(i=0;i<n;i++)
tt[i]=t[i]+wt[i];
printf("\n\n SHORTEST JOB FIRST SCHEDULING ALGORITHM");
printf("\n PROCESS ID \t BURST TIME \t WAITING TIME \t TURNAROUND TIME \n\n");
twt=0;
ttat=0;
for(i=0;i<n;i++)
{
printf("%5d \t\t %5d \t\t %5d \t\t %5d \n\n",p[i],t[i],wt[i],tt[i]);
twt=twt+wt[i];
ttat=ttat+tt[i];
}
awt=(float)twt/n;
atat=(float)ttat/n;
printf("\n AVERAGE WAITING TIME %4.2f",awt);
printf("\n AVERAGE TURN AROUND TIME %4.2f",atat);
getch();
}

OUTPUT:
Enter number of process 3
Enter the Burst Time of Process 0 : 4
Enter the Burst Time of Process 1 : 3
Enter the Burst Time of Process 2 : 5

SHORTEST JOB FIRST SCHEDULING ALGORITHM

PROCESS ID   BURST TIME   WAITING TIME   TURNAROUND TIME
1            3            0              3
0            4            3              7
2            5            7              12

AVERAGE WAITING TIME 3.33
AVERAGE TURN AROUND TIME 7.33

3. PRIORITY SCHEDULING
• Priority Scheduling is a method of scheduling processes that is based on priority.
• In this algorithm, the scheduler selects the tasks to work as per the priority.
• The processes with higher priority should be carried out first.
• Whereas jobs with equal priorities are carried out on a round-robin or FCFS basis.
• Types of Priority Scheduling
1. Preemptive Scheduling
2. Non-Preemptive Scheduling
Examples:
In this problem, a larger priority number means a lower priority; the smallest priority number is scheduled first.

Process Id   Arrival Time   Burst Time   Priority   Completion Time   Turn Around Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P1           0              5            5          5                 5                                  0
P2           1              6            4          27                26                                 20
P3           2              2            0          7                 5                                  3
P4           3              1            2          15                12                                 11
P5           4              7            1          14                10                                 3
P6           4              6            3          21                17                                 11

Gantt Chart: P1 (0-5), P3 (5-7), P5 (7-14), P4 (14-15), P6 (15-21), P2 (21-27)

Average Completion Time = ( 5 +27 +7 +15 +14 + 21 ) / 6 = 89 / 6 = 14.8333


Average Waiting Time = (0 + 20 + 3 + 11 + 3 + 11) / 6 = 48/6 = 8

Average Turn Around Time = (5 + 26 + 5 + 12 + 10 + 17) / 6 = 75/6 = 12.5
Priority Scheduling Algorithm:
Aim: Write a C program to implement process scheduling mechanisms such as Priority Scheduling.
Algorithm for Priority Scheduling:
Step 1: Start the process.
Step 2: Accept the number of processes in the ready queue.
Step 3: For each process in the ready queue, assign the process ID and accept the CPU burst time.
Step 4: Sort the ready queue according to the priority number.
Step 5: Set the waiting time of the first process to 0 and its burst time as its turnaround time.
Step 6: For each remaining process in the ready queue, calculate
a. Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1)
b. Turnaround time for process(n) = waiting time of process(n) + burst time of process(n)
Step 7: Calculate
a. Average waiting time = total waiting time / number of processes
b. Average turnaround time = total turnaround time / number of processes
Step 8: Stop the process.

Priority Scheduling Program:


#include <stdio.h>
#include <conio.h>
void main()
{
int i,j,n,tat[10],wt[10],bt[10],pid[10],pr[10],t,twt=0,ttat=0;
float awt,atat;
clrscr();
printf("\n-----------PRIORITY SCHEDULING---------------------\n");
printf("Enter the No of Process: ");
scanf("%d", &n);
for (i=0;i<n;i++)
{
pid[i] = i;
printf("Enter the Burst time of Pid %d : ",i);

scanf("%d",&bt[i]);
printf("Enter the Priority of Pid %d : ",i);
scanf ("%d",&pr[i]);
}
// Sorting start
for (i=0;i<n;i++)
for(j=i+1;j<n;j++)
{
if (pr[i] > pr[j] )
{
t = pr[i];
pr[i] = pr[j];
pr[j] = t;
t = bt[i];
bt[i] = bt[j];
bt[j] = t;
t = pid[i];
pid[i] = pid[j];
pid[j] = t;
}
}
// Sorting finished
tat[0] = bt[0];
wt[0] = 0;
for (i=1;i<n;i++)
{
wt[i] = wt[i-1] + bt[i-1];
tat[i] = wt[i] + bt[i];
}
printf("\n-----------------------------------------------------------------------\n");
printf("Pid\t Priority\tBurst time\t WaitingTime\tTurnArroundTime\n");
printf("\n------------------------------------------------------------------------\n");

for(i=0;i<n;i++)
{
printf("\n%d\t\t%d\t%d\t\t%d\t\t%d",pid[i],pr[i],bt[i],wt[i],tat[i]);
}
for(i=0;i<n;i++)
{
ttat = ttat+tat[i];
twt = twt + wt[i];
}
awt = (float)twt / n;
atat = (float)ttat / n;
printf("\n\nAvg.Waiting Time: %f\nAvg.Turn Around Time: %f\n",awt,atat);
getch();
}
OUTPUT:
-----------PRIORITY SCHEDULING--------------
Enter the No of Process: 4
Enter the Burst time of Pid 0 : 2
Enter the Priority of Pid 0 : 3
Enter the Burst time of Pid 1 : 6
Enter the Priority of Pid 1 : 2
Enter the Burst time of Pid 2 : 4
Enter the Priority of Pid 2 : 1
Enter the Burst time of Pid 3 : 5
Enter the Priority of Pid 3 : 7

Pid   Priority   Burst time   WaitingTime   TurnAroundTime

2 1 4 0 4
1 2 6 4 10
0 3 2 10 12
3 7 5 12 17

Avg.Waiting Time: 6.500000
Avg.Turn Around Time: 10.750000
4. ROUND ROBIN CPU SCHEDULING
• Round Robin is a CPU scheduling mechanism that cycles through the processes, assigning each task
a specific time slot.
• Round Robin is a pre-emptive process scheduling algorithm.
• Each process is given a fixed time to execute, called the time quantum.
• Once a process has executed for its time quantum, it is pre-empted and another process
executes for its time quantum.
• Context switching is used to save the states of pre-empted processes.
Examples:
Problem

Process ID Arrival Time Burst Time


P0 1 3
P1 0 5
P2 3 2
P3 4 3
P4 2 1

Solution:
Process Arrival Burst Completion Turn Around Waiting
ID Time Time Time Time Time
P0 1 3 5 4 1
P1 0 5 14 14 9
P2 3 2 7 4 2
P3 4 3 10 6 3
P4 2 1 3 1 0

Gantt Chart:

Average Completion Time = (5 + 14 + 7 + 10 + 3) / 5 = 39/5 = 7.8
Average Turn Around Time = (4 + 14 + 4 + 6 + 1) / 5 = 29/5 = 5.8
Average Waiting Time = (1 + 9 + 2 + 3 + 0) / 5 = 15/5 = 3

Round Robin scheduling algorithm


Aim: Write a C program to implement the various process scheduling mechanisms such as Round
Robin Scheduling.
Algorithm for RR:
Step 1: Start the process.
Step 2: Accept the number of processes in the ready queue and the time quantum (time slice).
Step 3: For each process in the ready queue, assign the process ID and accept the CPU burst time.
Step 4: Calculate the number of time slices for each process, where the number of time slices for process(n) =
burst time of process(n) / time slice.
Step 5: If the burst time is less than the time slice, then the number of time slices = 1.
Step 6: Considering the ready queue as a circular queue, calculate
a. Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1) +
the time difference in getting the CPU from process(n-1)
b. Turnaround time for process(n) = waiting time of process(n) + burst time of process(n) +
the time difference in getting the CPU from process(n)
Step 7: Calculate
a. Average waiting time = total waiting time / number of processes
b. Average turnaround time = total turnaround time / number of processes
Step 8: Stop the process.
Round Robin scheduling Program:

#include<stdio.h>
#include<conio.h>
void main()
{
int ts,pid[10],need[10],wt[10],tat[10],i,j,n,n1;
int bt[10],flag[10],ttat=0,twt=0;
float awt,atat;
clrscr();
printf("\t\t ROUND ROBIN SCHEDULING \n");
printf("Enter the number of Processors \n");
scanf("%d",&n);
n1=n;                              /* remember the total number of processes */
printf("\n Enter the Timeslice \n");
scanf("%d",&ts);
for(i=1;i<=n;i++)
{
printf("\n Enter the process ID %d : ",i);
scanf("%d",&pid[i]);
printf("\n Enter the Burst Time for the process : ");
scanf("%d",&bt[i]);
need[i]=bt[i];                     /* remaining burst time of each process */
}
for(i=1;i<=n;i++)
{
flag[i]=1;                         /* 1 = process not yet finished */
wt[i]=0;
}
while(n!=0)                        /* repeat passes until every process finishes */
{
for(i=1;i<=n1;i++)
{
if(flag[i]==0)                     /* skip processes that have already finished */
continue;
if(need[i]>=ts)
{
/* process i runs for a full time slice; every other unfinished process waits ts units */
for(j=1;j<=n1;j++)
{
if((i!=j)&&(flag[j]==1))
wt[j]+=ts;
}
need[i]-=ts;
if(need[i]==0)
{
flag[i]=0;
n--;
}
}
else
{
/* process i runs for less than a full slice and finishes */
for(j=1;j<=n1;j++)
{
if((i!=j)&&(flag[j]==1))
wt[j]+=need[i];
}
need[i]=0;
flag[i]=0;
n--;
}
}
}
for(i=1;i<=n1;i++)
{
tat[i]=wt[i]+bt[i];
twt=twt+wt[i];
ttat=ttat+tat[i];
}
awt=(float)twt/n1;
atat=(float)ttat/n1;
printf("\n\n ROUND ROBIN SCHEDULING ALGORITHM \n\n");
printf("\n\n Process \t Process ID \t BurstTime \t Waiting Time \t TurnaroundTime \n ");
for(i=1;i<=n1;i++)
{
printf("\n %5d \t %5d \t\t %5d \t\t %5d \t\t %5d \n", i,pid[i],bt[i],wt[i],tat[i]);
}
printf("\n The average Waiting Time = %4.2f",awt);
printf("\n The average Turn around Time = %4.2f",atat);
getch();
}
OUTPUT:
ROUND ROBIN SCHEDULING
Enter the number of Processors 4
Enter the Timeslice 5
Enter the process ID 1 : 5
Enter the Burst Time for the process : 10
Enter the process ID 2 : 6
Enter the Burst Time for the process : 15
Enter the process ID 3 : 7
Enter the Burst Time for the process : 20
Enter the process ID 4 : 8
Enter the Burst Time for the process : 25

ROUND ROBIN SCHEDULING ALGORITHM

Process   Process ID   BurstTime   Waiting Time   TurnaroundTime
1         5            10          15             25
2         6            15          30             45
3         7            20          40             60
4         8            25          45             70

The average Waiting Time = 32.50
The average Turn around Time = 50.00
