OS Notes Unit-1
An Operating System (OS) is an interface between the computer user and the computer hardware. Every
computer system must have at least one operating system to run other programs. Applications like
browsers, MS Office, Notepad, games, etc., need an environment in which to run and perform their tasks.
An operating system is software that performs all the basic tasks like file management, memory
management, process management, handling input and output, and controlling peripheral devices such
as disk drives and printers.
Some popular Operating Systems include Linux, MS-Windows, Ubuntu, Mac OS, Fedora, Solaris,
FreeBSD, Chrome OS, CentOS, Debian, Deepin, VMS, OS/400, AIX, z/OS, etc.
Definition: An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
SIR C R REDDY COLLEGE, ELURU
DEPARTMENT OF COMPUTER SCIENCE-MCA OPERATING SYSTEM
An Operating System performs all the basic tasks like managing files, processes, and memory. The
operating system thus acts as the manager of all the resources, i.e. the resource manager, and becomes
an interface between the user and the machine. It is one of the most essential pieces of software present
in a device.
An Operating System is a type of software that works as an interface between the system programs and
the hardware. There are several types of operating systems, some of which are described below.
Batch Operating System
Batch processing is used in many industries to improve efficiency. A batch operating system manages
multiple tasks and processes in sequence: jobs are collected into batches and executed one after another
without user intervention between them. The main benefit is efficiency: the operating system manages
the tasks and processes itself, so a business can run many jobs without having to wait interactively for
each one to finish.
Batch processing operating systems are designed to execute a large number of similar jobs or tasks
without user intervention. These operating systems are commonly used in business and scientific
applications where a large number of jobs need to be processed in a specific order.
Overall, batch processing operating systems are ideal for processing large volumes of similar jobs
efficiently, but they are less flexible than other types of operating systems that allow for user
interaction and dynamic job scheduling.
Another benefit of a batch processing operating system is that it can run multiple tasks in sequence
under the operating system's own management. Batch processing platforms of this kind help to manage
large-scale batch processing jobs: a centralized execution architecture enables the execution of multiple
jobs in parallel, and a user-friendly graphical interface makes it easy to manage and monitor your jobs.
Disadvantages: There are many disadvantages to using batch operating systems, including:
• Limited functionality: Batch systems are designed for simple tasks, not for more
complex tasks. This can make them difficult to use for certain tasks, such as managing
files or software.
• Security issues: Because batch systems are not typically used for day-to-day tasks, they
may not be as secure as more common operating systems. This can lead to security risks
if the system is used by people who should not have access to it.
• Interruptions: Batch systems can be interrupted frequently, which can lead to missed
deadlines or mistakes.
• Inefficiency: Batch systems are often slow and difficult to use, which can lead to
inefficiency in the workplace.
Multiprogramming Operating System
In multiprogramming, as the name suggests, multi means more than one and programming means the
execution of programs: when more than one program can reside in memory and execute in an operating
system, it is termed a multiprogramming operating system.
Before multiprogramming, computing did not use the CPU efficiently. The CPU executed only one
program at a time, and when that program entered a waiting state for an input/output operation, the
CPU remained idle, which led to underutilization of the CPU and thus poor performance.
Multiprogramming addresses this issue by switching the CPU to another ready program whenever the
current one waits for I/O.
Multiuser and multitasking systems differ. A multitasking operating system allows you to run more
than one program simultaneously; the operating system does this by moving each program in and out
of memory one at a time, and when a program is moved out of memory, it is temporarily stored on disk
until it is needed again.
A multi-user operating system allows many users, on different terminals, to share processing time on
a powerful central computer. The operating system switches between terminals so quickly, each
receiving a limited amount of CPU time on the central computer, that every user appears to have
constant access to it. If there are many users on such a system, the time it takes for the central computer
to respond may become more apparent.
Features of Multiprogramming
1. Needs only a single CPU for implementation.
2. Context switching between processes.
3. Switching happens when the current process enters a waiting state.
4. CPU idle time is reduced.
5. High resource utilization.
6. High performance.
Disadvantages of Multiprogramming
1. Prior knowledge of scheduling algorithms is required.
2. With a large number of jobs, long-running jobs may have to wait a long time.
3. Memory management is needed in the operating system because all the tasks are stored
in main memory.
4. Heavy use of multiprogramming can cause the system to heat up.
Time Sharing Operating System
Multiprogrammed, batched systems provided an environment where various system resources
were used effectively, but they did not provide for user interaction with the computer system. Time-sharing
is a logical extension of multiprogramming: the CPU performs many tasks by switching among them
so frequently that the user can interact with each program while it is running. A time-shared operating
system allows multiple users to share a computer simultaneously. Since each action or command in a
time-shared system tends to be short, only a little CPU time is required for each user. As the system
rapidly switches from one user to the next, each user is given the impression that the entire computer
system is dedicated to their use, even though it is being shared among many users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user
with a small portion of a shared computer at once. Each user has at least one separate program in
memory. A program loaded into memory executes for a short period of time before it either finishes
or needs to complete an I/O operation. This short period during which the user gets the attention of
the CPU is known as the time slice, time slot, or quantum; it is typically of the order of 10 to 100
milliseconds. Time-shared operating systems are more complex than multiprogrammed operating
systems. In both, multiple jobs must be kept in memory simultaneously, so the system must have
memory management and protection. To achieve a good response time, jobs may have to be swapped
in and out of main memory to the disk, which now serves as a backing store for main memory. A common
method to achieve this goal is virtual memory, a technique that allows the execution of a job that may
not be completely in memory.
In the above figure, user 5 is in the active state while user 1, user 2, user 3, and user 4 are in a waiting state.
Distributed Operating System
With the ease of a distributed system, the programmer or developer can easily access any operating
system and resource to execute computational tasks and achieve a common goal. It is an extension of
the network operating system that facilitates a high degree of connectivity to communicate with other
users over the network.
Types of Distributed Operating System
There are various types of Distributed Operating systems. Some of them are as follows:
❖ Client-Server Systems
❖ Peer-to-Peer Systems
❖ Middleware
❖ Three-tier
❖ N-tier
Advantages
There are various advantages of the distributed operating system. Some of them are as follows:
1. It may share all resources (CPU, disk, network interface, nodes, computers, and so on) from
one site to another, increasing data availability across the entire system.
2. It reduces the probability of data corruption because all data is replicated across all sites; if one
site fails, the user can access data from another operational site.
3. The sites operate independently of one another, so if one site crashes, the entire system
does not halt.
4. It increases the speed of data exchange from one site to another site.
5. It is an open system since it may be accessed from both local and remote locations.
6. It helps in the reduction of data processing time.
7. Most distributed systems are made up of several nodes that interact to make them fault-tolerant.
If a single machine fails, the system remains operational.
Disadvantages
There are various disadvantages of the distributed operating system. Some of them are as follows:
1. The system must decide which jobs to execute, when to execute them, and where to
execute them. A scheduler has limitations, which can lead to underutilized hardware
and unpredictable runtimes.
2. It is hard to implement adequate security in DOS since the nodes and connections must be
secured.
3. The database connected to a DOS is relatively complicated and hard to manage in contrast to a
single-user system.
4. The underlying software is extremely complex and is not understood very well compared to
other systems.
5. The more widely distributed a system is, the more communication latency can be expected. As
a result, teams and developers must choose between availability, consistency, and latency.
6. These systems aren't widely available because they're thought to be too expensive.
7. Gathering, processing, presenting, and monitoring hardware use metrics for big clusters can be
a real issue.
Peer-to-Peer Network Operating System
In a peer-to-peer network, each system has the same capabilities and responsibilities, i.e., none of the
systems in this architecture is superior to the others in terms of functionality. There is no master-slave
relationship among the systems: every node is equal in a Peer-to-Peer Network Operating System. All
the nodes in the network have an equal relationship with the others and run similar software that
supports the sharing of resources.
A Peer-to-Peer Network Operating System allows two or more computers to share their resources, such
as printers, scanners, and CD-ROM drives, making them accessible from each computer. These networks
are best suited to smaller environments with 25 or fewer workstations.
o It is very easy to set up, as a simple cabling scheme is used, usually twisted-pair cable.
o Peer-to-peer networks are usually less secure because they commonly use share-level security.
o The failure of any node in the system affects the whole system.
o Its performance degrades as the network grows.
o Peer-to-peer networks cannot differentiate among network users who are accessing a resource.
o In a peer-to-peer network, each shared resource you wish to control must have its own password.
These multiple passwords may be difficult to remember.
o There is a lack of central control over the network.
Client-Server Network Operating System
A Client-Server Network Operating System is a server-based network in which the storage and
processing workload is shared among clients and servers.
Clients request services such as printing and file storage, and servers satisfy those requests. Normally
all network services, such as electronic mail and printing, are routed through the server. Server
computers are commonly more powerful than client computers. This arrangement requires software
for both the clients and the servers. The software running on the server is known as the Network
Operating System, which provides a network environment for the server and its clients.
The client-server network was developed to deal with environments where many PCs, printers, and
servers are connected via a network. The fundamental concept was to define a specialized server with
unique functionality.
o This network is more secure than the peer-to-peer network system due to centralized data
security.
o Network traffic reduces due to the division of work among clients and the server.
o The area covered is quite large, so it is valuable to large and modern organizations because it
distributes storage and processing.
o The server can be accessed remotely and across multiple platforms in the Client-Server
Network system.
o In Client-Server Networks, security and performance are important issues. So trained network
administrators are required for network administration.
o Implementing the Client-Server Network can be a costly issue depending upon the security,
resources, and connectivity.
Real-Time Operating System
Real-time operating systems (RTOS) are used in environments where a large number of
events, mostly external to the computer system, must be accepted and processed in a short time or
within certain deadlines. Such applications include industrial control, telephone switching equipment,
flight control, and real-time simulations. With an RTOS, the processing time is measured in tenths
of a second. This system is time-bound and has fixed deadlines; the processing in this type of system
must occur within the specified constraints, otherwise it leads to system failure.
Examples of real-time operating systems are air traffic control systems, command control
systems, airline reservation systems, heart pacemakers, network multimedia systems, robots, etc.
Real-time operating systems can be of three types:
Hard Real-Time Operating System: These operating systems guarantee that critical tasks are
completed within a fixed range of time.
For example, if a robot hired to weld a car body welds too early or too late, the car cannot be sold,
so this is a hard real-time system: the weld must be completed by the robot exactly on time. Other
examples include scientific experiments, medical imaging systems, industrial control systems, and weapon
systems.
Uses Heavy System Resources: The system resources required are substantial and they are
expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
Device Drivers and Interrupt Signals: It needs specific device drivers and interrupt signals to
respond to interrupts as early as possible.
Thread Priority: It is not good to set thread priorities, as these systems are less prone to
switching tasks.
Minimum Switching: An RTOS performs minimal task switching.
Embedded Operating System
An embedded operating system is a computer operating system designed for use in embedded computer
systems. It has limited features. The term "embedded operating system" often also refers to a real-time
operating system. The main goal of designing an embedded operating system is to perform specified
tasks for non-computer devices. It executes the program code that gives such devices the access they
need to complete their jobs.
The embedded operating system improves overall efficiency by controlling all hardware resources and
minimizing response times for specific tasks for which devices were built.
There are various types of Embedded operating systems. Some of them are as follows:
A real-time operating system (RTOS) is a deterministic operating system with limited functionality
that supports multi-threaded applications by producing processed outputs within set time limits. Since
some applications are time-critical, they must execute exactly when expected to keep the entire system
functioning.
A real-time operating system depends on clock interrupts, which the system handles through Interrupt
Service Routines (ISRs). An RTOS implements a priority system for the execution of all types of
processes. The processes and the RTOS are synchronized and can communicate with one another. The
RTOS is stored on a ROM (Read Only Memory) chip because this chip can retain data for a long time.
A multitasking operating system may execute multiple tasks at the same time: multiple tasks and
processes run concurrently, and if the system contains more than one processor, it may perform an
even wider range of functions in parallel.
The multitasking operating system switches between the multiple tasks. Some tasks are waiting for
events to occur, while others are receiving events and preparing to run. With a multitasking
operating system, software development is easier, since different software components can be made
independent of each other.
A multitasking operating system that supports task pre-emption is known as a pre-emptive operating
system. A task with a higher priority is always selected and executed before a task with a lower priority.
Such multitasking operating systems improve system reaction to events and simplify software
development, resulting in a more dependable system. The system designer can calculate the time
required to service interrupts in the system and the time required by the scheduler to switch tasks.
Such systems can still fail to meet a deadline without the program being aware of the missed deadline.
CPU load can be measured naturally in a pre-emptive operating system by defining a lowest-priority
process that does nothing except increment a counter.
Some embedded systems are designed to use a specific task-scheduling method known as Rate
Monotonic Scheduling. The operating system ensures that tasks in the system can run for a specific
amount of time and for a specific duration. It is a priority-based, pre-emptive scheduling algorithm:
any task can be interrupted or suspended by another task within a short period of time. Tasks with
shorter periods are generally given higher priority.
It is a very simple type of operating system designed to perform only one function. It is used in several
devices, including smartphones, thermostats or temperature controls, digitally controllable equipment,
etc. Users may set any desired temperature set-point in this type of OS, and several sensors are
included in the system to determine various temperature points in the environment.
*************
User Mode and Kernel Mode
The processor switches between two modes depending on what type of code is running on the
processor. Applications run in user mode, and core operating system components run in kernel mode.
While many drivers run in kernel mode, some drivers may run in user mode.
1. User Mode: When the computer system runs user applications, like file creation or any other
application program, it is in User Mode. This mode does not have direct access to the computer's
hardware. The transition from user mode to kernel mode occurs when the application requests the help
of the operating system, or when an interrupt or a system call occurs. The mode bit of user mode is 1:
if the mode bit of the system's processor is 1, the system is in User Mode.
2. Kernel Mode: All the low-level tasks of the operating system are performed in Kernel
Mode. As kernel space has direct access to the hardware of the system, kernel mode handles
all the processes that require hardware support. Apart from this, the main functionality of Kernel
Mode is to execute privileged instructions.
These privileged instructions are not provided with user access, and that's why these instructions cannot
be processed in the User mode. So, all the processes and instructions that the user is restricted to
interfere with are executed in the Kernel Mode of the Operating System. The mode bit for the Kernel
Mode is 0. So, for the system to function in the Kernel Mode, the Mode bit of the processor must be
equal to 0.
********************
Operating System Services
An operating system is software that acts as an intermediary between the user and the computer hardware.
It is the program with the help of which we are able to run various applications, and it is the one program
that is running all the time. Every computer must have an operating system to smoothly execute other
programs. The OS coordinates the use of the hardware and application programs for various users. It
provides a platform for other application programs to work. The operating system is a set of special
programs that run on a computer system that allows it to work properly. It controls input-output devices,
execution of programs, managing files, etc.
• Program execution
• Input Output Operations
• Communication between Process
• File Management
• Memory Management
• Process Management
• Security and Privacy
• Resource Management
• User Interface
• Networking
• Error handling
• Time Management
Program Execution
It is the Operating System that manages how a program is executed. It loads the
program into memory, after which the program is executed. The order in which programs are executed
depends on the CPU scheduling algorithm; a few examples are FCFS (First Come, First Served) and
SJF (Shortest Job First). While programs execute, the Operating System also handles deadlocks,
ensuring that processes do not block each other indefinitely while waiting for resources. The Operating
System is responsible for the smooth execution of both user and system programs, and it utilizes the
various resources available for the efficient running of all types of functionalities.
File Management
The operating system also helps in managing files. If a program needs access to a file, it is the
operating system that grants access; these permissions include read-only, read-write, etc. It also
provides a platform for the user to create and delete files. The Operating System is responsible for
making decisions regarding the storage of all types of data or files, i.e., on floppy disk, hard disk,
pen drive, etc., and it decides how the data should be manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with
a limited number of players. The team manager (OS) decides whether an upcoming player will be in
the playing 11, the playing 15, or not included in the team at all, based on his performance. In the same
way, the OS first checks whether an upcoming program fulfils all the requirements to get memory space;
if so, it checks how much memory space will be sufficient for the program and then loads the
program into memory at a certain location. Thus, it prevents a program from using unnecessary
memory.
Process Management
Let’s understand process management in a similar way. Imagine our kitchen stove as the
CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove
(CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes (programs), so he
ensures that no particular dish (program) takes an unnecessarily long time and that all
dishes (programs) get a chance to be cooked (executed). The chef (OS) basically schedules time for all
dishes (programs) so the kitchen (the whole system) runs smoothly, and thus all the different
dishes (programs) are cooked (executed) efficiently.
Security: The OS keeps our computer safe from unauthorized users by adding a security layer to it.
Security is essentially a layer of protection that defends the computer against threats like
viruses and hackers. The OS provides defences such as firewalls and anti-virus software, helping to
ensure the safety of the computer and of personal information.
Privacy: The OS gives us the facility to keep our essential information hidden, like having a lock on our
door, where only you can enter and others are not allowed. It respects our secrets and provides
the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating System that manages
resource sharing. It manages CPU time among processes using CPU scheduling algorithms, helps in
the memory management of the system, and controls input-output devices. The OS ensures the
proper use of all the resources available by deciding which resource is to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system either through a command-line interface (CLI) or a graphical user interface (GUI).
In a CLI, the command interpreter executes the next user-specified command; a GUI offers the user
a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices,
etc. It ensures that errors do not occur frequently and fixes them when they do; it also prevents
processes from coming to a deadlock. It watches for any type of error or bug that can occur during
any task. A well-secured OS can also act as a countermeasure, preventing breaches of the computer
system from external sources and handling them when they occur.
Time Management
Imagine a traffic light as the OS, which tells all the cars (programs) whether they should
stop (red: waiting queue), get ready (yellow: ready queue), or move (green: under execution). The
light (control) changes after a certain interval of time on each side of the road (computer system), so
that the cars (programs) from all sides of the road move smoothly without a traffic jam.
****************
System Calls
A system call is a mechanism used by programs to request services from the operating
system (OS). In simpler terms, it is a way for a program to interact with the underlying system, such
as accessing hardware resources or performing privileged operations.
A system call is initiated by the program executing a specific instruction, which triggers a switch
to kernel mode, allowing the program to request a service from the OS. The OS then handles the
request, performs the necessary operations, and returns the result back to the program.
System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each program
would need to implement its own methods for accessing hardware and system services, leading to
inconsistent and error-prone behavior.
Services Provided by System Calls
• Process creation and management
• Main memory management
• File Access, Directory, and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
• Process control: end, abort, create, terminate, allocate and free memory.
• File management: create, open, close, delete, read files, etc.
• Device management
• Information maintenance
• Communication
Features of System Calls
• Interface: System calls provide a well-defined interface between user programs and the
operating system. Programs make requests by calling specific functions, and the operating
system responds by executing the requested service and returning a result.
• Protection: System calls are used to access privileged operations that are not available
to normal user programs. The operating system uses this privilege to protect the system
from malicious or unauthorized access.
• Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system resources,
including hardware, memory, and other processes.
• Context Switching: A system call requires a context switch, which involves saving the
state of the current process and switching to the kernel mode to execute the requested
service. This can introduce overhead, which can impact system performance.
• Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them appropriately.
• Synchronization: System calls can be used to synchronize access to shared resources,
such as files or network connections. The operating system provides synchronization
mechanisms, such as locks or semaphores, to ensure that multiple programs can access
these resources safely.
System Calls Advantages
• Access to hardware resources: System calls allow programs to access hardware
resources such as disk drives, printers, and network devices.
• Memory management: System calls provide a way for programs to allocate and
deallocate memory, as well as access memory-mapped hardware devices.
• Process management: System calls allow programs to create and terminate processes,
as well as manage inter-process communication.
• Security: System calls provide a way for programs to access privileged resources, such
as the ability to modify system settings or perform operations that require administrative
permissions.
• Standardization: System calls provide a standardized interface for programs to interact
with the operating system, ensuring consistency and compatibility across different
hardware platforms and operating system versions.
Examples of a System Call in Windows and Unix
System calls for Windows and Unix come in many different forms. These are listed in the table below
as follows:
Process control:
    Windows: CreateProcess(), ExitProcess(), WaitForSingleObject()
    Unix:    fork(), exit(), wait()
File manipulation:
    Windows: CreateFile(), ReadFile(), WriteFile()
    Unix:    open(), read(), write(), close()
Device management:
    Windows: SetConsoleMode(), ReadConsole(), WriteConsole()
    Unix:    ioctl(), read(), write()
Information maintenance:
    Windows: GetCurrentProcessID(), SetTimer(), Sleep()
    Unix:    getpid(), alarm(), sleep()
Communication:
    Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile()
    Unix:    pipe(), shmget(), mmap()
Protection:
    Windows: SetFileSecurity(), InitializeSecurityDescriptor(), SetSecurityDescriptorGroup()
    Unix:    chmod(), umask(), chown()
open(): The open() system call is used to access a file on the file system. It allocates the resources the file needs and returns a handle (file descriptor) that the process can use. A file can be opened by just one process or by multiple processes simultaneously, depending on the file system and the file structure.
read(): It retrieves data from a file on the file system. In general, it accepts three arguments:
1. A file descriptor.
2. A buffer to store the read data.
3. The number of bytes to read from the file.
Before reading, the file must be opened using open(); it is then identified by its file descriptor.
wait(): In some systems, a process might need to wait until another process has finished running before continuing. When a parent process creates a child process, the execution of the parent process can be halted until the child process completes. The parent process suspends itself using the wait() system call and regains control once the child process has finished running.
write(): It writes data from a user buffer to a device such as a file. This system call is one way for a program to produce data. Generally, there are three arguments:
1. A file descriptor.
2. A reference to the buffer where the data is stored.
3. The number of bytes to write from the buffer.
fork(): The fork() system call is used by a process to create a copy of itself. It is one of the most frequently used methods of creating processes in operating systems. After fork(), both the parent and the child continue executing; if the parent needs to wait for the child's result, it calls wait() to suspend itself until the child finishes, after which it regains control.
exit(): A system call called exit() is used to terminate a program. In environments with multiple
threads, this call indicates that the thread execution is finished. After using the exit() system function,
the operating system recovers the resources used by the process.
***********************
Virtual Machine abstracts the hardware of our personal computer such as CPU, disk drives,
memory, NIC (Network Interface Card) etc, into many different execution environments as per our
requirements, hence giving us a feel that each execution environment is a single computer. For
example, VirtualBox.
When we run different processes on an operating system, it creates the illusion that each process is running on its own processor with its own virtual memory, with the help of CPU-scheduling and virtual-memory techniques. There are additional features of a process that cannot be provided by the hardware alone, such as system calls and a file system. The virtual machine approach does not provide these additional functionalities; it provides only an interface that is identical to the underlying bare hardware. Each process is provided with a virtual copy of the underlying computer system.
We can create a virtual machine for several reasons, all of which are fundamentally related to the
ability to share the same basic hardware yet can also support different execution environments, i.e.,
different operating systems simultaneously.
The main drawback with the virtual-machine approach involves disk systems. Let us suppose that
the physical machine has only three disk drives but wants to support seven virtual machines.
Obviously, it cannot allocate a disk drive to each virtual machine, because virtual-machine software
itself will need substantial disk space to provide virtual memory and spooling. The solution is to
provide virtual disks.
Users are thus given their own virtual machines, on which they can run any of the operating systems or software packages that are available on the underlying machine. The virtual-machine software is concerned with multiprogramming multiple virtual machines onto a physical machine, but it does not need to consider any user-support software. This arrangement provides a useful way to divide the problem of designing a multi-user interactive system into two smaller pieces.
Advantages:
1. There are no protection problems because each virtual machine is completely isolated
from all other virtual machines.
2. A virtual machine can provide an instruction-set architecture that differs from that of the real
computer.
3. Easy maintenance, availability and convenient recovery.
Disadvantages:
1. When multiple virtual machines are simultaneously running on a host computer, one
virtual machine can be affected by other running virtual machines, depending on the
workload.
2. Virtual machines are not as efficient as a real one when accessing the hardware.
*******************
Operating System Design and Implementation
An operating system is a construct that allows the user application programs to interact with the
system hardware. The operating system by itself does not perform useful work for the user; rather, it
provides an environment in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating system.
These are covered in operating system design and implementation.
PROCESS MANAGEMENT
Process
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four sections
─ stack, heap, text and data. The following image shows a simplified layout of a process inside main
memory −
***************
A Process Control Block (PCB) keeps the following information about each process:
1. Process State: The current state of the process, i.e., whether it is ready, running, waiting, and so on.
2. Process Privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program Counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU Registers: The various CPU registers whose contents must be saved so that the process can resume execution in the running state.
9. Accounting Information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O Status Information: A list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB –
******************
Synchronization is a necessary part of inter process communication. It is either provided by the inter
process control mechanism or handled by the communicating processes. Some of the methods to
provide synchronization are as follows −
• Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes.
The two types of semaphores are binary semaphores and counting semaphores.
• Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time.
This is useful for synchronization and also prevents race conditions.
• Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many
parallel languages and collective routines impose barriers.
• Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if
the lock is available or not. This is known as busy waiting because the process is not doing any
useful operation even though it is active.
The different approaches to implement inter process communication are given as follows –
• Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data
channel between two processes. This uses standard input and output methods. Pipes are used in
all POSIX systems as well as Windows operating systems.
• Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent
between processes on the same computer or data sent between different computers on the same
network. Most of the operating systems use sockets for inter process communication.
• File
A file is a data record that may be stored on a disk or acquired on demand by a file server.
Multiple processes can access a file as required. All operating systems use files for data storage.
• Signal
Signals are useful in inter process communication in a limited way. They are system messages
that are sent from one process to another. Normally, signals are not used to transfer data but are
used for remote commands between processes.
• Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes. This
is done so that the processes can communicate with each other. All POSIX systems, as well as
Windows operating systems use shared memory.
• Message Queue
Multiple processes can read and write data to the message queue without being connected to
each other. Messages are stored in the queue until their recipient retrieves them. Message
queues are quite useful for inter process communication and are used by most operating
systems.
A diagram that demonstrates message queue and shared memory methods of inter process
communication is as follows –
**************************
THREADS
What is Thread?
A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers. Traditional (heavyweight) processes have a single thread of control: there is one program
counter, and one sequence of instructions that can be carried out at any given time. A multi-threaded
(lightweight) process provides a way to improve application performance through parallelism. Multi-
threaded applications have multiple threads within a single process, each having its own program
counter, stack and set of registers, but sharing common code, data, and certain structures such as open
files.
Advantages of Thread
Responsiveness: One thread may provide rapid response while other threads are blocked or slowed
down doing intensive calculations.
Resource sharing: By default, threads share common code, data, and other resources, which allow
multiple tasks to be performed simultaneously in a single address space.
Economy: Creating and managing threads is much faster than performing the same tasks for processes.
Scalability, i.e., Utilization of multiprocessor architectures: A single threaded process can only run
on one CPU, no matter how many may be available, whereas the execution of a multi-threaded
application may be split amongst available processors
TYPES OF THREADS
Threads are implemented in the following two ways:
1. User Level Threads − User-managed threads.
2. Kernel Level Threads − Operating-system-managed threads acting on the kernel, the operating system
core.
1. User-level thread: The operating system does not recognize user-level threads. User threads can
be easily implemented, and they are implemented by the user. If a user-level thread performs a blocking
operation, the whole process is blocked. The kernel-level threads know nothing about the user-level
threads.
2. Kernel-level thread: Kernel-level threads are recognized and managed by the operating system. There is a thread control block and a process control block in the system for each thread and each process. The kernel-level thread is implemented by the operating system: the kernel knows about all the threads and manages them, and it offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads.
THREADING ISSUES
There are several threading issues when we are in a multithreading environment. We will discuss the
threading issues with system calls, cancellation of thread, signal handling, thread pool and thread-
specific data.
1. fork() and exec() System Calls: The fork() and exec() are system calls. The fork()
call creates a duplicate of the process that invokes it; the new process is called the child
process and the invoking process is called the parent process. Both the parent and the
child continue their execution from the instruction just after the fork(). The issue in a
multithreaded program is whether fork() should duplicate all the threads of the process
or only the thread that invoked it. The exec() system call, when invoked, replaces the
program along with all its threads with the program specified in its parameter. Typically,
the exec() system call is lined up after the fork() system call.
2. Thread cancellation: Terminating a thread in the middle of its execution is termed
'thread cancellation'. For example, multiple threads may search through a database for
some information in parallel; if one of the threads returns with the desired result, the
remaining threads are cancelled. Thread cancellation can be performed in two ways:
Asynchronous Cancellation and Deferred Cancellation.
3. Signal Handling: Signal handling is more convenient in a single-threaded program, as the
signal is simply delivered to the process. In a multithreaded program, however, the issue
arises of which thread of the program the signal should be delivered to.
A thread can be in one of the following states:
• New
• Runnable
• Blocked
• Waiting
• Timed Waiting
• Terminated
Life Cycle of a thread
1. New Thread: When a new thread is created, it is in the new state. The thread has not yet started to run
when the thread is in this state. When a thread lies in the new state, its code is yet to be run and hasn’t
started to execute.
2. Runnable State: A thread that is ready to run is moved to a runnable state. In this state, a thread might
actually be running or it might be ready to run at any instant of time. It is the responsibility of the thread
scheduler to give the thread, time to run.
A multi-threaded program allocates a fixed amount of time to each individual thread. Each and every
thread runs for a short while and then pauses and relinquishes the CPU to another thread so that other
threads can get a chance to run. When this happens, all such threads that are ready to run, waiting for
the CPU and the currently running thread lie in a runnable state.
3. Blocked/Waiting state: When a thread is temporarily inactive, then it’s in one of the following
states: 1. Blocked 2. Waiting
4. Timed Waiting: A thread lies in a timed waiting state when it calls a method with a time-out
parameter. A thread lies in this state until the timeout is completed or until a notification is received.
For example, when a thread calls sleep or a conditional wait, it is moved to a timed waiting state.
5. Terminated: A thread lies in the terminated state because it exits normally. This happens when the
code of the thread has been entirely executed by the program.
********************
Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of a Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time and the loaded
process shares the CPU using time multiplexing.
Categories of Scheduling
There are two categories of scheduling:
• Non-pre-emptive: Here the resource can’t be taken from a process until the process completes
execution. The switching of resources occurs when the running process terminates and moves
to a waiting state.
• Pre-emptive: Here the OS allocates the resources to a process for a fixed amount of time.
During resource allocation, a process switches from the running state to the ready state or from
the waiting state to the ready state. This switching occurs because the CPU may give priority
to other processes and replace the currently running process with a higher-priority one.
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains
a separate queue for each of the process states and PCBs of all processes in the same execution state
are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to
execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O device constitute this
queue.
2. Then required memory space for all the elements of the process such as program, data, and
stack is allocated including space for its Process Control Block (PCB).
Process termination: A process terminates by itself when it finishes executing its last statement;
the operating system then uses the exit() system call to delete its context. All the resources held
by that process, such as physical and virtual memory, buffers, and open files, are then taken back by
the operating system. A process can also be terminated either by the operating system or by its parent process.
***********************
CPU SCHEDULING
CPU Scheduling Algorithms: CPU scheduling algorithms are a very important topic in operating
systems, because they form the base and foundation of the Operating Systems subject. A task is a
group of processes. Every task is executed by the operating system, which divides the task into many
processes. The final goal of the operating system is the completion of the task, and the task must be
finished in the quickest possible time with the limited resources the operating system has. This is the
main motive of CPU scheduling algorithms.
CPU Scheduling: CPU scheduling is the process by which a process is executed using the resources
of the CPU; a process can also wait due to the absence or unavailability of resources. These
processes make complete use of the central processing unit. The operating system must choose one
of the processes in the list of ready-to-launch processes whenever the CPU becomes idle. The
short-term (CPU) scheduler makes this selection: it chooses one of the ready, in-memory processes
and allocates the CPU to it. Before going on to the types of CPU scheduling algorithms, we will
learn the basic terminology used in CPU scheduling algorithms.
1. Process ID: The Process ID acts like the name of the process. It is usually represented with
numbers or P letter with numbers
Example: P0, P1, P2, P3 . . . . . . . .
2. Arrival Time: The time at which the process enters the ready queue, i.e., when the process is
ready to be executed by the CPU. Arrival Time is written AT in short form.
3. Burst Time: The CPU time the process requires to complete its execution. Burst Time is written
BT in short form.
4. Completion Time: The time at which the process finishes its execution. Completion Time is
written CT in short form.
5. Turn Around Time: The time taken by the process since it became ready to execute, i.e., since it
entered the ready queue, until its completion. Turn Around Time is written TAT in short form and
can be calculated from Completion Time and Arrival Time: the Turn Around Time is the difference
between Completion Time and Arrival Time.
Formula: TAT = CT - AT
6. Waiting Time: The time the process spends waiting in the ready queue, i.e., the time it is in the
system but not executing on the CPU. Waiting Time is written WT in short form and can be
calculated from Turn Around Time and Burst Time: the Waiting Time is the difference between
Turn Around Time and Burst Time.
Formula: WT = TAT - BT
7. Ready Queue: The queue where processes are stored until the previous process finishes execution.
The ready queue is very important because it removes the confusion that would arise in the CPU if
several ready processes had to execute at the same time; in such conditions the ready queue comes
into play.
8. Gantt Chart: A chart that records the order and time intervals in which the processes execute on
the CPU. It is very useful for calculating Waiting Time, Completion Time and Turn Around Time.
There are two approaches in CPU Scheduling Algorithms. They are: Pre-emptive, Non-Pre-
emptive
Types of CPU Scheduling Algorithms: 1. First Come First Serve
2. Shortest Job First
3. Priority Scheduling
4. Round Robin Scheduling
1. FIRST COME FIRST SERVE(FCFS) SCHEDULING
• Simplest CPU scheduling algorithm that schedules according to arrival times of processes.
• FCFS is a non-preemptive scheduling algorithm.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not much efficient in performance, and the wait time is quite high.
Examples
Process ID   Arrival Time   Burst Time
P2           2              2
P3           3              1
P4           4              9
P5           5              8
Solution:
Gantt Chart:
Average Completion Time = The Total Sum of Completion Times which is divided by the total
number of processes is known as Average Completion Time.
Average Completion Time =( CT1 + CT2 + CT3 + ...................... + CTn ) / n
= ( 6 + 8 +9 + 18 + 26 ) / 5 = 67/5 = 13.4
Average Turn Around Time = The Total Sum of Turn Around Times which is divided by the
total number of processes is known as Average Turn Around Time.
Average Turn Around Time = (TAT1 + TAT2 + TAT3 + ...................... + TATn ) / n
= ( 6 + 8 + 9 +18 +26 ) / 5 = 67/5 = 13.4
Average Waiting Time = The Total Sum of Waiting Times which is divided by the total
number of processes is known as Average Waiting Time.
Average Waiting Time = (WT1 + WT2 + WT3 + ...................... + WTn ) / n
= ( 0 + 6 + 8 + 9 + 18 ) / 5 = 41 / 5 = 8.2
First Come First Serve (FCFS) Scheduling Algorithm
Aim: Write a C program to implement various process scheduling mechanisms, such as FCFS scheduling.
Algorithm for FCFS scheduling:
Step 1: Start the process
Step 2: Accept the number of processes in the ready Queue
Step 3: For each process in the ready Q, assign the process id and accept the CPU burst time
Step 4: Set the waiting time of the first process to '0' and its burst time as its turnaround time
Step 5: For each process in the ready Q calculate
#include<stdio.h>
#include<conio.h>
void main()
{
int i,n,sum,wt,tat,twt,ttat;
int t[10];
float awt,atat;
clrscr();
printf("Enter number of processors:\n");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\n Enter the Burst Time of the process %d",i+1);
scanf("\n %d",&t[i]);
}
printf("\n\n FIRST COME FIRST SERVE SCHEDULING ALGORITHM \n");
printf("\n Process ID \t Waiting Time \t Turn Around Time \n");
printf("1 \t\t 0 \t\t %d \n",t[0]);
sum=0;
twt=0;
ttat=t[0];
for(i=1;i<n;i++)
{
sum+=t[i-1];
wt=sum; tat=sum+t[i]; twt=twt+wt;
ttat=ttat+tat;
printf("\n %d \t\t %d \t\t %d",i+1,wt,tat);
printf("\n\n");
}
awt=(float)twt/n; atat=(float)ttat/n;
printf("\n Average Waiting Time %4.2f",awt);
printf("\n Average Turnaround Time %4.2f",atat);
getch();
}
OUTPUT
Enter number of processors: 3
Enter the Burst Time of the process 1: 2
Enter the Burst Time of the process 2: 5
Enter the Burst Time of the process 3: 4
FIRST COME FIRST SERVE SCHEDULING ALGORITHM
Process ID      Waiting Time    Turn Around Time
1               0               2
2               2               7
3               7               11
Average Waiting Time 3.00
Average Turnaround Time 6.67
Gantt chart:
2. SHORTEST JOB FIRST (SJF) SCHEDULING
Shortest Job First scheduling Program:
#include<stdio.h>
#include<conio.h>
void main()
{
int i,j,k,n,sum,wt[10],tt[10],twt,ttat;
int t[10],p[10];
float awt,atat;
clrscr();
printf("Enter number of process\n");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\n Enter the Burst Time of Process %d",i);
scanf("\n %d",&t[i]);
}
for(i=0;i<n;i++)
p[i]=i;
for(i=0;i<n;i++)
{
for(k=i+1;k<n;k++)
{
if(t[i]>t[k])
{
int temp;
temp=t[i];
t[i]=t[k];
t[k]=temp;
temp=p[i];
p[i]=p[k];
p[k]=temp;
}
}
}
printf("\n\n SHORTEST JOB FIRST SCHEDULING ALGORITHM");
printf("\n PROCESS ID \t BURST TIME \t WAITING TIME \t TURNAROUND TIME \n\n");
wt[0]=0;
for(i=0;i<n;i++)
{
sum=0;
for(k=0;k<i;k++)
{
wt[i]=sum+t[k];
sum=wt[i];
}
}
for(i=0;i<n;i++)
{
tt[i]=t[i]+wt[i];
}
for(i=0;i<n;i++)
{
printf("%5d \t\t %5d \t\t %5d \t\t %5d \n\n",p[i],t[i],wt[i],tt[i]);
}
twt=0;
ttat=tt[0];
for(i=1;i<n;i++)
{
twt=twt+wt[i];
ttat=ttat+tt[i];
}
awt=(float)twt/n;
atat=(float)ttat/n;
printf("\n AVERAGE WAITING TIME %4.2f",awt);
printf("\n AVERAGE TURN AROUND TIME %4.2f",atat);
getch();
}
OUTPUT:
Enter number of process 3
Enter the Burst Time of Process 0: 4
Enter the Burst Time of Process 1: 3
Enter the Burst Time of Process 2: 5
3. PRIORITY SCHEDULING
• Priority Scheduling is a method of scheduling processes that is based on priority.
• In this algorithm, the scheduler selects the tasks to work as per the priority.
• The processes with higher priority should be carried out first.
• Whereas jobs with equal priorities are carried out on a round-robin or FCFS basis.
• Types of Priority Scheduling
1. Preemptive Scheduling
2. Non-Preemptive Scheduling
Examples:
Here, in this problem, the priority number with the highest value is the least prioritized, i.e., a lower
priority number means a higher priority.

Process Id   Arrival Time   Burst Time   Priority   Completion Time   Turn Around Time   Waiting Time
                                                                      (TAT = CT - AT)    (WT = TAT - BT)
P1           0              5            5          5                 5                  0
P2           1              6            4          27                26                 20
P3           2              2            0          7                 5                  3
P4           3              1            2          15                12                 11
P5           4              7            1          14                10                 3
P6           4              6            3          21                17                 11
Gantt Chart:
#include<stdio.h>
#include<conio.h>
void main()
{
int i,j,n,t,bt[10],pr[10],pid[10],wt[10],tat[10],twt=0,ttat=0;
float awt,atat;
clrscr();
printf("-----------PRIORITY SCHEDULING--------------\n");
printf("Enter the No of Process: ");
scanf("%d",&n);
for(i=0;i<n;i++)
{
pid[i]=i;
printf("Enter the Burst time of Pid %d : ",i);
scanf("%d",&bt[i]);
printf("Enter the Priority of Pid %d : ",i);
scanf ("%d",&pr[i]);
}
// Sorting start
for (i=0;i<n;i++)
for(j=i+1;j<n;j++)
{
if (pr[i] > pr[j] )
{
t = pr[i];
pr[i] = pr[j];
pr[j] = t;
t = bt[i];
bt[i] = bt[j];
bt[j] = t;
t = pid[i];
pid[i] = pid[j];
pid[j] = t;
}
}
// Sorting finished
tat[0] = bt[0];
wt[0] = 0;
for (i=1;i<n;i++)
{
wt[i] = wt[i-1] + bt[i-1];
tat[i] = wt[i] + bt[i];
}
printf("\n-----------------------------------------------------------------------\n");
printf("Pid\t Priority\tBurst time\t WaitingTime\tTurnArroundTime\n");
printf("\n------------------------------------------------------------------------\n");
for(i=0;i<n;i++)
{
printf("\n%d\t\t%d\t%d\t\t%d\t\t%d",pid[i],pr[i],bt[i],wt[i],tat[i]);
}
for(i=0;i<n;i++)
{
ttat = ttat+tat[i];
twt = twt + wt[i];
}
awt = (float)twt / n;
atat = (float)ttat / n;
printf("\n\nAvg.Waiting Time: %f\nAvg.Turn Around Time: %f\n",awt,atat);
getch();
}
OUTPUT:
-----------PRIORITY SCHEDULING--------------
Enter the No of Process: 4
Enter the Burst time of Pid 0 : 2
Enter the Priority of Pid 0 : 3
Enter the Burst time of Pid 1 : 6
Enter the Priority of Pid 1 : 2
Enter the Burst time of Pid 2 : 4
Enter the Priority of Pid 2 : 1
Enter the Burst time of Pid 3 : 5
Enter the Priority of Pid 3 : 7
Pid     Priority   Burst time   Waiting Time   Turn Around Time
2       1          4            0              4
1       2          6            4              10
0       3          2            10             12
3       7          5            12             17
Avg.Waiting Time: 6.500000
Avg.Turn Around Time: 10.750000
4. ROUND ROBIN CPU SCHEDULING
• Round Robin is a CPU scheduling mechanism that cycles through the ready processes,
assigning each task a specific time slot.
• Round Robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time to execute, called a quantum (or time slice).
• Once a process has executed for the given time period, it is preempted and another process
executes for its time period.
• Context switching is used to save the states of preempted processes.
Examples:
Problem
Solution:
Process ID   Arrival Time   Burst Time   Completion Time   Turn Around Time   Waiting Time
P0           1              3            5                 4                  1
P1           0              5            14                14                 9
P2           3              2            7                 4                  2
P3           4              3            10                6                  3
P4           2              1            3                 1                  0
Gantt Chart:
Average Completion Time = (5 + 14 + 7 + 10 + 3) / 5 = 39 / 5 = 7.8
Average Turn Around Time = (4 + 14 + 4 + 6 + 1) / 5 = 29 / 5 = 5.8
Average Waiting Time = (1 + 9 + 2 + 3 + 0) / 5 = 15 / 5 = 3
Step 4: Calculate the no. of time slices for each process where No. of time slice for process(n) =
burst time process(n)/time slice
Step 5: If the burst time is less than the time slice then the no. of time slices =1.
Step 6: Consider the ready queue is a circular Q, calculate
a. Waiting time for process(n) = waiting time of process(n-1)+ burst time of process(n-1 ) +
the time difference in getting the CPU from process(n-1)
b. Turn around time for process(n) = waiting time of process(n) + burst time of process(n)+
the time difference in getting CPU from process(n).
Step 7: Calculate
a. Average waiting time = Total waiting time / Number of processes
b. Average turnaround time = Total turnaround time / Number of processes
Step 8: Stop the process
Round Robin scheduling Program:
#include<stdio.h>
#include<conio.h>
void main()
{
int ts,pid[10],need[10],wt[10],tat[10],i,j,n,n1;
int bt[10],flag[10],ttat=0,twt=0;
float awt,atat;
clrscr();
printf("\t\t ROUND ROBIN SCHEDULING \n");
printf("Enter the number of Processors \n");
scanf("%d",&n);
n1=n;
printf("\n Enter the Timeslice \n");
scanf("%d",&ts);
for(i=1;i<=n;i++)
{
printf("\n Enter the process ID %d",i);
scanf("%d",&pid[i]);
printf("\n Enter the Burst Time for the process");
scanf("%d",&bt[i]);
need[i]=bt[i];
}
for(i=1;i<=n;i++)
{
flag[i]=1;
wt[i]=0;
}
while(n!=0)
{
for(i=1;i<=n1;i++)
{
if(flag[i]==0)   /* skip processes that have already finished */
continue;
if(need[i]>=ts)
{
for(j=1;j<=n1;j++)
{
if((i!=j)&&(flag[j]==1))
wt[j]+=ts;
}
need[i]-=ts;
if(need[i]==0)
{
flag[i]=0;
n--;
}
}
else
{
for(j=1;j<=n1;j++)
{
if((i!=j)&&(flag[j]==1))
wt[j]+=need[i];
}
need[i]=0;
n--;
flag[i]=0;
}
}
}
for(i=1;i<=n1;i++)
{
tat[i]=wt[i]+bt[i];
twt=twt+wt[i];
ttat=ttat+tat[i];
}
awt=(float)twt/n1;
atat=(float)ttat/n1;
printf("\n\n ROUND ROBIN SCHEDULING ALGORITHM \n\n");
printf("\n\n Process \t Process ID \t BurstTime \t Waiting Time \t TurnaroundTime \n ");
for(i=1;i<=n1;i++)
{
printf("\n %5d \t %5d \t\t %5d \t\t %5d \t\t %5d \n", i,pid[i],bt[i],wt[i],tat[i]);
}
printf("\n The average Waiting Time=%4.2f",awt);
printf("\n The average Turnaround Time=%4.2f",atat);
getch();
}