Operating System
1. Introduction: Operating system overview, computer system structure, structure and
components of an operating system.
2. System calls: class of system calls and description.
3. Process and threads: process and thread model, process and thread creation and
termination, user and kernel level thread, scheduling, scheduling algorithms,
dispatcher, context switch, real time scheduling.
4. Concurrency and synchronization: IPC and inter-thread communication, critical
region, critical section problems and solutions.
5. Resource management: introduction to deadlock, ostrich algorithm, deadlock
detection and recovery, deadlock avoidance, deadlock prevention, starvation.
6. File management: File Naming and structure, file access and attributes, system calls,
File organization: OS and user perspective view of file, memory mapped file, file
directories organization.
7. File System Implementation: implementing file, allocation strategy, method of
allocation, directory implementation, UNIX i-node, block management, quota.
8. Memory management: basic memory management, fixed and dynamic partition,
virtual memory, segmentation, paging and swapping, MMU.
9. Virtual memory management: paging, page table structure, page replacement, TLB,
exception vector, demand paging and segmentation, thrashing and performance.
10. Disk I/O management: structure, performance, low-level disk formatting, Disk arm
scheduling algorithm, error handling, stable storage.
Reference Book:
1. A. Silberschatz, P. B. Galvin and G. Gagne, Operating System Concepts, Sixth Edition.
2. A. S. Tanenbaum, Modern Operating Systems, Prentice Hall.
3. P. Brinch Hansen, Operating System Principles, Prentice Hall.
4. S. Madnick and J. Donovan, Operating Systems, McGraw-Hill.
244 | Operating system
CHAPTER 1
INTRODUCTION
1) What is an operating system? (2008,2009,2012,2013,2014,2017,2021)
Answer: An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs. It acts as an
interface between the user and the computer hardware and controls the execution of all kinds of
programs.
[Figure: abstract layered view — application software and system software sit above the operating system, which manages the hardware: CPU, RAM and I/O devices.]
6) Figure out the abstract views of a computer system and describe the importance of
operating system. (2015)
Or, Write about the main components of an operating system. (2017)
Or, what are basic components of an operating system? (2014)
Answer:
7) The operating system can be viewed as a government and a resource allocator - Explain.
(2014)
Answer: The operating system as a government
The operating system is called the government of a computer system because:
1. Just as governments differ in their form of rule (democracy, bureaucracy, autocracy, etc.),
operating systems differ in the permissions granted in the shell and the kernel.
2. A government issues licences, passes rules and makes laws; likewise an OS permits users to run
programs by granting access and permissions.
3. A government tries to create jobs, while an OS creates and executes jobs. Ordinary citizens
cannot access certain units of the government for security reasons; likewise, kernel operations
are controlled by the OS in monitor mode.
4. Just as privileged channels exist for influencing a government, the OS allows privileged
programmers to access the kernel through system calls in order to program it.
5. A government assigns specific tasks to smaller units called state governments, which in turn
create tasks for city and district offices. The OS, on the other hand, spawns processes, which in
turn spawn threads for their smooth functioning.
6. At any instant a government can lose its stability by losing the faith of the people and
dissolve itself; an OS can likewise crash while trying to execute a fatal process.
7. Bad actors try to create havoc, overloading the government, which employs the police or
military to handle them. The OS's troublemakers are called viruses, worms, spam, etc., and it
employs system-level security and cryptographic techniques to handle them.
The operating system as a resource manager
Modern computers consist of processors, memory, clocks, disks, monitors, network
interfaces, printers and other devices that can be used by multiple users simultaneously. It is
the operating system's job to direct and control the allocation of the processors, memory
and peripheral devices among the various programs that use them.
Imagine what would happen if three programs running on a computer tried to print their
results on the same printer simultaneously. The first few printed lines could come from
program 1, the next from program 2, then from program 3, and so on, resulting in total disorder.
The operating system avoids this potential chaos by redirecting the output to be printed into a
buffer file on disk. When a program's printing is complete, the operating system can then print
the file from the buffer. Meanwhile, another program can continue generating results without
realizing that it is not (yet) sending them to the printer.
8) What is a multiprocessor system?
Answer: A multiprocessor operating system manages two or more central processing
units (CPUs) within a single computer system. These CPUs are in close communication,
sharing the computer bus, memory and other peripheral devices, so such systems are referred
to as tightly coupled systems.
These systems are used when very high speed is required to process a large volume of
data. They are generally used in environments such as satellite control and weather forecasting.
The basic organization of a multiprocessing system is shown in the figure.
21) What do you mean by asymmetric and symmetric clustering? Which one is more
efficient, and why? (2009)
Answer:
Asymmetric Clustering - In this, one machine is in hot standby mode while the other is running
the applications. The hot standby host (machine) does nothing but monitor the active server. If
that server fails, the hot standby host becomes the active server.
Symmetric Clustering - In this, two or more hosts are running applications, and they are
monitoring each other. This mode is obviously more efficient, as it uses all of the available
hardware.
23) Differentiate between time sharing and real time system. (2017)
Answer: Following are the differences between Real Time system and Timesharing System.
Sr. No. | Real Time System | Timesharing System
1 | Events, mostly external to the computer system, are accepted and processed within certain deadlines. | Many users are allowed to share the computer resources simultaneously.
2 | Real-time processing is mainly devoted to one application. | Time-sharing processing deals with many different applications.
3 | Users can make inquiries only and cannot write or modify programs. | Users can write and modify programs.
4 | The user must get a response within the specified time limit; otherwise it may result in a disaster. | The user should get a response within fractions of a second, but if not, the results are not disastrous.
5 | No context switching takes place in this system. | The CPU switches from one process to another as a time slice expires or a process terminates.
27) What is the main difficulty that a programmer must overcome in writing an operating
system? (2008)
Answer: The main difficulty is keeping the operating system within the fixed time constraints of a
real-time system. If the system does not complete a task in a certain time frame, it may cause a
breakdown of the entire system it is running. Therefore when writing an operating system for a
real-time system, the writer must be sure that his scheduling schemes don't allow response time
to exceed the time constraint.
28) What is the purpose of command-interpreter? (2013)
Answer: A command interpreter is the part of a computer operating system that understands and
executes commands that are entered interactively by a human being or from a program. In some
operating systems, the command interpreter is called the shell.
The main features/purposes of the command interpreter are:
1. To read, parse and execute the commands entered by the user, including a set of built-in
commands, and to make it easy to add new commands.
2. To evaluate expressions, so that numeric arguments can be parsed, direct computations
made, and variables defined.
3. To write, load and execute programs (scripts), which are sequences of commands using
loops and jumps.
4. To define named objects, such as variables and environment settings, which can then be
referred to by name in the arguments of commands.
5. To redirect input and output and connect commands together through pipes.
6. To run several programs simultaneously and allow these programs to communicate
with each other.
29) Write down the important features of command line interface and graphical user
interface. (2013)
Answer: Important features of the command-line interface (CLI):
1. The user interacts with the system by typing commands, which the command interpreter
parses and executes; new commands can be added easily.
2. Programs (scripts) can be written, loaded and executed as sequences of commands using
loops and jumps.
3. Variables can be defined and expressions evaluated when parsing command arguments.
4. It requires few system resources and gives precise control, but demands expertise from
the user.
5. Several programs can be run simultaneously and can communicate with each other.
Features of the Graphical User Interface (GUI)
Graphical data entry
Data can be entered through graphical widgets; for example, a graphical representation of a
calendar allows you to enter a date in a form by clicking on the desired day.
Folders
Special blocks that allow you to:
Display only the fields you are interested in.
Arrange the fields to best meet your needs.
Define query parameters to automatically call the records you need when opening the folder.
Sort in any order relevant to your needs.
Toolbar
The most commonly used menu items are duplicated as icons at the top of the application
window.
Attachments
Used to link unstructured data such as images, word-processing documents, or video to
application data.
Multiple windows
Allow you to display all elements of a business flow on the same screen.
Do not require that you complete entering data in one form before navigating to another
form; each form can be committed independently.
On-line Help
Help is based on the functional flow of the task rather than on the form's structure.
Lets you select the task you want to perform and provides a step-by-step description of the
task.
Allows navigation to any part of the Help system.
30) Difference between command line interface and graphical user interface.(2021)
Solution:
Basis for comparison | CLI | GUI
Basic | A command-line interface allows a user to interact with the system through commands. | A graphical user interface allows a user to interact with the system through graphics, including images, icons, etc.
Device used | Keyboard | Mouse and keyboard
Ease of performing tasks | Operations are hard to perform and require expertise. | Tasks are easy to perform and do not require expertise.
Precision | High | Low
Flexibility | Intransigent | More flexible
32. What are the different directory structure generally used? [2014]
Solution:
Directory: Directory can be defined as the listing of the related files on the disk. The
directory may store some or the entire file attributes.
To get the benefit of different file systems on the different operating systems, A hard disk can be
divided into the number of partitions of different sizes. The partitions are also called volumes or
mini disks.
Each partition must have at least one directory in which, all the files of the partition can be listed.
A directory entry is maintained for each file in the directory which stores all the information
related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of files. The
directory structures generally used are: the single-level directory, the two-level directory, the
tree-structured directory, the acyclic-graph directory and the general-graph directory.
Every Directory supports a number of common operations on the file:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
CHAPTER 2
OPERATING SYSTEM CALLS
(1) Mention some common operating system component.
Answer: From the virtual machine point of view (also resource management)
These components reflect the services made available by the O.S.
Process Management
Process is a program in execution --- numerous processes to choose from in
a multi-programmed system,
Process creation/deletion (bookkeeping)
Process suspension/resumption (scheduling, system vs. user)
Process synchronization
Process communication
Deadlock handling
Memory Management
Maintain bookkeeping information
Map processes to memory locations
Allocate/deallocate memory space as requested/required
I/O Device Management
Disk management functions such as free space management, storage allocation, fragmentation
removal, head scheduling
Consistent, convenient software to I/O device interface through buffering/caching,
custom drivers for each device.
File System
Built on top of disk management
File creation/deletion.
Support for hierarchical file systems
Update/retrieval operations: read, write, append, seek
Mapping of files to secondary storage
Protection
Controlling access to the system
Resources --- CPU cycles, memory, files, devices
Users --- authentication, communication
Mechanisms, not policies
Network Management
Often built on top of file system
TCP/IP, IPX, IPng
Connection/Routing strategies
``Circuit'' management --- circuit, message, packet switching
Communication mechanism
Data/Process migration
Network Services (Distributed Computing)
Built on top of networking
Email, messaging (GroupWise)
FTP
gopher, www
Distributed file systems --- NFS, AFS, LAN Manager
Name service --- DNS, YP, NIS
Replication --- gossip, ISIS
Security --- Kerberos
User Interface
Character-Oriented shell --- sh, csh, command.com ( User replaceable)
GUI --- X, Windows 95
(3) Describe three methods of passing parameters between a user program and the operating system.
Answer: Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, parameters can be stored in a block and the
block address can be passed as a parameter to a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.
(4) What are the three major activities of an operating system with regard to memory
management? (2014)
Answer: Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to
be executed, it must be in main memory. An operating system performs the following activities
for memory management −
1. Keeps tracks of primary memory, i.e., what part of it are in use by whom, what part are not in
use.
2. In multiprogramming, the OS decides which process will get memory when and how much.
3. Allocates the memory when a process requests it to do so and De-allocates the memory when a
process no longer needs it or has been terminated.
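These three activities can be sketched as a toy frame allocator; the class and method names below are illustrative inventions for this note, not a real kernel API.

```python
# Toy sketch of OS memory bookkeeping: which frames are in use and by whom.
# All names here are illustrative, not a real kernel interface.

class FrameAllocator:
    def __init__(self, num_frames):
        self.owner = [None] * num_frames   # None = frame not in use

    def allocate(self, pid, count):
        """Decide whether `pid` gets `count` free frames; None if not enough."""
        free = [i for i, o in enumerate(self.owner) if o is None]
        if len(free) < count:
            return None
        for i in free[:count]:
            self.owner[i] = pid            # keep track of which part is used by whom
        return free[:count]

    def release(self, pid):
        """De-allocate everything owned by `pid` when it terminates."""
        for i, o in enumerate(self.owner):
            if o == pid:
                self.owner[i] = None

mm = FrameAllocator(8)
frames = mm.allocate(pid=1, count=3)       # frames 0-2 now belong to process 1
mm.release(pid=1)                          # all of process 1's frames are free again
```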
(5) Define system call. Mention major categories of system calls with examples.
A system call is the programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is a way for programs to interact
with the operating system. A computer program makes a system call when it makes a request to
the operating system’s kernel. System call provides the services of the operating system to the
user programs via Application Program Interface(API). It provides an interface between a process
and operating system to allow user-level processes to request services of the operating system.
Types of System Calls
There are 5 different categories of system calls:
Process control, file manipulation, device manipulation, information maintenance and
communication.
Process Control
A running program needs to be able to stop execution either normally or abnormally. When
execution is stopped abnormally, often a dump of memory is taken and can be examined with a
debugger.
File Management
Some common system calls are create, delete, read, write, reposition, or close. Also, there is a need
to determine the file attributes – get and set file attribute. Many times the OS provides an API to
make these system calls.
Device Management
Processes usually require several resources to execute; if these resources are available, they will
be granted and control returned to the user process. These resources can also be thought of as
devices: some are physical, such as a video card, and others are abstract, such as a file.
User programs request the device, and when finished they release the device. Similar to files, we
can read, write, and reposition the device.
Information Management
Some system calls exist purely for transferring information between the user program and the
operating system. An example of this is time, or date.
The OS also keeps information about all its processes and provides system calls to report this
information.
Communication
There are two models of interprocess communication, the message-passing model and the shared
memory model.
Message-passing uses a common mailbox to pass messages between processes.
The shared-memory model uses certain system calls to create and gain access to regions of
memory owned by other processes. The two processes then exchange information by reading
and writing in the shared data.
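On a POSIX system, Python's os module exposes thin wrappers over several of these system calls, so the file-management and information-maintenance categories can be demonstrated directly (the scratch file path below is created just for the example):

```python
import os, tempfile, time

# File-management system calls, reached through Python's os wrappers
# (on POSIX these map to open/write/lseek/read/close/unlink).
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR)   # create the file
os.write(fd, b"hello")                        # write
os.lseek(fd, 0, os.SEEK_SET)                  # reposition to the start
data = os.read(fd, 5)                         # read back what was written
os.close(fd)                                  # close
os.unlink(path)                               # delete

# Information-maintenance system calls: process id and time of day.
pid = os.getpid()
now = time.time()
```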
A microkernel provides only minimal services for process and memory management. The
communication between client programs/applications and services running in user address
space is established through message passing, which reduces the execution speed of a
microkernel. The operating system remains unaffected, because user services and kernel
services are isolated: if any user service fails, it does not affect kernel services, which is one of
the advantages of a microkernel. It is also easily extendable, i.e. if any new services are to be
added they are added to user address space and require no modification in kernel space. It is
also portable, secure and reliable.
Microkernel Architecture –
Since the kernel is the core part of the operating system, it is meant for handling only the most
important services. Thus in this architecture only the most important services are placed inside
the kernel, and the rest of the OS services are present inside the system application program.
Users interact with those not-so-important services through the system application, while the
microkernel is solely responsible for the most important services of the operating system,
namely:
Inter process-Communication
Memory Management
CPU-Scheduling
Advantages of Microkernel –
The architecture of this kernel is small and isolated, hence it can function better.
Expansion of the system is easier: a new service is simply added in the system application
without disturbing the kernel.
In a resource-allocation graph (RAG), vertices are mainly of two types: resource and process.
Each of them is represented by a different shape: a circle represents a process, while a
rectangle represents a resource.
A resource can have more than one instance. Each instance will be represented by a dot inside the
rectangle.
Edges in a RAG are also of two types: one represents assignment and the other represents the
wait of a process for a resource. The image above shows each of them.
A resource is shown as assigned to a process if the tail of the arrow is attached to an instance
of the resource and the head is attached to the process.
A process is shown as waiting for a resource if the tail of the arrow is attached to the process
while the head points towards the resource.
Overlays: this technique requires the programmer to specify which overlay to load under
different circumstances.
CHAPTER 3
PROCESS AND THREADS
(1) What is process? (2015,2013,2012,2008)
Answer: A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory −
(4) Mention the types of process- specific information associated with PCB.(2021,2015)
Or, what kinds of information’s are contained in a PCB? (2008)
Or, briefly explain about the contents of the Process Control Block (PCB). (2013, 2012)
Answer: A PCB keeps all the information needed to keep track of a process as listed below in the
table −
Information & Description
Process State: The current state of the process i.e., whether it is ready, running, waiting, or
whatever.
Process privileges: This is required to allow/disallow access to system resources.
Process ID: Unique identification for each of the process in the operating system.
Pointer: A pointer to parent process.
Program Counter: Program Counter is a pointer to the address of the next instruction to be
executed for this process.
CPU registers
The contents of the various CPU registers, which must be saved when the process leaves the
running state so that it can later resume execution.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
Memory management information
This includes the information of page table, memory limits, Segment table depending on
memory used by the operating system.
Accounting information
This includes the amount of CPU time used for process execution, time limits, account
numbers, job or process numbers, etc.
IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
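As a rough sketch, the PCB fields listed above can be modelled as a plain record; real kernels keep this in a C structure (for example, task_struct in Linux), so every field name below is illustrative:

```python
from dataclasses import dataclass, field
from typing import Optional

# Sketch of a process control block as a plain record.
# Field names mirror the list above and are illustrative only.

@dataclass
class PCB:
    pid: int                                         # unique process identifier
    state: str = "new"                               # ready / running / waiting / terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU-scheduling information
    page_table: dict = field(default_factory=dict)   # memory-management information
    open_files: list = field(default_factory=list)   # I/O status information
    cpu_time_used: float = 0.0                       # accounting information
    parent_pid: Optional[int] = None                 # pointer to the parent process

pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"        # the OS updates the state field as the process moves between queues
```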
(6) What possibilities exist in terms of execution and in terms of the address space when a
new process is created? (2008)
Answer:
Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes.
The process which creates other process, is termed the parent of the other process, while the
created sub-process is termed its child.
Each process is given an integer identifier, termed as process identifier, or PID. The parent PID
(PPID) is also stored for each process.
On a typical UNIX system the process scheduler is termed sched, and is given PID 0. The first
thing it does at system start-up time is to launch init, which gives that process PID 1. Init then
launches all the system daemons and user logins, and becomes the ultimate parent of all other
processes.
A child process may receive some amount of shared resources with its parent depending on
system implementation. To prevent runaway children from consuming all of a certain system
resource, child processes may or may not be limited to a subset of the resources originally
allocated to the parent.
There are two options for the parent process after creating the child:
Wait for the child process to terminate before proceeding. The parent process makes
a wait() system call, for either a specific child process or for any child process,
which causes the parent process to block until the wait() returns. UNIX shells normally wait
for their children to complete before issuing a new prompt.
Run concurrently with the child, continuing to process without waiting. When a UNIX shell
runs a process as a background task, this is the operation seen. It is also possible for the
parent to run for a while, and then wait for the child later, which might occur in a sort of a
parallel processing operation.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
Process Termination
By making the exit() system call, typically returning an int, processes may request their own
termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on
successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:
The inability of the system to deliver the necessary system resources.
In response to a KILL command or other unhandled process interrupts.
A parent may kill its children if the task assigned to them is no longer needed i.e. if the need of
having a child terminates.
If the parent exits, the system may or may not allow the child to continue without a parent (In
UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill
them.)
When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
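A minimal POSIX-only sketch of creation and termination, assuming os.fork() is available: the child is a duplicate of the parent's address space and terminates with a status code that the parent collects via waitpid():

```python
import os

# POSIX-only sketch: fork() duplicates the parent's address space, the child
# terminates via _exit() with a status code, and the parent collects it with waitpid().

def run_child() -> int:
    pid = os.fork()
    if pid == 0:
        # Child: a duplicate of the parent. (An exec* call here would instead
        # load a new program into this address space - the second possibility.)
        os._exit(7)                    # request own termination with status 7
    # Parent: block until the child terminates, then extract its exit status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

exit_code = run_child()                # 7: the status the child passed to _exit()
```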
(7) What do you mean by co-operating process? (2014,2012,2010)
Answer: Cooperating Processes are those that can affect or be affected by other processes.
There are several reasons why cooperating processes are allowed:
Information Sharing - There may be several processes which need access to the same file for
example. ( e.g. pipelines. )
Computation speedup - Often a solution to a problem can be solved faster if the problem can be
broken down into sub-tasks to be solved simultaneously (particularly when multiple processors
are involved.)
Modularity - The most efficient architecture may be to break a system down into cooperating
modules. ( E.g. databases with a client-server architecture. )
Convenience - Even a single user may be multi-tasking, such as editing, compiling, printing, and
running the same code in different windows.
2. Thread (2021, 2014)
Answer: A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares information such as the code segment, the data segment and open files with its
peer threads. When one thread alters a code-segment memory item, all other threads see that
change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. They represent a software approach to improving
operating-system performance by reducing overhead; in what it can execute, a thread is
equivalent to a classical process.
3. Producer-Consumer Problem (2016)
There are two processes: a Producer and a Consumer. The Producer produces some item and the
Consumer consumes that item. The two processes share a common space or memory location,
known as the buffer, where the item produced by the Producer is stored and from where the
Consumer takes it when needed.
There are two versions of this problem. The first is known as the unbounded-buffer problem, in
which the Producer can keep on producing items with no limit on the size of the buffer. The
second is known as the bounded-buffer problem, in which the Producer can produce only up to
a certain number of items and must then wait for the Consumer to consume some. We will
discuss the bounded-buffer problem.
First, the Producer and the Consumer share some common memory; then the Producer starts
producing items. If the number of produced items equals the size of the buffer, the Producer
waits until some are consumed by the Consumer. Similarly, the Consumer first checks for the
availability of an item; if no item is available, the Consumer waits for the Producer to produce
one. If items are available, the Consumer consumes one.
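The bounded-buffer variant described above can be sketched with two threads and a condition variable; the buffer size and item count below are arbitrary choices for the example:

```python
import threading
from collections import deque

# Bounded-buffer sketch with one producer and one consumer thread.
# A Condition guards the shared buffer: the producer waits while the
# buffer is full, the consumer waits while it is empty.

BUF_SIZE, N_ITEMS = 3, 10
buffer = deque()
cond = threading.Condition()
consumed = []

def producer():
    for item in range(N_ITEMS):
        with cond:
            while len(buffer) == BUF_SIZE:   # buffer full: wait for the consumer
                cond.wait()
            buffer.append(item)
            cond.notify_all()

def consumer():
    for _ in range(N_ITEMS):
        with cond:
            while not buffer:                # buffer empty: wait for the producer
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# With one producer and one consumer, items come out in production order.
```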
(12) Discuss about client server communication via Remote Procedure Calls (RPC). [2020]
A remote procedure call is an inter process communication technique that is used for client-server
based applications. It is also known as a subroutine call or a function call.
A client has a request message that the RPC translates and sends to the server. This request may
be a procedure or a function call to a remote server. When the server receives the request, it sends
the required response back to the client. The client is blocked while the server is processing the
call and only resumes execution after the server has finished.
The sequence of events in a remote procedure call are given as follows −
The client stub is called by the client.
The client stub makes a system call to send the message to the server and puts the parameters
in the message.
The message is sent from the client to the server by the client’s operating system.
The message is passed to the server stub by the server operating system.
The parameters are removed from the message by the server stub.
Then, the server procedure is called by the server stub.
A diagram that demonstrates this is as follows −
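The same round trip can also be modelled on a single machine, with a forked child playing the server and a pair of pipes standing in for the network; remote_add and its wire format are inventions for this POSIX-only sketch:

```python
import os

# Toy model of the RPC round trip: the "client stub" marshals two integers
# into a message, ships it through one pipe, and the forked child plays the
# server - unmarshalling, calling the procedure and replying on a second pipe.

def remote_add(a: int, b: int) -> int:
    req_r, req_w = os.pipe()       # client -> server
    rep_r, rep_w = os.pipe()       # server -> client
    pid = os.fork()
    if pid == 0:                                      # server side
        os.close(req_w); os.close(rep_r)
        x, y = map(int, os.read(req_r, 64).split())   # server stub unpacks parameters
        os.write(rep_w, str(x + y).encode())          # server procedure's result goes back
        os._exit(0)
    # client side
    os.close(req_r); os.close(rep_w)
    os.write(req_w, f"{a} {b}".encode())              # client stub packs the parameters
    reply = os.read(rep_r, 64)                        # client blocks until the reply arrives
    os.waitpid(pid, 0)
    return int(reply)

result = remote_add(2, 3)          # looks like a local call, runs in another process
```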
(13) Write short note on remote procedure call. [2018]
Solution:
Remote Procedure Call (RPC): A remote procedure call is an inter-process communication
technique used for client-server applications. RPC mechanisms are used when a computer
program causes a procedure or subroutine to execute in a different address space, coded as if it
were a normal procedure call, without the programmer explicitly coding the details of the
remote interaction. The RPC mechanism also manages the low-level transport protocol used for
carrying the message data between the programs, such as the User Datagram Protocol (UDP) or
the Transmission Control Protocol/Internet Protocol (TCP/IP).
(14) Distinguish between “Light weight process” and “Heavy weight process”. [2018]
Solution:
Lightweight and heavyweight processes refer to the mechanics of a multiprocessing system.
In a lightweight process, threads are used to divide up the workload. Here you would see one
process executing in the OS for this application or service. This process contains one or more
threads, and each of the threads in the process shares the same address space. Because threads
share their address space, communication between them is simple and efficient. Each thread
could be compared to a process in a heavyweight scenario.
In a heavyweight process, new processes are created to perform the work in parallel. Here (for the
same application or service), you would see multiple processes running. Each heavyweight
process contains its own address space. Communication between these processes would involve
additional communications mechanisms such as sockets or pipes.
The benefits of a lightweight process come from the conservation of resources. Since threads use
the same code section, data section and OS resources, less overall resources are used. The
drawback is now you have to ensure your system is thread-safe. You have to make sure the
threads don't step on each other. Fortunately, Java provides the necessary tools to allow you to do
this.
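As a minimal sketch of the lightweight model (the thread count and loop bound are illustrative), several Python threads in one process update the same variable, which is only possible because they share one address space; the lock provides the thread-safety discussed above:

```python
# Several threads in ONE process update the same variable: they can,
# because they share a single address space. The lock is the
# thread-safety the text warns about - without it, updates could be lost.
import threading

counter = 0
lock = threading.Lock()

def work():
    global counter
    for _ in range(100_000):
        with lock:              # protect the shared variable
            counter += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 - every thread saw and updated the same memory
```

Separate heavyweight processes could not share `counter` this way; they would need an explicit IPC mechanism such as a pipe or socket.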
(15) Why do you think CPU scheduling is the basis of multi-programmed operating system?
(2017)
Answer: CPU scheduling is a process that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term scheduler
(or CPU scheduler). The scheduler selects from among the processes in memory that are ready to
execute, and allocates the CPU to one of them.
CPU scheduling is therefore the basis of multiprogramming: by keeping several processes in memory and switching the CPU among them whenever it would otherwise sit idle, the operating system keeps the CPU busy at all times.
(18) Describe the difference among short-time, medium time and long time scheduling.
(2017)
Answer: Comparison among schedulers:
1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies between the two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the job pool and loads them into memory for execution; the short-term scheduler selects from among the processes that are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
(19) What do you mean by dispatcher? (2021,2015,2013)
Answer: The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program from where it left off last time.
The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time taken by the dispatcher to stop one process and start another process is known as
the Dispatch Latency. Dispatch Latency can be explained using the below figure:
(22) Consider the following set of processes, with the length of the CPU-burst time given in milliseconds: (2016,2015,2009)
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a non-
preemptive priority and RR (quantum=1) scheduling.
(ii) What is the turnaround time of each process for each of the scheduling algorithms in part (i)?
(iii) What is the waiting time of each process for each of the scheduling algorithm in part (i) ?
Answer: (i) Gantt charts
a) FCFS (first come first served) scheduling:
P1 P2 P3 P4 P5
0 10 11 13 14 19
b) SJF (shortest job first):
P2 P4 P3 P5 P1
0 1 2 4 9 19
c) Non-preemptive priority:
P2 P5 P1 P3 P4
0 1 6 16 18 19
d) RR (quantum = 1):
P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 19
(ii) Turnaround time = Burst time + waiting time
a) FCFS (first come first served):
P1 10+0 = 10 ms
P2 1+10 = 11 ms
P3 2+11 = 13 ms
P4 1+13 = 14 ms
P5 5+14 = 19 ms
Average turnaround time = (10+11+13+14+19)/5 = 13.4 ms
b) SJF (shortest job first):
P1 10+9 =19 ms
P2 1 +0 = 1 ms
P3 2+2 = 4 ms
P4 1+ 1=2 ms
P5 5+ 4=9 ms
Average turnaround time = (19+1+4+2+9)/5=7 ms
c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time
P1 10+6 =16 ms
P2 1 +0 = 1 ms
P3 2+16 = 18 ms
P4 1+ 18=19 ms
P5 5+1 = 6 ms
Average turnaround time = (16+1+18+19+6)/5 = 12 ms
d) RR (quantum = 1)
P1 10+9 = 19 ms
P2 1 +1 = 2 ms
P3 2+5 = 7 ms
P4 1+ 3=4 ms
P5 5+9 = 14 ms
Average turnaround time = (19+2+7+4+14)/5 = 9.2 ms
(iii) Waiting time
c) Non-preemptive priority
Process waiting time = Service (start) time - Arrival time
P1 6 - 0 = 6 ms
P2 0 - 0 = 0 ms
P3 16 - 0 = 16 ms
P4 18 - 0 = 18 ms
P5 1 - 0 = 1 ms
Average Waiting Time: (6+0+16+18+1) / 5 = 8.2 ms
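The non-preemptive numbers above can be checked with a small sketch: once the execution order is fixed, each process's waiting time is simply the sum of the bursts that run before it. The `schedule` helper below is an illustrative name, not a standard API:

```python
# For non-preemptive scheduling, waiting time = time elapsed before a
# process first (and only) gets the CPU. Burst times are from question (22).
def schedule(order, burst):
    waiting, t = {}, 0
    for p in order:
        waiting[p] = t     # everything that ran earlier is this process's wait
        t += burst[p]
    return waiting

burst = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}

# Non-preemptive priority order (priority 1 = highest): P2, P5, P1, P3, P4
w = schedule(["P2", "P5", "P1", "P3", "P5"][:4] + ["P4"], burst) if False else \
    schedule(["P2", "P5", "P1", "P3", "P4"], burst)
print(w)                         # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))  # 8.2, matching the average above
```

The same helper reproduces the FCFS and SJF waiting times by passing the corresponding orders.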
(23) Consider the following set of processes, with the length of the CPU-burst time given in milliseconds: (2021,2017,2012,2010)
Process Burst time Priority
P1 8 3
P2 3 1
P3 2 3
P4 1 4
P5 4 2
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a non-preemptive priority, and RR (quantum = 1) scheduling.
ii. What is the turnaround time of each process for each of the scheduling algorithms in part (i)?
iii. What is the waiting time of each process for each of the scheduling algorithm in part (i) ?
Answer: (i) Gantt charts
a) FCFS (first come first served) scheduling:
P1 P2 P3 P4 P5
0 8 11 13 14 18
b) SJF (shortest job first):
P4 P3 P2 P5 P1
0 1 3 6 10 18
c) Non-preemptive priority:
P2 P5 P1 P3 P4
0 3 7 15 17 18
d) RR (quantum = 1):
P1 P2 P3 P4 P5 P1 P2 P3 P5 P1 P2 P5 P1 P5 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 18
(ii) Turnaround time
c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time
P1 8+7 =15 ms
P2 3+0 = 3 ms
P3 2+15 = 17 ms
P4 1+ 17=18 ms
P5 4+3 = 7 ms
Average turnaround time = (15+3+17+18+7)/5 = 12 ms
(iii) Waiting time
d) RR (quantum = 1)
P1 (5-1)+(9-6)+(12-10)+(14-13) = 4+3+2+1 = 10 ms
P2 (1-0)+(6-2)+(10-7) = 1+4+3 = 8 ms
P3 (2-0)+(7-3) = 2+4 = 6 ms
P4 3-0 = 3 ms
P5 (4-0)+(8-5)+(11-9)+(13-12) = 4+3+2+1 = 10 ms
Average waiting time = (10+8+6+3+10)/5 = 7.4 ms
(24) Consider the following set of processes with the length of the CPU burst given in
milliseconds: (2014)
Process Burst time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a
non-preemptive priority(a large number implies a higher priority) and RR (quantum=2)
scheduling.
(ii) What is the turnaround time of each process for each of the scheduling algorithms ?
(iii) What is the waiting time of each process for each of the scheduling algorithms?
(iv) Which of the algorithms results in the minimum average waiting time (over all processes)?
Answer: (i) Gantt charts
a) FCFS (first come first served) scheduling:
P1 P2 P3 P4 P5
0 2 3 11 15 20
b) SJF (shortest job first):
P2 P1 P4 P5 P3
0 1 3 7 12 20
c) Non-preemptive priority
P3 P5 P1 P4 P2
0 8 13 15 19 20
d) RR (quantum = 2):
P1 P2 P3 P4 P5 P3 P4 P5 P3 P5 P3
0 2 3 5 7 9 11 13 15 17 18 20
(ii) Turnaround time
Turnaround time = Burst time+ waiting time
a) FCFS(first come first served) scheduling:
Process Turnaround time = Burst time+ waiting time
P1 2+0 =2 ms
P2 1+2 = 3 ms
P3 8+3 = 11 ms
P4 4+ 11=15 ms
P5 5+ 15=20 ms
Average turnaround time = (2+3+11+15+20)/5 = 10.2 ms
b) SJF (shortest job first):
Process Turnaround time = Burst time + waiting time
P1 2+1 = 3 ms
P2 1+0 = 1 ms
P3 8+12 = 20 ms
P4 4+3 = 7 ms
P5 5+7 = 12 ms
Average turnaround time = (3+1+20+7+12)/5 = 8.6 ms
c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time
P1 2+13 =15 ms
P2 1+19 = 20 ms
P3 8+0= 8 ms
P4 4+ 15=19 ms
P5 5+ 8=13 ms
Average turnaround time = (15+20+8+19+13)/5=15 ms
d) RR (quantum = 2)
P1 P2 P3 P4 P5 P3 P4 P5 P3 P5 P3
0 2 3 5 7 9 11 13 15 17 18 20
Process Turnaround time = Burst time + waiting time
P1 2+0 = 2 ms
P2 1+2 = 3 ms
P3 8+12 = 20 ms
P4 4+9 = 13 ms
P5 5+13 = 18 ms
Average turnaround time = (2+3+20+13+18)/5 = 11.2 ms
Advantage (of multilevel feedback queue scheduling): a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging also prevents starvation.
13. What is the purpose of disk scheduling? [2013]
Solution:
Disk Scheduling: As we know, a process needs two types of time, CPU time and I/O time. For I/O, it requests the operating system to access the disk.
requests the Operating system to access the disk.
However, the operating system must be fair enough to satisfy each request and at the same time,
operating system must maintain the efficiency and speed of process execution.
The technique that operating system uses to determine the request which is to be satisfied next is
called disk scheduling.
Goals of a disk scheduling algorithm:
o Fairness
o High throughput
o Minimal travelling head time
CHAPTER 4
PROCESS SYNCHRONIZATION
(1) Explain dining philosopher problem. (2014)
Answer: The Dining Philosopher Problem – K philosophers are seated around a circular table with one chopstick placed between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both at once.
The problem was designed to illustrate the challenges of avoiding deadlock, a system state in
which no progress is possible. To see that a proper solution to this problem is not obvious,
consider a proposal in which each philosopher is instructed to behave as follows:
think until the left chopstick is available; when it is, pick it up;
think until the right chopstick is available; when it is, pick it up;
when both chopsticks are held, eat for a fixed amount of time;
then, put the right chopstick down;
then, put the left chopstick down;
Repeat from the beginning.
This attempted solution fails because it allows the system to reach a deadlock state, in which no
progress is possible. This is a state in which each philosopher has picked up the chopstick to the
left, and is waiting for the chopstick to the right to become available, or vice versa. With the given
instructions, this state can be reached, and when it is reached, the philosophers will eternally wait
for each other to release a chopstick
Mutual exclusion is the basic idea of the problem; the dining philosophers create a generic and
abstract scenario useful for explaining issues of this type. The failures these philosophers may
experience are analogous to the difficulties that arise in real computer programming when
multiple programs need exclusive access to shared resources.
(2) Describe Dinning—philosopher problem. How this can be solved by using
semaphore? (2021,2013)
Answer: Semaphore Solution to Dining Philosopher –
Each philosopher is represented by the following pseudocode:
process P[i]:
    while true do
    {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    }
There are three states of philosopher : THINKING, HUNGRY and EATING. Here there are two
semaphores : Mutex and a semaphore array for the philosophers. Mutex is used such that no two
philosophers may access the pickup or putdown at the same time. The array is used to control the
behavior of each philosopher. But, semaphores can result in deadlock due to programming errors.
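A runnable sketch of the idea in Python, with each chopstick modeled as a lock. The asymmetric pick-up order used here (the last philosopher reverses the order) is one standard way to break the circular wait; it is a variant chosen for brevity, not necessarily the exact semaphore-array scheme described above:

```python
# Dining philosophers with one lock per chopstick. Making the last
# philosopher pick up the chopsticks in the opposite order breaks the
# circular-wait condition, so the program cannot deadlock.
import threading

N = 5
chopstick = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    first, second = i, (i + 1) % N
    if i == N - 1:
        first, second = second, first   # asymmetry: no cycle can form
    for _ in range(10):
        with chopstick[first]:          # PICKUP both chopsticks...
            with chopstick[second]:
                meals[i] += 1           # ...then EAT

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)  # every philosopher ate 10 times and no deadlock occurred
```

If all philosophers instead grabbed the left chopstick first, the run could hang forever, which is exactly the deadlock the problem illustrates.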
(3) Discuss the critical section problem with its solution. (2014)
Or, Figure out the requirements to solve the critical-section problem. (2010)
Or, Write down the requirements that should be satisfied to solve the critical-section problem. (2008)
Answer: A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point in time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes.
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: if one process is executing in its critical section, no other process may execute in its critical section at the same time.
2. Progress: if no process is executing in its critical section and some processes wish to enter, only those processes not in their remainder section may take part in deciding which will enter next, and this decision cannot be postponed indefinitely.
3. Bounded waiting: there is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted.
CHAPTER 5
RESOURCE MANAGEMENT
[DEADLOCK]
5) Is it possible to have a deadlock involving only one single process? Explain your
answer. (2016)
Answer: A deadlock situation can only arise if the following four conditions hold simultaneously
in a system:
Mutual Exclusion
Hold and Wait
No Preemption
Circular-wait
Circular wait is impossible when there is only one single-threaded process: there is no second process with which to form a cycle, and a process cannot be waiting for a resource that it itself already holds.
So it is not possible to have a deadlock involving only one process.
Now consider the edges of a Resource Allocation Graph (RAG). There are two types of edges in a RAG:
1. Assign edge – if a resource is already assigned to a process, the edge is called an assign edge.
2. Request edge – if a process is requesting a resource it needs to complete its execution, the edge is called a request edge.
So, if a process is using a resource, an arrow is drawn from the resource node to the process node.
If a process is requesting a resource, an arrow is drawn from the process node to the resource
node.
Example 1 (Single instances RAG) –
If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only
one instance, then the processes will be in deadlock. For example, if process P1 holds resource R1,
process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for R1,
then process P1 and process P2 will be in deadlock.
Here is another example, which shows processes P1 and P2 acquiring resources R1 and R2 while process P3 waits to acquire both resources. In this example, there is no deadlock because there is no circular dependency.
So, in a RAG with only single-instance resource types, a cycle is a sufficient condition for deadlock.
Example 2 (Multi-instance RAG) –
With multi-instance resources it is not possible to tell from the graph alone whether the RAG is in a safe or an unsafe state. To determine the state of such a RAG, construct the allocation matrix and the request matrix.
The total number of processes are three; P1, P2 & P3 and the total number of resources are
two; R1 & R2.
Allocation matrix –
For constructing the allocation matrix, just go to the resources and see to which process it is
allocated.
R1 is allocated to P1, therefore write 1 in allocation matrix and similarly, R2 is allocated to P2
as well as P3 and for the remaining element just write 0.
Request matrix –
In order to find out the request matrix, you have to go to the process and see the outgoing
edges.
P1 is requesting resource R2, so write 1 in the matrix and similarly, P2 requesting R1 and for
the remaining element write 0.
So now available resource is = (0, 0).
Checking deadlock (safe or not) –
So, there is no deadlock in this RAG: even though there is a cycle, no deadlock results. Therefore, in a multi-instance RAG a cycle is not a sufficient condition for deadlock.
The next example is the same as the previous one, except that process P3 is also requesting resource R1.
The table then becomes as shown below.
Now the available resource vector is (0, 0), but the requirements are (0, 1), (1, 0) and (1, 0), so no request can be fulfilled. Therefore, the system is in deadlock.
In summary, not every cycle in a multi-instance RAG is a deadlock, but for a deadlock to exist there must be a cycle. So, in a RAG with multi-instance resource types, a cycle is a necessary condition for deadlock, but not a sufficient one.
9) How can you ensure that Hold and Wait and circular wait never occur in deadlock
system? (2017)
Answer: Hold and Wait
To prevent this condition processes must be prevented from holding one or more resources
while simultaneously waiting for one or more others. There are several possibilities for this:
Require that all processes request all resources at one time. This can be wasteful of system
resources if a process needs one resource early in its execution and doesn't need some other
resource until much later.
Require that processes holding resources must release them before requesting new resources,
and then re-acquire the released resources along with the new ones in a single new request.
This can be a problem if a process has partially completed an operation using a resource and
then fails to get it re-allocated after releasing it.
Either of the methods described above can lead to starvation if a process requires one or more popular resources.
Allocating all required resources to a process before the start of its execution eliminates the hold-and-wait condition, but it leads to low device utilization. For example, if a process needs a printer only at a later stage, yet the printer is allocated before execution starts, the printer remains blocked until the process has completed.
Circular wait
To ensure that circular wait never occurs, impose a total ordering on all resource types and require that every process requests resources only in increasing order of that enumeration. A process holding a resource numbered i may then request only resources with numbers greater than i, so no cycle of waiting processes can form.
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
(ii) Is the system in a safe state?
Here , we have
Work=Available = (3 3 2)
If [Need (n) <= Work ]
Then , Work=Work + Allocation
P0 (Need0 7 4 3)> (Work 3 3 2) Doesn’t work & try later
Finish = 0 0 0 0 0
The new state is also safe (one safe sequence is <P1, P3, P4, P0, P2>), so the request of P1 can be granted.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The
OS scheduler determines how to move processes between the ready and run queues which can
only have one entry per processor core on the system; in the above diagram, it has been merged
with the CPU.
18. Write an algorithm that determines whether or not the system is in a safe state. [2010]
Solution:
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state
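The four steps above translate almost line for line into Python. The Allocation/Need/Available data below is the classic five-process, three-resource textbook example used by the worked answers in this chapter (Available = 3 3 2); treat the concrete values as illustrative:

```python
# Safety algorithm: repeatedly find a process whose Need fits in Work,
# pretend it runs to completion, and release its Allocation back to Work.
def is_safe(available, allocation, need):
    work = list(available)
    n = len(allocation)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # Step 2: Finish[i] == false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: P_i can finish, so it releases its resources
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, sequence  # step 4 fails: some process never finishes
    return True, sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe([3, 3, 2], allocation, need)
print(safe, seq)   # True [1, 3, 4, 0, 2] -> safe sequence <P1, P3, P4, P0, P2>
```

The order in which ties are broken can change the sequence found, but any returned sequence proves the state safe.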
Math 01:
19) Consider the following snapshot of a system:
Process Allocation (A B C D) Max (A B C D)
P0 0 0 1 2 0 0 1 2
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
Available = 1 5 2 0
i) Determine the need matrix.
ii) Is the system in a safe state?
iii) If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately?
[2021, 2016, 2014, 2011]
Answer:
i) Need matrix (Need = Max - Allocation):
Need
A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
ii) Initialization:
Work = available= [ 1, 5, 2, 0]
Finish= 0 0 0 0
Search for safe state:
P0:( need= 0 0 0 0)
Finish = 1 0 0 0 0, work= 1 5 2 0 + 0 0 1 2
=1 5 3 2
P1: (need = 0 7 5 0) > (work = 1 5 3 2), does not work - try later; Finish = 1 0 0 0 0
P2: (need = 1 0 0 2) <= (work = 1 5 3 2); Finish = 1 0 1 0 0; work = 1 5 3 2 + 1 3 5 4 = 2 8 8 6
P1: (need = 0 7 5 0) <= (work = 2 8 8 6); Finish = 1 1 1 0 0; work = 2 8 8 6 + 1 0 0 0 = 3 8 8 6
P3: (need = 0 0 2 0) <= (work = 3 8 8 6); Finish = 1 1 1 1 0; work = 3 8 8 6 + 0 6 3 2 = 3 14 11 8
P4: (need = 0 6 4 2) <= (work = 3 14 11 8); Finish = 1 1 1 1 1; work = 3 14 11 8 + 0 0 1 4 = 3 14 12 12
All Finish values are true, so the system is in a safe state; one safe sequence is <P0, P2, P1, P3, P4>.
(iii) Yes. First, the request (0, 4, 2, 0) <= Need1 (0, 7, 5, 0), and (0, 4, 2, 0) <= Available (1, 5, 2, 0). Pretending the allocation is made gives Available = (1, 1, 0, 0), and re-running the safety algorithm on the new state still finds a safe sequence (for example <P0, P2, P1, P3, P4>), so the request can be granted immediately.
CHAPTER 6
MEMORY MANAGEMENT
(1) Define logical address, physical address and virtual address. (2017,2015)
Answer: A logical address is the address at which an item (memory cell, storage element, network host) appears to reside from the perspective of an executing application program. A logical address may be different from the physical address due to the operation of an address translator or mapping function.
A physical address is a binary number in the form of logical high and low states on an address bus that corresponds to a particular cell of primary storage (also called main memory), or to a particular register in a memory-mapped I/O (input/output) device.
A virtual address is a binary number in virtual memory that enables a process to use a location in
primary storage (main memory) independently of other processes and to use more space than
actually exists in primary storage by temporarily relegating some contents to a hard disk or
internal flash drive.
The CPU's memory management unit (MMU) stores a cache of recently used mappings from the operating system's page table. This cache is called the translation lookaside buffer (TLB), and it is an associative cache.
When a virtual address needs to be translated into a physical address, the TLB is searched first. If
a match is found (a TLB hit), the physical address is returned and memory access can continue.
However, if there is no match (called a TLB miss), the memory management unit, or the operating
system TLB miss handler, will typically look up the address mapping in the page table to see
whether a mapping exists (a page walk). If one exists, it is written back to the TLB (this must be
done, as the hardware accesses memory through the TLB in a virtual memory system), and the
faulting instruction is restarted (this may happen in parallel as well). This subsequent translation
will find a TLB hit, and the memory access will continue.
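A toy model of this lookup sequence may help; the page-table contents, TLB capacity and addresses below are made up for illustration:

```python
# TLB-then-page-table lookup: hit -> return frame immediately;
# miss -> walk the page table and write the mapping back into the TLB.
from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {0: 9, 1: 4, 7: 2}        # hypothetical page -> frame map

class TLB:
    def __init__(self, capacity=2):
        self.cap = capacity
        self.entries = OrderedDict()    # page -> frame, kept in LRU order

    def lookup(self, page):
        if page in self.entries:
            self.entries.move_to_end(page)   # TLB hit
            return self.entries[page]
        frame = page_table[page]             # TLB miss: "page walk"
        self.entries[page] = frame           # write mapping back to the TLB
        if len(self.entries) > self.cap:
            self.entries.popitem(last=False) # evict least-recently-used entry
        return frame

tlb = TLB()

def translate(addr):
    page, offset = divmod(addr, PAGE_SIZE)
    return tlb.lookup(page) * PAGE_SIZE + offset

print(translate(4106))   # page 1 -> frame 4 -> 4*4096 + 10 = 16394
```

A second access to the same page would now hit in the TLB and skip the page walk, which is the whole point of the cache.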
The total time taken by swapping process includes the time it takes to move the entire process to
a secondary disk and then to copy the process back to memory, as well as the time the process
takes to regain main memory.
(7) What is paging? Why are page sizes always power of 2? (2021,2014)
Answer: A computer can address more memory than the amount physically installed on the system. This extra memory is called virtual memory, and it is a section of the hard disk set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The size of a process is measured in the number of pages.
Similarly, main memory is divided into small fixed-size blocks of (physical) memory called frames, and the size of a frame is kept the same as that of a page to obtain optimum utilization of main memory and to avoid external fragmentation.
Page sizes are always a power of 2 because address translation then becomes trivial: if the page size is 2^n, the low n bits of a logical address form the page offset and the remaining high bits form the page number, so the hardware can split an address with a shift and a mask instead of a division.
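A quick sketch of that shift-and-mask split, assuming a hypothetical 4 KB (2^12-byte) page size:

```python
# Because the page size is 2**12, the offset is just the low 12 bits and
# the page number is everything above them - no division is required.
PAGE_SIZE = 4096
OFFSET_BITS = 12

addr = 0x1A2B3                     # an arbitrary logical address
page = addr >> OFFSET_BITS         # high bits: page number
offset = addr & (PAGE_SIZE - 1)    # low 12 bits: offset within the page

print(page, offset)                # 26 691
assert (page, offset) == divmod(addr, PAGE_SIZE)
```

With a non-power-of-2 page size the hardware would need an actual division and modulo on every memory access.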
(8) Define address binding and dynamic loading. (2016,2013,2010)
Answer:
Address binding is the process of mapping the program's logical or virtual addresses to
corresponding physical or main memory addresses. In other words, a given logical address is
mapped by the MMU (Memory Management Unit) to a physical address.
Dynamic loading is a mechanism by which a computer program can, at run time, load a library
(or other binary) into memory, retrieve the addresses of functions and variables contained in the
library, execute those functions or access those variables, and unload the library from memory.
(10) Explain the difference between logical and physical addresses. (2015)
Answer:
BASIS FOR COMPARISON – LOGICAL ADDRESS vs PHYSICAL ADDRESS
Basic – The logical address is the virtual address generated by the CPU; the physical address is a location in a memory unit.
Address space – The set of all logical addresses generated by the CPU in reference to a program is the logical address space; the set of all physical addresses mapped to those logical addresses is the physical address space.
Visibility – The user can view the logical address of a program; the user can never view the physical address of a program.
Access – The user uses the logical address to access the physical address; the user cannot directly access the physical address.
Generation – The logical address is generated by the CPU; the physical address is computed by the MMU.
(12) What are the differences between internal and external fragmentation?
(2021,2016,2015,2013,2012)
Answer: Internal Fragmentation occurs when a fixed size memory allocation technique is used.
External fragmentation occurs when a dynamic memory allocation technique is used.
Internal fragmentation occurs when a fixed size partition is assigned to a program/file with
less size than the partition making the rest of the space in that partition unusable. External
fragmentation is due to the lack of enough adjacent space after loading and unloading of
programs or files for some time because then all free space is distributed here and there.
External fragmentation can be minimized by compaction, where the assigned blocks are moved to one side so that contiguous free space is gained. However, this operation takes time, and certain critical assigned areas, for example system services, cannot be moved safely. We can observe this compaction step on hard disks when running the disk defragmenter in Windows.
External fragmentation can also be prevented by mechanisms such as segmentation and paging: a logically contiguous virtual memory space is presented while in reality the files/programs are split into parts and placed wherever free space exists.
Internal fragmentation can be minimized by having partitions of several sizes and assigning a program based on the best fit; however, even then internal fragmentation is not fully eliminated.
To implement a two-level page structure, the logical address is modified into two parts, one
for the Directory table (Outer page table) and other for the inner page table.
It is as follows: (for 32-bits)
p1 (10 bits) p2(10 bits) d(12 bits)
- - -
Here,
p1 →→ index to the outer page table
p2 →→ displacement within the page of the outer page table
d →→ page offset
This method is not considered appropriate for 64-bit architectures.
The disadvantage of this scheme is that it increases the number of memory accesses.
What are the physical address for the following logical addresses?
i. 0, 430
ii. 1, 10
iii. 2, 500
iv. 3, 400
v. 4, 112
vi. 1, 11
Answer:
(1) 0, 430
Here the segment number is 0
Offset d = 430
The length of segment 0 is 600
Since 430 < 600,
the physical address is
Base + d = 219 + 430 = 649, and memory word 649 is accessed.
(2) 1, 10
Here the segment number is 1
Offset d = 10
The length of segment 1 is 14
Since 10 < 14,
the physical address is
Base + d = 2300 + 10 = 2310, and memory word 2310 is accessed.
(3) 2, 500
Here the segment number is 2
Offset d = 500
The length of segment 2 is 100
Since 500 > 100,
the logical address is invalid; there is no physical address.
(4) 3, 400
Here the segment number is 3
Offset d = 400
The length of segment 3 is 580
Since 400 < 580,
the physical address is
Base + d = 1327 + 400 = 1727, and memory word 1727 is accessed.
(5) 4, 112
Here the segment number is 4
Offset d = 112
The length of segment 4 is 96
Since 112 > 96,
the logical address is invalid; there is no physical address.
(6) 1, 11
Here the segment number is 1
Offset d = 11
The length of segment 1 is 14
Since 11 < 14,
the physical address is
Base + d = 2300 + 11 = 2311, and memory word 2311 is accessed.
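The lookups above can be collected into one sketch. The (base, limit) pairs are the values quoted in the worked answers; the bases of segments 2 and 4 are never needed there (every access shown is out of range), so `None` is used as a placeholder for them:

```python
# Segment-table translation: check the offset against the segment limit,
# then add the offset to the segment base.
segment_table = {
    0: (219, 600),    # (base, limit)
    1: (2300, 14),
    2: (None, 100),   # base not given in the worked answers
    3: (1327, 580),
    4: (None, 96),    # base not given in the worked answers
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        return None           # invalid logical address: the hardware traps
    return base + offset      # physical address = base + offset

print(translate(0, 430))      # 649, as in item (1)
print(translate(2, 500))      # None: offset 500 exceeds limit 100
```

Each item in the answer above is one call to this function.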
CHAPTER 7
VIRTUAL MEMORY
(1) What is virtual memory? (2016,2012)
Answer: Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the corresponding
machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
(3) Explain the virtual machine structure of operating system with its advantages and
disadvantages. (2015)
Answer: A virtual machine is a software implementation of a physical machine (computer) that works and executes programs analogously to it. Virtual machines fall into two categories based on their use and correspondence to a real machine: system virtual machines and process virtual machines. The first category provides a complete system platform that executes a complete operating system; the second runs a single program.
The main advantages of virtual machines:
1. Multiple OS environments can exist simultaneously on the same machine, isolated from each other;
2. A virtual machine can offer an instruction set architecture that differs from that of the real computer;
3. Easy maintenance, application provisioning, availability and convenient recovery.
The main disadvantages:
1. When multiple virtual machines run simultaneously on a host computer, each virtual machine may show unstable performance, depending on the workload placed on the system by the other running virtual machines;
2. A virtual machine is not as efficient as a real machine when accessing the hardware.
(4) Explain the demand paging system. (2016,2012)
Answer: A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory. Instead, it begins executing the new program after loading its first page and fetches the program's other pages as they are referenced.
While executing a program, if the program references a page that is not available in main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back into memory.
Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on degree of multiprogramming.
Disadvantages
Number of tables and the amount of processor overhead for handling page interrupts are
greater than in the case of the simple paged management techniques.
(5) Define the term page fault. Write down the steps in handling page fault.(2008)
Or, when do page fault occur? Describe the actions taken by the operating system.
(2017, 2014, 2012, 2010)
Answer: A page fault (sometimes called #PF, PF or hard fault) is a type of exception raised by computer hardware when a running program accesses a memory page that is not currently mapped by the memory management unit (MMU) into the virtual address space of a process. Logically, the page may be accessible to the process, but it requires a mapping to be added to the process page tables, and it may additionally require the actual page contents to be loaded from a backing store such as a disk.
Steps for handling page fault
The basic idea behind demand paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need right away.
Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit. (The rest of the page table entry may either be blank or contain information about where to find the swapped-out page on the hard drive.)
If the process only ever accesses pages that are loaded in memory (memory-resident pages), then the process runs exactly as if all the pages were loaded into memory.
On the other hand, if a page is needed that was not originally loaded up, then a page-fault trap is generated, which must be handled in a series of steps:
1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning (as soon as this process gets another turn on the CPU).
In an extreme case, NO pages are swapped in for a process until they are requested by page faults. This is known as pure demand paging.
In theory each instruction could generate multiple page faults. In practice this is very rare, due to locality of reference.
The hardware necessary to support virtual memory is the same as for paging and swapping: a page table and secondary memory (swap space).
A crucial part of the process is that the instruction must be restarted from scratch once the desired page has been made available in memory. For most simple instructions this is not a major difficulty. However, there are some architectures that allow a single instruction to modify a fairly large block of data (which may span a page boundary), and if some of the data gets modified before the page fault occurs, this could cause problems. One solution is to access both ends of the block before executing the instruction, guaranteeing that the necessary pages get paged in before the instruction begins.
(6) What is paging? Draw the block diagram of paging table hardware scheme for memory
management. (2017)
Answer: Paging
A computer can address more memory than the amount physically installed on the system. This
extra memory is called virtual memory, and it is a section of a hard disk that is set up to
emulate the computer's RAM. The paging technique plays an important role in implementing virtual
memory.
Paging is a memory management technique in which the process address space is broken into blocks
of the same size called pages (the size is a power of 2, between 512 bytes and 8192 bytes). The
size of a process is measured in its number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum
utilization of the main memory and to avoid external fragmentation.
Address Translation
A page address is called a logical address and is represented by a page number and an offset:
Logical Address = (Page number, page offset)
A frame address is called a physical address and is represented by a frame number and an offset:
Physical Address = (Frame number, page offset)
A data structure called the page map table is used to keep track of the relation between a page of
a process and a frame in physical memory.
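Because the page size is a power of 2, the split into page number and offset is just a division and a remainder (in hardware, a bit-field extraction). A minimal sketch, assuming a 4 KB page size and a plain dictionary standing in for the page map table:

```python
PAGE_SIZE = 4096   # assumed page size (a power of 2)

def split_logical(addr):
    # high-order bits give the page number, low-order bits the offset
    return addr // PAGE_SIZE, addr % PAGE_SIZE

def to_physical(addr, page_map):
    page, offset = split_logical(addr)
    frame = page_map[page]                 # page map table lookup
    return frame * PAGE_SIZE + offset      # frame number combined with the offset
```

For example, logical address 8209 splits into page 2, offset 17; if page 2 lives in frame 7, the physical address is 7 × 4096 + 17.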
When the system allocates a frame to any page, it translates this logical address into a physical
address and creates an entry in the page table to be used throughout the execution of the program.
When a process is to be executed, its corresponding pages are loaded into any available memory
frames. Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a
given point in time; then the paging concept comes into the picture. When a computer runs out of
RAM, the operating system (OS) moves idle or unwanted pages of memory to secondary
memory to free up RAM for other processes, and brings them back when needed by the program.
This process continues during the whole execution of the program: the OS keeps removing
idle pages from main memory, writing them onto secondary memory, and bringing them
back when required by the program.
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging:
Paging reduces external fragmentation, but still suffers from internal fragmentation.
Paging is simple to implement and is regarded as an efficient memory management technique.
Due to the equal size of pages and frames, swapping becomes very easy.
The page table requires extra memory space, so paging may not be good for a system with a small RAM.
(7) What is thrashing? Discuss about the FIFO page replacement algorithm, with its
advantages and disadvantages. (2010)
Answer: Thrashing
A process that is spending more time paging than executing is said to be thrashing. In other
words, the process doesn't have enough frames to hold all the pages it needs for execution, so it
swaps pages in and out very frequently just to keep executing. Sometimes, pages which will
be required in the near future have to be swapped out.
To prevent thrashing we must provide processes with as many frames as they really need "right
now".
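The question also asks about FIFO page replacement: the victim is always the page that has been in memory the longest. Its advantage is simplicity; its disadvantages are that it may evict heavily used pages and that it can exhibit Belady's anomaly (more frames sometimes cause more faults). A minimal sketch counting faults:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for p in refs:
        if p in frames:
            continue                        # hit: nothing to do
        faults += 1                         # page fault
        if len(frames) == nframes:
            frames.remove(order.popleft())  # evict the oldest-loaded page
        frames.add(p)
        order.append(p)
    return faults
```

On the classic reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames, FIFO gives 15 faults (compare 9 for optimal and 12 for LRU in the tables later in this book), and the string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 shows Belady's anomaly: 9 faults with 3 frames but 10 with 4.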
(8) Discuss the hardware support for memory protection with base and limit registers.
Give suitable diagram. (2014)
Answer: Basic Hardware
It should be noted that from the memory chip's point of view, all memory accesses are
equivalent. The memory hardware doesn't know what a particular part of memory is being
used for, nor does it care. This is almost true of the OS as well, although not entirely.
The CPU can only access its registers and main memory. It cannot, for example, make direct
access to the hard drive, so any data stored there must first be transferred into the main
memory chips before the CPU can work with it. (Device drivers communicate with their
hardware via interrupts and "memory" accesses, sending short instructions, for example, to
transfer data from the hard drive to a specified location in main memory. The disk controller
monitors the bus for such instructions, transfers the data, and then notifies the CPU that the
data is there with another interrupt, but the CPU never gets direct access to the disk.)
Memory accesses to registers are very fast, generally one clock tick, and a CPU may be able to
execute more than one machine instruction per clock tick.
Memory accesses to main memory are comparatively slow, and may take a number of clock
ticks to complete. This would require intolerable waiting by the CPU if it were not for an
intermediary fast memory cache built into most modern CPUs. The basic idea of the cache is
Operating system | 343
to transfer chunks of memory at a time from the main memory to the cache, and then to access
individual memory locations one at a time from the cache.
User processes must be restricted so that they only access memory locations that "belong" to
that particular process. This is usually implemented using a base register and a limit register
for each process, as shown in Figures 8.1 and 8.2 below. Every memory access made by a user
process is checked against these two registers, and if a memory access is attempted outside
the valid range, then a fatal error is generated. The OS obviously has access to all existing
memory locations, as this is necessary to swap users' code and data in and out of memory. It
should also be obvious that changing the contents of the base and limit registers is a privileged
activity, allowed only to the OS kernel.
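The hardware check described above can be sketched in a few lines. The register values used in the test below are illustrative only; a legal address must lie in the half-open range [base, base + limit).

```python
def legal_access(addr, base, limit):
    # The hardware compares every user-mode address with the two registers;
    # any address outside [base, base + limit) causes a trap (fatal error).
    return base <= addr < base + limit
```

Note that addr == base is legal while addr == base + limit is the first illegal address, which is why the comparison on the upper side is strict.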
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk.
Advantages:
Both sequential and direct access are supported. For direct access, the address of the kth block of
a file which starts at block b can easily be obtained as (b + k).
This is extremely fast since the number of seeks is minimal, because of the contiguous allocation
of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it inefficient in
terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous memory at a
particular instance.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following image shows how the blocks are randomly distributed. The last
block (25) contains -1, indicating a null pointer, and does not point to any other block.
Advantages:
This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We cannot directly access the blocks of a file:
block k of a file can be accessed only by traversing k blocks sequentially (sequential access)
from the starting block of the file via block pointers.
Pointers required in the linked allocation incur some extra overhead.
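The sequential-only access can be made concrete with a small sketch, using the block chain of the 'jeep' example above (9 → 16 → 1 → 10 → 25, with -1 marking the end). The dictionary standing in for the on-disk pointers is illustrative:

```python
def read_kth_block(next_block, start, k):
    # Reaching block k of a linked file means following k pointers from the
    # starting block -- one disk access per hop, hence no direct access.
    block = start
    for _ in range(k):
        block = next_block[block]
    return block
```

So reading "block 3" of jeep costs three pointer chases before the data block is even located, which is why linked allocation is slow for direct access.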
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains the
disk address of the ith file block. The directory entry contains the address of the index block as
shown in the image:
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than linked allocation.
For very small files, say files that span only 2-3 blocks, indexed allocation would keep
one entire block (the index block) for the pointers, which is inefficient in terms of memory
utilization. In linked allocation, by contrast, we lose the space of only one pointer per block.
For very large files, a single index block may not be able to hold all the pointers. The
following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for holding the pointers.
Every index block would then contain a pointer or the address to the next index block.
2. Multilevel index: In this policy, a first-level index block is used to point to second-level
index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to
three or more levels depending on the maximum file size.
3. Combined scheme: In this scheme, a special block called the inode (index node) contains all
the information about the file, such as the name, size, and authority, and the remaining space of
the inode is used to store the disk block addresses that hold the actual file, as shown in the
image below. The first few of these pointers in the inode point to the direct blocks, i.e. the
pointers contain the addresses of the disk blocks that contain the data of the file. The next few
pointers point to indirect blocks. Indirect blocks may be single indirect, double indirect or triple
indirect. A single indirect block is a disk block that does not contain the file data but the disk
addresses of the blocks that contain the file data. Similarly, double indirect blocks do not contain
the file data but the disk addresses of blocks that contain the addresses of the blocks containing
the file data.
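The reach of the combined scheme is easy to compute. Assuming (illustratively) a 4 KB block, 4-byte disk addresses, and the UNIX-style layout of 12 direct pointers plus one single, one double and one triple indirect pointer, each indirect block holds 1024 addresses:

```python
BLOCK_SIZE = 4096                          # assumed block size in bytes
PTR_SIZE = 4                               # assumed size of one disk address
PTRS_PER_BLOCK = BLOCK_SIZE // PTR_SIZE    # 1024 addresses per indirect block

def max_file_blocks(n_direct=12):
    direct = n_direct                      # data blocks reachable from the inode
    single = PTRS_PER_BLOCK                # via the single indirect block
    double = PTRS_PER_BLOCK ** 2           # via the double indirect block
    triple = PTRS_PER_BLOCK ** 3           # via the triple indirect block
    return direct + single + double + triple
```

With these assumed sizes the maximum file spans 12 + 1024 + 1024² + 1024³ blocks, i.e. slightly over 4 TB, which is why small files are served entirely by the cheap direct pointers while huge files remain addressable.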
Size of logical address space = number of pages × page size = 256 × 4096 = 2^8 × 2^12 = 2^20
So the number of required bits in the logical address = 20 bits.
1. The calling environment is suspended, procedure parameters are transferred across the
network to the environment where the procedure is to execute, and the procedure is executed
there.
2. When the procedure finishes and produces its results, the results are transferred back to the
calling environment, where execution resumes as if returning from a regular procedure call.
(ii) Optimal Replacement
Reference string
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
7 7 7 2 2 2 2 2 7
0 0 0 0 4 0 0 0
1 1 3 3 3 1 1
Page frames.
Number of page faults: 9.
(iii) LRU Replacement
Reference string
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
7 7 7 2 2 4 4 4 0 1 1 1
0 0 0 0 0 0 3 3 3 0 0
1 1 3 3 2 2 2 2 2 7
Page frames.
Number of page faults: 12.
CHAPTER 8
FILE CONCEPT
(1) Define file. (2013,2015,2010)
Answer: A file is a named collection of related information that is recorded on secondary storage
such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits,
bytes, lines or records whose meaning is defined by the file's creator and user.
CHAPTER 9
FILE SYSTEM IMPLEMENTATION
(1) What are the different types of file allocation methods? Briefly explain
(2017,2016,2013,2012,2008)
Answer: The file system architecture specifies how files will be stored in the computer system:
how the user's data will be stored in files, and how we will access that data from a file. There
are several storage (space) allocation techniques, which specify the criteria by which files store
their data:
1) Contiguous Space Allocation: Contiguous space allocation stores all the data of a file in
sequence, in a single run of adjacent disk blocks. Because all the data of the file is stored in
contiguous memory, this gives the fastest access to the data, and it suits sequential access.
Once the system finds the first (base) address from the set of addresses of a file, it can easily
read all the data that follows. But to store data contiguously the CPU wastes time, because the
data is often larger than any existing free hole, and this creates difficulty in finding enough
free space on the disk.
2) Linked Allocation: This technique is widely used for storing file contents. The space given to
a file need not be contiguous, and the data of the file is stored in different blocks scattered
across the disk.
This makes access harder for the processor, because the operating system must traverse the
different locations, jumping from block to block to read the contents of the file: the first
location is accessed, and then the system follows a pointer to search for the next. All the
locations are linked with each other, so they can be traversed automatically.
3) Indexed Allocation: This can be seen as an improvement on linked allocation. It is similar,
but it also maintains all the disk addresses of a file in the form of an index. Just as a book
carries an index at its front, all the disk addresses are maintained and stored, and when a user
requests to read the contents of a file, the system finds each block's address through the index
numbers.
For this, the system maintains an index table containing an entry for each piece of data and the
address where it is stored, which makes access fast and easy for users.
(2) Write short notes on Resource Allocation Graph; (2021,2016)
Answer: The Banker's algorithm uses tables such as allocation, request and available to capture
the state of the system. The same information can be represented, instead of tables, as a graph;
tables are easy to build and understand, but the graph form is often more direct. That graph is
called a Resource Allocation Graph (RAG).
So, a resource allocation graph describes the state of the system in terms of processes
and resources: how many resources are available, how many are allocated, and what each process
requests. Everything can be represented in a diagram. One advantage of having a diagram is that
sometimes it is possible to see a deadlock directly from the RAG, where you might not notice it
by looking at the tables. The tables are better if the system contains many processes and
resources, and the graph is better if the system contains few.
We know that any graph contains vertices and edges, so a RAG also contains vertices and edges. In
a RAG the vertices are of two types:
1. Process vertex - Every process is represented as a process vertex, generally drawn as a
circle.
2. Resource vertex - Every resource is represented as a resource vertex, drawn as a box. It is
also of two types:
Single-instance resource type - represented as a box with one dot inside; the number of dots
indicates how many instances of that resource type are present.
Multiple-instance resource type - represented as a box with several dots inside.
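Spotting a deadlock in a RAG amounts to looking for a cycle: if every resource has a single instance, a cycle implies deadlock (with multiple instances, a cycle is necessary but not sufficient). A minimal sketch, representing the graph as an adjacency dictionary over process and resource vertices (the vertex names are illustrative):

```python
def has_cycle(graph):
    # Depth-first search with three colors: WHITE = unvisited,
    # GRAY = on the current path, BLACK = fully explored.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            c = color.get(w, WHITE)
            if c == GRAY:                 # back edge: a cycle exists
                return True
            if c == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color.get(v, WHITE) == WHITE and visit(v) for v in graph)
```

Here an edge P → R is a request edge and R → P an assignment edge; the chain P1 → R1 → P2 → R2 → P1 is exactly the circular-wait picture of deadlock.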
(3) Write a short note on the Virtual File System (VFS).
Answer: The VFS is the glue that enables system calls such as open(), read(), and write() to work
regardless of the file system or underlying physical medium.
The figure shows the flow from user-space’s write() call through the data arriving on the physical
media. On one side of the system call is the generic VFS interface, providing the frontend to user-
space; on the other side of the system call is the file system-specific backend, dealing with the
implementation details.
In this manner, no part of the kernel needs to understand the underlying details of the file
systems, except the file systems themselves. For example, consider a simple user-space program
that does:
ret = write(fd, buf, len);
This system call writes the len bytes pointed to by buf into the current position in the file
represented by the file descriptor fd.
1. This system call is first handled by a generic sys_write() system call that determines the
actual file writing method for the file system on which fd resides.
2. The generic write system call then invokes this method, which is part of the file system
implementation, to write the data to the media (or whatever this file system does on write).
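The two-step dispatch described above can be sketched with a generic front end that looks up the file-system-specific write method for the file behind each descriptor. The class and descriptor values are hypothetical, standing in for real backends such as ext2 or FAT:

```python
class Ext2Like:
    # stand-in for one file system's write implementation
    def write(self, buf):
        return f"ext2-style write of {len(buf)} bytes"

class FatLike:
    # stand-in for a different file system's write implementation
    def write(self, buf):
        return f"FAT-style write of {len(buf)} bytes"

open_files = {3: Ext2Like(), 4: FatLike()}   # fd -> file-system backend

def sys_write(fd, buf):
    # generic front end: the same call reaches whichever backend owns fd
    return open_files[fd].write(buf)
```

User space issues the same sys_write() either way; only the table lookup decides which implementation runs, which is the essence of the VFS.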
(4) Write down the advantages and disadvantages of the Contiguous, Linked and Indexed
Allocation methods. (2021,2015)
Answer:
Contiguous Allocation
Advantages:
Both sequential and direct access are supported. For direct access, the address of the kth block of
a file which starts at block b can easily be obtained as (b + k).
This is extremely fast since the number of seeks is minimal, because of the contiguous allocation
of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it inefficient in
terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous memory at a
particular instance.
(5) Why must the bit map for file allocation be kept on mass storage rather than in main
memory? (2008)
Answer: In case of a system crash (memory failure), the free-space list would not be lost, as it
would be if the bit map had been stored in main memory.
(6) What problems could occur if an operating system allowed a file system to be mounted
simultaneously at more than one location? (2008)
Answer: There would be multiple paths to the same file, which could confuse users or
encourage mistakes (deleting a file with one path deletes the file in all the other paths).
(7) What are the purposes of disk scheduling? (2013,2008)
Answer: Disk scheduling is done by operating systems to schedule I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
Multiple I/O requests may arrive from different processes, and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in a waiting queue
and need to be scheduled.
Two or more requests may be far from each other, which can result in greater disk arm movement.
Hard drives are one of the slowest parts of a computer system and thus need to be accessed in
an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a quick look
at some of the important terms:
Seek Time: Seek time is the time taken to move the disk arm to the track where the data is to
be read or written. The disk scheduling algorithm that gives the minimum average seek time is
better.
Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to
rotate into a position where the read/write heads can access it. The disk scheduling algorithm
that gives the minimum rotational latency is better.
Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed
of the disk and the number of bytes to be transferred.
Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time
Disk Response Time: Response time is the time a request spends waiting to perform its I/O
operation. Average response time is the mean response time over all requests, and variance of
response time is a measure of how individual requests are serviced with respect to the average
response time. The disk scheduling algorithm that gives the minimum variance of response time
is better.
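The access-time formula above can be turned into a small calculator. The numbers in the test are illustrative; note that the average rotational latency is half of one full rotation, and a disk spinning at `rpm` revolutions per minute completes one rotation in 60000 / rpm milliseconds:

```python
def disk_access_time_ms(seek_ms, rpm, nbytes, transfer_rate_bytes_per_ms):
    # Disk Access Time = Seek Time + Rotational Latency + Transfer Time
    rotational_ms = 0.5 * (60_000.0 / rpm)              # average: half a rotation
    transfer_ms = nbytes / transfer_rate_bytes_per_ms   # time to move the data
    return seek_ms + rotational_ms + transfer_ms
```

For a hypothetical 7200 rpm drive, the rotational term alone is about 4.17 ms, which is why rotational latency and seek time, not transfer time, dominate small random reads.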
10. Differentiate between the sequential and direct file access methods. [2017]
Solution:
Sequential Access - This is the simplest access method. Information in the file is processed in
order, one record after the other. This mode of access is by far the most common; for example,
editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation (read next) reads
the next portion of the file and automatically advances a file pointer, which keeps track of the
I/O location. Similarly, a write operation (write next) appends to the end of the file and
advances the pointer to the end of the newly written material.
Key points:
Data is accessed one record right after another, in order.
A read command moves the pointer ahead by one record.
A write command allocates space and moves the pointer to the end of the file.
Such a method is reasonable for tape.
Direct Access - Another method is the direct access method, also known as the relative access
method. A file is made up of fixed-length logical records that allow programs to read and write
records rapidly in no particular order. Direct access is based on the disk model of a file, since
a disk allows random access to any file block. For direct access, the file is viewed as a
numbered sequence of blocks or records. Thus, we may read block 14, then block 59, and then
write block 17. There is no restriction on the order of reading and writing for a direct access
file.
A block number provided by the user to the operating system is normally a relative block number,
the first relative block of the file is 0 and then 1 and so on.
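The contrast between the two methods can be sketched with an in-memory "file" of fixed-length blocks (block size and contents are illustrative): sequential access reads at the pointer and auto-advances it, while direct access seeks straight to relative block k.

```python
import io

BLOCK = 4
# a "file" of 100 fixed-length blocks; block i holds byte value i repeated
f = io.BytesIO(b"".join(bytes([i]) * BLOCK for i in range(100)))

def read_next(f):
    # sequential access: read at the current pointer and advance it
    return f.read(BLOCK)

def read_block(f, k):
    # direct access: jump straight to relative block k (block 0 is first)
    f.seek(k * BLOCK)
    return f.read(BLOCK)
```

This mirrors the example in the answer: read_block(f, 14) then read_block(f, 59) works in any order, whereas read_next(f) only ever yields the next record.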
11. What is process control block? [2009]
Solution:
Process Control Block: All of the information needed to keep track of a process when
switching is kept in a data package called a process control block. The process control
block typically contains:
An ID number that identifies the process
Pointers to the locations in the program and its data where processing last occurred
Register contents
States of various flags and switches
Pointers to the upper and lower bounds of the memory required for the process
A list of files opened by the process
The priority of the process
Each process has a status associated with it. Many processes consume no CPU time until they get
some sort of input. For example, a process might be waiting for a keystroke from the user. While it
is waiting for the keystroke, it uses no CPU time. While it's waiting, it is "suspended". When the
keystroke arrives, the OS changes its status. When the status of the process changes, from pending
to active, for example, or from suspended to running, the information in the process control block
must be used like the data in any other program to direct execution of the task-switching portion
of the operating system.
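The fields listed above map naturally onto a record type. A minimal sketch of such a structure (field names and the state strings are illustrative, not from any particular kernel):

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int                        # ID number that identifies the process
    program_counter: int = 0        # where processing last occurred
    registers: dict = field(default_factory=dict)   # register contents
    flags: dict = field(default_factory=dict)       # flag and switch states
    mem_lower: int = 0              # lower bound of the process's memory
    mem_upper: int = 0              # upper bound of the process's memory
    open_files: list = field(default_factory=list)  # files opened by the process
    priority: int = 0               # scheduling priority
    state: str = "suspended"        # e.g. suspended, pending, running
```

When the OS switches tasks, it saves the running process's registers and program counter into its PCB, then restores those fields from the PCB of the next process, exactly the use described above.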
CHAPTER 10
DISK I/O MANAGEMENT
1. Define Caching.
A cache is a region of fast memory that holds copies of data. Access to the cached copy is
more efficient than access to the original. Caching and buffering are distinct functions, but
sometimes a region of memory can be used for both purposes.
2. Define Spooling.
A spool is a buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. When an application finishes printing, the spooling system queues the
corresponding spool file for output to the printer. The spooling system copies the queued spool
files to the printer one at a time.
SCAN
In this algorithm, the disk arm moves in one direction, serving all the requests in its path
until it reaches the end; then it reverses direction and moves until the last request in that
direction, serving all of those as well.
E.g. suppose the requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
read-write head is 60, and it is given that the disk arm should move towards the larger values.
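The service order for this example can be sketched as follows. Note one simplification: reversing at the last request rather than at the physical end of the disk is, strictly speaking, the LOOK variant of SCAN, but it matches worked examples of this kind.

```python
def scan_order(requests, head):
    # arm sweeps toward the larger values first (as given), then reverses
    toward_larger = sorted(r for r in requests if r >= head)
    after_reversal = sorted((r for r in requests if r < head), reverse=True)
    return toward_larger + after_reversal

def total_head_movement(requests, head):
    movement, pos = 0, head
    for r in scan_order(requests, head):
        movement += abs(r - pos)   # cylinders crossed to reach this request
        pos = r
    return movement
```

For the requests above with the head at 60, the service order is 70, 125, 140, 160, then 50, 30, 25, and the total head movement is (160 - 60) + (160 - 25) = 235 cylinders.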