Operating System

The document outlines topics related to operating systems including processes and threads, CPU scheduling, concurrency and synchronization, resource management, file management, memory management, and disk I/O management. It provides definitions and descriptions of operating system concepts and lists reference books for further reading.

Uploaded by MD Anis Mia

Operating system | 243

OPERATING SYSTEM
1. Introduction: Operating system overview, computer system structure, structure and
components of an operating system.
2. System calls: class of system calls and description.
3. Process and threads: process and thread model, process and thread creation and
termination, user and kernel level thread, scheduling, scheduling algorithms,
dispatcher, context switch, real time scheduling.
4. Concurrency and synchronization: IPC and inter-thread communication, critical
region, critical section problems and solutions.
5. Resource management: introduction to deadlock, ostrich algorithm, deadlock
detection and recovery, deadlock avoidance, deadlock prevention, starvation.
6. File management: file naming and structure, file access and attributes, system calls.
File organization: OS and user perspective views of a file, memory-mapped files, file
directory organization.
7. File System Implementation: implementing file, allocation strategy, method of
allocation, directory implementation, UNIX i-node, block management, quota.
8. Memory management: basic memory management, fixed and dynamic partition,
virtual memory, segmentation, paging and swapping, MMU.
9. Virtual memory management: paging, page table structure, page replacement, TLB,
exception vector, demand paging and segmentation, thrashing and performance.
10. Disk I/O management: structure, performance, low-level disk formatting, Disk arm
scheduling algorithm, error handling, stable storage.
Reference Books:
1. Silberschatz, Galvin, Peterson, Operating System Concepts, Sixth Edition.
2. A. S. Tanenbaum, Operating Systems, Prentice Hall.
3. P. Brinch Hansen, Operating System Principles, Prentice Hall.
4. S. Madnick and J. Donovan, Operating Systems, McGraw-Hill.

CHAPTER 1 PAGE NO: 253


INTRODUCTION
1. What is an operating system? [2008,2009,2012,2013,2014,2017,2018,2020,2021]
2. What are the goals of operating system? [2013]
Or, Write down the important goals of an operating system. [2012, 2018]
3. Describe the major functions of operating system. [2013]
Or, Mention the major functions of operating system in regard to process management.
[2015, 2017, 2018]
4. Explain the service provided by an operating system. [2009,2021]
5. Why is the operating system called the government of a computer system? [2008]
6. Figure out the abstract views of a computer system and describe the importance of operating
system. [2015]
Or, Write about the main components of an operating system. [2017, 2020]
Or, what are basic components of an operating system? [2014]
7. The operating system can be viewed as a government and a resource allocator- Explain.
[2014]
8. What are multiprocessor systems?
9. What are the advantages and disadvantages of multiprocessor systems?
10. Explain the symmetric and asymmetric multiprocessing. [2017]
11. Distinguish between symmetric and asymmetric multiprocessing system. [2009, 2012, 2020]
12. What are distributed systems?
13. Differentiate between parallel and distributed systems. [2008]
14. Why are distributed systems desirable? [2008]
Or, Discuss the desirable properties of distributed system. [2014]
15. What are the advantages of distributed system? Explain. [2012]
16. Write down the advantages of multiprogramming system. [2018]
17. Define spooling. [2012,2015]
18. Discuss about the use of spooling. [2012]
19. What is the main advantage of multiprogramming? Under what circumstance would a user be
better off using time-sharing system, rather than a personal computer or single workstation?
[2014]
20. Define clustered systems [2018]
21. What do you mean by asymmetric and symmetric clustering? Which one is more efficient and
why? [2009]
22. Define real-time systems.
23. Differentiate between time sharing and real time system. [2017]
24. Discuss about hard and soft real-time systems. [2009,2013,2021]
25. Define handheld systems.
26. What are computing environments?
27. What is the main difficulty that a programmer must overcome in writing an operating system?
[2008]
28. What is the purpose of command-interpreter? [2013]
29. Write down the important features of command line interface and graphical user interface.
[2013]
30. Difference between command line interface and graphical user interface. [2020,2021]
31. Define UNIX. [2018]
32. What are the different directory structure generally used? [2014]

CHAPTER 2 PAGE NO: 271


OPERATING SYSTEM CALLS
1. Mention some common operating system component.
2. What do you know about command line interpreter?
3. Describe three methods of passing parameters between a user program and the operating system.
4. What are the three major activities of an operating system in regard to memory management?
[2014]
5. Define system call. Mention major categories of system calls with examples. [2020]
6. Write short notes on Microkernel based OS structure; [2021,2016]
7. Define batch operating system. [2018]
8. Define Resource allocation graph. [2016]
9. What is co-operating process? [2012]
10. Discuss the basic organization of file system. [2013]
11. Describe overlay technique with example. [2012]

CHAPTER 3 PAGE NO: 278


PROCESS AND THREADS
1. What is process? [2015,2013,2012,2008]
2. What are the different states of a process? [2021,2013,2012,2008]
Or, Describe the operation of different process states with diagram. [2015, 2020]
3. What does ‘PCB’ stand for? [2015]
4. Mention the types of process- specific information associated with PCB. [2021,2015]
Or, what kinds of information are contained in a PCB? [2008]
Or, briefly explain about the contents of the Process Control Block [PCB].
[2013, 2012]
5. What do you understand about ‘Context Switch’ [2021,2015,2013,2008,2018]
6. What types of possibilities exist in terms of execution and in terms of the address space when a
new process is created? [2008]
7. What do you mean by co-operating process? [2014,2012,2010]
8. Explain the following terms : [2021]
i. Process: [2014]
ii. Thread. [2014]
iii. Producer-Consumer Problem; [2016]
9. Explain the different Types of Thread. [2021]
10. Describe the Difference between Process and Thread
11. Difference between User-Level & Kernel-Level Thread.
12. Discuss about client server communication via Remote Procedure Calls (RPC). [2020]
13. Write short note on remote procedure call. [2021,2018]
14. Distinguish between “Light weight process” and “Heavy weight process”. [2018]
15. Why do you think CPU scheduling is the basis of multiprogrammed operating systems?
[2017]
16. Describe the different CPU scheduling criteria. [2020,2016]
Or, Write down the main criteria of scheduling algorithm. [2015]
17. Distinguish between preemptive and non-preemptive CPU scheduling.
[2021,2020, 2016,2015]
18. Describe the differences among short-term, medium-term and long-term scheduling.
[2017]
19. What do you mean by dispatcher? [2015,2013]
20. Discuss about multilevel queue scheduling. [2021,2013,2018]
21. Comparison of Scheduling Algorithms
22. Consider the following set of processes, with the length of the CPU burst time given in
milliseconds: [2016, 2015, 2009]
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a non-
preemptive priority and RR [quantum=1] scheduling.
ii. What is the turnaround time of each process for each of the scheduling algorithms in part [i]?
iii. What is the waiting time of each process for each of the scheduling algorithm in part [i]?
23. Consider the following set of processes, with the length of the CPU burst time given in
milliseconds : [2017,2012,2010]
Process Burst time Priority
P1 8 3
P2 3 1
P3 2 3
P4 1 4
P5 4 2
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a no
preemptive priority and RR [quantum=1] scheduling.
ii. What is the turnaround time of each process for each of the scheduling algorithms in part [i]?
iii. What is the waiting time of each process for each of the scheduling algorithm in part [i] ?
24. Consider the following set of processes with the length of the CPU burst given in milliseconds:
[2021,2014]
Process Burst time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a non-
preemptive priority [a large number implies a higher priority] and RR [quantum=2]
scheduling.
ii. What is the turnaround time of each process for each of the scheduling algorithms?
iii. What is the waiting time of each process for each of the scheduling algorithms?
iv. Which of the algorithms results in the minimum average waiting time [over all processes]?
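The FCFS figures asked for in the scheduling problems above can be cross-checked mechanically. The sketch below handles only FCFS (SJF, priority and RR need their own ordering rules) and assumes, as the problems state, that all processes arrive at time 0; the function name `fcfs_metrics` is ours, not from the text:

```python
def fcfs_metrics(bursts):
    """Return (waiting, turnaround) lists for FCFS, all arrivals at t = 0.

    Under FCFS a process waits for the sum of the bursts of everything
    ahead of it, and turnaround = waiting + its own burst.
    """
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)      # time spent in the ready queue
        clock += b                 # process runs to completion
        turnaround.append(clock)   # completion time (arrival was 0)
    return waiting, turnaround

# Problem 24: bursts of P1..P5 in arrival order
w, t = fcfs_metrics([2, 1, 8, 4, 5])
# w == [0, 2, 3, 11, 15], t == [2, 3, 11, 15, 20], average waiting = 6.2
```

The same loop, with the burst list re-sorted first, also gives the SJF and non-preemptive priority answers.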
25. Define Throughput & Waiting time. [2018]
26. Illustrate the advantages of multilevel feedback queue scheduling. [2016]
27. What is the purpose of disk scheduling? [2013]

CHAPTER 4 PAGE NO: 304


PROCESS SYNCHRONIZATION
1. Explain dining philosopher problem. [2014]
2. Describe the Dining-philosophers problem. How can this be solved by using a semaphore?
[2013,2018,2021]
3. Discuss the critical section problem with its solution. [2014]
Or, Figure out the requirements to solve the critical-section problem. [2010]
Or, Write down the requirements that should satisfy to solve the critical- section [2008]
4. What do you mean by process synchronization? And explain it. [2018]
5. Define semaphore. Write down the implementation of semaphore. [2018]
6. What do you understand about ‘IPC’ [2015]
7. Write the advantages of Inter Process Communication [IPC]. [2014,2012,2010]

CHAPTER 5 PAGE NO: 307


RESOURCE MANAGEMENT
[DEADLOCK]
1. What is deadlock? [2020, 2017,2016,2015,2014,2012,2010,2008]
2. What do you mean by starvation? [2015]
3. Describe the necessary conditions for deadlock. [2016,2014,2008]
Or, Briefly explain four necessary conditions for deadlock. [2012, 2010, 2018, 2020]
4. Write down at least two real examples of deadlock. [2020, 2012]
5. Is it possible to have a deadlock involving only one single process? Explain your answer.
[2016]
6. What are the different methods for handling deadlock? [2016,2008]
7. Explain the banker’s algorithm for deadlock avoidance. [2021,2015]
8. Describe a resource-allocation graph with appropriate diagram that can be used to describe
deadlock more precisely. [2021,2017]
9. How can you ensure that hold-and-wait and circular wait never occur in a system?
[2017]
10. What is mutual exclusion? [2015]
11. Explain the solutions for mutual exclusion. [2017]
12. Explain Safety Algorithm
13. Explain Resource-Request Algorithm
14. Consider the following snapshot of a system: [2017,2015,2010,2008,2018]
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Answer the following questions:-
i. What is the content of the matrix Need?
ii. Is the system in a safe state?
iii. If a request from process P1 arrives for [1 0 2], can the request be granted immediately?
15. Consider the following snapshot of a system :— [2012, 2020]
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Answer the following questions using the Banker’s algorithm:-
i. Is the system in a safe state?
ii. If a request from process P4 arrives for [0, 1, 1] can the request be granted immediately?
16. What is infinite blocking? [2020, 2012]
17. Explain different types of process scheduling queues. [2009]
18. Write an algorithm that determines whether the system is in a safe state or not. [2010]
19) Consider the following snapshot of a system: [2021, 2014, 2011]
Allocation Max Available
A B C D A B C D A B C D
P0 0 0 1 2 0 0 1 2 1 5 2 0
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
i) Determine the Need matrix.
ii) Is the system in a safe state?
iii) If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately?
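Parts (i) and (ii) of these snapshot problems follow one recipe: compute Need = Max - Allocation, then run the safety algorithm. A minimal sketch (the function name `is_safe` is ours), shown on the data of question 14 above:

```python
def is_safe(alloc, max_need, avail):
    """Banker's safety algorithm: return (is_safe, safe_sequence)."""
    n, m = len(alloc), len(avail)
    # (i) Need = Max - Allocation
    need = [[max_need[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work, finish, order = list(avail), [False] * n, []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend Pi runs to completion and releases its allocation
                work = [work[j] + alloc[i][j] for j in range(m)]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

# Data of question 14 (resources A, B, C)
alloc    = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_need = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
avail    = [3, 3, 2]
safe, order = is_safe(alloc, max_need, avail)
# safe == True; one safe sequence is P1, P3, P4, P0, P2
```

Part (iii) is answered by tentatively subtracting the request from Available (and adding it to the process's Allocation) and re-running the same check.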

CHAPTER 6 PAGE NO: 324


MEMORY MANAGEMENT
1. Define logical address, physical address and virtual address. [2020, 2017, 2015]
2. Write down the implementation process of a page table. [2017]
3. Describe paging address translation architecture with figure. [2021,2016,2013]
4. What is segmentation? [2013]
5. Why segmentation and paging sometimes combine into one scheme?[2020, 2017, 2012, 2013]
6. What is swapping? [2014,2013,2010]
7. What is paging? Why are page sizes always powers of 2? [2021,2014]
8. Define address binding and dynamic loading. [2016,2013,2010]
9. What is the advantage of dynamic loading? [2014]
10. Explain the difference between logical and physical addresses. [2015]
11. Discuss about internal and external fragmentation. Which fragmentation can be solved by
compaction? [2021,2010]
12. What are the differences between internal and external fragmentation?
[2021,2016,2015,2013,2012]
13. Explain the following allocation algorithms:- [2021,2015,2012]
First-fit;
Best-fit;
Worst-fit
14. Describe different types of page table structure. [2013]
15. Consider the following segment table:- [2013]
Segment Base Length
0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96
What are the physical addresses for the following logical addresses?
(i) 0, 430
(ii) 1, 10
(iii) 2, 500
(iv) 3, 400
(v) 4, 112
(vi) 1, 11
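The translation rule behind this problem can be sketched in a few lines (an illustrative sketch; the function name `translate` is ours). An offset must be smaller than the segment length, otherwise the reference traps:

```python
def translate(table, seg, offset):
    """Map a logical (segment, offset) pair to a physical address.

    table maps segment number -> (base, length); an offset at or beyond
    the segment length is an addressing error (trap), returned as None.
    """
    base, length = table[seg]
    if offset >= length:
        return None          # trap: offset out of bounds
    return base + offset

# The segment table of question 15
segments = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
            3: (1327, 580), 4: (1952, 96)}
# (0,430) -> 649, (1,10) -> 2310, (2,500) -> trap, (3,400) -> 1727,
# (4,112) -> trap, (1,11) -> 2311
```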
16. Define logical address. [2017]
17. Define TLB hit and TLB miss. Why TLB is used? [2016]

CHAPTER 7 PAGE NO: 336


VIRTUAL MEMORY
1. What is virtual memory? [2020, 2016,2012]
2. What are the advantages of virtual memory? [2021,2017,2008]
3. Explain the virtual machine structure of operating system with its advantages and
disadvantages. [2015]
4. Explain the demand paging system. [2016,2012, 2018, 2020]
5. Define the term page fault. Write down the steps in handling page fault. [2008]
Or, when do page faults occur? Describe the actions taken by the operating system.
[2020, 2017, 2014, 2012, 2010]
6. What is paging? Draw the block diagram of paging table hardware scheme for memory
management. [2017]
7. What is thrashing? Discuss about the FIFO page replacement algorithm, with its advantages
and disadvantages. [2010]
8. Discuss the hardware support for memory protection with base and limit registers. Give
suitable diagram. [2014]
9. Briefly explain basic disk space allocation methods with advantages and disadvantages.
[2008]
10. Consider the following page reference string : [2017,2015,2012, 2018, 2020]
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
How many page faults would occur for the following replacement algorithms, assuming four
frames are available?
(i) FIFO replacement;
(ii) LRU replacement;
(iii) Optimal replacement.
11. Consider a logical address space of 256 pages with a 4 KB page size, mapped on to a physical
memory of 64 frames—
i. How many bits are required in the logical address
ii. How many bits are required in the physical address? [2016,2014]
12. Consider a logical address space of eight pages of 1024 words each mapped onto a physical
memory of 32 frames.
[i] How many bits are there in the logical address?
[ii] How many bits are there in the physical address? [2013, 2009]
13. Explain remote procedure calls. [2009]
14. Consider the following page reference string:-
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 1, 2, 0, 1, 7, 0, 1
How many page faults would occur for the following replacement algorithms, (assuming
four frames are available)? [2021, 2016, 2013]
(i) FIFO replacement
(ii) Optimal replacement
(iii) LRU replacement
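The fault counts asked for in questions 10 and 14 can be cross-checked with a short simulation. The sketch below implements only FIFO (LRU and optimal each need their own eviction rule); `fifo_faults` is our own name:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames = deque()                  # oldest resident page on the left
    faults = 0
    for page in refs:
        if page not in frames:        # page fault
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

# Reference string of question 14, four frames
ref = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,1,2,0,1,7,0,1]
# fifo_faults(ref, 4) == 10
```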

CHAPTER 8 PAGE NO: 341


FILE MANAGEMENT
1. Define file. [2020,2013,2015,2010]
2. Explain different types of file. [2021,2010]
3. What is file attribute? Discuss about typical file attributes. [2017, 2012, 2010, 2013, 2015]
4. Explain the different types of file access method. [2015, 2018, 2020]
5. Describe the basic directory operations. [2021,2020,2015]
6. Explain file system mounting. [2014]
7. What are the different directory structure generally used? [2021,2018]
8. What information is associated with an open file? [2016]
9. Explain first fit. [2015]
10. What are the attribute of a file? [2020, 2015]
11. Write down the concept of file. [2010]

CHAPTER 9 PAGE NO: 357


FILE SYSTEM IMPLEMENTATION
1. What are the different types of file allocation methods? Briefly explain
[2020, 2017, 2016, 2013, 2012, 2008]
2. Write short notes on Resource Allocation Graph; [2021,2016]
3. Write short notes on Virtual File System. [2016, 2018]
4. Write down the advantages and disadvantages of Contiguous Linked and Indexed Allocation
methods. [2021,2015]
5. Why must the bit map for file allocation be kept on mass storage rather than in main memory?
[2008]
6. What problems could occur if a system allowed a file system to be mounted
simultaneously at more than one location? [2008]
7. What are the purposes of disk scheduling? [2013,2008]
8. What is DNS?
9. Define FCB. [2018]
10. Differentiate between sequential and direct file access methods. [2017]
11. What is process control block? [2009]
12. Describe PCB with diagram. [2009]

CHAPTER 10 PAGE NO: 364


DISK I/O MANAGEMENT
1. Define Caching.
2. Define Spooling.
3. What are the various Disk-Scheduling Algorithms?
4. What is Low-Level Formatting?
5. What is the use of Boot Block?
6. What is Sector Sparing?
7. What does error handling mean?
8. Explain how error handling is performed.
9. What is a disk scheduling algorithm?
10. Why are disk scheduling algorithms needed?
11. Define important terms related to disk scheduling algorithms.

CHAPTER 1
INTRODUCTION
1) What is an operating system? (2008,2009,2012,2013,2014,2017,2021)
Answer: An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs. It acts as an
interface between the user and the computer hardware and controls the execution of all kinds
of programs.

[Figure: layered view of a computer system - users (User 1 ... User n) at the top, system and
application software below them, the operating system beneath, and the hardware (CPU, RAM,
I/O) at the bottom.]

2) What are the goals of operating system? (2013)


Or, Write down the important goals of an operating system. (2012)
Answer: The goals of an operating system are convenience, efficiency, and the ability to evolve.
They are described below:
Convenience: An OS makes a computer more convenient to use. The primary goal of an operating
system is convenience for the user. Operating systems exist because they are supposed to make it
easier to compute with an operating system than without one. This is particularly
clear when you look at operating systems for small personal computers.
Efficiency: An OS allows the computer system's resources to be used in an efficient manner. A
secondary goal is the efficient operation of the computer system. This goal is particularly
important for large, shared multi-user systems, and operating systems are designed to meet it.
Ability to evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions without interfering with service.
3) Describe the major functions of operating system. (2013)
Or, Mention the major functions of operating system in regard to process management.
(2015, 2017)
Answer:
The major functions of OS:
Program execution:
A number of steps need to be performed to execute a program. Instructions and data must be
loaded into main memory, I/O devices and files must be initialized, and other resources must be
prepared.
The OS handles these scheduling duties for the user.
Access to I/O devices:
Each I/O device requires its own peculiar set of instructions or control signals for operation.
The OS provides a uniform interface that hides these details so that programmers can access such
devices using simple reads and writes.
Controlled access to files:
For file access, the OS must reflect a detailed understanding of not only the nature of the I/O
device (disk drive, tape drive) but also the structure of the data contained in the files on the
storage medium.
In the case of a system with multiple users, the OS may provide protection mechanisms to control
access to the files.
Process Management:
The operating system manages all processes, whether they are submitted by the user or are the
system's own processes.
It assigns priorities, starts and stops the execution of processes, and creates child
processes by dividing large processes into smaller ones.
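The process creation and termination described above can be sketched with a POSIX fork on a Unix-like system (an illustrative sketch, not the only mechanism; the exit status 7 is an arbitrary value of our choosing):

```python
import os

pid = os.fork()                        # ask the OS to create a child process
if pid == 0:                           # child branch: pid is 0 in the child
    os._exit(7)                        # child terminates with status code 7
_, status = os.waitpid(pid, 0)         # parent waits for the child to finish
code = os.waitstatus_to_exitcode(status)
# code == 7: the parent observes the child's termination status
```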
Memory Management:
The operating system also manages the computer's memory: it allocates memory to a process
and deallocates that memory again, for example when the process completes.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling −
The OS constantly checks for possible errors.
The OS takes an appropriate action to ensure correct and consistent computing.
Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and files storage are to be allocated to each user or job. Following are the major activities of an
operating system with respect to resource management −
The OS manages all kinds of resources using schedulers.
CPU scheduling algorithms are used for better utilization of CPU.
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −
Two processes often require data to be transferred between them
Both the processes can be on one computer or on different computers, but are connected through
a computer network.
Communication may be implemented by two methods, either by Shared Memory or by Message
Passing.
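Message passing between two such processes can be sketched with a kernel pipe on a Unix-like system (an illustrative sketch of one of the two methods above; shared memory would be the other):

```python
import os

r, w = os.pipe()                 # kernel-managed one-way message channel
pid = os.fork()
if pid == 0:                     # child: send a message and exit
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
os.close(w)                      # parent: read the child's message
msg = os.read(r, 1024)
os.waitpid(pid, 0)
# msg == b"hello from child"
```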
Accounting:
A good OS will collect usage statistics for various resources and monitor performance parameters
such as response time.
On any system, this information is useful in anticipating the need for future enhancements and in
tuning the system to improve performance.
On a multiuser system, the information can be used for billing purposes.
Protection
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection −
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.

4) Explain the service provided by an operating system. (2021,2009)


Answer: An operating system performs these services for applications:
1. In a multitasking operating system where multiple programs can be running at the same time,
the operating system determines which applications should run in what order and how much
time should be allowed for each application before giving another application a turn.
2. It manages the sharing of internal memory among multiple applications.
3. It handles input and output to and from attached hardware devices, such as hard disks,
printers, and dial-up ports.
4. It sends messages to each application or interactive user (or to a system operator) about the
status of operation and any errors that may have occurred.
5. It can offload the management of what are called batch jobs (for example, printing) so that the
initiating application is freed from this work.
6. On computers that can provide parallel processing, an operating system can manage how to
divide the program so that it runs on more than one processor at a time.
7. All major computer platforms (hardware and software) require and sometimes include an
operating system, and operating systems must be developed with different features to meet
the specific needs of various form factors.
5) Why is the operating system called the government of a computer system? (2008)
Answer: The operating system as the government of a computer system
The operating system is called the government of a computer system because:
1. Just as government systems differ in their form of rule (democracy, bureaucracy, autocracy,
etc.), operating systems differ in the permissions granted in the shell and kernel.
2. A government issues licences, passes rules and makes laws; likewise an OS permits users to run
programs by granting access and permissions.
3. A government tries to create jobs, while an OS executes and creates jobs. A common citizen
cannot access certain units of the government for security reasons; likewise, kernel operations
are controlled by the OS in monitor mode.
4. In a government system, bribery and conspiracy are the special channels through which powerful
people try to reach the inner machinery; similarly, the OS allows privileged programmers to
access the kernel and program it.
5. The government assigns specific tasks to smaller units called state governments, which in turn
create tasks for city and district offices. The OS, on the other hand, spawns processes, which in
turn spawn threads for their smooth functioning.
6. At any instant a government can lose its stability by losing the faith of the people and
dissolve itself. An OS can also crash while trying to execute a fatal process.
7. Bad actors try to create havoc, overloading the government, which employs the police or
military to handle them. The OS's troublemakers are called viruses, worms, spam, etc., and it
employs system-level security and cryptographic techniques to handle them.

6) Figure out the abstract views of a computer system and describe the importance of
operating system. (2015)
Or, Write about the main components of an operating system. (2017)
Or, what are basic components of an operating system? (2014)
Answer:

Figure: Abstract view of operating system


1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various
application programs for the various users.
3. Applications programs – define the ways in which the system resources are used to solve the
computing problems of the users (compilers, database systems, video games, business
programs).
4. Users (people, machines, other computers).

7) The operating system can be viewed as a government and a resource allocator- Explain.
(2014)
Answer: The operating system as a government
The operating system is called the government of a computer system for the reasons given in
the answer to question 5 above (permissions, granting of access, job creation, kernel
protection, spawning of processes and threads, crashes, and security against viruses and
worms).
The operating system as a resource manager
Modern computers consist of processors, memory, clocks, disks, monitors, network
interfaces, printers, and other devices that can be used by multiple users simultaneously. The
job of the operating system is to direct and control the allocation of processors, memory
and peripheral devices among the various programs that use them.
Imagine what would happen if three programs running on a computer tried simultaneously to
print their results on the same printer. The first printed lines could come from program 1, the
next from program 2, then from program 3, and so on, resulting in total disorder.
The operating system avoids this potential chaos by redirecting the results to be printed to a
buffer file on disk. When a print job is complete, the operating system can then print the
file from the buffer. Meanwhile, another program can continue to generate results without
realizing that it is not (yet) sending them to the printer.
8) What are multiprocessor systems?
Answer: Multiprocessor Operating System refers to the use of two or more central processing
units (CPU) within a single computer system. These multiple CPUs are in a close communication
sharing the computer bus, memory and other peripheral devices. These systems are referred
to as tightly coupled systems.
These types of systems are used when very high speed is required to process a large volume of
data. They are generally used in environments like satellite control, weather forecasting,
etc. The basic organization of a multiprocessing system is shown in the figure below.

Figure: Multiprocessor operating system


Multiprocessing systems are often based on the symmetric multiprocessing model, in which each
processor runs an identical copy of the operating system and these copies communicate with
each other.

9) What is the advantage and disadvantage of multiprocessor systems?


Answer: Systems which have more than one processor are called multiprocessor systems. These
systems are also known as parallel systems or tightly coupled systems.
Multiprocessor systems have the following advantages.
1. Increased Throughput: Multiprocessor systems perform better than single-processor systems; they have shorter response times and higher throughput, so more work gets done in less time.
2. Reduced Cost: Multiprocessor systems can cost less than equivalent multiple single processor
systems. They can share resources such as memory, peripherals etc.
3. Increased reliability: Multiprocessor systems have more than one processor, so if one
processor fails, complete system will not stop. In these systems, functions are divided among
the different processors.
Multiprocessor systems have the following disadvantages:
1. If one processor fails, overall system speed is reduced.
2. Multiprocessor systems are expensive.
3. A more complex operating system is required.
4. A larger main memory is required.
10) Explain the symmetric and asymmetric multiprocessing. (2017)
Answer: Symmetric Multiprocessing
Symmetric multiprocessing is one in which all the processors run the tasks of the operating system. There is no master-slave relationship as in asymmetric multiprocessing; all the processors communicate using shared memory.
Figure: Symmetric multiprocessor operating system
The processors start executing processes from a common ready queue; each processor may also have its own private queue of ready processes. The scheduler must ensure that no two processors execute the same process.
Symmetric multiprocessing provides proper load balancing and better fault tolerance, and reduces the chance of a CPU bottleneck. It is complex because memory is shared among all the processors. In symmetric multiprocessing, a processor failure results in reduced computing capacity.
Asymmetric Multiprocessing
Asymmetric multiprocessing has a master-slave relationship among the processors. One master processor controls the remaining slave processors; it allots processes to the slave processors, or they may have some predefined tasks to perform.
Figure: Asymmetric multiprocessor operating system
The master processor controls the data structure. The scheduling of processes, I/O processing and
other system activities are controlled by the master processor.
If the master processor fails, one of the slave processors is made the master processor to continue execution. If a slave processor fails, another slave processor takes over its job. Asymmetric multiprocessing is simple because only one processor controls the data structures and all the activities in the system.
11) Distinguish between symmetric and asymmetric multiprocessing systems. (2009, 2012)
Answer:
Basis for comparison | Symmetric multiprocessing | Asymmetric multiprocessing
Basic | Each processor runs the tasks of the operating system. | Only the master processor runs the tasks of the operating system.
Process | Processors take processes from a common ready queue, or each processor may have a private ready queue. | The master processor assigns processes to the slave processors, or they have some predefined processes.
Architecture | All processors have the same architecture. | Processors may have the same or different architectures.
Communication | All processors communicate with one another through shared memory. | Processors need not communicate, as they are controlled by the master processor.
Failure | If a processor fails, the computing capacity of the system is reduced. | If the master processor fails, a slave is promoted to master to continue execution; if a slave processor fails, its task is switched to other processors.
Ease | Complex, as all the processors must be synchronized to maintain the load balance. | Simple, as only the master processor accesses the data structures.
12) What is a distributed system?
Answer: A distributed operating system is software that runs over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system.
Figure: Distributed operating system
Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. The second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.
13) Differentiate between parallel and distributed systems. (2008)
Answer:
Basis of comparison | Parallel system | Distributed system
Computing | Parallel computing is a computation type in which multiple processors execute multiple tasks simultaneously. | Distributed computing is a computation type in which networked computers communicate and coordinate work through message passing to achieve a common goal.
Number of computers required | Parallel computing occurs on one computer. | Distributed computing occurs between multiple computers.
Processing mechanism | Multiple processors perform processing. | Computers rely on message passing.
Synchronization | All processors share a single master clock for synchronization. | There is no global clock; synchronization algorithms are used.
Memory | Computers can have shared memory or distributed memory. | Each computer has its own memory.
Usage | Used to increase performance and for scientific computing. | Used to share resources and to increase scalability.
14) Why is a distributed system desirable? (2008)
Or, Discuss the desirable properties of distributed system. (2014)
Answer: The characteristics of a distributed system may be summarized as follows:
1. Concurrency
The components of a distributed computation may run at the same time.
2. Independent failure modes
The components of a distributed computation and the network connecting them may fail
independently of each other.
3. No global time
We assume that each component of the system has a local clock but the clocks might not record
the same time. The hardware on which the clocks are based is not guaranteed to run at precisely
the same rate on all components of the system, a feature called clock drift.
4. Communications delay
It takes time for the effects of an event at one point in a distributed system to propagate
throughout.
15) What are the advantages of a distributed system? Explain. (2012)
Answer: Advantages of a distributed system:
1. Data sharing: A user at one site may be able to access data residing at other sites.
2. Autonomy: Because data is distributed, each site retains a degree of control over the data stored locally. A global database administrator is responsible for the entire system, and part of those responsibilities is delegated to a local database administrator at each site; depending on the design of the distributed database, each local administrator may have a different degree of local autonomy.
3. Availability: If one site fails, the remaining sites may be able to continue operating. Thus the failure of a site does not necessarily imply the shutdown of the system.
16) Write down the advantages of a multiprogramming system.
Answer: Multiprogramming (or multitasking) operating systems use the CPU and RAM efficiently: the CPU is kept busy at all times and every task is given time, so users get quick response times. If too many tasks are resident in RAM, the system stops loading more, and the hard drive is used to hold some processes instead.
Advantages of multiprogramming are −
Increased CPU Utilization − Multiprogramming improves CPU utilization as it organizes a
number of jobs where CPU always has one to execute.
Increased Throughput − Throughput means total number of programs executed over a fixed
period of time. In multiprogramming, CPU does not wait for I/O for the program it is executing,
thus resulting in an increased throughput.
Shorter Turnaround Time − Turnaround time for short jobs is improved greatly in
multiprogramming.
Improved Memory Utilization − In multiprogramming, more than one program resides in main
memory. Thus memory is optimally utilized.
Increased Resources Utilization − In multiprogramming, multiple programs are actively
competing for resources resulting in higher degree of resource utilization.
Multiple Users − Multiprogramming supports multiple users.
17) Define spooling. (2012, 2015)
Answer: Spooling
Spooling is an acronym for "simultaneous peripheral operations on-line". Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on the hard disk that is accessible to I/O devices.
An operating system does the following activities related to spooling:
• Handles I/O device data spooling, as devices have different data access rates.
• Maintains the spooling buffer, which provides a waiting station where data can rest while the slower device catches up.
• Supports parallel computation: because of spooling, a computer can perform I/O in parallel with computation. It becomes possible to have the computer read data from a tape, write data to disk, and write out to a printer while it is doing its computing task.
18) Discuss the use of spooling. (2012)
Answer:
The use of spooling
1. The spooling operation uses a disk as a very large buffer.
2. Spooling is capable of overlapping I/O operation for one job with processor operations for
another job.
3. Spooling is also used to mediate access to punched card readers and punches, magnetic tape drives, and other slow, sequential I/O devices. It allows the application to run at the speed of the CPU while the peripheral devices operate at their full rated speeds.
4. A batch processing system uses spooling to maintain a queue of ready-to-run tasks, which can
be started as soon as the system has the resources to process them.
5. Some store and forward messaging systems, such as uucp, used "spool" to refer to their
inbound and outbound message queues, and this terminology is still found in the
documentation for email and Usenet software, even though messages are often delivered
immediately nowadays.
19) What is the main advantage of multiprogramming? Under what circumstance would a
user be better off using time-sharing system, rather than a personal computer or single
workstation? (2014)
Answer:
The circumstance when a user be better off using time-sharing system, rather than a
personal computer or single workstation
When there are few other users, the task is large, and the hardware is fast, time-sharing makes
sense. The full power of the system can be brought to bear on the user’s problem. The problem
can be solved faster than on a personal computer. Another case occurs when lots of other users
need resources at the same time. A personal computer is best when the job is small enough to be
executed reasonably on it and when performance is sufficient to execute the program to the user’s
satisfaction.
So, a user is better off under three circumstances: when it is cheaper, faster, or easier. For example:
1. When the user is paying for management costs and the costs are cheaper for a time-sharing
system than for a single-user computer.
2. When running a simulation or calculation that takes too long to run on a single PC or
workstation.
3. When users are travelling and don't have a laptop to carry around, they can connect remotely to a time-shared system and do their work.
20) Define clustered systems.
Answer: A computer cluster is a single logical unit consisting of multiple computers that are
linked through a LAN. The networked computers essentially act as a single, much more powerful
machine. A computer cluster provides much faster processing speed, larger storage capacity,
better data integrity, superior reliability and wider availability of resources.
Computer clusters are, however, much more costly to implement and maintain. This results in
much higher running overhead compared to a single computer.

21) What do you mean by asymmetric and symmetric clustering? Which one is more efficient and why? (2009)
Answer:
Asymmetric Clustering - In this, one machine is in hot standby mode while the other is running
the applications. The hot standby host (machine) does nothing but monitor the active server. If
that server fails, the hot standby host becomes the active server.
Symmetric Clustering - In this, two or more hosts are running applications, and they are
monitoring each other. This mode is obviously more efficient, as it uses all of the available
hardware.
22) Define real-time systems.
Answer:
Answer: A real-time system is defined as a data-processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is much smaller than in ordinary online processing.
Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data and real-time systems can be used as a control device in a dedicated
application. A real-time operating system must have well-defined, fixed time constraints,
otherwise the system will fail. For example, Scientific experiments, medical imaging systems,
industrial control systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,
secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual
memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
23) Differentiate between time sharing and real time system. (2017)
Answer: Following are the differences between Real Time system and Timesharing System.
Sr. No. | Real-time system | Timesharing system
1 | Events, mostly external to the computer system, are accepted and processed within certain deadlines. | Many users are allowed to share the computer resources simultaneously.
2 | Real-time processing is mainly devoted to one application. | Time-sharing processing deals with many different applications.
3 | Users can make inquiries only and cannot write or modify programs. | Users can write and modify programs.
4 | The user must get a response within the specified time limit; otherwise it may result in a disaster. | The user should get a response within fractions of a second, but if not, the results are not disastrous.
5 | No context switching takes place in this system. | The CPU switches from one process to another as a time slice expires or a process terminates.
24) Discuss hard and soft real-time systems. (2009, 2013, 2021)
Answer: Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems,
secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual
memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
25) Define handheld systems.
Answer: Handheld systems include personal digital assistants (PDAs), such as Palm Pilots, and cellular telephones with connectivity to a network such as the Internet. Because of their limited size, most handheld devices have a small amount of memory, slow processors, and small display screens.
• Many handheld devices have between 512 KB and 8 MB of memory. As a result, the operating system and applications must manage memory efficiently. This includes returning all allocated memory back to the memory manager once the memory is no longer being used.
• Currently, many handheld devices do not use virtual memory techniques, forcing program developers to work within the confines of limited physical memory.
• Processors for most handheld devices often run at a fraction of the speed of a processor in a PC. Faster processors require more power; including one in a handheld device would require a larger battery that would have to be replaced more frequently.
• The last issue confronting program designers for handheld devices is the small display screens typically available. One approach for displaying the content of web pages is web clipping, where only a small subset of a web page is delivered and displayed on the handheld device.
Some handheld devices may use wireless technology such as Bluetooth, allowing remote access to e-mail and web browsing. Cellular telephones with connectivity to the Internet fall into this category. Their use continues to expand as network connections become more available and other options, such as cameras and MP3 players, expand their utility.
26) What are computing environments?
Answer: Computing Environment is a collection of computers which are used to process
and exchange the information to solve various types of computing problems.
Types of Computing Environments
The following are the various types of computing environments...
1. Personal Computing Environment
2. Time Sharing Computing Environment
3. Client Server Computing Environment
4. Distributed Computing Environment
5. Grid Computing Environment
6. Cluster Computing Environment
27) What is the main difficulty that a programmer must overcome in writing an operating system? (2008)
Answer: The main difficulty is keeping the operating system within the fixed time constraints of a real-time system. If the system does not complete a task within a certain time frame, it may cause a breakdown of the entire system it is running. Therefore, when writing an operating system for a real-time system, the writer must be sure that the scheduling schemes do not allow response times to exceed the time constraint.
28) What is the purpose of command-interpreter? (2013)
Answer: A command interpreter is the part of a computer operating system that understands and
executes commands that are entered interactively by a human being or from a program. In some
operating systems, the command interpreter is called the shell.
The main features/purposes of the command interpreter are:
1. The possibility to add new commands in a very easy way. It contains 81 built-in commands.
2. The use of an expression evaluator, written by Mark Morley, which can be used to parse numeric arguments, make direct computations, and define variables. It is possible to easily add new expression evaluators; one using complex numbers is implemented in the library.
3. The possibility to write, load, and execute programs, which are sequences of commands, using loops and jumps.
4. The definition of objects, which are named arrays of several types of numbers. It is thus possible to refer to objects in command arguments, for instance, by giving their names. It is also possible to define structures whose members are objects, other structures, or variables of the expression evaluator.
5. An implementation of complex numbers in two ways. The library also contains some functions that simplify the use of arrays of numbers.
6. The possibility to run several programs simultaneously; these programs can communicate with each other (threads).
29) Write down the important features of the command-line interface and the graphical user interface. (2013)
Answer: The main features/purposes of the command interpreter are:
1. The possibility to add new commands in a very easy way. It contains 81 built-in commands.
2. The use of an expression evaluator, written by Mark Morley, which can be used to parse numeric arguments, make direct computations, and define variables. It is possible to easily add new expression evaluators; one using complex numbers is implemented in the library.
3. The possibility to write, load, and execute programs, which are sequences of commands, using loops and jumps.
4. The definition of objects, which are named arrays of several types of numbers. It is thus possible to refer to objects in command arguments, for instance, by giving their names. It is also possible to define structures whose members are objects, other structures, or variables of the expression evaluator.
5. An implementation of complex numbers in two ways. The library also contains some functions that simplify the use of arrays of numbers.
6. The possibility to run several programs simultaneously; these programs can communicate with each other (threads).
Features of the Graphical User Interface (GUI)
Entering dates
• A graphical representation of a calendar allows you to enter a date in your form by clicking on the desired date in the calendar.
• Access the calendar in date fields by using the LOV icon or through the menu under Edit, List of Values.
Folders
Folders are special blocks that allow you to:
• Display only the fields you are interested in.
• Arrange the fields to best meet your needs.
• Define query parameters to automatically call up the records you need when opening the folder.
• Sort in any order relevant to your needs.
Toolbar
• The most commonly used menu items are duplicated as icons at the top of the Applications window.
Attachments
• Used to link non-structured data such as images, word-processing documents, or video to application data.
Multiple windows
• Allow you to display all elements of a business flow on the same screen.
• Do not require that you finish entering data in one form before navigating to another form. Each form can be committed independently.
On-line Help
• Help is now based on the functional flow of the task rather than on the form's structure.
• Lets you select the task you want to perform and provides a step-by-step description of the task.
• Allows navigation to any part of the Help system.
30) Differentiate between the command line interface and the graphical user interface. (2021)
Solution:
Basis for comparison | CLI | GUI
Basic | A command line interface allows a user to interact with the system through commands. | A graphical user interface allows a user to interact with the system through graphics, which include images, icons, etc.
Device used | Keyboard | Mouse and keyboard
Ease of performing tasks | Hard to perform an operation; requires expertise. | Easy to perform tasks; does not require expertise.
Precision | High | Low
Flexibility | Intransigent | More flexible
Memory consumption | Low | High
Appearance | Cannot be changed | Custom changes can be employed
Speed | Fast | Slow
Integration and extensibility | Scope for potential improvements | Bounded
31. Define UNIX. [2018]
Solution:
UNIX: UNIX operating systems are a family of computer operating systems that are derived from
the original Unix System from Bell Labs. Initial proprietary derivatives included the HP-UX and
the SunOS systems. However, growing incompatibility between these systems led to the creation
of interoperability standards like POSIX. Modern POSIX systems include Linux, its variants, and
Mac OS.
Salient Features of UNIX:
• It is a multi-user system where the same resources can be shared by different users.
• It provides multi-tasking, wherein each user can execute many processes at the same time.
• It was the first operating system written in a high-level language (C), which made it easy to port to other machines with minimal adaptation.
• It provides a hierarchical file structure, which allows easier access and maintenance of data.
• UNIX has built-in networking functions so that different users can easily exchange information.
• UNIX functionality can be extended through user programs built on a standard programming interface.
32) What are the different directory structures generally used? [2014]
Solution:
Directory: A directory can be defined as a listing of the related files on a disk. The directory may store some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which all the files of the partition can be listed. A directory entry is maintained for each file in the directory, storing all the information related to that file.
A directory can be viewed as a file which contains the metadata of a group of files.
Every Directory supports a number of common operations on the file:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
CHAPTER 2
OPERATING SYSTEM CALLS
(1) Mention some common operating system components.
Answer: From the virtual machine point of view (also resource management), these components reflect the services made available by the OS.
Process Management
• A process is a program in execution; in a multiprogrammed system there are numerous processes to choose from.
• Process creation/deletion (bookkeeping)
• Process suspension/resumption (scheduling, system vs. user)
• Process synchronization
• Process communication
• Deadlock handling
Memory Management
• Maintain bookkeeping information
• Map processes to memory locations
• Allocate/deallocate memory space as requested/required
I/O Device Management
• Disk management functions such as free-space management, storage allocation, fragmentation removal, and head scheduling
• Consistent, convenient software-to-I/O-device interface through buffering/caching, with custom drivers for each device
File System (built on top of disk management)
• File creation/deletion
• Support for hierarchical file systems
• Update/retrieval operations: read, write, append, seek
• Mapping of files to secondary storage
Protection (controlling access to the system)
• Resources: CPU cycles, memory, files, devices
• Users: authentication, communication
• Mechanisms, not policies
Network Management (often built on top of the file system)
• TCP/IP, IPX, IPng
• Connection/routing strategies
• "Circuit" management: circuit, message, and packet switching
• Communication mechanisms
• Data/process migration
Network Services (distributed computing; built on top of networking)
• Email, messaging (GroupWise)
• FTP
• gopher, WWW
• Distributed file systems: NFS, AFS, LAN Manager
• Name services: DNS, YP, NIS
• Replication: gossip, ISIS
• Security: Kerberos
User Interface
• Character-oriented shell: sh, csh, command.com (user replaceable)
• GUI: X, Windows 95
(2) What do you know about the command line interpreter?
Answer: A command line interpreter is any program that allows the entering of commands and
then executes those commands to the operating system. It's literally an interpreter of commands.
Unlike a program that has a graphical user interface (GUI) with buttons and menus that are controlled by a mouse, a command line interpreter accepts lines of text from a keyboard as commands and then converts those commands into functions that the operating system understands.
Any command line interpreter program is also often referred to in general as a command line
interface. Less commonly, a command line interpreter is also called a CLI, command language
interpreter, console user interface, command processor, shell, command line shell, or a command
interpreter.
(3) Describe three methods for passing parameters between a user program and the operating system.
Answer: Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, parameters can be stored in a block and the
block address can be passed as a parameter to a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.
(4) What are the three major activities of an operating system with regard to memory management? (2014)
Answer: Memory management refers to management of Primary Memory or Main Memory. Main
memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −
memory management −
1. Keeps tracks of primary memory, i.e., what part of it are in use by whom, what part are not in
use.
2. In multiprogramming, the OS decides which process will get memory when and how much.
3. Allocates the memory when a process requests it to do so and De-allocates the memory when a
process no longer needs it or has been terminated.

(5) Define system call. Mention major categories of system calls with examples.
Answer: A system call is the programmatic way in which a computer program requests a service from the
kernel of the operating system it is executed on. A system call is a way for programs to interact
with the operating system. A computer program makes a system call when it makes a request to
the operating system’s kernel. System call provides the services of the operating system to the
user programs via Application Program Interface(API). It provides an interface between a process
and operating system to allow user-level processes to request services of the operating system.
Types of System Calls
There are 5 different categories of system calls:
Process control, file manipulation, device manipulation, information maintenance and
communication.
Process Control
A running program needs to be able to stop execution either normally or abnormally. When
execution is stopped abnormally, often a dump of memory is taken and can be examined with a
debugger.
File Management
Some common system calls are create, delete, read, write, reposition, or close. Also, there is a need
to determine the file attributes – get and set file attribute. Many times the OS provides an API to
make these system calls.
Device Management
Processes usually require several resources to execute; if these resources are available, they will be granted and control returned to the user process. These resources can also be thought of as devices.
Some are physical, such as a video card, and others are abstract, such as a file.
User programs request the device, and when finished they release the device. Similar to files, we
can read, write, and reposition the device.
Information Management
Some system calls exist purely for transferring information between the user program and the
operating system. An example of this is time, or date.
The OS also keeps information about all its processes and provides system calls to report this
information.
Communication
There are two models of interprocess communication, the message-passing model and the shared
memory model.
 Message-passing uses a common mailbox to pass messages between processes.
 Shared memory uses certain system calls to create and gain access to regions of memory owned by other processes. The two processes then exchange information by reading and writing in the shared data.
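The message-passing model above can be illustrated with a short sketch. This example uses Python's multiprocessing module to pass a message between a parent and a child process; it assumes a Unix-like system using the fork start method, and the `worker` name is illustrative only.

```python
# Sketch: the message-passing model -- a request crosses the process
# boundary through a pipe, and a reply comes back.
from multiprocessing import Process, Pipe

def worker(conn):
    # Child process: receive a request, send back a reply.
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

def demo_message_passing():
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send("hello kernel")   # message sent to the child
    reply = parent_conn.recv()         # reply received from the child
    p.join()
    return reply
```

In the shared-memory model, the two processes would instead map a common region and read/write it directly, with no send/receive calls.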

(6) Write short notes on Microkernel based OS structure; (2021,2016)


Answer: A microkernel is one classification of kernel. Being a kernel, it manages all system resources, but in a microkernel the user services and kernel services are implemented in different address spaces. The user services are kept in user address space and the kernel services in kernel address space, which reduces the size of the kernel and of the operating system as a whole.

It provides minimal services for process and memory management. Communication between client programs/applications and services running in user address space is established through message passing, which reduces the execution speed of a microkernel. Because user services and kernel services are isolated, a failure in a user service does not affect kernel services, and the operating system as a whole remains unaffected; this is one of the advantages of a microkernel. It is also easily extensible: any new services are added to user address space and require no modification of kernel space. It is also portable, secure and reliable.
Microkernel Architecture –
Since the kernel is the core part of the operating system, it is meant for handling only the most important services. Thus in this architecture only the most essential services live inside the kernel, while the rest of the OS services reside in system application programs, and users interact with those less essential services through the system applications. The microkernel is solely responsible for the most important services of the operating system, namely:
 Inter-process communication
 Memory management
 CPU scheduling
Advantages of Microkernel –
 The architecture of this kernel is small and isolated hence it can function better.
 Expansion of the system is easier: new services are simply added in the system application space without disturbing the kernel.

7. Define batch operating system. [2018]


Solution:
Batch operating system: The users of a batch operating system do not interact with the computer
directly. Each user prepares his job on an off-line device like punch cards and submits it to the
computer operator. To speed up processing, jobs with similar needs are batched together and run
as a group. The programmers leave their programs with the operator and the operator then sorts
the programs with similar requirements into batches.
The problems with Batch Systems are as follows −
 Lack of interaction between the user and the job.
 CPU is often idle, because mechanical I/O devices are slower than the CPU.
 Difficult to provide the desired priority.

8. Discuss Resource allocation graph. [2016]


Solution:
Resource allocation graph: The resource allocation graph (RAG) is a pictorial representation of the state of a system. As its name suggests, the resource allocation graph gives complete information about all the processes which are holding some resources or waiting for some resources. It also contains information about all the instances of all the resources, whether they are available or being used by processes.
In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle. Let's see the types of vertices and edges in detail.

Vertices are mainly of two types, Resource and process. Each of them will be represented by a
different shape. Circle represents process while rectangle represents resource.
A resource can have more than one instance. Each instance will be represented by a dot inside the
rectangle.

Edges in a RAG are also of two types: one represents the assignment of a resource to a process, and the other represents a process waiting for a resource.
A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to the process.
A process is shown as waiting for a resource if the tail of an arrow is attached to the process while
the head is pointing towards the resource.
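Because the assignment and waiting edges form a directed graph, deadlock detection (for single-instance resources) reduces to finding a cycle in the RAG. A minimal sketch in Python, with made-up process and resource names:

```python
# Sketch: a resource-allocation graph as adjacency lists, with a DFS cycle
# check. With single-instance resources, a cycle implies deadlock.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True          # back edge: a cycle was found
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1 -> deadlock.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# Same graph, but P2 is not waiting for anything -> no cycle, no deadlock.
safe       = {"P1": ["R2"], "R2": ["P2"], "P2": [],     "R1": ["P1"]}
```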

9. What is co-operating process? [2012]


Solution:
Co-operating processes: In a computer system there are many processes, which may be either independent processes or cooperating processes running in the operating system. A process is said to be independent when it cannot affect or be affected by any other process running in the system; clearly, any process that shares no data (temporary or persistent) with another process is independent. On the other hand, a cooperating process is one which can affect or be affected by another process running on the computer: a co-operating process is one which shares data with another process.
Reasons for process cooperation:
 Information sharing: Many users may want the same piece of information at the same time (for instance, a shared file), so we try to provide an environment in which users are allowed concurrent access to such resources.
 Computation speed-up: When we want a task to run faster, we break it into subtasks, each of which executes in parallel with the others. Note that such speed-up can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
 Modularity: We may want to construct the system in a modular fashion, dividing its functions into separate processes.
 Convenience: An individual user may have many tasks to perform at the same time, such as editing, printing and compiling.
10. Discuss the basic organization of file system. [2013]
Solution:
File system organization: The file system is the part of the operating system which is responsible for file management. It provides a mechanism to store data and access file contents, including data and programs. Some operating systems treat everything as a file, for example Ubuntu.
The File system takes care of the following issues
o File Structure: We have seen various data structures in which the file can be stored. The task
of the file system is to maintain an optimal file structure.
o Recovering Free space: Whenever a file gets deleted from the hard disk, there is a free space
created in the disk. There can be many such spaces which need to be recovered in order to
reallocate them to other files.
o Disk space assignment to the files: A major concern is deciding where to store the files on the hard disk. There are various disk allocation methods, which are covered later in this book.
o Tracking data location: A File may or may not be stored within only one block. It can be
stored in the non-contiguous blocks on the disk. We need to keep track of all the blocks on
which the part of the files reside.

11. Describe overlay technique with example. [2012]


Solution:
Overlaying means "the process of transferring a block of program code or other data into internal memory, replacing what is already stored". Overlaying is a technique that allows programs to be larger than the computer's main memory. An embedded system would normally use overlays because of the limited physical memory (the internal memory of a system-on-chip) and the lack of virtual memory facilities.
Overlaying requires the programmers to split their object code into multiple completely independent sections; the overlay manager linked into the code loads the required overlay dynamically and swaps overlays when necessary.

This technique requires the programmers to specify which overlay to load at different
circumstances.
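As a rough illustration of the idea, the sketch below models an overlay region that holds only one code section at a time; the section names are invented for the demo, and a real overlay manager would of course load machine code from disk, not strings.

```python
# Sketch: an overlay region in which loading a new section replaces
# whatever section is currently resident.
SECTIONS = {"init": "init-code", "pass1": "pass1-code", "pass2": "pass2-code"}

class OverlayRegion:
    def __init__(self):
        self.resident = None             # only one section fits at a time

    def load(self, name):
        # The programmer specifies which overlay to load; the previous
        # section is simply overwritten, keeping memory use constant.
        self.resident = SECTIONS[name]
        return self.resident

region = OverlayRegion()
region.load("pass1")
out = region.load("pass2")               # pass1 is replaced, not kept alongside
```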

CHAPTER 3
PROCESS AND THREADS
(1) What is process? (2015,2013,2012,2008)
Answer: A process is basically a program in execution. The execution of a process must progress
in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory −

Component & Description


Stack: The process Stack contains the temporary data such as method/function
parameters, return address and local variables.
Heap: This is dynamically allocated memory to a process during its run time.
Text: This contains the compiled program code, together with the current activity represented by the value of the Program Counter and the contents of the processor's registers.
Data : This section contains the global and static variables.
(2) What are the different states of a process?(2021,2013,2012,2008)
Or, Describe the operation of different process states with diagram. (2015)
Answer: When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.

Figure: Process state diagram


In general, a process can have one of the following five states at a time.
State & Description
Start/New
This is the initial state when a process is first started/created.
Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running if it is interrupted by the scheduler in order to assign the CPU to some other process.
Running
Once the process has been assigned to a processor by the OS scheduler, the process state is
set to running and the processor executes its instructions.
Waiting
Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.
Terminated or Exit
Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state where it waits to be removed from main memory.
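The five states and the legal transitions between them can be captured as a small transition table; the event names (admit, dispatch, and so on) are illustrative rather than standardized, as the text notes.

```python
# Sketch of the five-state process model as a transition table.
TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "interrupt"): "Ready",   # preempted by the scheduler
    ("Running", "io_wait"): "Waiting",   # blocks on I/O or a resource
    ("Waiting", "io_done"): "Ready",
    ("Running", "exit"): "Terminated",
}

def step(state, event):
    # Look up the next state; anything not in the table is illegal.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")
```

Note, for instance, that a Waiting process can only go back to Ready, never directly to Running.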

(3) What does ‘PCB’ stand for? (2015)


Answer: A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID).

(4) Mention the types of process- specific information associated with PCB.(2021,2015)
Or, what kinds of information’s are contained in a PCB? (2008)
Or, briefly explain about the contents of the Process Control Block (PCB). (2013, 2012)
Answer: A PCB keeps all the information needed to keep track of a process as listed below in the
table −
Information & Description
Process State: The current state of the process, i.e. whether it is ready, running, waiting, or whatever.
Process privileges: Required to allow/disallow access to system resources.
Process ID: Unique identification for each process in the operating system.
Pointer: A pointer to the parent process.
Program Counter: A pointer to the address of the next instruction to be executed for this process.
CPU registers: The various CPU registers whose contents must be saved so that the process can later resume execution in the running state.
CPU scheduling information: Process priority and other scheduling information required to schedule the process.
Memory management information: Information such as the page table, memory limits and segment table, depending on the memory scheme used by the operating system.
Accounting information: The amount of CPU time used for process execution, time limits, execution ID, etc.
I/O status information: A list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
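As a sketch, the fields listed above might be grouped into a data structure like the following; the field names are illustrative only (a real kernel uses C structures, e.g. Linux's task_struct).

```python
# Sketch: a PCB as a plain data structure holding the fields listed above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                  # unique process identifier
    state: str = "New"                        # current process state
    program_counter: int = 0                  # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                         # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    memory_limits: tuple = (0, 0)             # memory management information
    cpu_time_used: float = 0.0                # accounting information

pcb = PCB(pid=42, priority=3)                 # created when the process starts
```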

(5) What do you understand about ‘Context Switch’ (2021,2015,2013,2008)


Answer: A context switch is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
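A context switch can be sketched as copying the CPU state into the outgoing process's PCB and loading the incoming process's saved state; here the "CPU" and the PCBs are plain dictionaries, purely for illustration.

```python
# Sketch: a context switch saves the running process's context into its
# PCB and restores the next process's context into the CPU.
def context_switch(cpu, old_pcb, new_pcb):
    old_pcb["pc"] = cpu["pc"]            # save state of the outgoing process
    old_pcb["regs"] = dict(cpu["regs"])
    cpu["pc"] = new_pcb["pc"]            # restore state of the incoming one
    cpu["regs"] = dict(new_pcb["regs"])

cpu = {"pc": 100, "regs": {"r0": 7}}     # p1 is currently running
p1 = {"pc": 0, "regs": {}}
p2 = {"pc": 500, "regs": {"r0": 9}}
context_switch(cpu, p1, p2)              # p1 can later resume from pc=100
```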

(6) What type of possibilities exists in term of execution and in terms of the address
space when a new process is created? (2008)
Answer:
Process Creation
Through appropriate system calls, such as fork or spawn, processes may create other processes.
The process which creates other process, is termed the parent of the other process, while the
created sub-process is termed its child.
Each process is given an integer identifier, termed as process identifier, or PID. The parent PID
(PPID) is also stored for each process.
On a typical UNIX system the process scheduler is termed sched, and is given PID 0. The first thing it does at system start-up time is to launch init, which gives that process PID 1. Init then launches all the system daemons and user logins, and becomes the ultimate parent of all other processes.

A child process may receive some amount of shared resources with its parent depending on
system implementation. To prevent runaway children from consuming all of a certain system
resource, child processes may or may not be limited to a subset of the resources originally
allocated to the parent.
There are two options for the parent process after creating the child:
 Wait for the child process to terminate before proceeding. The parent process makes a wait() system call, for either a specific child process or for any child process, which causes the parent to block until the wait() returns. UNIX shells normally wait for their children to complete before issuing a new prompt.
 Run concurrently with the child, continuing to process without waiting. When a UNIX shell
runs a process as a background task, this is the operation seen. It is also possible for the
parent to run for a while, and then wait for the child later, which might occur in a sort of a
parallel processing operation.
There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
Process Termination
By making the exit() system call, typically returning an int, processes may request their own termination. This int is passed along to the parent if it is doing a wait(), and is typically zero on successful completion and some non-zero code in the event of a problem.
Processes may also be terminated by the system for a variety of reasons, including:
 The inability of the system to deliver the necessary system resources.
 In response to a KILL command or other unhandled process interrupts.
 A parent may kill its children if the task assigned to them is no longer needed i.e. if the need of
having a child terminates.
 If the parent exits, the system may or may not allow the child to continue without a parent (In
UNIX systems, orphaned processes are generally inherited by init, which then proceeds to kill
them.)
When a process ends, all of its system resources are freed up, open files flushed and closed, etc.
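The create/wait/exit cycle described above can be demonstrated with Python's multiprocessing module standing in for fork() and wait(); a Unix-like system with the fork start method is assumed, and the exit status 7 is arbitrary.

```python
# Sketch: a parent creates a child, blocks until it terminates, and
# reads the child's exit status.
from multiprocessing import Process
import sys

def child_task():
    sys.exit(7)          # the child requests its own termination

def create_and_wait():
    p = Process(target=child_task)
    p.start()            # like fork(): a new process with its own space
    p.join()             # the parent blocks, like wait()
    return p.exitcode    # the int passed to exit() reaches the parent
```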
(7) What do you mean by co-operating process? (2014,2012,2010)
Answer: Cooperating Processes are those that can affect or be affected by other processes.
There are several reasons why cooperating processes are allowed:
 Information Sharing - There may be several processes which need access to the same file for
example. ( e.g. pipelines. )
 Computation speedup - Often a problem can be solved faster if it can be broken down into sub-tasks to be solved simultaneously (particularly when multiple processors are involved).
 Modularity - The most efficient architecture may be to break a system down into cooperating
modules. ( E.g. databases with a client-server architecture. )
 Convenience - Even a single user may be multi-tasking, such as editing, compiling, printing, and
running the same code in different windows.

(8) Explain the following terms :


1. Process: (2021,2014)
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
To put it in simple terms, we write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory −

2.Thread. (2021,2014)
Answer: A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares with its peer threads few information like code segment, data segment and open
files. When one thread alters a code segment memory item, all other threads see that.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism; they represent a software approach to improving operating system performance by reducing overhead, while otherwise being equivalent to a classical process.
3.Producer-Consumer Problem; (2016)
There are two processes: Producer and Consumer. The Producer produces some item and the Consumer consumes that item. The two processes share a common space or memory location known as a buffer, where the item produced by the Producer is stored and from where the Consumer consumes the item if needed. There are two versions of this problem: the first is known as the unbounded buffer problem, in which the Producer can keep producing items and there is no limit on the size of the buffer; the second is known as the bounded buffer problem, in which the Producer can produce up to a certain number of items and after that starts waiting for the Consumer to consume them. We will discuss the bounded buffer problem. First, the Producer and the Consumer share some common memory; then the Producer starts producing items. If the total number of produced items equals the size of the buffer, the Producer waits for them to be consumed by the Consumer. Similarly, the Consumer first checks for the availability of an item, and if no item is available, the Consumer waits for the Producer to produce one. If items are available, the Consumer consumes them.
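A bounded-buffer sketch using two threads and a condition variable is shown below; the buffer size and item count are arbitrary choices for the demo.

```python
# Sketch: the bounded-buffer producer-consumer problem with a Condition.
import threading
from collections import deque

BUF_SIZE, N_ITEMS = 3, 10
buffer, cond = deque(), threading.Condition()
consumed = []

def producer():
    for i in range(N_ITEMS):
        with cond:
            while len(buffer) == BUF_SIZE:   # buffer full: producer waits
                cond.wait()
            buffer.append(i)
            cond.notify_all()

def consumer():
    for _ in range(N_ITEMS):
        with cond:
            while not buffer:                # buffer empty: consumer waits
                cond.wait()
            consumed.append(buffer.popleft())
            cond.notify_all()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The `while` loops (rather than `if`) re-check the buffer after every wake-up, which guards against spurious wake-ups.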

(9) Explain the different Types of Thread (2021)


Answer: Threads are implemented in following two ways −
 User Level Threads − User managed threads.
 Kernel Level Threads − Operating System managed threads acting on kernel, an operating
system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing message and data between
threads, for scheduling thread execution and for saving and restoring thread contexts. The
application starts with a single thread.
Advantages
 Thread switching does not require Kernel mode privileges.
 User level thread can run on any operating system.
 Scheduling can be application specific in the user level thread.
 User level threads are fast to create and manage.
Disadvantages
 In a typical operating system, most system calls are blocking.
 Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in
the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application are
supported within a single process.
The Kernel maintains context information for the process as a whole and for individuals threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space. Kernel threads are generally slower
to create and manage than the user threads.
Advantages
 Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
 Kernel routines themselves can be multithreaded.
Disadvantages
 Kernel threads are generally slower to create and manage than the user threads.
 Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.

(10) Describe the Difference between Process and Thread


Process: Heavy weight or resource intensive.
Thread: Light weight, taking fewer resources than a process.

Process: Process switching needs interaction with the operating system.
Thread: Thread switching does not need to interact with the operating system.

Process: In multiple processing environments, each process executes the same code but has its own memory and file resources.
Thread: All threads can share the same set of open files and child processes.

Process: If one process is blocked, then no other process can execute until the first process is unblocked.
Thread: While one thread is blocked and waiting, a second thread in the same task can run.

Process: Multiple processes without using threads use more resources.
Thread: Multiple threaded processes use fewer resources.

Process: Each process operates independently of the others.
Thread: One thread can read, write or change another thread's data.
(11) Difference between User-Level & Kernel-Level Thread
Answer:
User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
User-level threads are implemented by a thread library at the user level; kernel-level threads are created with direct operating system support.
A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
With user-level threads, multi-threaded applications cannot take advantage of multiprocessing; with kernel-level threads, kernel routines themselves can be multithreaded.

(12) Discuss about client server communication via Remote Procedure Calls (RPC). [2020]
A remote procedure call is an inter process communication technique that is used for client-server
based applications. It is also known as a subroutine call or a function call.
A client has a request message that the RPC translates and sends to the server. This request may be a procedure or a function call to a remote server. When the server receives the request, it sends the required response back to the client. The client is blocked while the server is processing the call and only resumes execution after the server has finished.
The sequence of events in a remote procedure call are given as follows −
 The client stub is called by the client.
 The client stub makes a system call to send the message to the server and puts the parameters
in the message.
 The message is sent from the client to the server by the client’s operating system.
 The message is passed to the server stub by the server operating system.
 The parameters are removed from the message by the server stub.
 Then, the server procedure is called by the server stub.
A diagram that demonstrates this is as follows −
(13) Write short note on remote procedure call. [2018]
Solution:
Remote Procedure Call (RPC): Remote Procedure call is an inter process communication
technique. It is used for client-server applications. RPC mechanisms are used when a computer
program causes a procedure or subroutine to execute in a different address space, which is coded
as a normal procedure call without the programmer specifically coding the details for the remote
interaction. This procedure call also manages the low-level transport protocol, such as User Datagram Protocol (UDP) or Transmission Control Protocol/Internet Protocol (TCP/IP), used for carrying the message data between programs.
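A minimal RPC round trip can be demonstrated with Python's standard xmlrpc modules; the `add` procedure and the loopback address are made up for this demo, and real RPC systems differ in their transport and marshalling details.

```python
# Sketch: the server registers a procedure; the client calls it as if it
# were local, while stubs marshal the arguments over the network.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]          # ephemeral port chosen by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client stub marshals the parameters, the request crosses the
# transport, and the reply is unmarshalled back into a Python value.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)                # blocks until the server replies
server.shutdown()
```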

(14) Distinguish between “Light weight process” and “Heavy weight process”. [2018]
Solution:
Lightweight and heavyweight processes refer to the mechanics of a multi-processing system.
In a lightweight process, threads are used to divide up the workload. Here you would see one process executing in the OS (for this application or service).

This process would run one or more threads. Each of the threads in this process shares the same address space. Because threads share their address space, communication between the threads is simple and efficient. Each thread could be compared to a process in a heavyweight scenario.
In a heavyweight process, new processes are created to perform the work in parallel. Here (for the
same application or service), you would see multiple processes running. Each heavyweight
process contains its own address space. Communication between these processes would involve
additional communications mechanisms such as sockets or pipes.
The benefits of a lightweight process come from the conservation of resources. Since threads use
the same code section, data section and OS resources, less overall resources are used. The
drawback is now you have to ensure your system is thread-safe. You have to make sure the
threads don't step on each other. Fortunately, Java provides the necessary tools to allow you to do
this.
(15) Why do you think CPU scheduling is the basis of multi-programmed operating system?
(2017)
Answer: CPU scheduling is a process which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term scheduler
(or CPU scheduler). The scheduler selects from among the processes in memory that are ready to
execute, and allocates the CPU to one of them.
CPU scheduling is the basis of multiprogramming. Whenever a computer CPU becomes idle, the
operating system must select a process in the ready queue to be executed. One application of
priority queues in operating systems is scheduling jobs on a CPU.

(16) Describe the different CPU scheduling criteria. (2021,2016)


Or, Write down the main criteria of scheduling algorithm. (2015)
Answer: There are many different criterias to check when considering the "best" scheduling
algorithm, they are:
CPU Utilization
To make out the best use of CPU and not to waste any CPU cycle, CPU would be working most of
the time (Ideally 100% of the time). Considering a real system, CPU usage should range from 40%
(lightly loaded) to 90% (heavily loaded.)
Throughput
It is the total number of processes completed per unit time or rather say total amount of work
done in a unit of time. This may range from 10/second to 1/hour depending on the specific
processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e. The interval from time of
submission of the process to the time of completion of the process (Wall clock time).
Waiting Time
The sum of the periods a process has spent waiting in the ready queue to acquire control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.
Response Time
Amount of time it takes from when a request was submitted until the first response is produced.
Remember, it is the time till the first response and not the completion of process execution (final
response).
In general CPU utilization and Throughput are maximized and other factors are reduced for
proper optimization.
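The turnaround-time and waiting-time definitions above can be made concrete by computing them for a simple first-come-first-served schedule; the process list and time units below are arbitrary.

```python
# Sketch: turnaround and waiting time under FCFS.
# Each process is (arrival_time, burst_time).
def fcfs_metrics(procs):
    clock, metrics = 0, []
    for arrival, burst in sorted(procs, key=lambda p: p[0]):
        start = max(clock, arrival)
        clock = start + burst
        turnaround = clock - arrival      # completion minus submission
        waiting = turnaround - burst      # time spent in the ready queue
        metrics.append((turnaround, waiting))
    return metrics
```

For processes arriving at times 0, 1, 2 with bursts 5, 3, 2, this yields turnaround/waiting pairs of (5, 0), (7, 4) and (8, 6).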
(17) Distinguish between preemptive and non-preemptive CPU scheduling.
(2016,2015)
Basic: In preemptive scheduling, resources are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
Interrupt: A preemptively scheduled process can be interrupted in between; a non-preemptively scheduled process cannot be interrupted until it terminates or switches to the waiting state.
Starvation: Under preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. Under non-preemptive scheduling, if a process with a long burst time is running on the CPU, another process with a shorter burst time may starve.
Overhead: Preemptive scheduling has the overhead of scheduling the processes; non-preemptive scheduling does not have this overhead.
Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
Cost: Preemptive scheduling has costs associated with it; non-preemptive scheduling does not.

(18) Describe the difference among short-time, medium time and long time scheduling.
(2017)
Answer: Comparison among Scheduler
1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed is in between those of the short-term and long-term schedulers.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects from among those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
(19) What do you mean by dispatcher? (2021,2015,2013)
Answer: The dispatcher is the module that gives control of the CPU to the process selected by
the short-term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from where it left off last time.
The dispatcher should be as fast as possible, given that it is invoked during every process switch.
The time taken by the dispatcher to stop one process and start another process is known as
the Dispatch Latency. Dispatch Latency can be explained using the below figure:

(20) Discuss about multilevel queue scheduling. (2013)


Answer: A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues. The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type. Each queue has its
own scheduling algorithm.
For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by a Round Robin algorithm, while the background queue is scheduled by an FCFS algorithm.
In addition, there must be scheduling among the queues, which is commonly implemented as
fixed-priority preemptive scheduling.
For example: The foreground queue may have absolute priority over the background queue.
Let us consider an example of a multilevel queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
Each queue has absolute priority over lower-priority queues. No process in the batch queue, for
example, could run unless the queues for system processes, interactive processes, and interactive
editing processes were all empty. If an interactive editing process entered the ready queue while a
batch process was running, the batch process will be preempted.
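The fixed-priority selection among separate queues described above can be sketched as follows. The queue names and the pick_next() helper are illustrative only, not taken from any real scheduler:

```python
# Hypothetical sketch of fixed-priority selection among separate ready
# queues, mirroring the five-queue example above.
from collections import deque

queues = {
    "system":              deque(),
    "interactive":         deque(),
    "interactive_editing": deque(),
    "batch":               deque(),
    "student":             deque(),
}
priority_order = ["system", "interactive", "interactive_editing",
                  "batch", "student"]

def pick_next():
    """Return the next process: always from the highest-priority
    non-empty queue, so a batch job runs only when every queue
    above it is empty."""
    for name in priority_order:
        if queues[name]:
            return queues[name].popleft()
    return None  # all ready queues are empty

queues["batch"].append("B1")
queues["interactive"].append("I1")
print(pick_next())  # I1 runs first: interactive outranks batch
print(pick_next())  # B1
```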

(21) Comparison of Scheduling Algorithms


Answer:
First Come First Serve (FCFS)
Advantages:
 The FCFS algorithm doesn't include any complex logic; it just puts the process requests in a
queue and executes them one by one.
 Hence, FCFS is pretty simple and easy to implement.
 Eventually, every process will get a chance to run, so starvation doesn't occur.
Disadvantages:
 There is no option for pre-emption of a process. If a process is started, then CPU executes the
process until it ends.
 Because there is no pre-emption, if a process executes for a long time, the processes in the
back of the queue will have to wait for a long time before they get a chance to be executed.
Shortest Job First (SJF)
Advantages:
 According to the definition, short processes are executed first and then followed by longer
processes.
 The throughput is increased because more processes can be executed in less amount of
time.
Disadvantages:
 The time taken by a process must be known to the CPU beforehand, which is generally not possible.
 Longer processes will have more waiting time, and may eventually suffer starvation.
Note: Preemptive Shortest Job First scheduling will have the same advantages and disadvantages
as those for SJF.
Round Robin (RR)
Advantages:
 Each process is served by the CPU for a fixed time quantum, so all processes are given the
same priority.
 Starvation doesn't occur because for each round robin cycle, every process is given a fixed
time to execute. No process is left behind.
Disadvantages:
 The throughput in RR largely depends on the choice of the length of the time quantum. If
time quantum is longer than needed, it tends to exhibit the same behavior as FCFS.
 If time quantum is shorter than needed, the number of times the CPU switches from one
process to another increases. This leads to a decrease in CPU efficiency.

Priority based Scheduling


Advantages of Priority Scheduling:
 The priority of a process can be selected based on memory requirement, time requirement
or user preference. For example, in a high-end game the process that updates the screen is
given higher priority so as to achieve better graphics performance.
Disadvantages:
 A second scheduling algorithm is required to schedule the processes which have same
priority.
 In preemptive priority scheduling, a higher priority process can execute ahead of an
already executing lower priority process. If lower priority process keeps waiting for
higher priority processes, starvation occurs.

(22) Consider the following set of processes, with the length of the CPU –burst time given
in milliseconds : (2016,2015,2009)
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a non-
preemptive priority and RR (quantum=1) scheduling.
(ii) What is the turnaround time of each process for each of the scheduling algorithms in part (i)?
(iii) What is the waiting time of each process for each of the scheduling algorithm in part (i) ?
Answer: (i) Gantt charts
a) FCFS (first come first served) scheduling:

P1 P2 P3 P4 P5

0 10 11 13 14 19
b) SJF (Shortest Job first)

P2 P4 P3 P5 P1

0 1 2 4 9 19

c) Non-preemptive priority

P2 P5 P1 P3 P4

0 1 6 16 18 19

d) Round Robin (RR) scheduling (quantum=1)

P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19

(ii) Turnaround time


Turnaround time = Burst time+ waiting time

a) FCFS(first come first served) scheduling:


Process Turnaround time = Burst time+ waiting time
P1 10+0 =10 ms
P2 1 +10 = 11 ms
P3 2+11 = 13 ms
P4 1+ 13=14 ms

P5 5+ 14=19 ms

Average turnaround time = (10+11+13+14+19)/5=13.4 ms

b) SJF(Shortest Job first)


Process Turnaround time = Burst time+ waiting time

P1 10+9 =19 ms

P2 1 +0 = 1 ms

P3 2+2 = 4 ms

P4 1+ 1=2 ms

P5 5+ 4=9 ms
Average turnaround time = (19+1+4+2+9)/5=7 ms
c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time

P1 10+6 =16 ms

P2 1 +0 = 1 ms

P3 2+16 = 18 ms

P4 1+ 18=19 ms

P5 5+ 1=6 ms

Average turnaround time = (16+1+18+19+6)/5=12 ms

d) Round Robin(RR) scheduling (quantum=1)


Process Turnaround time = Burst time+ waiting time

P1 10+9 =19 ms

P2 1 +1 = 2 ms

P3 2+5 = 7 ms

P4 1+ 3=4 ms

P5 5+ 9=14 ms

Average turnaround time = (19+2+7+4+14)/5=9.2 ms

(iii) Waiting time


Waiting Time: Service Time - Arrival Time
Here, arrival time is 0 ms.

a) FCFS(first come first served) scheduling:


Process Wait Time : Service Time - Arrival Time
P1 0 - 0 = 0 ms
P2 10 - 0 = 10 ms
P3 11 - 0 = 11 ms
P4 13 - 0 = 13 ms
P5 14 - 0 = 14 ms
Average Waiting Time: (0+10+11+13+14) / 5 = 9.6 ms
b) SJF(Shortest Job first)
Process Wait Time : Service Time - Arrival Time
P1 9 - 0 = 9 ms
P2 0 - 0 = 0 ms
P3 2 - 0 = 2 ms
P4 1 - 0 = 1 ms
P5 4 - 0 = 4 ms
Average Waiting Time: (9+0+2+1+4) / 5 = 3.2 ms

c) Non-preemptive priority
Process Wait Time : Service Time - Arrival Time
P1 6 - 0 = 6 ms
P2 0 - 0 = 0 ms
P3 16 - 0 = 16 ms
P4 18 - 0 = 18 ms
P5 1 - 0 = 1 ms
Average Waiting Time: (6+0+16+18+1) / 5 = 8.2 ms

d) Round Robin(RR) scheduling (quantum=1)


Process Wait Time : Service Time - Arrival Time
P1 (5-1)+(8-6)+(10-9)+(12-11)+(14-13)=4+2+1+1+1=9 ms
P2 1 - 0 = 1 ms
P3 (2-0)+(6 – 3) = 5 ms
P4 3 - 0 = 3 ms
P5 (4-0)+(7-5)+(9-8)+(11-10)+(13-12)=4+2+1+1+1=9 ms

Average Waiting Time: (9+1+5+3+9) / 5 = 5.4 ms
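The round-robin figures above can be cross-checked with a short simulation. The simulate_rr helper is our own sketch, assuming (as in the question) that all processes arrive at time 0:

```python
from collections import deque

def simulate_rr(bursts, quantum=1):
    """Round-robin simulation for processes that all arrive at time 0.
    Returns {name: (waiting_time, turnaround_time)}."""
    remaining = dict(bursts)
    finish = {}
    queue = deque(bursts)          # FIFO ready queue, in arrival order
    t = 0
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t          # turnaround = finish - arrival(0)
        else:
            queue.append(p)        # preempted: back of the queue
    # waiting time = turnaround - burst
    return {p: (finish[p] - bursts[p], finish[p]) for p in bursts}

# The five processes from part (i), quantum = 1:
result = simulate_rr({"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5})
print(result["P1"])  # (9, 19)
```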

(23) Consider the following set of processes, with the length of the CPU –burst time given
in milliseconds : (2021,2017,2012,2010)
Process Burst time Priority
P1 8 3
P2 3 1
P3 2 3
P4 1 4
P5 4 2
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
i. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a no
preemptive priority and RR (quantum=1) scheduling.
ii. What is the turnaround time of each process for each of the scheduling algorithms in part (i)?
iii. What is the waiting time of each process for each of the scheduling algorithm in part (i) ?
Answer: Gantt charts

a) FCFS(first come first served) scheduling:

P1 P2 P3 P4 P5

0 8 11 13 14 18

b) SJF(Shortest Job first)

P4 P3 P2 P5 P1

0 1 3 6 10 18

c) Non-preemptive priority

P2 P5 P1 P3 P4

0 3 7 15 17 18

d) Round Robin(RR) scheduling (quantum=1)


P1 P2 P3 P4 P5 P1 P2 P3 P5 P1 P2 P5 P1 P5 P1 P1 P1 P1
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

(ii) Turnaround time


Turnaround time = Burst time+ waiting time
a) FCFS(first come first served) scheduling:
Process Turnaround time = Burst time+ waiting time
P1 8+0 =8 ms
P2 3+8 = 11 ms
P3 2+11 = 13 ms
P4 1+ 13=14 ms
P5 4+ 14=18 ms
Average turnaround time = (8+11+13+14+18)/5=12.8 ms
b) SJF(Shortest Job first)
Process Turnaround time = Burst time+ waiting time
P1 8+10 =18 ms
P2 3+3= 6 ms
P3 2+1 = 3 ms
P4 1+ 0=1 ms
P5 4+ 6=10 ms

Average turnaround time = (18+6+3+1+10)/5=7.6 ms

c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time
P1 8+7 =15 ms
P2 3+0 = 3 ms
P3 2+15 = 17 ms
P4 1+ 17=18 ms
P5 4+ 3=7 ms

Average turnaround time = (15+3+17+18+7)/5=12 ms

d) Round Robin(RR) scheduling (quantum=1)


Process Turnaround time = Burst time+ waiting time
P1 8+10 =18 ms
P2 3+8= 11 ms
P3 2+6 = 8 ms
P4 1+ 3=4 ms
P5 4+ 10=14 ms
Average turnaround time = (18+11+8+4+14)/5=11 ms

(iii) Waiting time


Waiting Time: Service Time - Arrival Time
Here, arrival time is 0 ms.
a) FCFS(first come first served) scheduling:
Process Wait Time : Service Time - Arrival Time
P1 0 - 0 = 0 ms
P2 8 - 0 = 8 ms
P3 11 - 0 = 11 ms
P4 13 - 0 = 13 ms
P5 14 - 0 = 14 ms

Average Waiting Time: (0+8+11+13+14) / 5 = 9.2 ms


b) SJF(Shortest Job first)
Process Wait Time : Service Time - Arrival Time
P1 10 - 0 = 10 ms
P2 3 - 0 = 3 ms
P3 1 - 0 = 1 ms
P4 0 - 0 = 0 ms
P5 6 - 0 = 6 ms

Average Waiting Time: (10+3+1+0+6) / 5 = 4.0 ms

c) Non-preemptive priority

Process Wait Time : Service Time - Arrival Time


P1 7 - 0 = 7 ms
P2 0 - 0 = 0 ms
P3 15 - 0 = 15 ms
P4 17 - 0 = 17 ms
P5 3 - 0 = 3 ms
Average Waiting Time: (7+0+15+17+3) / 5 = 8.4 ms

d) Round Robin(RR) scheduling (quantum=1)


Process Wait Time : Service Time - Arrival Time
P1 (5-1)+(9-6)+(12-10)+(14-13)=4+3+2+1=10 ms

P2 (1 – 0)+(6-2)+(10-7) = 1+4+3=8 ms

P3 (2-0)+(7 – 3) = 2+4=6 ms

P4 3 - 0 = 3 ms

P5 (4-0)+(8-5)+(11-9)+(13-12)=4+3+2+1=10 ms

Average Waiting Time: (10+8+6+3+10) / 5 = 7.4 ms

(24) Consider the following set of processes with the length of the CPU burst given in
milliseconds: (2014)
Process Burst time Priority
P1 2 2
P2 1 1
P3 8 4
P4 4 2
P5 5 3
The processes are assumed to have arrived in the order:
P1,P2,P3,P4,P5, all at time 0.
(i) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF , a
non-preemptive priority(a large number implies a higher priority) and RR (quantum=2)
scheduling.
(ii) What is the turnaround time of each process for each of the scheduling algorithms ?
(iii) What is the waiting time of each process for each of the scheduling algorithms?
(iv) Which of the algorithms results in the minimum average waiting time (over all processes)?
Answer: Gantt charts
a) FCFS(first come first served) scheduling:

P1 P2 P3 P4 P5

0 2 3 11 15 20

b) SJF(Shortest Job first)

P2 P1 P4 P5 P3

0 1 3 7 12 20

c) Non-preemptive priority

P3 P5 P1 P4 P2

0 8 13 15 19 20

d) Round Robin(RR) scheduling (quantum=2)

P1 P2 P3 P4 P5 P3 P4 P5 P3 P5 P3
0 2 3 5 7 9 11 13 15 17 18 20
(ii) Turnaround time
Turnaround time = Burst time+ waiting time
a) FCFS(first come first served) scheduling:
Process Turnaround time = Burst time+ waiting time
P1 2+0 =2 ms

P2 1+2 = 3 ms
P3 8+3 = 11 ms
P4 4+ 11=15 ms
P5 5+ 15=20 ms
Average turnaround time = (2+3+11+15+20)/5=10.2 ms

b) SJF(Shortest Job first)


Process Turnaround time = Burst time+ waiting time
P1 2+1 =3 ms
P2 1+0 = 1 ms
P3 8+12= 20 ms
P4 4+ 3=7 ms
P5 5+ 7=12 ms
Average turnaround time = (3+1+20+7+12)/5=8.6 ms

c) Non-preemptive priority
Process Turnaround time = Burst time+ waiting time
P1 2+13 =15 ms
P2 1+19 = 20 ms
P3 8+0= 8 ms
P4 4+ 15=19 ms
P5 5+ 8=13 ms
Average turnaround time = (15+20+8+19+13)/5=15 ms

d) Round Robin(RR) scheduling (quantum=2)

Process Turnaround time = Burst time+ waiting time


P1 2+0 =2 ms
P2 1+2 = 3 ms
P3 8+12= 20 ms
P4 4+ 9=13 ms
P5 5+ 13= 18 ms

Average turnaround time = (2+3+20+13+18)/5=11.2 ms

(iii) Waiting time


Waiting Time : Service Time - Arrival Time
Here, arrival time is 0 ms.
a) FCFS(first come first served) scheduling:
Process Wait Time : Service Time - Arrival Time
P1 0 - 0 = 0 ms
P2 2- 0 = 2 ms
P3 3 - 0 = 3 ms
P4 11 - 0 = 11 ms
P5 15 - 0 = 15 ms
Average Waiting Time: (0+2+3+11+15) / 5 = 6.2 ms
b) SJF(Shortest Job first)

Process Wait Time : Service Time - Arrival Time


P1 1 - 0 = 1 ms
P2 0 - 0 = 0 ms
P3 12 - 0 = 12 ms
P4 3- 0 = 3 ms
P5 7 - 0 = 7 ms

Average Waiting Time: (1+0+12+3+7) / 5 = 4.6 ms


c) Non-preemptive priority

Process Wait Time : Service Time - Arrival Time


P1 13 - 0 = 13 ms
P2 19 - 0 = 19 ms
P3 0 - 0 = 0 ms
P4 15- 0 = 15 ms
P5 8 - 0 = 8 ms

Average Waiting Time: (13+19+0+15+8) / 5 = 11 ms

d) Round Robin(RR) scheduling (quantum=2)

P1 P2 P3 P4 P5 P3 P4 P5 P3 P5 P3
0 2 3 5 7 9 11 13 15 17 18 20

Process Wait Time : Service Time - Arrival Time


P1 0-0=0 ms
P2 (2-0)=2ms
P3 (3-0)+(9-5)+(15-11)+(18-17) = 3+4+4+1=12 ms
P4 (5-0)+(11-7) =5+4= 9 ms
P5 (7-0)+(13-9)+ (17-15)=7+4+2=13 ms

Average Waiting Time: (0+2+12+9+13) / 5 = 7.2 ms


(iv) The minimum average waiting time
SJF scheduling gives the minimum average waiting time, 4.6 ms.

11. Define Throughput & Waiting time. [2018]


Solution:
Throughput: Imagine a post office with multiple counters serving people. Throughput in this case
is the number of people served in an hour (for example). As you can see, throughput gives us a
sense of how efficient the post office is even though different customers take different times
depending on their specific needs. Similarly, in OS context, throughput refers to the number of
completed processes in a given amount of time. Let us take an example: assume we have 3
processes that run in the order they arrive. The first process (P1) takes 5 seconds to finish,
the second process (P2) takes 15 seconds and the third process (P3) takes 10 seconds.
Throughput in this case is 3 processes in (5 + 15 + 10) = 30 seconds, i.e. on average one
process is completed every 10 seconds.
Waiting time: This parameter refers to the amount of time a given ready process sits in the
waiting queue before getting the attention of the CPU. Let us compute the waiting time for our
example (Order of arrival P1, P2, P3 each taking 5, 15, 10 seconds). The waiting time for P1 is 0
because it is the first process that has arrived. P2 waits 5 seconds and P3 waits 5 + 15 = 20
seconds. So the average waiting time is (0 + 5 + 20)/3 = 8.3 seconds.
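The arithmetic in this example can be reproduced directly. The variable names are illustrative; note that a throughput of 0.1 processes per second is the same as "one process every 10 seconds":

```python
# Sketch of the post-office example: three FCFS jobs arriving in order,
# with service times 5, 15, 10 seconds (values from the text above).
service = [5, 15, 10]

completion = 0
waits = []
for s in service:
    waits.append(completion)   # a job waits while all earlier jobs run
    completion += s

throughput = len(service) / completion   # processes finished per second
avg_wait = sum(waits) / len(waits)
print(waits, round(avg_wait, 1))  # [0, 5, 20] 8.3
```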

12. Illustrate the multilevel feedback queue scheduling. [2016]


Solution:
Multilevel feedback queue scheduling: It is an enhancement of multilevel queue scheduling
in which processes can move between the queues. In this approach, the ready queue is
partitioned into multiple queues of different priorities. The system assigns processes to
queues based on their CPU-burst characteristics. If a process consumes too much CPU time, it
is moved to a lower-priority queue. This favors I/O-bound jobs, giving good input/output
device utilization. A technique called aging promotes a lower-priority process to the next
higher-priority queue after a suitable interval of time.
In the figure, the queues are displayed from top to bottom in order of decreasing priority.
The top-priority queue has the smallest CPU-time quantum. After a process from the top queue
exhausts its time quantum on the CPU, it is placed on the next lower queue. A process in a
lower queue is serviced only when all queues above it are empty.

Advantages: A process that waits too long in a lower priority queue may be moved to a higher
priority queue.
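The demotion rule described above can be sketched as follows. The number of levels and the quantum values are invented for illustration, not taken from the text:

```python
# Illustrative sketch of the feedback rule: a process that uses up its
# whole quantum without finishing is demoted one level.
NUM_LEVELS = 3
quanta = [1, 2, 4]   # the top queue has the smallest quantum

def run_once(level, cpu_needed):
    """Run a process for one quantum at `level`;
    return (remaining_cpu, new_level)."""
    q = quanta[level]
    used = min(q, cpu_needed)
    remaining = cpu_needed - used
    if remaining > 0 and used == q:              # exhausted its quantum:
        level = min(level + 1, NUM_LEVELS - 1)   # move down one queue
    return remaining, level

# A CPU-bound job entering at the top drifts downward:
rem, lvl = 7, 0
while rem:
    rem, lvl = run_once(lvl, rem)
print(lvl)  # 2 — the job ends up in the bottom queue
```

A short job that finishes within its quantum is never demoted, which is exactly how the scheme favors I/O-bound work.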
13. What are the purpose of disk scheduling? [2013]
Solution:
Disk Scheduling: As we know, a process needs two types of time, CPU time and I/O time. For I/O, it
requests the operating system to access the disk.
However, the operating system must be fair enough to satisfy each request and at the same time,
operating system must maintain the efficiency and speed of process execution.
The technique that operating system uses to determine the request which is to be satisfied next is
called disk scheduling.
Goal of Disk Scheduling Algorithm:
o Fairness
o High throughput
o Minimal traveling head time
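As a small illustration of the "minimal traveling head time" goal, the following sketch totals the seek distance when requests are served in plain FCFS order; the cylinder numbers are a commonly used textbook example, not from this text:

```python
# Total head movement for a disk request queue served in arrival order.
def fcfs_seek_distance(head, requests):
    """Total cylinders the head travels serving requests in FCFS order."""
    total = 0
    for r in requests:
        total += abs(r - head)   # seek from current position to request
        head = r
    return total

print(fcfs_seek_distance(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 640
```

Better algorithms (SSTF, SCAN, and so on) reduce this total by reordering the queue, which is exactly what disk scheduling is about.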

CHAPTER 4
PROCESS SYNCHRONIZATION
(1) Explain dining philosopher problem. (2014)
Answer: The Dining Philosopher Problem – K philosophers are seated around a circular table
with one chopstick between each pair of philosophers. A philosopher may eat only if he can
pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its
adjacent philosophers, but not by both at once.

The problem was designed to illustrate the challenges of avoiding deadlock, a system state in
which no progress is possible. To see that a proper solution to this problem is not obvious,
consider a proposal in which each philosopher is instructed to behave as follows:
 think until the left chopstick is available; when it is, pick it up;
 think until the right chopstick is available; when it is, pick it up;
 when both chopsticks are held, eat for a fixed amount of time;
 then, put the right chopstick down;
 then, put the left chopstick down;
 Repeat from the beginning.
This attempted solution fails because it allows the system to reach a deadlock state, in which no
progress is possible. This is a state in which each philosopher has picked up the chopstick to the
left, and is waiting for the chopstick to the right to become available, or vice versa. With the given
instructions, this state can be reached, and when it is reached, the philosophers will eternally wait
for each other to release a chopstick.
Mutual exclusion is the basic idea of the problem; the dining philosophers create a generic and
abstract scenario useful for explaining issues of this type. The failures these philosophers may
experience are analogous to the difficulties that arise in real computer programming when
multiple programs need exclusive access to shared resources.
(2) Describe Dinning—philosopher problem. How this can be solved by using
semaphore? (2021,2013)
Answer: Semaphore Solution to Dining Philosopher –
Each philosopher i is represented by the following pseudocode:
process Pi
while true do
{
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
}
A philosopher has three states: THINKING, HUNGRY and EATING. Two kinds of semaphores are
used: a mutex and a semaphore array for the philosophers. The mutex ensures that no two
philosophers execute pickup or putdown at the same time; the array controls the behavior of
each philosopher. However, semaphores can still lead to deadlock through programming errors.
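A runnable sketch of this scheme, assuming one binary semaphore per chopstick plus a mutex. In this simplified version the mutex guards only PICKUP, which is already enough to rule out deadlock; the philosopher count and iteration count are arbitrary:

```python
import threading

N, EATS = 5, 3
chopstick = [threading.Semaphore(1) for _ in range(N)]
mutex = threading.Lock()          # serializes PICKUP, as in the text
eaten = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    for _ in range(EATS):
        # THINK (omitted)
        with mutex:               # only one philosopher picks up at a time
            chopstick[left].acquire()
            chopstick[right].acquire()
        # EAT
        eaten[i] += 1
        chopstick[left].release()   # PUTDOWN needs no mutex: releases
        chopstick[right].release()  # never block, so progress is assured

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(eaten)  # every philosopher ate EATS times — no deadlock
```

Because chopsticks are only acquired while holding the mutex, and putting them down never blocks, the circular wait of the naive solution cannot form (at the cost of reduced concurrency).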

(3) Discuss the critical section problem with its solution. (2014)
Or, Figure out the requirements to solve the critical-section problem. (2010)
Or, Write down the requirements that should satisfy to solve the critical- section (2008)
Answer: A Critical Section is a code segment that accesses shared variables and has to be
executed as an atomic action. It means that in a group of cooperating processes, at a given point of
time, only one process must be executing its critical section. If any other process also wants to
execute its critical section, it must wait until the first one finishes.

Solution to Critical Section Problem


A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a given
point of time.
2. Progress
If no process is in its critical section, and if one or more threads want to execute their critical
section then any one of these threads must be allowed to get into its critical section.
3. Bounded Waiting
After a process makes a request for getting into its critical section, there is a limit for how many
other processes can get into their critical section, before this process's request is granted. So after
the limit is reached, system must grant the process permission to get into its critical section.
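These three requirements are what any locking primitive must provide. A minimal sketch (not from the text) using a lock as the entry/exit section around a shared counter; without the lock, concurrent increments could be lost:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:            # entry section: mutual exclusion
            counter += 1      # critical section: the shared variable
        # exit section: lock released; remainder section follows

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400000 — no update is lost
```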
4. What do you mean by process synchronization? [2018]
Solution:
Process Synchronization: Process synchronization means coordinating processes that share
system resources so that concurrent access to shared data is handled properly, minimizing the
chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure
synchronized execution of cooperating processes.
Process synchronization was introduced to handle problems that arise when multiple processes
execute concurrently. Some of the resulting problems are discussed below.

5. Define semaphore. Write down the implementation of semaphore. [2018]


Solution:
Semaphores are integer variables used to solve the critical-section problem by means of two
atomic operations, wait and signal, which are used for process synchronization.
Types of Semaphores: There are two main types of semaphores i.e. counting semaphores
and binary semaphores. Details about these are given as follows −
 Counting Semaphores: These are integer value semaphores and have an unrestricted value
domain. These semaphores are used to coordinate the resource access, where the semaphore
count is the number of available resources. If the resources are added, semaphore count
automatically incremented and if the resources are removed, the count is decremented.
 Binary Semaphores: The binary semaphores are like counting semaphores but their value is
restricted to 0 and 1. The wait operation only works when the semaphore is 1 and the signal
operation succeeds when semaphore is 0. It is sometimes easier to implement binary
semaphores than counting semaphores.
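One possible implementation can be sketched by building wait and signal from a lock and a condition variable, so that wait blocks instead of busy-waiting. The class and method names here are ours, not a standard API:

```python
import threading

class CountingSemaphore:
    """Sketch of a counting semaphore with wait/signal operations."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):               # also called P / acquire
        with self._cond:
            while self._value == 0:       # block rather than busy-wait
                self._cond.wait()
            self._value -= 1

    def signal(self):             # also called V / release
        with self._cond:
            self._value += 1
            self._cond.notify()   # wake one waiting process, if any

s = CountingSemaphore(2)          # two instances of some resource
s.wait(); s.wait()                # both instances taken
s.signal()                        # one instance returned
s.wait()                          # succeeds again without blocking
print("ok")
```

A binary semaphore is just the special case constructed with value=1.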

6. What do you understand about ‘IPC’ (2015)


Inter-process communication (IPC) is a mechanism that allows processes to communicate with
each other and synchronize their actions. The communication between these processes can be
seen as a method of cooperation between them. Processes can communicate with each other in
these two ways:
1. Shared Memory
2. Message passing
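The message-passing style can be sketched with a shared queue. Threads stand in for processes here purely to keep the example self-contained; real OS message passing crosses process boundaries through a kernel-managed channel:

```python
import threading, queue

mailbox = queue.Queue()           # plays the role of the kernel channel

def producer():
    for i in range(3):
        mailbox.put(f"msg-{i}")   # send()
    mailbox.put(None)             # sentinel: no more messages

received = []
def consumer():
    while True:
        msg = mailbox.get()       # receive() blocks until data arrives
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```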

7. Write the advantages of Inter Process Communication (IPC). (2014,2012,2010)


Answer: Advantages:
1. Simplicity: the kernel does channel management and synchronization.
2. System calls are needed only for setup; data copies are potentially reduced (but not eliminated).
3. Information sharing: allows concurrent access to the same information.
4. Computation speedup: a task can be broken into subtasks, each executing in parallel with
the others, which speeds up execution.
5. Modularity: the system functions can be divided into separate processes or threads.
6. Convenience: many tasks can work at the same time.

CHAPTER 5
RESOURCE MANAGEMENT
[DEADLOCK]

1) What is deadlock? (2017,2016,2015,2014,2012,2010,2008)


Answer: Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider two trains coming toward each other on the same single track: once they are in front
of each other, neither can move. A similar situation occurs in operating systems when two or
more processes hold some resources and wait for resources held by the other(s).
For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting for resource 1.

2) What do you mean by starvation? (2015)


Answer: Starvation, or indefinite blocking, is a phenomenon associated with priority
scheduling algorithms, in which a process ready to run can wait indefinitely because of its
low priority. In a heavily loaded computer system, a steady stream of higher-priority
processes can prevent a low-priority process from ever getting the CPU.

3) Describe the necessary conditions for deadlock. (2016,2014,2008)


Or, Briefly explain four necessary conditions for deadlock. (2012,2010)
Answer: Deadlocks can be avoided by preventing at least one of the four conditions below,
because all four conditions must hold simultaneously for a deadlock to occur.
1. Mutual Exclusion
Shared resources such as read-only files do not lead to deadlocks, but resources such as
printers and tape drives require exclusive access by a single process.
2. Hold and Wait
In this condition processes must be prevented from holding one or more resources while
simultaneously waiting for one or more others.
3. No Preemption
Preemption of process resource allocations can avoid the condition of deadlocks, where ever
possible.
4. Circular Wait
Circular wait can be avoided if we number all resources, and require that processes request
resources only in strictly increasing(or decreasing) order.

4) Write down at least two real example of deadlock. (2012)


Answer: Two real examples of deadlock are:
1. You can't get the job without having the (professional) experience and you can't get the
experience without having a job
2. If you stay up all night to study, you will be too tired and unfocused on the classes the next day
and you'll have to stay up another night to make up for it.

5) Is it possible to have a deadlock involving only one single process? Explain your
answer. (2016)
Answer: A deadlock situation can only arise if the following four conditions hold simultaneously
in a system:
 Mutual Exclusion
 Hold and Wait
 No Preemption
 Circular-wait
It is impossible to have a circular wait when there is only one single-threaded process:
there is no second process to form a circle with the first one, and a process cannot hold a
resource while waiting for another resource that it itself holds.
So it is not possible to have a deadlock involving only one process.

6) What are the different methods for handling deadlock? (2016,2008)


Answer: Deadlocks can be handled by prevention, avoidance, detection and recovery, or by
ignoring them. Once a deadlock has occurred, the following three strategies can be used to
remove it.
1. Preemption
We can take a resource from one process and give it to other. This will resolve the deadlock
situation, but sometimes it does causes problems.
2. Rollback
In situations where deadlock is a real possibility, the system can periodically make a record of the
state of each process and when deadlock occurs, roll everything back to the last checkpoint, and
restart, but allocating resources differently so that deadlock does not occur.
3. Kill one or more processes
This is the simplest way, but it works.

7) Explain the banker’s algorithm for deadlock avoidance. (2021,2015)


Answer: The banker’s algorithm is a resource-allocation and deadlock-avoidance algorithm
that tests for safety by simulating allocation up to the predetermined maximum possible
amounts of all resources, and then makes a safe-state check for possible future activity
before deciding whether an allocation should be allowed to continue.
Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.
Available :
 It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj
Max :
 It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :
 It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource type Rj
Need :
 It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
 Need [ i, j ] = k means process Pi may need ‘k’ more instances of resource type Rj to complete its task.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the
additional resources that process Pi may still request to complete its task.
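The safety check at the heart of the algorithm can be sketched on the Available/Allocation/Need structures defined above. The matrices below are a commonly used illustration, not taken from this text:

```python
# Safety algorithm: repeatedly pick a process whose remaining Need fits
# in Work; when it finishes, it releases everything it holds.
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    order = []                            # safe sequence, if one exists
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                for j, a in enumerate(allocation[i]):
                    work[j] += a          # Pi finishes, releases resources
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return all(finish), order     # safe iff everyone could finish

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[m - a for m, a in zip(mr, ar)]
        for mr, ar in zip(maximum, allocation)]
safe, order = is_safe([3,3,2], allocation, need)
print(safe, order)  # True [1, 3, 4, 0, 2]
```

The same routine reports unsafe states: with nothing available and a nonzero Need, no process can be retired and is_safe returns False.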

8) Describe a resource-allocation graph with appropriate diagram that can be used to


describe deadlock more precisely. (2021,2017)
Answer: A resource-allocation graph (RAG) describes the state of the system in terms of
processes and resources: how many resources are available, how many are allocated, and what
each process is requesting. Everything can be represented in a diagram. One advantage of the
diagram is that sometimes a deadlock can be seen directly from the RAG, which might not be
apparent from a table. Tables are better when the system contains many processes and
resources, while the graph is better when the system contains few.
Like any graph, a RAG contains vertices and edges. RAG vertices are of two types:
1. Process vertex – every process is represented as a process vertex, generally drawn as a circle.
2. Resource vertex – every resource is represented as a resource vertex, drawn as a box. It is of two types:
 Single-instance resource – a box containing one dot; the number of dots indicates how many
instances of that resource type are present.
 Multi-instance resource – a box containing several dots.

Now coming to the edges of the RAG. There are two types of edges:
1. Assign edge – if a resource is already assigned to a process, the edge is called an assign edge.
2. Request edge – if a process wants a resource in the future to complete its execution, the
edge is called a request edge.

So, if a process is using a resource, an arrow is drawn from the resource node to the process node.
If a process is requesting a resource, an arrow is drawn from the process node to the resource
node.
Example 1 (Single instances RAG) –

If there is a cycle in the Resource Allocation Graph and each resource in the cycle provides only
one instance, then the processes will be in deadlock. For example, if process P1 holds resource R1,
process P2 holds resource R2 and process P1 is waiting for R2 and process P2 is waiting for R1,
then process P1 and process P2 will be in deadlock.

Here’s another example, that shows Processes P1 and P2 acquiring resources R1 and R2 while
process P3 is waiting to acquire both resources. In this example, there is no deadlock because
there is no circular dependency.
So a cycle in a single-instance resource-type graph is a sufficient condition for deadlock.
Example 2 (Multi-instances RAG) –

From the above example, it is not possible to say whether the RAG is in a safe or an unsafe
state. So, to see the state of this RAG, let us construct the allocation matrix and the
request matrix.

 The total number of processes is three: P1, P2 & P3, and the total number of resources is
two: R1 & R2.
Allocation matrix –
 To construct the allocation matrix, go to each resource and see to which process it is
allocated.
 R1 is allocated to P1, so write 1 in the allocation matrix; similarly, R2 is allocated to P2
as well as P3. Write 0 for the remaining elements.
Request matrix –
 To find the request matrix, go to each process and look at its outgoing edges.
 P1 is requesting resource R2, so write 1 in the matrix; similarly, P2 is requesting R1.
Write 0 for the remaining elements.
So now the available resource vector is (0, 0).
Checking deadlock (safe or not) –

So, there is no deadlock in this RAG. Even though there is a cycle, there is still no
deadlock. Therefore, in the multi-instance case a cycle is not a sufficient condition for
deadlock.

The above example is the same as the previous one, except that process P3 is also requesting
resource R1.
So the table becomes as shown in below.

So the available resource vector is (0, 0), but the requirements are (0, 1), (1, 0) and
(1, 0). No request can be fulfilled, so the system is in deadlock.
Therefore, not every cycle in a multi-instance resource-type graph is a deadlock, but if
there is a deadlock, there must be a cycle. So, for a RAG with multi-instance resource types,
a cycle is a necessary condition for deadlock, but not a sufficient one.
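The two multi-instance examples can be run through the standard detection loop (the same shape as the Banker's safety check): repeatedly retire any process whose current request can be met, and whatever remains is deadlocked. The matrices below transcribe the tables described above:

```python
def deadlocked(available, allocation, request):
    """Return the list of processes stuck in deadlock (empty if none)."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(r <= w for r, w in zip(request[i], work)):
                for j, a in enumerate(allocation[i]):
                    work[j] += a          # Pi can finish: release holdings
                finish[i] = True
                progressed = True
    return [i for i, done in enumerate(finish) if not done]

allocation = [[1,0],[0,1],[0,1]]   # P1 holds R1; P2 and P3 each hold R2
first  = [[0,1],[1,0],[0,0]]       # requests in the first example
second = [[0,1],[1,0],[1,0]]       # second example: P3 also asks for R1

print(deadlocked([0,0], allocation, first))   # [] — cycle, no deadlock
print(deadlocked([0,0], allocation, second))  # [0, 1, 2] — deadlock
```

In the first case P3 (requesting nothing) finishes and frees an R2 instance, which unwinds the cycle; in the second case no process can proceed.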

9) How can you ensure that Hold and Wait and circular wait never occur in deadlock
system? (2017)
Answer: Hold and Wait
 To prevent this condition processes must be prevented from holding one or more resources
while simultaneously waiting for one or more others. There are several possibilities for this:
 Require that all processes request all resources at one time. This can be wasteful of system
resources if a process needs one resource early in its execution and doesn't need some other
resource until much later.
 Require that processes holding resources must release them before requesting new resources,
and then re-acquire the released resources along with the new ones in a single new request.
This can be a problem if a process has partially completed an operation using a resource and
then fails to get it re-allocated after releasing it.
 Either of the methods described above can lead to starvation if a process requires one or more
popular resources.
 Allocate all required resources to the process before the start of its execution; this way the
hold-and-wait condition is eliminated, but it leads to low device utilization. For example, if a
process requires the printer at a later time and we have allocated the printer before the start
of its execution, the printer will remain blocked until the process has completed its execution.

Eliminate Circular Wait
 One way to avoid circular wait is to number all resources, and to require that processes request
resources only in strictly increasing ( or decreasing ) order.
 In other words, in order to request resource Rj, a process must first release all Ri such that i >=
j.
 One big challenge in this scheme is determining the relative ordering of the different resources.
Each resource is assigned a number, and a process may request resources only in increasing order
of that numbering.
For example, if process P1 has been allocated R5, a later request by P1 for R4 or R3 (numbered
lower than R5) will not be granted; only requests for resources numbered higher than R5 will be.
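The ordering rule above can be sketched in a few lines of Python (a minimal illustration, not from the original text; the resource numbers and lock objects are invented for the example):

```python
import threading

# Hypothetical numbered resources: every lock gets a global number, and
# all threads must acquire locks in increasing numeric order only.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(resource_ids):
    """Acquire the requested locks in increasing order of their numbers,
    so no circular wait can form among threads using this function."""
    for rid in sorted(resource_ids):
        locks[rid].acquire()

def release_all(resource_ids):
    # Release in reverse order (not required for correctness, but tidy).
    for rid in sorted(resource_ids, reverse=True):
        locks[rid].release()

acquire_in_order([3, 1])   # internally locks R1 before R3
release_all([3, 1])
```

Because every thread locks R1 before R3, no thread can ever hold R3 while waiting for R1, so the cycle needed for deadlock cannot arise.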

10) What is mutual exclusion? (2015)


Answer: Mutual exclusion is a property of concurrency control, instituted to prevent
race conditions; it is the requirement that one thread of execution never
enter its critical section at the same time that another concurrent thread of execution enters its
own critical section.
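As a concrete illustration (a minimal Python sketch, not part of the original answer), a lock enforces mutual exclusion on a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:          # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000; without the lock, updates could be lost
```

The `with lock:` block is the critical section: while one thread holds the lock, every other thread trying to enter blocks until it is released.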

11) Explain the solutions for mutual exclusion. (2017)


Answer: If no resource were ever assigned exclusively to a single process, we would never have
any deadlocks; but allowing two processes to write to the printer at the same time would lead to
chaos. One practical solution is spooling: by spooling printer output, several processes can
produce output at the same time. In this model, the only process that actually requests the
physical printer is the printer daemon. Since the daemon never requests any other resources, we
can eliminate deadlock for the printer.

12) Explain Safety Algorithm


Answer:
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, go to step (4).
3) Work = Work + Allocationi
Finish[i] = true
Go to step (2).
4) If Finish[i] = true for all i,
then the system is in a safe state.
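The four steps above translate almost line-for-line into code. A sketch in Python (variable and function names are mine, not from the original text):

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return (safe?, safe sequence)."""
    n, m = len(allocation), len(available)   # n processes, m resource types
    work = list(available)                   # Step 1: Work = Available
    finish = [False] * n                     #         Finish[i] = false
    sequence = []
    progressed = True
    while progressed:                        # Step 2: find i with Finish[i]
        progressed = False                   #   false and Needi <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):           # Step 3: Work += Allocationi
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence             # Step 4: safe iff all Finish true
```

With the snapshot used in the next question, `is_safe([3, 3, 2], allocation, need)` returns `(True, [1, 3, 4, 0, 2])`, i.e. the safe sequence < P1, P3, P4, P0, P2 >.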

13) Explain Resource-Request Algorithm


Answer:
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the following
actions are taken:
1) If Requesti <= Needi
Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its maximum
claim.
2) If Requesti <= Available Goto step (3); otherwise, Pi must wait, since the resources are not
available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
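The three steps can be sketched in Python (an illustrative sketch; the inline loop at the end repeats the safety algorithm on the pretended state, and all names are my own):

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request algorithm: grant the request only if
    the pretended new state passes the safety test."""
    m = len(available)
    # Step 1: the request may not exceed the process's declared maximum.
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    # Step 2: if the resources are not available, the process must wait.
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate (on copies), then run the safety test.
    avail = [available[j] - request[j] for j in range(m)]
    alloc = [row[:] for row in allocation]
    nd = [row[:] for row in need]
    for j in range(m):
        alloc[pid][j] += request[j]
        nd[pid][j] -= request[j]
    work, finish = avail[:], [False] * len(alloc)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(alloc)):
            if not finish[i] and all(nd[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += alloc[i][j]
                finish[i] = progressed = True
    return all(finish)
```

For the snapshot in the next question, `request_resources(1, [1, 0, 2], [3, 3, 2], allocation, need)` returns `True`: P1's request for (1, 0, 2) can be granted.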

13) Consider the following snapshot of a system: (2017,2015,2010,2008)


Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Answer the following questions:-
i. What is the content of the matrix Need?
ii. Is the system in a safe state?
iii. If a request from process P1 arrives for (1 0 2) , can the request be granted immediately?
Answer:
(i) the content of the matrix Need
Here, matrix need= Max- Allocation
Allocation Max Need
A B C A B C A B C
P0 0 1 0 7 5 3 7 4 3
P1 2 0 0 3 2 2 1 2 2
P2 3 0 2 9 0 2 6 0 0
P3 2 1 1 2 2 2 0 1 1
P4 0 0 2 4 3 3 4 3 1

Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
(ii) Is the system in a safe state?
Here we have
Work = Available = (3 3 2).
If Needi <= Work,
then Work = Work + Allocationi.
P0: (Need0 = 7 4 3) > (Work = 3 3 2) → cannot run now, try later; Finish = 0 0 0 0 0
P1: (Need1 = 1 2 2) <= (Work = 3 3 2) → Finish = 0 1 0 0 0
Work = 3 3 2 + 2 0 0 = 5 3 2
P2: (Need2 = 6 0 0) > (Work = 5 3 2) → cannot run now, try later; Finish = 0 1 0 0 0
P3: (Need3 = 0 1 1) <= (Work = 5 3 2) → Finish = 0 1 0 1 0
Work = 5 3 2 + 2 1 1 = 7 4 3
P4: (Need4 = 4 3 1) <= (Work = 7 4 3) → Finish = 0 1 0 1 1
Work = 7 4 3 + 0 0 2 = 7 4 5
P0: (Need0 = 7 4 3) <= (Work = 7 4 5) → Finish = 1 1 0 1 1
Work = 7 4 5 + 0 1 0 = 7 5 5
P2: (Need2 = 6 0 0) <= (Work = 7 5 5) → Finish = 1 1 1 1 1
Work = 7 5 5 + 3 0 2 = 10 5 7
State is safe: the safe sequence is < P1, P3, P4, P0, P2 >.
(iii) A request from process P1 arrives for (1 0 2).
step1: (Request1 = 1 0 2) <= (Need1 = 1 2 2)
step2: (Request1 = 1 0 2) <= (Available = 3 3 2)
step3: Pretend to make the allocation and check whether the new state is safe.
The changes are:
Allocation1= 2 0 0+ 1 0 2= 3 0 2
Need1 = 1 2 2- 1 0 2= 0 2 0
Available = 3 3 2- 1 0 2= 2 3 0
The new resource allocation state is now
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 2 3 0
P1 3 0 2 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Need
A B C
P0 7 4 3
P1 0 2 0
P2 6 0 0
P3 0 1 1
P4 4 3 1

And search for a safe sequence:
Work = Available = 2 3 0, Finish = 0 0 0 0 0
P1: (Need1 = 0 2 0) <= (Work = 2 3 0) → Finish = 0 1 0 0 0
Work = 2 3 0 + 3 0 2 = 5 3 2
P3: (Need3 = 0 1 1) <= (Work = 5 3 2) → Finish = 0 1 0 1 0
Work = 5 3 2 + 2 1 1 = 7 4 3
P4: (Need4 = 4 3 1) <= (Work = 7 4 3) → Finish = 0 1 0 1 1
Work = 7 4 3 + 0 0 2 = 7 4 5
P0: (Need0 = 7 4 3) <= (Work = 7 4 5) → Finish = 1 1 0 1 1
Work = 7 4 5 + 0 1 0 = 7 5 5
P2: (Need2 = 6 0 0) <= (Work = 7 5 5) → Finish = 1 1 1 1 1
Work = 7 5 5 + 3 0 2 = 10 5 7
The new state is safe (safe sequence < P1, P3, P4, P0, P2 >), thus the request of P1 is granted.
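The arithmetic in part (i) can be sanity-checked in a couple of lines of Python (my check, not part of the original answer):

```python
# Need = Max - Allocation, element by element, for the snapshot above.
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(need)  # [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
```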


14) Consider the following snapshot of a system :— (2012)
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Answer the following questions using the Banker’s algorithm:-
i. Is the system in a safe state?
ii. If a request from process P4 arrives for (0, 1, 1) , can the request be granted
immediately?
Answer: The content of the matrix Need
Here, matrix need= Max- Allocation

Allocation Max Need


A B C A B C A B C
P0 0 1 0 7 5 3 7 4 3
P1 2 0 0 3 2 2 1 2 2
P2 3 0 2 9 0 2 6 0 0
P3 2 1 1 2 2 2 0 1 1
P4 0 0 2 4 3 3 4 3 1

Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
(i) Is the system in a safe state?
Here we have
Work = Available = (3 3 2).
If Needi <= Work,
then Work = Work + Allocationi.
P0: (Need0 = 7 4 3) > (Work = 3 3 2) → cannot run now, try later; Finish = 0 0 0 0 0
P1: (Need1 = 1 2 2) <= (Work = 3 3 2) → Finish = 0 1 0 0 0
Work = 3 3 2 + 2 0 0 = 5 3 2
P2: (Need2 = 6 0 0) > (Work = 5 3 2) → cannot run now, try later; Finish = 0 1 0 0 0
P3: (Need3 = 0 1 1) <= (Work = 5 3 2) → Finish = 0 1 0 1 0
Work = 5 3 2 + 2 1 1 = 7 4 3
P4: (Need4 = 4 3 1) <= (Work = 7 4 3) → Finish = 0 1 0 1 1
Work = 7 4 3 + 0 0 2 = 7 4 5
P0: (Need0 = 7 4 3) <= (Work = 7 4 5) → Finish = 1 1 0 1 1
Work = 7 4 5 + 0 1 0 = 7 5 5
P2: (Need2 = 6 0 0) <= (Work = 7 5 5) → Finish = 1 1 1 1 1
Work = 7 5 5 + 3 0 2 = 10 5 7
State is safe: the safe sequence is < P1, P3, P4, P0, P2 >.

(ii) A request from process P4 arrives for (0 1 1).
step1: (Request4 = 0 1 1) <= (Need4 = 4 3 1)
step2: (Request4 = 0 1 1) <= (Available = 3 3 2)
step3: Pretend to make the allocation and check whether the new state is safe.
The changes are:
Allocation4= 0 0 2+ 0 1 1= 0 1 3
Need4 = 4 3 1- 0 1 1= 4 2 0
Available = 3 3 2- 0 1 1= 3 2 1
The new resource allocation state is now
Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 3 2 1
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 1 3 4 3 3
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 2 0
And search for a safe sequence:
Work = Available = 3 2 1, Finish = 0 0 0 0 0
P0: (Need0 = 7 4 3) > (Work = 3 2 1) → cannot run now, try later
P1: (Need1 = 1 2 2) > (Work = 3 2 1) → cannot run now, try later
P2: (Need2 = 6 0 0) > (Work = 3 2 1) → cannot run now, try later
P3: (Need3 = 0 1 1) <= (Work = 3 2 1) → Finish = 0 0 0 1 0
Work = 3 2 1 + 2 1 1 = 5 3 2
P4: (Need4 = 4 2 0) <= (Work = 5 3 2) → Finish = 0 0 0 1 1
Work = 5 3 2 + 0 1 3 = 5 4 5
P1: (Need1 = 1 2 2) <= (Work = 5 4 5) → Finish = 0 1 0 1 1
Work = 5 4 5 + 2 0 0 = 7 4 5
P0: (Need0 = 7 4 3) <= (Work = 7 4 5) → Finish = 1 1 0 1 1
Work = 7 4 5 + 0 1 0 = 7 5 5
P2: (Need2 = 6 0 0) <= (Work = 7 5 5) → Finish = 1 1 1 1 1
Work = 7 5 5 + 3 0 2 = 10 5 7
The new state is safe and a safe sequence is < P3, P4, P1, P0, P2 >, thus the request of P4 is granted.

16. What is infinite blocking? [2012]


Solution:
Infinite blocking (also called starvation) is a situation in which a process waits indefinitely
inside a semaphore or scheduling queue because other processes are always given preference over
it. For example, in a priority-based scheme a steady stream of higher-priority processes can keep
a low-priority process from ever being selected, so it is never removed from the waiting queue.
A common remedy is aging, in which the priority of a process is gradually increased the longer it
waits.

17. Explain different types of process scheduling queues. [2009]


Solution:
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time and the
loaded process shares the CPU using time multiplexing.
Process Scheduling Queues: The OS maintains all PCBs in Process Scheduling Queues. The
OS maintains a separate queue for each of the process states and PCBs of all processes in
the same execution state are placed in the same queue. When the state of a process is
changed, its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue −This queue keeps all the processes in the system.
 Ready queue −This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
 Device queues −The processes which are blocked due to unavailability of an I/O device
constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The
OS scheduler determines how to move processes between the ready and run queues which can
only have one entry per processor core on the system; in the above diagram, it has been merged
with the CPU.

18. Write an algorithm that determines whether or not the system is in a safe state. [2010]
Solution:
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state

Math 01:
19)Consider the following snapshot of a system:-
Allocation Max Available
A B C D A B C D A B C D
P0 0 0 1 2 0 0 1 2 1 5 2 0
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
i. Determine the Need matrix.
ii. Is the system in a safe state?
iii. If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted
immediately?
[2021, 2016, 2014, 2011]
Answer:
i) Need matrix:
Need
A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
ii) Initialization:
Work = Available = [1, 5, 2, 0]
Finish = 0 0 0 0 0
Search for a safe sequence:
P0: (Need0 = 0 0 0 0) <= (Work = 1 5 2 0) → Finish = 1 0 0 0 0
Work = 1 5 2 0 + 0 0 1 2 = 1 5 3 2
P1: (Need1 = 0 7 5 0) > (Work = 1 5 3 2) → cannot run now, try later; Finish = 1 0 0 0 0
P2: (Need2 = 1 0 0 2) <= (Work = 1 5 3 2) → Finish = 1 0 1 0 0
Work = 1 5 3 2 + 1 3 5 4 = 2 8 8 6
P3: (Need3 = 0 0 2 0) <= (Work = 2 8 8 6) → Finish = 1 0 1 1 0
Work = 2 8 8 6 + 0 6 3 2 = 2 14 11 8
P4: (Need4 = 0 6 4 2) <= (Work = 2 14 11 8) → Finish = 1 0 1 1 1
Work = 2 14 11 8 + 0 0 1 4 = 2 14 12 12
P1: (Need1 = 0 7 5 0) <= (Work = 2 14 12 12) → Finish = 1 1 1 1 1
Work = 2 14 12 12 + 1 0 0 0 = 3 14 12 12
State is safe:
Safe sequence: < P0, P2, P3, P4, P1 >

(iii) Yes. First, (Request1 = 0 4 2 0) <= (Need1 = 0 7 5 0) and (Request1 = 0 4 2 0) <=
(Available = 1 5 2 0). Pretending to grant the request gives Available = 1 1 0 0,
Allocation1 = 1 4 2 0 and Need1 = 0 3 3 0, and the safety algorithm on this new state still
finds the safe sequence < P0, P2, P3, P4, P1 >.
So the request can be granted immediately.

CHAPTER 6
MEMORY MANAGEMENT
(1) Define logical address, physical address and virtual address. (2017,2015)
Answer: a logical address is the address at which an item (memory cell, storage element, network
host) appears to reside from the perspective of an executing application program. A logical
address may be different from the physical address due to the operation of an address translator
or mapping function.
A physical address is a binary number, in the form of logical high and low states on an address
bus, that corresponds to a particular cell of primary storage (also called main memory), or to a
particular register in a memory-mapped I/O (input/output) device.
A virtual address is a binary number in virtual memory that enables a process to use a location in
primary storage (main memory) independently of other processes and to use more space than
actually exists in primary storage by temporarily relegating some contents to a hard disk or
internal flash drive.

(2) Write down the implementation process of a page table. (2017)


Answer: A page table is the data structure used by a virtual memory system in
a computer operating system to store the mapping between virtual addresses and physical
addresses. Virtual addresses are used by the program executed by the accessing process, while
physical addresses are used by the hardware, or more specifically, by the RAM subsystem.
In the common implementation the page table is kept in main memory, with a page-table base
register (PTBR) pointing to it; a TLB caches recently used entries so that most translations
avoid the extra memory access.

(3) Describe paging address translation architecture with figure. (2021,2016,2013)


The translation process

The CPU's memory management unit (MMU) stores a cache of recently used mappings from the
operating system's page table. This is called the translation lookaside buffer (TLB), which is an
associative cache.
When a virtual address needs to be translated into a physical address, the TLB is searched first. If
a match is found (a TLB hit), the physical address is returned and memory access can continue.
However, if there is no match (called a TLB miss), the memory management unit, or the operating
system TLB miss handler, will typically look up the address mapping in the page table to see
Operating system | 325
whether a mapping exists (a page walk). If one exists, it is written back to the TLB (this must be
done, as the hardware accesses memory through the TLB in a virtual memory system), and the
faulting instruction is restarted (this may happen in parallel as well). This subsequent translation
will find a TLB hit, and the memory access will continue.
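The lookup path just described can be modelled in a few lines of Python (a toy sketch; the page size, frame numbers and page-table contents are invented for illustration):

```python
PAGE_SIZE = 4096

page_table = {0: 5, 1: 9, 2: 7}      # virtual page number -> physical frame
tlb = {}                              # small cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                    # TLB hit: no page-table access needed
        frame = tlb[vpn]
    else:                             # TLB miss: page walk, then refill TLB
        frame = page_table[vpn]       # a KeyError here would be a page fault
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

print(translate(4100))  # vpn 1, offset 4 -> frame 9 -> 36868
```

The first call to `translate(4100)` misses in the TLB and walks the page table; a repeated call for the same page hits the TLB, mirroring the restart-and-hit behaviour described above.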

(4) What is segmentation? (2013)


Answer:
Segmentation
Segmentation is a memory management technique in which each job is divided into several
segments of different sizes, one for each module that contains pieces that perform related
functions. Each segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-contiguous
memory, though every segment is loaded into a contiguous block of available memory.
Segmentation memory management works very similarly to paging, but here segments are of
variable length, whereas in paging pages are of fixed size.
A program segment contains the program's main function, utility functions, data structures, and
so on. The operating system maintains a segment map table for every process and a list of free
memory blocks along with segment numbers, their size and corresponding memory locations in
main memory. For each segment, the table stores the starting address of the segment and the
length of the segment. A reference to a memory location includes a value that identifies a segment
and an offset.
(5) Why segmentation and paging sometimes combine into one scheme?
(2017, 2012, 2013)
Answer: Segmentation and paging are often combined in order to improve upon each other.
Segmented paging is helpful when the page table becomes very large. A large contiguous section
of the page table that is unused can be collapsed into a single-segment table entry with a page-
table address of zero. Paged segmentation handles the case of having very long segments that
require a lot of time for allocation. By paging the segments, we reduce wasted memory due to
external fragmentation as well as simplify the allocation.

(6) What is swapping? (2014,2013,2010)


Answer:
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or
move) to secondary storage (disk) and make that memory available to other processes. At some
later time, the system swaps back the process from the secondary storage to main memory.
Though performance is usually affected by the swapping process, it helps in running multiple
big processes in parallel, and that is why swapping is also known as a technique for memory
compaction.

The total time taken by swapping process includes the time it takes to move the entire process to
a secondary disk and then to copy the process back to memory, as well as the time the process
takes to regain main memory.
(7) What is paging? Why are page sizes always power of 2? (2021,2014)
Answer: A computer can address more memory than the amount physically installed on the
system. This extra memory is called virtual memory, and it is a section of a hard disk that is
set up to emulate the computer's RAM. The paging technique plays an important role in
implementing virtual memory.
Paging is a memory management technique in which process address space is broken into blocks
of the same size called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of
the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum
utilization of the main memory and to avoid external fragmentation.
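Because the page size is a power of 2, an address splits into page number and offset with just a shift and a mask, with no division needed (a small sketch; the 4 KB page size and the address are assumed examples):

```python
PAGE_SIZE = 4096                 # 2**12
OFFSET_BITS = 12

addr = 0x0001A3F5
page_number = addr >> OFFSET_BITS        # high-order bits select the page
offset      = addr & (PAGE_SIZE - 1)     # low-order 12 bits stay within it
print(page_number, hex(offset))          # 26 0x3f5
```

This is exactly why page sizes are always powers of 2: the hardware can extract both fields by wiring, without any arithmetic.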
(8) Define address binding and dynamic loading. (2016,2013,2010)
Answer:
Address binding is the process of mapping the program's logical or virtual addresses to
corresponding physical or main memory addresses. In other words, a given logical address is
mapped by the MMU (Memory Management Unit) to a physical address.
Dynamic loading is a mechanism by which a computer program can, at run time, load a library
(or other binary) into memory, retrieve the addresses of functions and variables contained in the
library, execute those functions or access those variables, and unload the library from memory.

(9) What is the advantage of dynamic loading? (2014)


Answer: The advantage of dynamic loading is that an unused routine is never loaded. This
method is particularly useful when large amounts of code are needed to handle infrequently
occurring cases, such as error routines. In this case, although the total program size may be
large, the portion that is used (and hence loaded) may be much smaller. Dynamic loading does not
require special support from the operating system; it is the responsibility of the users to
design their programs to take advantage of such a method. Operating systems may, however, help
the programmer by providing library routines to implement dynamic loading.

(10) Explain the difference between logical and physical addresses. (2015)
Answer:
BASIS FOR COMPARISON | LOGICAL ADDRESS | PHYSICAL ADDRESS
Basic | It is the virtual address generated by the CPU. | It is a location in a memory unit.
Address Space | The set of all logical addresses generated by the CPU in reference to a program is referred to as the Logical Address Space. | The set of all physical addresses mapped to the corresponding logical addresses is referred to as the Physical Address Space.
Visibility | The user can view the logical address of a program. | The user can never view the physical address of a program.
Access | The user uses the logical address to access the physical address. | The user cannot directly access the physical address.
Generation | The logical address is generated by the CPU. | The physical address is computed by the MMU.

(11) Discuss internal and external fragmentation. Which fragmentation can be solved by
compaction? (2021,2010)
Answer: Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little
pieces. It happens after sometimes that processes cannot be allocated to memory blocks
considering their small size and memory blocks remains unused. This problem is known as
Fragmentation.
Fragmentation is of two types −
Fragmentation & Description
1 External fragmentation
Total memory space is enough to satisfy a request or to reside a process in it,
but it is not contiguous, so it cannot be used.
2 Internal fragmentation
The memory block assigned to a process is bigger than requested; some portion of the
block is left unused, and it cannot be used by another process.
External fragmentation can be solved by compaction, which moves the allocated blocks together
so that the free space becomes one contiguous block. Internal fragmentation cannot be removed by
compaction, because the wasted space lies inside allocated partitions.

(12) What are the differences between internal and external fragmentation?
(2021,2016,2015,2013,2012)
Answer: Internal Fragmentation occurs when a fixed size memory allocation technique is used.
External fragmentation occurs when a dynamic memory allocation technique is used.
 Internal fragmentation occurs when a fixed size partition is assigned to a program/file with
less size than the partition making the rest of the space in that partition unusable. External
fragmentation is due to the lack of enough adjacent space after loading and unloading of
programs or files for some time because then all free space is distributed here and there.
 External fragmentation can be minimized by compaction, where the assigned blocks are moved to
one side so that contiguous space is gained. However, this operation takes time, and certain
critical assigned areas, for example system services, cannot be moved safely. We can
observe this compaction step on hard disks when running the disk defragmenter in
Windows.
 External fragmentation can be prevented by mechanisms such as segmentation and paging.
Here a logically contiguous virtual memory space is provided, while in reality the files/programs
are split into parts and placed here and there.
 Internal fragmentation can be minimized by having partitions of several sizes and assigning a
program based on the best fit. However, internal fragmentation is still not fully eliminated.

(13) Explain the following allocation algorithms :- (2021,2015,2012)


i. First-fit;
ii. Best-fit;
iii. Worst-fit.
Answer:
First Fit
The first fit approach allocates the first free partition or hole large enough to accommodate
the process. The search finishes as soon as the first suitable free partition is found.
Advantage
Fastest algorithm because it searches as little as possible.
Disadvantage
The remaining unused memory areas left after allocation become waste if it is too smaller. Thus
request for larger memory requirement cannot be accomplished.
Best Fit
The best fit deals with allocating the smallest free partition which meets the requirement of the
requesting process. This algorithm first searches the entire list of free partitions and considers the
smallest hole that is adequate. It then tries to find a hole which is close to actual process size
needed.
Advantage
Memory utilization is much better than first fit as it searches the smallest free partition first
available.
Disadvantage
It is slower and may even tend to fill up memory with tiny useless holes.
Worst fit
The worst fit approach locates the largest available free portion, so that the portion left over
will be big enough to be useful. It is the reverse of best fit.
Advantage
Reduces the rate of production of small gaps.
Disadvantage
If a process requiring larger memory arrives at a later stage then it cannot be accommodated as
the largest hole is already split and occupied.
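The three placement policies can be sketched as small Python functions over a list of hole sizes (a minimal illustration; the hole sizes are invented):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that still fits, else None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole, if it fits, else None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB (hypothetical)
print(first_fit(holes, 212))  # 1  (500 KB is the first hole large enough)
print(best_fit(holes, 212))   # 3  (300 KB leaves the smallest remainder)
print(worst_fit(holes, 212))  # 4  (600 KB is the largest hole)
```

The same 212 KB request lands in a different hole under each policy, which is exactly the trade-off the three descriptions above capture.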

(14) Describe different types of page table structure. (2013)


Answer: The page tables associated with the paging concept may have various page table
structures. They are:
I. Hierarchical/Multi-level paging
 Most modern computer architectures support large address spaces. A single page table
corresponding to one process can itself take up megabytes of space, so a large amount
of space would be required to accommodate the page tables of all processes contiguously.
 A solution to it is to make it non-contiguous and maintain another table which keeps the
record of where in memory is the table stored. This is called two-level paging. Here the page table
is also paged.
 However, if this second-level memory is also not sufficient for our needs, then we create
another level, and so on. This is called Hierarchical or Multi-level paging. In a two-level
page table, the page table is paged and its data is scattered in memory. There is another table
which contains the entries of the page table; this table is called the directory table. Refer to the figure below.

 To implement a two-level page structure, the logical address is modified into two parts, one
for the Directory table (Outer page table) and other for the inner page table.
 It is as follows (for 32 bits):
| p1 (10 bits) | p2 (10 bits) | d (12 bits) |
Here,
p1 → index into the outer page table
p2 → displacement within the page of the outer page table
d → page offset
 This method is not considered appropriate for 64-bit architectures.
 The disadvantage of this scheme is that it increases the number of memory accesses.
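The 10/10/12 split above can be expressed directly with shifts and masks (a small sketch; the example address is arbitrary):

```python
def split_two_level(vaddr):
    """Split a 32-bit virtual address into (p1, p2, d) for two-level paging."""
    d  = vaddr & 0xFFF           # low 12 bits: page offset
    p2 = (vaddr >> 12) & 0x3FF   # next 10 bits: inner page table index
    p1 = (vaddr >> 22) & 0x3FF   # top 10 bits: outer (directory) index
    return p1, p2, d

print(split_two_level(0xDEADBEEF))  # (890, 731, 3823)
```

The MMU uses p1 to index the directory table, p2 to index the inner page table that the directory entry points at, and d as the offset within the resulting frame.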

II. Inverted page table structure


 The page table size in OS is directly proportional to the virtual address space. The page table
has one entry for each page that the process is using.
 In this design method, a real page frame is taken as the page table entry
 An inverted page table has one entry for each real page(or frame) of memory.
 Each of the entry contains the virtual address of the page stored in the real memory location
with information about the process that owns the page.
 Thus, there is only one page table in the system and for each page of physical memory it has
only one entry.
 Although this scheme decreases the amount of memory needed to store each page table, the
time taken to search in case of a page reference increases.

III. Hashed Page tables


 For handling address spaces larger than 32 bits we use a hashed page table, where the hash
value is the virtual page number.
 Every entry in the hashed table has a linked list of elements that hash to the same location
 The linked list element has three fields
 The virtual page number
 Value of mapped page-frame
 Pointer to next element
 Working: The virtual page number in the virtual address is hashed into the table.
 Now, this virtual page number is compared with field 1 of the first element of the linked list.
 If there is a match, the next page frame (field 2) is used to form the desired physical address.
 If the matching failed, then the next entries in the linked list are searched to find the matching
pair.

(15) Consider the following segment table:- (2013)


Segment Base Length
0 219 600
1 2300 14
2 90 100
3 1327 580
4 1952 96

What are the physical address for the following logical addresses?
i. 0, 430
ii. 1, 10
iii. 2, 500
iv. 3, 400
v. 4, 112
vi. 1, 11
Answer:
(1) 0, 430
Here the number of segment is =0
Offset d=430
The length for segment 0 is = 600
Since , 430<600
The physical address is,
Base+d=219+430=649 and the memory word 649 is accessed.
(2) 1, 10
Here the number of segment is =1
Offset d=10
The length for segment 1 is = 14
Since , 10<14
The physical address is,
Base+d=2300+10=2310 and the memory word 2310 is accessed.

(3) 2, 500
Here the number of segment is =2
Offset d=500
The length for segment 2 is = 100
Since , 500>100
The logical address is invalid. There is no physical address.
(4) 3, 400
Here the number of segment is =3
Offset d=400
The length for segment 3 is = 580
Since , 400<580
The physical address is,
Base+d=1327+400=1727 and the memory word 1727 is accessed.

(5) 4, 112
Here the number of segment is =4
Offset d=112
The length for segment 4 is = 96
Since , 112>96
The logical address is invalid. There is no physical address.

(6) 1, 11
Here the number of segment is =1
Offset d=11
The length for segment 1 is = 14
Since , 11<14
The physical address is,
Base+d=2300+11=2311 and the memory word 2311 is accessed.
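The six look-ups above can be checked with a small function over the given segment table (my sketch, using the base/length pairs from the question):

```python
# Segment table from the question: segment -> (base, length).
segment_table = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
                 3: (1327, 580), 4: (1952, 96)}

def translate(segment, offset):
    """Return the physical address, or None for an invalid logical address."""
    base, length = segment_table[segment]
    if offset >= length:
        return None          # trap: offset beyond the segment's length
    return base + offset

print(translate(0, 430))  # 649
print(translate(2, 500))  # None (500 >= 100, invalid)
```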

16. Define physical address. [2017]


Solution:
A physical address identifies the physical location of required data in memory. The user never
deals directly with the physical address but accesses it through its corresponding logical
address. The user program generates the logical address and thinks that the program is running
at this logical address, but the program needs physical memory for its execution, so the logical
address must be mapped to a physical address by the MMU before it is used. The term Physical
Address Space is used for the set of all physical addresses corresponding to the logical
addresses in a logical address space.

17. Define TLB hit and TLB miss. [2016]


Solution:
TLB hit: A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time
taken to access a user memory location. It is part of the chip's memory-management
unit (MMU). The TLB stores the recent translations of virtual memory to physical memory and can
be called an address-translation cache. A TLB may reside between the CPU and the CPU cache,
between CPU cache and the main memory or between the different levels of the multi-level cache.
The majority of desktop, laptop, and server processors include one or more TLBs in the memory-
management hardware, and it is nearly always present in any processor that
utilizes paged or segmented virtual memory.
TLB miss: If it is a TLB miss, then the CPU checks the page table for the page table entry. If the
present bit is set, then the page is in main memory, and the processor can retrieve the frame
number from the page-table entry to form the physical address. The processor also updates
the TLB to include the new page-table entry.

CHAPTER 7
VIRTUAL MEMORY
(1) What is virtual memory? (2016,2012)
Answer: Virtual Memory is a storage allocation scheme in which secondary memory can be
addressed as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the corresponding
machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system and by the
amount of secondary memory available, not by the actual number of main storage locations.

(2) What are the advantages of virtual memory? (2021,2017,2008)


Answer:
Advantages:
1. More processes may be maintained in the main memory: Because we are going to load only
some of the pages of any particular process, there is room for more processes. This leads to
more efficient utilization of the processor because it is more likely that at least one of the
more numerous processes will be in the ready state at any particular time.
2. A process may be larger than all of main memory: One of the most fundamental restrictions in
programming is lifted. A process larger than the main memory can be executed because of
demand paging. The OS itself loads pages of a process in main memory as required.
3. It allows greater multiprogramming levels by using less of the available (primary) memory for
each process.

(3) Explain the virtual machine structure of operating system with its advantages and
disadvantages. (2015)
Answer: A virtual machine is a software implementation of a physical machine (a computer) that
works and executes programs analogously to it. Virtual machines are divided into two categories
based on their use and correspondence to a real machine: system virtual machines and process
virtual machines. The first category provides a complete system platform that executes a complete
operating system; the second runs a single program.
The main advantages of virtual machines:
1. Multiple OS environments can exist simultaneously on the same machine, isolated from each
other;
2. A virtual machine can offer an instruction set architecture that differs from the real
computer's;
3. Easy maintenance, application provisioning, availability and convenient recovery.
The main disadvantages:
1. When multiple virtual machines are simultaneously running on a host computer, each virtual
machine may exhibit unstable performance, which depends on the workload placed on the
system by the other running virtual machines;
2. A virtual machine is not as efficient as a real one when accessing the hardware.
(4) Explain the demand paging system. (2016,2012)
Answer: A demand paging system is quite similar to a paging system with swapping where
processes reside in secondary memory and pages are loaded only on demand, not in advance.
When a context switch occurs, the operating system does not copy any of the old program’s pages
out to the disk or any of the new program’s pages into the main memory. Instead, it just begins
executing the new program after loading the first page and fetches that program’s pages as they
are referenced.

While executing a program, if the program references a page which is not available in the main
memory because it was swapped out a little earlier, the processor treats this invalid memory
reference as a page fault and transfers control from the program to the operating system to
demand the page back into memory.
Advantages
Following are the advantages of Demand Paging −
 Large virtual memory.
 More efficient use of memory.
 There is no limit on degree of multiprogramming.
Disadvantages
 Number of tables and the amount of processor overhead for handling page interrupts are
greater than in the case of the simple paged management techniques.

(5) Define the term page fault. Write down the steps in handling page fault.(2008)
Or, when do page fault occur? Describe the actions taken by the operating system.
(2017, 2014, 2012, 2010)
Answer: A page fault (sometimes called #PF, PF or hard fault) is a type of exception raised by
computer hardware when a running program accesses a memory page that is not currently
mapped by the memory management unit (MMU) into the virtual address space of a process.
Logically, the page may be accessible to the process, but requires a mapping to be added to the
process page tables, and may additionally require the actual page contents to be loaded from a
backing store such as a disk.
Steps for handling page fault

 The basic idea behind paging is that when a process is swapped in, the pager only loads into
memory those pages that it expects the process to need right away.
 Pages that are not loaded into memory are marked as invalid in the page table, using the
invalid bit. (The rest of the page table entry may either be blank or contain information about
where to find the swapped-out page on the hard drive.)
 If the process only ever accesses pages that are loaded in memory (memory-resident pages),
then the process runs exactly as if all the pages were loaded into memory.
 On the other hand, if a page is needed that was not originally loaded up, then a page fault
trap is generated, which must be handled in a series of steps:
1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. (This will usually
block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process’s page table is updated with the new frame
number, and the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning (as soon
as this process gets another turn on the CPU).

 In an extreme case, NO pages are swapped in for a process until they are requested by page
faults. This is known as pure demand paging.
 In theory each instruction could generate multiple page faults. In practice this is very rare, due
to locality of reference.
 The hardware necessary to support virtual memory is the same as for paging and swapping: a
page table and secondary memory (swap space).
 A crucial part of the process is that the instruction must be restarted from scratch once the
desired page has been made available in memory. For most simple instructions this is not a
major difficulty. However, there are some architectures that allow a single instruction to
modify a fairly large block of data (which may span a page boundary), and if some of the data
gets modified before the page fault occurs, this could cause problems. One solution is to access
both ends of the block before executing the instruction, guaranteeing that the necessary pages
get paged in before the instruction begins.
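The numbered steps above can be sketched as a toy model. The backing store, free-frame list, and page-table layout here are invented for illustration; a real kernel's data structures are far more elaborate:

```python
# Toy model of page-fault handling; all structures are illustrative.
backing_store = {0: "code", 1: "data", 2: "stack"}   # page -> contents on disk
page_table = {p: {"frame": None, "valid": False} for p in backing_store}
free_frames = [5, 6]                                  # free-frame list
memory = {}                                           # frame -> contents

def access(page):
    entry = page_table.get(page)
    if entry is None:                 # steps 1-2: invalid reference -> terminate
        raise MemoryError("segmentation fault: invalid reference")
    if not entry["valid"]:            # page-fault trap
        frame = free_frames.pop()     # step 3: locate a free frame
        memory[frame] = backing_store[page]   # step 4: disk read into the frame
        entry["frame"] = frame        # step 5: update the page table ...
        entry["valid"] = True         # ... and clear the invalid bit
        # step 6: the faulting access is restarted (here: fall through)
    return memory[entry["frame"]]

print(access(1))   # first access faults, loads the page, returns "data"
print(access(1))   # second access is memory-resident: no fault
```

The first call goes through the whole fault path; the second finds the valid bit set and returns directly, mirroring pure demand paging.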

(6) What is paging? Draw the block diagram of paging table hardware scheme for memory
management. (2017)
Answer: Paging
A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to
emulate the computer's RAM. The paging technique plays an important role in implementing virtual
memory.
Paging is a memory management technique in which process address space is broken into blocks
of the same size called pages (size is power of 2, between 512 bytes and 8192 bytes). The size of
the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum
utilization of the main memory and to avoid external fragmentation.

Address Translation
Page address is called logical address and represented by page number and the offset.
Logical Address = Page number + page offset
Frame address is called physical address and represented by a frame number and the offset.
Physical Address = Frame number + page offset
A data structure called page map table is used to keep track of the relation between a page of a
process to a frame in physical memory.

When the system allocates a frame to any page, it translates this logical address into a physical
address and creates entry into the page table to be used throughout execution of the program.
When a process is to be executed, its corresponding pages are loaded into any available memory
frames. Suppose you have a program of 8 KB but your memory can accommodate only 5 KB at a
given point in time; then the paging concept comes into the picture. When a computer runs out of
RAM, the operating system (OS) will move idle or unwanted pages of memory to secondary
memory to free up RAM for other processes, and brings them back when needed by the program.
This process continues during the whole execution of the program, where the OS keeps removing
idle pages from the main memory, writing them onto the secondary memory, and bringing them
back when required by the program.
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging −
 Paging reduces external fragmentation, but still suffer from internal fragmentation.
 Paging is simple to implement and assumed as an efficient memory management technique.
 Due to equal size of the pages and frames, swapping becomes very easy.
 Page table requires extra memory space, so may not be good for a system having small RAM.
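The address translation described above (page number selects a frame via the page map table, the offset is carried over unchanged) can be sketched in a few lines. The page size and table contents are invented example values:

```python
PAGE_SIZE = 1024          # assumed page/frame size (a power of 2)

# Hypothetical page map table: page number -> frame number
page_map_table = {0: 5, 1: 2, 2: 8}

def logical_to_physical(logical_address):
    page_number = logical_address // PAGE_SIZE   # high-order bits
    offset = logical_address % PAGE_SIZE         # low-order bits, unchanged
    frame_number = page_map_table[page_number]   # page-table lookup
    return frame_number * PAGE_SIZE + offset

# Page 1, offset 100 maps to frame 2, offset 100
print(logical_to_physical(1 * PAGE_SIZE + 100))   # 2148
```

Because the page size is a power of two, the division and modulus are really just a bit-split of the logical address, which is why hardware can do this translation cheaply.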
(7) What is thrashing? Discuss about the FIFO page replacement algorithm, with its
advantages and disadvantages. (2010)
Answer: Thrashing
A process that is spending more time paging than executing is said to be thrashing. In other words,
the process doesn't have enough frames to hold all the pages for its execution, so it is swapping
pages in and out very frequently to keep executing. Sometimes, pages which will be required in the
near future have to be swapped out.

To prevent thrashing we must provide processes with as many frames as they really need "right
now".
FIFO page replacement
The FIFO algorithm is the simplest page replacement algorithm: the operating system keeps the
resident pages in a queue in the order in which they were brought into memory, and on a page
fault the page at the head of the queue (the oldest page) is evicted.
Advantages: it is very easy to understand and implement, and its bookkeeping overhead is low.
Disadvantages: it takes no account of how often or how recently a page is used, so a heavily used
page may be evicted; it can also exhibit Belady's anomaly, where increasing the number of frames
increases the number of page faults for some reference strings.

(8) Discuss the hardware support for memory protection with base and limit registers.
Give suitable diagram. (2014)
Answer: Basic Hardware
 It should be noted that from the memory chips point of view, all memory accesses are
equivalent. The memory hardware doesn't know what a particular part of memory is being
used for, nor does it care. This is almost true of the OS as well, although not entirely.
 The CPU can only access its registers and main memory. It cannot, for example, make direct
access to the hard drive, so any data stored there must first be transferred into the main
memory chips before the CPU can work with it. (Device drivers communicate with their
hardware via interrupts and "memory" accesses, sending short instructions, for example, to
transfer data from the hard drive to a specified location in main memory. The disk controller
monitors the bus for such instructions, transfers the data, and then notifies the CPU that the
data is there with another interrupt, but the CPU never gets direct access to the disk.)
 Memory accesses to registers are very fast, generally one clock tick, and a CPU may be able to
execute more than one machine instruction per clock tick.
 Memory accesses to main memory are comparatively slow, and may take a number of clock
ticks to complete. This would require intolerable waiting by the CPU if it were not for an
intermediary fast memory cache built into most modern CPUs. The basic idea of the cache is
to transfer chunks of memory at a time from the main memory to the cache, and then to access
individual memory locations one at a time from the cache.
 User processes must be restricted so that they only access memory locations that "belong" to
that particular process. This is usually implemented using a base register and a limit register
for each process, as shown in Figures 8.1 and 8.2 below. Every memory access made by a user
process is checked against these two registers, and if a memory access is attempted outside
the valid range, then a fatal error is generated. The OS obviously has access to all existing
memory locations, as this is necessary to swap users' code and data in and out of memory. It
should also be obvious that changing the contents of the base and limit registers is a privileged
activity, allowed only to the OS kernel.

Figure - A base and a limit register define a logical address space

Figure - Hardware address protection with base and limit registers


(9) Briefly explain basic disk space allocation methods with advantages and disadvantages.
Answer: The allocation methods define how the files are stored in the disk blocks. There are three
main disk space or file allocation methods:
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file
requires n blocks and is given a block b as the starting location, then the blocks assigned to the file
will be: b, b+1, b+2,……b+n-1. This means that given the starting block address and the length of
the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks. Therefore, it
occupies 19, 20, 21, 22, 23, 24 blocks.

Advantages:
 Both the Sequential and Direct Accesses are supported by this. For direct access, the address of
the kth block of the file which starts at block b can easily be obtained as (b+k).
 This is extremely fast since the number of seeks are minimal because of contiguous allocation of
file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This makes it inefficient in
terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous memory at a
particular instance.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block
contains a pointer to the next block occupied by the file.
The file ‘jeep’ in following image shows how the blocks are randomly distributed. The last block
(25) contains -1 indicating a null pointer and does not point to any other block.

Advantages:
 This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
 It does not support random or direct access. We can not directly access the blocks of a file.
A block k of a file can be accessed by traversing k blocks sequentially (sequential access )
from the starting block of the file via block pointers.
 Pointers required in the linked allocation incur some extra overhead.
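The access-cost difference between the two schemes can be made concrete. The block numbers below are invented; the point is that contiguous allocation reaches block k by arithmetic, while linked allocation must follow k pointers:

```python
# Contiguous allocation: a file starting at block b reaches its kth block
# with simple arithmetic -- O(1) direct access.
def contiguous_block(b, k):
    return b + k

# Linked allocation: next_block maps each block to its successor on disk
# (an invented chain in the style of the 'jeep' example; -1 ends the file).
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def linked_block(start, k):
    block = start
    for _ in range(k):        # O(k): one disk seek per pointer followed
        block = next_block[block]
    return block

print(contiguous_block(19, 3))   # 22
print(linked_block(9, 3))        # 10
```

The loop in `linked_block` is exactly why linked allocation cannot support efficient direct access: every hop corresponds to reading another disk block just to find the next pointer.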
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains the
disk address of the ith file block. The directory entry contains the address of the index block as
shown in the image:

Advantages:
 This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation would keep
one entire block (the index block) for the pointers, which is inefficient in terms of memory
utilization. However, in linked allocation we lose the space of only 1 pointer per block.
For files that are very large, a single index block may not be able to hold all the pointers. The
following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for holding the pointers.
Every index block would then contain a pointer or the address to the next index block.
2. Multilevel index: In this policy, a first-level index block is used to point to second-level
index blocks, which in turn point to the disk blocks occupied by the file. This can be extended to 3
or more levels depending on the maximum file size.
3. Combined Scheme: In this scheme, a special block called the Inode (information
Node) contains all the information about the file such as the name, size, authority, etc., and the
remaining space of the Inode is used to store the disk block addresses which contain the actual
file, as shown in the image below. The first few of these pointers in the Inode point to the direct
blocks, i.e. the pointers contain the addresses of the disk blocks that contain data of the file. The
next few pointers point to indirect blocks. Indirect blocks may be single indirect, double indirect
or triple indirect. A single indirect block is a disk block that does not contain the file data but the
disk addresses of the blocks that contain the file data. Similarly, double indirect blocks do not
contain the file data but the disk addresses of the blocks that contain the addresses of the blocks
containing the file data.
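The inode scheme can be illustrated by computing which kind of pointer covers a given file-block index. The pointer counts below (12 direct pointers, 1024 addresses per index block) are assumptions in the style of classic UNIX file systems, not values from the text:

```python
N_DIRECT = 12           # assumed number of direct pointers in the inode
ADDRS_PER_BLOCK = 1024  # assumed disk addresses per index block

def locate(block_index):
    """Classify a file block index as direct / single / double indirect."""
    if block_index < N_DIRECT:
        return ("direct", block_index)
    block_index -= N_DIRECT
    if block_index < ADDRS_PER_BLOCK:
        return ("single indirect", block_index)
    block_index -= ADDRS_PER_BLOCK
    if block_index < ADDRS_PER_BLOCK ** 2:
        # first hop selects an indirect block, second hop the data block
        return ("double indirect", divmod(block_index, ADDRS_PER_BLOCK))
    raise ValueError("would need a triple indirect block")

print(locate(5))        # ('direct', 5)
print(locate(12))       # ('single indirect', 0)
print(locate(1500))     # ('double indirect', (0, 464))
```

Each extra level of indirection costs one more disk read but multiplies the addressable file size by `ADDRS_PER_BLOCK`, which is why small files stay cheap while huge files remain possible.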

(10) Consider the following page reference string : (2017,2015,2012)


1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
How many page faults would occur for the following replacement algorithms, assuming
four frames are available?
i. FIFO replacement;
ii. LRU replacement;
iii. Optimal replacement.
Answer: FIFO replacement
Reference string:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
F1 1 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 1 1 1 1

F2 2 2 2 2 2 2 6 6 6 6 6 7 7 7 7 7 7 3 3

F3 3 3 3 3 3 3 2 2 2 2 2 6 6 6 6 6 6 6

F4 4 4 4 4 4 4 1 1 1 1 1 1 2 2 2 2 2

1 2 3 4 S S 5 6 7 8 S 9 10 11 S 12 13 S 14 S
(Numbers mark successive page faults; S marks a hit.)
Number of page faults = 14.


LRU replacement;
reference string :
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
F1 1 1 1 1 1 1 1 1 1 1 1 1 1 6 6 6 6 6 6 6

F2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

F3 3 3 3 3 5 5 5 5 5 3 3 3 3 3 3 3 3 3

F4 4 4 4 4 6 6 6 6 6 7 7 7 7 1 1 1 1

1 2 3 4 S S 5 6 S S S 7 8 9 S S 10 S S S
Number of page faults = 10.

(iii) Optimal replacement


Reference string:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
F1 1 1 1 1 1 1 1 1 1 1 1 1 7 7 7 7 1 1 1 1

F2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2

F3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3

F4 4 4 4 5 6 6 6 6 6 6 6 6 6 6 6 6 6

1 2 3 4 S S 5 6 S S S S 7 S S S 8 S S S

Number of page faults = 8.
(On each fault the optimal policy evicts the page whose next use is farthest in the future: 4 is
replaced by 5, then 5 by 6, 1 by 7, and finally 7 by 1.)


(11) Consider a logical address space of 256 pages with a 4 KB page size, mapped onto
a physical memory of 64 frames:
i. How many bits are required in the logical address
ii. How many bits are required in the physical address? (2016,2014)
Answer:
How many bits are required in the logical address

Size of logical address space = # of pages × page size = 256 × 4096 = 2^8 × 2^12 = 2^20
So the number of bits required in the logical address = 20 bits.

How many bits are required in the physical address


Size of physical address space = # of frames × frame size (frame size = page size)
Size of physical address space = 64 × 4096 = 2^6 × 2^12 = 2^18
So the number of bits required in the physical address = 18 bits.
(12) Consider a logical address space of eight pages of 1024 words each mapped onto a
physical memory of 32 frames. (i) How many bits are there in the logical address? (ii) How
many bits are there in the physical address? (2013,2009)

Addressing within a 1024-word page requires 10 bits because 1024 = 2^10.


Since the logical address space consists of 8 = 2^3 pages,
the logical addresses must be 10 + 3 = 13 bits.
Similarly, since there are 32 = 2^5 physical frames,
physical addresses are 5 + 10 = 15 bits long.

Physical Address (F = frame number bits)


F F F F F - - - - - - - - - -

Logical Address (P = page number bits)


P P P - - - - - - - - - -
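Both bit counts, and those of the previous problem, can be checked mechanically; the helper function below is just illustrative arithmetic:

```python
import math

def address_bits(units, unit_size):
    """Bits needed to address `units` blocks of `unit_size` locations each
    (both assumed to be powers of two)."""
    return int(math.log2(units * unit_size))

# 256 pages of 4 KB mapped onto 64 frames of the same size
assert address_bits(256, 4096) == 20   # logical address bits
assert address_bits(64, 4096) == 18    # physical address bits

# Eight pages of 1024 words mapped onto 32 frames
assert address_bits(8, 1024) == 13     # logical address bits
assert address_bits(32, 1024) == 15    # physical address bits
print("all bit counts check out")
```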

13. Explain remote procedure calls. [2009]


Solution:
Remote Procedure Call (RPC): Remote Procedure Call (RPC) is a powerful technique for
constructing distributed, client-server based applications. It is based on extending the
conventional local procedure calling so that the called procedure need not exist in the same
address space as the calling procedure. The two processes may be on the same system, or they
may be on different systems with a network connecting them.
When making a Remote Procedure Call:

1. The calling environment is suspended, procedure parameters are transferred across the
network to the environment where the procedure is to execute, and the procedure is executed
there.
2. When the procedure finishes and produces its results, its results are transferred back to the
calling environment, where execution resumes as if returning from a regular procedure call.
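The two steps above (marshal the parameters, execute remotely, marshal the result back) can be sketched without a real network by serializing each call to bytes, the way an RPC stub would. The dispatch table, procedure names, and wire format here are all invented for illustration:

```python
import json

# "Server" side: the procedures available for remote invocation
procedures = {"add": lambda a, b: a + b,
              "upper": lambda s: s.upper()}

def server_handle(request_bytes):
    # Unmarshal the call, execute the named procedure, marshal the result
    call = json.loads(request_bytes.decode())
    result = procedures[call["proc"]](*call["args"])
    return json.dumps({"result": result}).encode()

# "Client" side stub: marshals the call; in a real RPC system the bytes
# would travel over a network instead of a direct function call.
def rpc_call(proc, *args):
    request = json.dumps({"proc": proc, "args": args}).encode()
    reply = server_handle(request)      # stand-in for the network round trip
    return json.loads(reply.decode())["result"]

print(rpc_call("add", 2, 3))      # 5
print(rpc_call("upper", "rpc"))   # RPC
```

From the caller's point of view `rpc_call` looks like an ordinary local call, which is precisely the transparency RPC aims for.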

14. Consider the following page reference string:


7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
How many page faults would occur for the following replacement algorithms, assuming
three frames are available? [2021, 2016, 2013]
(i) FIFO replacement
(ii) Optimal replacement
(iii) LRU replacement
Answer:
(i) FIFO Replacement
Reference string
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
0 0 0 3 3 3 2 2 2 1 1 1 0 0
1 1 1 0 0 0 3 3 3 2 2 2 1
Number of page faults = 15.

(ii) Optimal Replacement

Reference string
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
7 7 7 2 2 2 2 2 7
0 0 0 0 4 0 0 0
1 1 3 3 3 1 1
Number of page faults = 9.
(iii) LRU Replacement

Reference string
7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
7 7 7 2 2 4 4 4 0 1 1 1
0 0 0 0 0 0 3 3 3 0 0
1 1 3 3 2 2 2 2 2 7
Number of page faults = 12.
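These fault counts can be cross-checked with a short simulator (a sketch, not any particular textbook implementation): FIFO evicts the oldest resident page, LRU the least recently used one, and the optimal policy the page whose next use lies farthest in the future.

```python
def simulate(refs, frames, policy):
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == "LRU":          # refresh recency order on a hit
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1                      # page fault
        if len(mem) == frames:           # memory full: must evict
            if policy in ("FIFO", "LRU"):
                victim = mem[0]          # FIFO: oldest load; LRU: least recent
            else:                        # OPT: farthest next use is evicted
                future = refs[i + 1:]
                victim = max(mem, key=lambda q:
                             future.index(q) if q in future else len(future))
            mem.remove(victim)
        mem.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(simulate(refs, 3, "FIFO"))   # 15
print(simulate(refs, 3, "OPT"))    # 9
print(simulate(refs, 3, "LRU"))    # 12
```

For LRU, keeping the list in recency order makes `mem[0]` the least recently used page, so the same eviction line serves both FIFO and LRU.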

CHAPTER 8
FILE CONCEPT
(1) Define file. (2013,2015,2010)
Answer: A file is a named collection of related information that is recorded on secondary storage
such as magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits,
bytes, lines or records whose meaning is defined by the files creator and user.

(2) Explain different types of file. (2021,2010)


Answer: Different types of files are given below:

FILE TYPE        USUAL EXTENSION        FUNCTION
Executable       exe, com, bin          Ready-to-run machine-language program
Object           obj, o                 Compiled, machine language, not linked
Source Code      c, java, pas, asm, a   Source code in various languages
Batch            bat, sh                Commands to the command interpreter
Text             txt, doc               Textual data, documents
Word Processor   wp, tex, rrf, doc      Various word-processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
Multimedia       mpeg, mov, rm          For containing audio/video information
(3) What is file attribute? Discuss about typical file attributes.
(2017,2012,2010,2013,2015)
Answer: File attributes are settings associated with computer files that grant or deny certain
rights to how a user or the operating system can access that file. For example, IBM compatible
computers running MS-DOS or Microsoft Windows have capabilities of having read, archive,
system, and hidden attributes.
The attributes of a file are:
 Name: the only information kept in human-readable form.
 Identifier: a unique tag (i.e., an internal number) that identifies the file within the file system.
 Type: needed for systems that support different types.
 Location: a pointer to the file location on a device.
 Size: the current file size.
 Protection: controls who can do reading, writing, executing.
 Time, date, and user identification: data for protection, security, and usage monitoring.
Information about files is kept in the directory structure, which is maintained on the disk.
(4) Explain the different types of file access methods. (2015)
Answer: File access mechanism refers to the manner in which the records of a file may be
accessed. There are several ways to access files −
 Sequential access
 Direct/Random access
 Indexed sequential access
Sequential access
A sequential access is that in which the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other. This access method is the
most primitive one. Example: Compilers usually access files in this fashion.
Direct/Random access
 Random access file organization provides accessing the records directly.
 Each record has its own address on the file, with the help of which it can be directly
accessed for reading or writing.
 The records need not be in any sequence within the file, and they need not be in adjacent
locations on the storage medium.
Indexed sequential access
 This mechanism is built up on the base of sequential access.
 An index is created for each file which contains pointers to various blocks.
 The index is searched sequentially and its pointer is used to access the file directly.
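The difference between sequential and direct access can be shown with an ordinary file API: reading in order walks the file record by record, while a seek jumps straight to a computed offset. The record size and file contents below are invented for the demonstration:

```python
import os
import tempfile

RECORD = 8                                  # assumed fixed record size in bytes
path = os.path.join(tempfile.mkdtemp(), "records.bin")

with open(path, "wb") as f:                 # build a file of numbered records
    for i in range(5):
        f.write(f"rec{i:05d}".encode())     # each record is exactly 8 bytes

with open(path, "rb") as f:
    first = f.read(RECORD)                  # sequential: records come in order
    second = f.read(RECORD)
    f.seek(3 * RECORD)                      # direct access: jump to record 3
    third = f.read(RECORD)

print(first, second, third)   # b'rec00000' b'rec00001' b'rec00003'
```

The `seek(3 * RECORD)` line is the direct-access step: the record's address is computed from its number, with no need to read the records in between.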

(5) Describe the basic directory operations. (2021,2015)


Answer: The allowed system calls for managing the directories exhibit more variation from
system to system than system calls for the files.
Here are the list of some common samples taken from the UNIX system, shows what they are and
how they work.
Here are the list of the common directory operations:
 Create
 Delete
 Opendir
 Closedir
 Readdir
 Rename
 Link
 Unlink
Let's describe briefly about all the above directory operations one by one.
Create
A directory is created. It is empty except for dot and dotdot, which are put there automatically by
the system.
Delete
A directory is deleted. Only those directories which are empty can be deleted.
Opendir
Directories can be read. But before reading any directory, it must be opened first.
Therefore to list all the files present in a directory, a listing program opens that required directory
to read out the name of all files that this directory contains.
Closedir
Directory should be closed just to free up the internal table space when it has been read.
Readdir
This call returns the next entry in an open directory.
Rename
Directory can also be renamed just like the files.
Link
Linking is a technique that allows a file to appear in more than one directory.
Unlink
A directory entry is removed.
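On a POSIX-style system these operations map directly onto standard library calls, sketched here against a throwaway directory (the names are invented, and the hard-link step assumes a filesystem that supports hard links):

```python
import os
import tempfile

base = tempfile.mkdtemp()                        # scratch area for the demo

os.mkdir(os.path.join(base, "docs"))             # create
print(os.listdir(base))                          # opendir/readdir/closedir

os.rename(os.path.join(base, "docs"),
          os.path.join(base, "papers"))          # rename

target = os.path.join(base, "papers", "a.txt")
open(target, "w").close()
os.link(target, os.path.join(base, "papers", "b.txt"))   # link (hard link)
os.unlink(os.path.join(base, "papers", "b.txt"))         # unlink one entry
os.unlink(target)

os.rmdir(os.path.join(base, "papers"))           # delete (must be empty)
os.rmdir(base)
```

Note that `os.rmdir` refuses to remove a non-empty directory, matching the Delete rule above, and that unlinking `b.txt` leaves `a.txt` intact because a hard link is just another directory entry for the same file.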

(6) Explain file system mounting. (2014)


Answer: Mounting is a process by which the operating system makes files and directories on
a storage device (such as hard drive, CD-ROM, or network share) available for user to access via
the computer's file system.
In general, the process of mounting comprises operating system acquiring access to the storage
medium; recognizing, reading, processing file system structure and metadata on it; before
registering them to the virtual file system (VFS) component.
The exact location in VFS that the newly-mounted medium got registered is called mount point;
when the mounting process is completed, the user can access files and directories on the medium
from there.
The opposite process of mounting is called unmounting, in which the operating system cuts off all
user access to files and directories on the mount point, writes the remaining queue of user data to
the storage device, refreshes file system metadata, then relinquishes access to the device, making
the storage device safe for removal.
Normally, when the computer is shutting down, every mounted storage device will undergo an
unmounting process to ensure that all queued data gets written and to preserve the integrity of
the file system structure on the media.

7. Explain Purpose of the directory structure. [2021,2018]


Solution:
Purpose of the directory structure: Although catalog software is often the best way, a
directory structure can be used to keep track of camera originals, copies and derivatives.
Some may choose to either replace proprietary raw with DNG, or Convert to DNG with the
proprietary raw files embedded in them. Consequently, they may feel secure in eliminating the
original proprietary raw files.
Why archive the original captures or converted DNGs, with or without the original raw files? To
maintain file integrity and avoid accidental deletions.
It is best practice to put camera originals away once and treat that portion of your archive as
“read-only”. Write-once optical media (CD, DVD, Blu-Ray) fits this criterion. However, it can be
more cumbersome and time-consuming (storing and properly labeling optical media) compared
to hard drives for image retrieval.
Hard drives make image retrieval quick and easy, but it is also easy to delete or overwrite image
files unless you have procedures and safeguards in place.
Keeping original image files separate from derivatives is one such safeguard. A good plan is to
determine all the categories of derivative files you normally generate and then create a folder
structure to accommodate them.

8. What information is associated with a file? [2016]


Solution:
Operations on the file: There are various operations which can be performed on a file. We will
see all of them in detail.
1. Create: Creation of the file is the most important operation on the file. Different types of files
are created by different methods for example text editors are used to create a text file, word
processors are used to create a word file and Image editors are used to create the image files.
2. Write: Writing the file is different from creating the file. The OS maintains a write pointer for
every file which points to the position in the file from which, the data needs to be written.
3. Read: Every file is opened in three different modes: Read, Write and append. A Read pointer is
maintained by the OS, pointing to the position up to which, the data has been read.
4. Re-position: Re-positioning is simply moving the file pointers forward or backward depending
upon the user's requirement. It is also called as seeking.
5. Delete: Deleting the file will not only delete all the data stored inside the file, It also deletes all
the attributes of the file. The space which is allocated to the file will now become available and can
be allocated to the other files.
6. Truncate: Truncating deletes the data stored inside the file while keeping its attributes. The
file is not completely deleted; the information stored inside it can simply be replaced.

9. Explain first fit. [2015]


Solution:
First Fit: The First Fit algorithm scans the linked list and, whenever it finds the first hole big
enough to store the process, stops scanning and loads the process into that hole. This procedure
produces two partitions: one partition will be a hole, while the other partition will store the
process.
The First Fit algorithm maintains the linked list in increasing order of starting index. This is the
simplest of these algorithms to implement, and it produces bigger holes compared
to the other algorithms.
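The scan-until-the-first-big-enough-hole behaviour can be sketched as follows. The hole list is an invented example; when a hole is larger than the request, it is split into an allocated partition and a smaller leftover hole, as described above:

```python
# Each hole is (start_index, size); the list is kept sorted by start index.
holes = [(0, 4), (10, 12), (30, 6)]

def first_fit(size):
    """Allocate `size` units from the first big enough hole; return its start."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                holes.pop(i)                    # exact fit: hole disappears
            else:                               # split: shrink the hole
                holes[i] = (start + size, hole_size - size)
            return start
    return None                                 # no hole is large enough

print(first_fit(5))   # 10 (the first hole, of size 4, is too small)
print(holes)          # [(0, 4), (15, 7), (30, 6)]
```

Note that the scan stops at the first adequate hole even if a later hole would fit more tightly; that early stop is what distinguishes first fit from best fit.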

10. What are the attributes of a file? [2015]


Solution:
A file can be defined as a data structure which stores a sequence of records. Files are stored in a
file system, which may exist on a disk or in main memory. Files can be simple (plain text) or
complex (specially formatted).
A collection of files is known as a directory, and the collection of directories at different levels is
known as the file system.
Attributes of the File
1. Name: Every file carries a name by which the file is recognized in the file system. One
directory cannot have two files with the same name.
2. Identifier: Along with its name, each file has a unique identifier, typically a number, by which
the file system recognizes it internally. The file's extension also helps identify its type; for
example, a text file has the extension .txt, and a video file may have the extension .mp4.
3. Type: In a File System, the Files are classified in different types such as video files, audio files,
text files, executable files, etc.
4. Location: In the File System, there are several locations on which, the files can be stored. Each
file carries its location as its attribute.
5. Size: The size of the file is one of its most important attributes. By the size of the file, we mean
the number of bytes occupied by the file in storage.
6. Protection: The administrator of the computer may want different protections for different
files. Therefore, each file carries its own set of permissions for different groups of users.
7. Time and Date: Every file carries a time stamp which contains the time and date on which the
file is last modified.

11. Write down the concept of file. [2010]


Solution:
The concept of file: Files are the most important mechanism for storing data permanently
on mass-storage devices. Permanently means that the data is not lost when the machine is
switched off. Files can contain:
 Data in a format that can be interpreted by programs, but not easily by humans (binary files);
 Alphanumeric characters, codified in a standard way (e.g., using ASCII or Unicode), and
directly readable by a human user (text files). Text files are normally organized in a sequence
of lines, each containing a sequence of characters and ending with a special character (usually
the newline character). Consider, for example, a Java program stored in a file on the hard-disk.
In this unit we will deal only with text files.
Each file is characterized by a name and the directory in which it is placed (one may consider the
whole path that allows one to find the file on the hard disk as part of the file's name).
The most important operations on files are: creation, reading, writing, renaming, and deleting.
All these operations can be performed through the operating system (or other application
programs), or through suitable operations in a Java program.

CHAPTER 9
FILE SYSTEM IMPLEMENTATION
(1) What are the different types of file allocation methods? Briefly explain
(2017,2016,2013,2012,2008)
Answer: The file system architecture specifies how files are stored in the computer system: how
the user's data is placed into files and how that data is accessed. There are several storage
(space) allocation techniques, each specifying the criteria by which files store their data.
1) Contiguous Allocation: Contiguous allocation stores all the data of a file in sequence, in a
single run of adjacent memory blocks. Because all of a file's data occupies contiguous blocks of
the disk, access is very fast, and this layout suits sequential access especially well.
Once the system finds the first (base) address of the file, it is easy to read all of the file's data.
However, storing data contiguously costs CPU time, because a file is often larger than any
existing free run, which makes it difficult to find enough free space on the disk.
2) Linked Allocation: This widely used technique does not require the space given to a file to be
contiguous; the data of the file is stored in different blocks scattered across the disk.
This makes access harder for the processor, because the operating system must traverse the
different locations, following a link from each block to the next: the first location is accessed, and
from it the system finds the next one. All the locations of a file are linked to each other, so they
can be traversed automatically.
3) Indexed Allocation: This can be seen as an improvement on linked allocation. Like linked
allocation it stores a file's blocks non-contiguously, but it also maintains all the disk addresses in
the form of an index. Just as a book keeps an index at its front, all the disk addresses of a file are
maintained in one place, and when a user requests the contents of the file, the system locates
each block through the index.
The system maintains an index table containing an entry for each piece of data and the address
at which it is stored, which makes access fast and easy for users.
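As an illustration, indexed allocation can be mimicked with an in-memory "disk" and an index block per file; the block numbers and file name below are made up for the example:

```python
# Hypothetical 16-block "disk"; each entry stands for one data block.
disk = [f"block-{i}" for i in range(16)]

# Index table: file name -> index block, i.e. the list of disk addresses
# holding that file's data (the blocks need not be contiguous).
index_blocks = {"report.txt": [3, 9, 14]}

def read_file(name):
    """Read a file by following its index block, one entry per data block."""
    return [disk[addr] for addr in index_blocks[name]]

print(read_file("report.txt"))  # ['block-3', 'block-9', 'block-14']
```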
(2) Write short notes on Resource Allocation Graph; (2021,2016)
Answer: The Banker's algorithm uses tables such as allocation, request, and available to capture
the state of the system. The same information can also be represented as a graph instead of
tables: tables are easy to construct and understand, but a graph conveys the same state
pictorially. That graph is called a Resource Allocation Graph (RAG).
So, a resource allocation graph describes the state of the system in terms of processes and
resources: how many resources are available, how many are allocated, and what each process is
requesting can all be represented in one diagram. One advantage of a diagram is that it is
sometimes possible to spot a deadlock directly in the RAG, which may not be apparent from
looking at the tables. Tables are better when the system contains many processes and resources;
a graph is better when it contains only a few.
We know that any graph contains vertices and edges, and a RAG is no different. In a RAG there
are two types of vertices –
1. Process vertex – Every process is represented as a process vertex, generally drawn as a circle.
2. Resource vertex – Every resource is represented as a resource vertex, and these come in two
kinds –
 Single-instance resource – represented as a box containing one dot; the number of dots
indicates how many instances of that resource type are present.
 Multi-instance resource – also represented as a box, containing several dots.
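Since a cycle in a single-instance RAG implies deadlock, the idea can be sketched by representing the graph as adjacency lists and running a depth-first search; the processes and resources below are hypothetical:

```python
# RAG edges: "request" edges P -> R and "assignment" edges R -> P.
# With single-instance resources, a cycle in this graph means deadlock.
edges = {
    "P1": ["R1"],   # P1 requests R1
    "R1": ["P2"],   # R1 is assigned to P2
    "P2": ["R2"],   # P2 requests R2
    "R2": ["P1"],   # R2 is assigned to P1
}

def has_cycle(graph):
    """Standard DFS cycle detection on a directed graph."""
    visiting, done = set(), set()
    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph if n not in done)

print(has_cycle(edges))  # True: P1 -> R1 -> P2 -> R2 -> P1 is a deadlock
```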

(3) Write short notes on Virtual File System. (2016)


Answer: The Virtual File System (VFS), sometimes called the Virtual File Switch, is the
subsystem of the kernel that implements the file and file-system-related interfaces provided to
user-space. All file systems rely on the VFS not only to coexist but also to interoperate. This
enables programs to use standard Unix system calls to read and write to different file systems,
even on different media.

The VFS is the glue that enables system calls such as open(), read(), and write() to work regardless
of the file system or underlying physical medium.
The figure shows the flow from user-space’s write() call through the data arriving on the physical
media. On one side of the system call is the generic VFS interface, providing the frontend to user-
space; on the other side of the system call is the file system-specific backend, dealing with the
implementation details.
Nothing else in the kernel needs to understand the underlying details of the file systems, except
the file systems themselves. For example, consider a simple user-space program that does:

ret = write (fd, buf, len);

This system call writes the len bytes pointed to by buf into the current position in the file
represented by the file descriptor fd.
1. This system call is first handled by a generic sys_write() system call that determines the
actual file writing method for the file system on which fd resides.
2. The generic write system call then invokes this method, which is part of the file system
implementation, to write the data to the media (or whatever this file system does on write).
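The same flow can be observed from user space in Python, whose os.write() is a thin wrapper over the write() system call; the path used here is a temporary file created just for the demo:

```python
import os
import tempfile

# Open a file and get a raw file descriptor, as a C program would.
path = os.path.join(tempfile.mkdtemp(), "vfs-demo")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)

buf = b"hello vfs"
ret = os.write(fd, buf)   # invokes the write() system call on fd
os.close(fd)

# write() returns the number of bytes written, regardless of the file
# system or medium behind fd -- that uniform interface is the VFS at work.
assert ret == len(buf)
with open(path, "rb") as f:
    assert f.read() == buf
```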

(4) Write down the advantages and disadvantages of Contiguous, Linked and Indexed
Allocation methods. (2021,2015)
Answer:
Contiguous Allocation
Advantages:
 Both sequential and direct access are supported. For direct access, the address of the kth block
of a file that starts at block b can easily be obtained as (b+k).
 It is extremely fast, since the number of seeks is minimal because of the contiguous allocation
of file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation, which makes it inefficient
in terms of memory utilization.
 Increasing the file size is difficult, because it depends on contiguous memory being available at
a particular instant.
Linked Allocation
Advantages:
 There is no external fragmentation, and a file can grow as long as free blocks are available
anywhere on the disk.
Disadvantages:
 It supports sequential access efficiently, but direct access requires traversing the chain of
pointers, and the pointers themselves consume space in every block.
Indexed Allocation
Advantages:
 It supports direct access without external fragmentation, since any free block can satisfy a
request.
Disadvantages:
 The index block adds pointer overhead, and for very large files a single index block may not
hold all the pointers.

(5) Why must the bit map for file allocation be kept on mass storage rather than in main
memory? (2008)
Answer: In case of a system crash (memory failure), the free-space list would not be lost, as it
would be if the bit map had been stored in main memory.

(6) What problems could occur if a file system allowed a file system to be mounted
simultaneously at more than one location? (2008)
Answer: There would be multiple paths to the same file, which could confuse users or
encourage mistakes (deleting a file with one path deletes the file in all the other paths).
(7) What are the purposes of disk scheduling? (2013,2008)
Answer: Disk scheduling is done by operating systems to schedule the I/O requests arriving for
the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive from different processes, and the disk controller can serve
only one I/O request at a time. The other I/O requests must therefore wait in a queue and be
scheduled.
 Two or more requests may be far from each other on the disk, resulting in greater disk arm
movement.
 Hard drives are among the slowest parts of a computer system and therefore need to be
accessed in an efficient manner.
There are many disk scheduling algorithms, but before discussing them let's have a quick look at
some of the important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the track where the data is to
be read or written. A disk scheduling algorithm that gives a lower average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to
rotate into a position where it can be accessed by the read/write heads. A disk scheduling
algorithm that gives lower rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed
of the disk and the number of bytes to be transferred.
 Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time

 Disk Response Time: Response time is the average time a request spends waiting to perform
its I/O operation. Average response time is the mean response time of all requests, and variance
response time measures how individual requests are serviced relative to that average. A disk
scheduling algorithm that gives a lower variance in response time is better.

(8) What is DNS? (2014)


Answer: The Domain Name System (DNS) is the phonebook of the Internet. Humans access
information online through domain names, like nytimes.com or espn.com. Web browsers interact
through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so
browsers can load Internet resources.
Each device connected to the Internet has a unique IP address which other machines use to find
the device. DNS servers eliminate the need for humans to memorize IP addresses such as
192.168.1.1 (in IPv4).
09. Define FCB. [2018]
Solution:
FCB: A File Control Block (FCB) is a file system structure in which the state of an open file is
maintained. A FCB is managed by the operating system, but it resides in the memory of the
program that uses the file, not in operating system memory.
Below are the features of the File Control Block in an operating system:
 A File Control Block (FCB) is a file system structure that preserves the state of an open file.
 The OS manages the FCB, but it resides in the memory of the program that uses the file, not in
operating system memory.
 It allows a process to have multiple files open at once.
 It can save memory.
 FCB is an internal file system framework that is used in the DOS to access disk files.
 The FCB block includes information about the name of the drive, the filename, the type of file
and other information provided by the device when accessing or creating a file.

10. Different between sequential and direct file access method. [2017]
Solution:
Sequential Access – It is the simplest access method. Information in the file is processed in
order, one record after the other. This mode of access is by far the most common; for example,
editors and compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation – read next –
reads the next portion of the file and automatically advances the file pointer, which keeps track
of the I/O location. Similarly, a write operation – write next – appends to the end of the file and
advances the pointer to the end of the newly written material.
Key points:
 Data is accessed one record after another, in order.
 A read command moves the pointer ahead by one record.
 A write command allocates space and moves the pointer to the new end of the file.
 Such a method is reasonable for tape.
Direct Access – Another method is direct access, also known as relative access. A file is made up
of fixed-length logical records that allow programs to read and write records rapidly in no
particular order. Direct access is based on the disk model of a file, since a disk allows random
access to any file block. For direct access, the file is viewed as a numbered sequence of blocks or
records. Thus, we may read block 14, then block 59, and then write block 17; there is no
restriction on the order of reading and writing for a direct-access file.
A block number provided by the user to the operating system is normally a relative block
number; the first relative block of the file is 0, then 1, and so on.
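The two access methods can be contrasted with a small sketch using an in-memory file of fixed-length records (the record length of 8 bytes is an arbitrary choice for the example):

```python
import io

# A "file" of 100 fixed-length records, 8 bytes each.
RECLEN = 8
f = io.BytesIO(b"".join(f"rec{i:05d}".encode() for i in range(100)))

# Sequential access: each read advances the pointer to the next record.
f.seek(0)
first = f.read(RECLEN)
second = f.read(RECLEN)

# Direct (relative) access: jump straight to block k at offset k * RECLEN.
def read_block(k):
    f.seek(k * RECLEN)
    return f.read(RECLEN)

print(first, second)        # records 0 and 1, in order
print(read_block(59))       # then block 59, then block 14 -- any order
print(read_block(14))
```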
11. What is process control block? [2009]
Solution:
Process Control Block: All of the information needed to keep track of a process when
switching is kept in a data package called a process control block. The process control
block typically contains:
 An ID number that identifies the process
 Pointers to the locations in the program and its data where processing last occurred
 Register contents
 States of various flags and switches
 Pointers to the upper and lower bounds of the memory required for the process
 A list of files opened by the process
 The priority of the process
Each process has a status associated with it. Many processes consume no CPU time until they get
some sort of input. For example, a process might be waiting for a keystroke from the user. While it
is waiting for the keystroke, it uses no CPU time. While it's waiting, it is "suspended". When the
keystroke arrives, the OS changes its status. When the status of the process changes, from pending
to active, for example, or from suspended to running, the information in the process control block
must be used like the data in any other program to direct execution of the task-switching portion
of the operating system.

12. Describe PCB with diagram. [2009]


Solution: Process Control Block is a data structure that contains information of the process
related to it. The process control block is also known as a task control block, entry of the process
table, etc.
It is very important for process management as the data structuring for processes is done in terms
of the PCB. It also defines the current state of the operating system.
Structure of the Process Control Block: The process control stores many data items that are
needed for efficient process management. Some of these data items are explained with the help of
the given diagram −
The following are the data items −
Process State: This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number: This shows the number of the particular process.
Program Counter: This contains the address of the next instruction that needs to be executed in
the process.
Registers: This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files: These are the different files that are associated with the process
CPU Scheduling Information: The process priority, pointers to scheduling queues etc. is the CPU
scheduling information that is contained in the PCB. This may also include any other scheduling
parameters.
Memory Management Information: The memory management information includes the page
tables or the segment tables depending on the memory system used. It also contains the value of
the base registers, limit registers etc.
I/O Status Information: This information includes the list of I/O devices used by the process, the
list of files etc.
Accounting information: The time limits, account numbers, amount of CPU used, process
numbers etc. are all a part of the PCB accounting information.
Location of the Process Control Block: The process control block is kept in a memory area that
is protected from the normal user access. This is done because it contains important process
information. Some of the operating systems place the PCB at the beginning of the kernel stack for
the process as it is a safe location.

CHAPTER 10
DISK I/O MANAGEMENT
1. Define Caching.
A cache is a region of fast memory that holds copies of data. Access to the cached copy is
more efficient than access to the original. Caching and buffering are distinct functions, but
sometimes a region of memory can be used for both purposes.

2. Define Spooling.
A spool is a buffer that holds output for a device, such as printer, that cannot accept
interleaved data streams. When an application finishes printing, the spooling system queues the
corresponding spool file for output to the printer. The spooling system copies the queued spool
files to the printer one at a time.

3. What are the various Disk-Scheduling Algorithms?


The various disk-scheduling algorithms are,
 First Come First Served Scheduling
 Shortest Seek Time First Scheduling
 SCAN Scheduling
 C-SCAN Scheduling
 LOOK Scheduling

4. What is Low-Level Formatting?


Before a disk can store data, it must be divided into sectors that the disk controller can
read and write. This process is called low-level formatting or physical formatting. Low-level
formatting fills the disk with a special data structure for each sector. The data structure for a
sector consists of a header, a data area, and a trailer.

5. What is the use of Boot Block?


For a computer to start running when powered up or rebooted, it needs an initial program to
run. This bootstrap program tends to be simple. It finds the operating system on the disk, loads
the kernel into memory, and jumps to an initial address to begin operating-system execution.
The full bootstrap program is stored in a partition called the boot blocks, at a fixed location on
the disk. A disk that has a boot partition is called a boot disk or system disk.

6. What is Sector Sparing?


Low-level formatting also sets aside spare sectors not visible to the operating system. The
controller can be told to replace each bad sector logically with one of the spare sectors. This
scheme is known as sector sparing or forwarding.

7. What Does Error Handling Mean?


Error handling refers to the response and recovery procedures for error conditions present in a
software application. In other words, it is the process of anticipating, detecting, and resolving
application errors, programming errors, or communication errors.
Error handling helps in maintaining the normal flow of program execution. In fact, many
applications face numerous design challenges when considering error-handling techniques.

8. Error Handling Explained


Error handling helps in handling both hardware and software errors gracefully and helps
execution to resume when interrupted. When it comes to error handling in software, either the
programmer develops the necessary codes to handle errors or makes use of software tools to
handle the errors. In cases where errors cannot be classified, error handling is usually done with
returning special error codes. Special applications known as error handlers are available for
certain applications to help in error handling. These applications can anticipate errors, thereby
helping in recovering without actual termination of application.
There are four main categories of errors:
 Logical errors
 Generated errors
 Compile-time errors
 Runtime errors
Error-handling techniques for development errors include rigorous proofreading. Error-handling
techniques for logic errors or bugs is usually by meticulous application debugging or
troubleshooting. Error-handling applications can resolve runtime errors or have their impact
minimized by adopting reasonable countermeasures depending on the environment. Most
hardware applications include an error-handling mechanism which allows them to recover
gracefully from unexpected errors.
As errors could be fatal, error handling is one of the crucial areas for application designers and
developers, regardless of the application developed or programming languages used. In worst-
case scenarios, the error handling mechanisms force the application to log the user off and shut
down the system.

9. What is Disk Scheduling Algorithm?


A Process makes the I/O requests to the operating system to access the disk. Disk
Scheduling Algorithm manages those requests and decides the order of the disk access given to
the requests.

10. Why Disk Scheduling Algorithm is needed?


Disk scheduling algorithms are needed because a process can make multiple I/O requests and
multiple processes run at the same time. The requests made by a process may be located on
different sectors of different tracks, which can greatly increase the seek time. These algorithms
help minimize the seek time by ordering the requests made by the processes.

11. Define Important Terms related to Disk Scheduling Algorithms


 Seek Time - It is the time taken by the disk arm to locate the desired track.
 Rotational Latency - The time taken by a desired sector of the disk to rotate itself to the
position where it can access the Read/Write heads is called Rotational Latency.
 Transfer Time - It is the time taken to transfer the data requested by the processes.
 Disk Access Time - Disk Access time is the sum of the Seek Time, Rotational Latency, and
Transfer Time.
Disk Scheduling Algorithms
First Come First Serve (FCFS)
In this algorithm, the requests are served in the order in which they arrive: whichever request
comes first is served first. This is the simplest algorithm.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60.

Seek Time = Distance Moved by the disk arm
= (70-60)+(140-70)+(140-50)+(125-50)+(125-30)+(30-25)+(160-25) = 480
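The FCFS total can be checked with a few lines of Python that walk the request queue in arrival order:

```python
def fcfs_seek(requests, head):
    """Total disk-arm movement when requests are served in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)  # move the arm to the next request
        head = r
    return total

print(fcfs_seek([70, 140, 50, 125, 30, 25, 160], 60))  # 480
```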
Shortest Seek Time First (SSTF)
In this algorithm, the shortest seek time is checked from the current position and those requests
which have the shortest seek time is served first. In simple words, the closest request from the
disk arm is served first.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60.

Seek Time = Distance Moved by the disk arm
= (60-50)+(50-30)+(30-25)+(70-25)+(125-70)+(140-125)+(160-140) = 170
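The SSTF traversal can likewise be simulated; ties (two pending requests equally close to the head) are broken toward the lower-numbered track, matching the order used in the worked example:

```python
def sstf_seek(requests, head):
    """Repeatedly serve the pending request closest to the current head;
    ties are broken toward the lower-numbered track."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: (abs(r - head), r))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek([70, 140, 50, 125, 30, 25, 160], 60))  # 170
```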

SCAN
In this algorithm, the disk arm moves in a particular direction to the end of the disk, serving all
the requests in its path; it then reverses direction, moves until the last request in the opposite
direction has been reached, and serves all of those as well.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60. And it is given that the disk arm should move towards the larger value.

Seek Time = Distance Moved by the disk arm = (170-60)+(170-25) = 255 (assuming the disk's last track is 170)


LOOK
In this algorithm, the disk arm moves in a particular direction until the last request in that
direction is found, serving all the requests in its path; it then reverses direction and serves the
requests found on the way back, again up to the last request. The only difference between SCAN
and LOOK is that LOOK does not go to the end of the disk; it only moves as far as the last request.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60. And it is given that the disk arm should move towards the larger value.

Seek Time = Distance Moved by the disk arm = (160-60)+(160-25) = 235


C-SCAN
This algorithm is the same as the SCAN algorithm, except that after moving in a particular
direction to the end and serving the requests in its path, it returns to the opposite end without
serving any requests on the way, then resumes its original direction and serves the remaining
requests. It moves circularly.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60. And it is given that the disk arm should move towards the larger value.

Seek Time = Distance Moved by the disk arm = (170-60)+(170-0)+(50-0) = 330 (assuming tracks 0 to 170)


C-LOOK
This algorithm is the same as the LOOK algorithm, except that after moving in a particular
direction until the last request is found and serving the requests in its path, it jumps back to the
last request in the opposite direction without serving any requests on the way, then resumes its
original direction and serves the remaining requests. It also moves circularly.
Eg. Suppose the order of requests are 70, 140, 50, 125, 30, 25, 160 and the initial position of the
Read-Write head is 60. And it is given that the disk arm should move towards the larger value.

Seek Time = Distance Moved by the disk arm = (160-60)+(160-25)+(50-25)=260
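For comparison, the SCAN and LOOK totals can be computed directly; the disk's last track is assumed to be 170, as in the SCAN example above:

```python
def scan_seek(requests, head, last_track=170):
    """SCAN moving toward larger tracks: sweep to the disk end, then
    reverse down to the lowest pending request."""
    total = last_track - head                  # sweep up to the disk end
    lower = [r for r in requests if r < head]
    if lower:
        total += last_track - min(lower)       # reverse to the lowest request
    return total

def look_seek(requests, head):
    """LOOK: same sweep, but reverse at the last request, not the disk end."""
    upper = [r for r in requests if r >= head]
    lower = [r for r in requests if r < head]
    turn = max(upper) if upper else head       # reverse at the last request
    total = turn - head
    if lower:
        total += turn - min(lower)
    return total

reqs = [70, 140, 50, 125, 30, 25, 160]
print(scan_seek(reqs, 60))   # (170-60) + (170-25) = 255
print(look_seek(reqs, 60))   # (160-60) + (160-25) = 235
```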
