Operating system notes

Unit-2

Memory Management technique-

In an operating system, memory management is the function responsible
for managing the computer’s primary memory.

The memory management function keeps track of the status of each
memory location, whether allocated or free.

It determines how memory is allocated among competing processes,
deciding which gets memory, when they receive it and how much they
are allowed.

When memory is allocated, it determines which memory locations will
be assigned. It tracks when memory is freed or unallocated and updates
the status.

Memory management is the process of controlling and coordinating a
computer’s main memory. It ensures that blocks of memory space are
properly managed and allocated so the operating system, applications
and other running processes have the memory they need to carry out
their operations.

To achieve a degree of multiprogramming and proper utilization of
memory, memory management is important. The main aim of memory
management is to achieve efficient utilization of memory.
In a multiprogramming computer, the operating system resides in a
part of memory and the rest is used by multiple processes. The task of
subdividing the memory among different processes is called memory
management.
Why Memory Management is required:

 Allocate and de-allocate memory before and after process
execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To properly utilize main memory.
 To maintain data integrity while executing a process.

Every resource management module of the operating system has to carry
out four functions in respect of the corresponding resource:

1.) Keeping track of the resource (to check whether the resource is free or not).
2.) Resource allocation policy (decide who gets the resource, for how long
and where).
3.) Allocating the resource (actually providing the resource).
4.) Reclaiming the resource.

GOALS OF MEMORY MANAGEMENT-

Memory management is the process of controlling and coordinating a
computer's main memory.
It ensures that blocks of memory space are properly managed and
allocated so the operating system (OS), applications and other running
processes have the memory they need to carry out their operations.

Space utilization: try to keep as much space as possible in use, not wasted.

Run larger programs in a smaller memory area.


User/Logical and Physical Address Space:

User/Logical Address space: An address generated by the CPU is
known as a “Logical Address”. It is also known as a Virtual
address. Logical address space can be defined as the size of the
process. A logical address can be changed.

Physical Address space: An address seen by the memory unit (i.e.,
the one loaded into the memory address register of the memory) is
commonly known as a “Physical Address”. A physical address is also
known as a Real address. The set of all physical addresses
corresponding to these logical addresses is known as the physical address
space. A physical address is computed by the MMU. The run-time
mapping from virtual to physical addresses is done by a hardware
device, the Memory Management Unit (MMU). The physical address always
remains constant.
Mainly, the user address space is the size of the process and the physical
address space is the size of the main memory.
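As a rough sketch (not from these notes), the MMU's run-time mapping can be illustrated with base and limit registers; the function name and the numbers below are made up for illustration.

```python
# Hypothetical sketch of an MMU relocating a logical address using
# base and limit registers. A logical address beyond the limit traps.

def mmu_translate(logical_addr, base, limit):
    """Map a CPU-generated logical address to a physical address."""
    if logical_addr >= limit:          # outside the process's address space
        raise MemoryError("trap: address out of bounds")
    return base + logical_addr         # physical address seen by the memory unit

# A process loaded at physical address 14000 with a 3000-byte space:
print(mmu_translate(346, base=14000, limit=3000))   # 14346
```

The process sees only logical addresses starting at 0; the MMU adds the base at every reference, which is why the logical address can change (on relocation) while the physical address computed for the loaded process stays consistent.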

MEMORY ALLOCATION METHODS-

1. Contiguous memory allocation technique
a.) Single partition allocation
b.) Multiple partition allocation

2. Non-contiguous memory allocation technique
a.) Static partition
b.) Dynamic partition

3. Relocatable memory allocation technique

4. Paging memory allocation technique

5. Segmentation memory allocation technique

6. Demand paging memory allocation technique

7. Demand segmented memory allocation technique

In a uni-programming system, main memory is divided into two parts:

one part for the operating system (resident monitor, kernel),

and one part for the program currently being executed (contiguous).

In a multi-programming system, the “user” part of memory must be
further subdivided to accommodate multiple processes. The task of
subdivision is carried out dynamically by the operating system and is
known as memory management (non-contiguous).

CONTIGUOUS MEMORY ALLOCATION TECHNIQUE-

Contiguous storage allocation implies that a program’s data and
instructions are assured to occupy a single contiguous memory area.

a.) Single partition allocation


b.) Multiple partition allocation

Single partition allocation-

The simplest possible memory management scheme is to run just one
program at a time. In this scheme, the memory is divided into two
parts. One part holds the operating system and the remaining part holds
the user processes, which are loaded and executed one at a time. When
a user process completes its task, it is taken out of memory and the next
requesting process is brought into memory by the operating
system.
ADVANTAGES-

1. The major advantage of this scheme lies in its simplicity.

2. It does not require great expertise to understand or use such a
system.

DIS-ADVANTAGES-

1. Poor utilization of memory.

2. Poor utilization of the processor (idle while waiting for I/O).
3. A process is limited to the size of available main memory.

Multiple partition allocation-

In this memory management scheme, more than one program runs at a
time. Memory is divided into more than one part: one part is held by the
operating system and the other part is further divided into many parts
for the execution of processes. Multiple partition allocation is further
divided into two schemes:

1. Fixed sized memory allocation (static)

2. Variable sized memory allocation (dynamic)
Fixed sized memory allocation-

This scheme divides the memory into a number of separate fixed areas,
each of which can hold one process.

For example-

The memory consists of three fixed areas, of sizes 200k, 300k and
400k respectively, each of which holds a process of 150k, 200k and 300k
respectively. All three processes could be active at any time. This
scheme is also known as partitioned memory allocation (static).

Each partition will typically contain unused space, illustrated by shaded
areas. The occurrence of wasted space in this way is referred to as
internal fragmentation.
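The wasted space in the example above can be totalled directly; this is a minimal sketch using the partition and process sizes given in the notes.

```python
# Internal fragmentation for the fixed-partition example:
# partitions of 200k, 300k and 400k holding processes of 150k, 200k and 300k.

partitions = [200, 300, 400]   # partition sizes in KB
processes  = [150, 200, 300]   # process loaded in each partition

wasted = [part - proc for part, proc in zip(partitions, processes)]
print(wasted)                                   # [50, 100, 100]
print(sum(wasted), "KB lost to internal fragmentation")   # 250 KB
```

Each difference is space inside an allocated partition that no other process can use, which is exactly what makes it internal rather than external fragmentation.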

ADVANTAGES-

1. Simple and easy to implement.

Dis-advantages-

1. The fixed partition sizes can prevent a process from being run due to
the unavailability of a partition of sufficient size.
2. Internal fragmentation wastes space which, collectively, could
accommodate another process.
These problems can be solved by the next scheme.

VARIABLE SIZED PARTITION MEMORY ALLOCATION-

The obvious cure for the fixed partition problems is to allow the
partitions to be variable in size at load time; in other words, to allocate
to each process the exact amount of memory it requires. Processes are
loaded into consecutive areas until the memory is filled, or, more likely,
the remaining space is too small to accommodate another process.

When a process terminates, the space it occupied is freed and becomes
available for the loading of a new process. As processes terminate and
space is freed, the free space appears as a series of ‘holes’ between the
active memory areas. The operating system must attempt to load an
incoming process into a space large enough to accommodate it. It can
often happen that a new process cannot be started because none of
the holes is large enough, even though the total free space is more than
the required size. This situation is shown in the figure, in which a
process B has terminated. Suppose a new process D enters and requires
300k. Here we have 450k of unused space, but we cannot allocate this
space to process D. Distribution of the free memory space in this
fashion is termed external fragmentation.

STORAGE PLACEMENT POLICIES-

When new processes have to be loaded using the variable partition
scheme, it is necessary to try to select the ‘best’ locations in which to
place them; that is, to select the series of holes which will provide the
maximum overall throughput of the system, bearing in mind that an
inefficient allocation can delay the loading of a process. An algorithm
used for this purpose is termed a placement policy.
i.) Best fit policy- in this case, the memory manager places a process in
the smallest block of unallocated memory in which it will fit. For
example, as shown in the figure, a process requires 12kb of memory and the
memory manager currently has a list of unallocated blocks of 7kb,
14kb, 19kb, 10kb and 13kb. The best fit strategy will allocate
12kb of the 13kb block to the process.

ii) First fit policy- an incoming process is placed in the first available hole
which can accommodate it. Using the same example, to fulfil the 12kb
request, first fit will allocate 12kb of the 14kb block to the process.

iii) Worst fit policy- the memory manager places a process in the largest
block of unallocated memory available. To furnish the 12kb request
again, worst fit will allocate 12kb of the 19kb block to the process,
leaving a 7kb block for future use.
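The three policies above can be sketched as small selection functions over the example hole list; the function names are illustrative, and a real allocator would also split the chosen hole and update the free list.

```python
# Placement policies on the worked example: free blocks of
# 7, 14, 19, 10 and 13 KB, and an incoming 12 KB request.

def best_fit(holes, request):
    fits = [h for h in holes if h >= request]
    return min(fits) if fits else None     # smallest hole that fits

def first_fit(holes, request):
    for h in holes:
        if h >= request:                   # first hole that fits, in list order
            return h
    return None

def worst_fit(holes, request):
    fits = [h for h in holes if h >= request]
    return max(fits) if fits else None     # largest hole available

holes = [7, 14, 19, 10, 13]
print(best_fit(holes, 12))    # 13
print(first_fit(holes, 12))   # 14
print(worst_fit(holes, 12))   # 19
```

Note how the policies trade off differently: best fit leaves the smallest leftover (1 KB here), while worst fit leaves the largest leftover (7 KB) in the hope it remains usable.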

ADVANTAGES-

1. It facilitates multiprogramming and, hence, more efficient utilization of the
processor and I/O devices.

2. The algorithms used are simple and easy to implement.

3. It requires no special costly hardware.

Dis-advantage-

1. Fragmentation can be a significant problem.

2. Even if memory is not fragmented, the single free area may not
be large enough for a partition.
3. It requires more memory than a single-process system in order to
hold more than one process.
Relocatable partitioned memory management-

When partitioning creates multiple holes in memory, it is possible to
combine them all into one big hole by moving all the processes
downward as far as possible. This technique is known as memory
compaction. It is a solution to the problem of external fragmentation.

For example, the memory map of (a) can be compacted as shown in the figure:
the two holes of sizes 200k and 250k can be compacted into one hole
of size 450k. Now it is possible to allocate the memory to a new
process D of size 300k.
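A minimal sketch of compaction, assuming a toy memory layout (the region names and sizes are made up to mirror the 200k + 250k example): allocated regions slide together and the free holes merge into one block.

```python
# Memory compaction sketch: regions is an ordered memory map where
# name None marks a free hole; sizes are in KB.

def compact(regions):
    """Slide allocated regions together; merge all holes into one at the end."""
    used = [(name, size) for name, size in regions if name is not None]
    free = sum(size for name, size in regions if name is None)
    return used + [(None, free)]       # one combined hole

memory = [("OS", 100), (None, 200), ("A", 150), (None, 250), ("C", 300)]
print(compact(memory))
# [('OS', 100), ('A', 150), ('C', 300), (None, 450)]
```

The two holes of 200k and 250k become a single 450k hole, large enough for a 300k process D; the cost not shown here is copying every process and fixing up its address-dependent values.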

There is a decision to be made as to when and how often to perform
the compaction.

After relocating a process, memory must be scanned to identify
address-dependent values and modify their addresses; this adds CPU
overhead and must be done carefully to avoid corruption of the process.
Repeating this whenever holes accumulate is called recompaction.

For example, we could compact the memory:

1. As soon as any process terminates.

2. When a new process cannot be loaded due to fragmentation.
3. At fixed intervals.
4. When the user decides to.

ADVANTAGES-

1. The relocatable partition scheme eliminates fragmentation.

2. It allows for a higher degree of multiprogramming, which
results in increased memory and processor utilization.

DIS-ADVANTAGES-

1. Complex.
2. High cost.
3. Slows down the system.
4. Compaction time may be substantial.
5. Some memory will still be unused because, even though it is
compacted, the amount of free area may be less than the needed
partition size.

Fragmentation
As processes are loaded and removed from memory, the free memory space
is broken into little pieces. It sometimes happens that processes cannot
be allocated to memory blocks because the blocks are too small, and the
memory blocks remain unused. This problem is known as Fragmentation.
Fragmentation is of two types −

1. External fragmentation
Total memory space is enough to satisfy a request or to hold a process,
but it is not contiguous, so it cannot be used.
2. Internal fragmentation
The memory block assigned to a process is bigger than requested. Some
portion of memory is left unused, as it cannot be used by another process.

Distinguish between internal and external fragmentation-

External fragmentation-

1. The space wasted is external to the allocated memory regions.

2. Memory space exists to satisfy a request, but it is unusable as it is
not contiguous.

Internal fragmentation-

1. The space wasted is internal to the allocated memory regions.

2. Allocated memory may be slightly larger than requested memory;
this size difference is wasted memory internal to a partition.

PAGING MEMORY MANAGEMENT TECHNIQUE-

Another possible solution to the external fragmentation problem is
paged memory management. In a paged system, each process is
divided into a number of fixed-size ‘chunks’ called pages, typically 4k
bytes in length. The memory space is also divided into blocks of the
same size called frames. The loading process now involves transferring
each process page to some memory frame.

As shown in the figure, three processes have been loaded into frames.
The pages remain logically contiguous but the corresponding frames
are not necessarily contiguous.
Another definition-Paging is a memory management technique in which
process address space is broken into blocks of the same size called pages .
The size of the process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames and the size of a frame is kept the same as that of a
page to have optimum utilization of the main memory and to avoid external
fragmentation.
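Paged address translation can be sketched as follows, assuming the 4k page size mentioned above; the page table contents here are made-up values, not from the notes.

```python
# Paged address translation sketch with a 4 KB page size. The page
# table is a plain list mapping page number -> frame number.

PAGE_SIZE = 4096
page_table = [5, 2, 7]          # page 0 -> frame 5, page 1 -> frame 2, ...

def translate(logical_addr):
    page   = logical_addr // PAGE_SIZE      # which page of the process
    offset = logical_addr %  PAGE_SIZE      # position within that page
    frame  = page_table[page]               # frame holding the page
    return frame * PAGE_SIZE + offset       # physical address

print(translate(5000))   # page 1, offset 904 -> frame 2 -> 9096
```

Because pages and frames are the same size, the offset passes through unchanged; only the page number is replaced by a frame number, which is why frames need not be contiguous.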
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging −
 Paging reduces external fragmentation.
 Paging is simple to implement and is regarded as an efficient memory
management technique.
 Due to the equal size of pages and frames, swapping becomes very
easy.
 It increases memory and processor utilization.
Dis-advantages-
 Paging still suffers from internal fragmentation.
 The page table requires extra memory space, so paging may not be good
for a system having a small RAM.
 The page address mapping hardware usually increases the cost of the
computer and at the same time slows down the processor.
 Some memory will still be unused if the number of available frames is
not sufficient for the address spaces of the processes to be run.
Segmentation memory management technique-
Segmentation is a memory management technique in which each job is
divided into several segments of different sizes, one for each module that
contains pieces that perform related functions. Each segment is actually a
different logical address space of the program.

Segmentation memory management works very similar to paging but here


segments are of variable-length where as in paging pages are of fixed size.

A program segment contains the program's main function, utility functions,


data structures, and so on. The operating system maintains a segment map
table for every process and a list of free memory blocks along with segment
numbers, their size and corresponding memory locations in main memory. For
each segment, the table stores the starting address of the segment and the
length of the segment.
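The segment-table lookup described above can be sketched like this; the base/limit values are illustrative, not taken from the notes.

```python
# Segment-table lookup sketch: each entry stores the segment's starting
# address (base) and its length (limit).

segment_table = [
    (1400, 1000),   # segment 0: base, limit
    (6300,  400),   # segment 1
    (4300, 1100),   # segment 2
]

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # reference past the end of the segment
        raise MemoryError("trap: segment overflow")
    return base + offset                # physical address in main memory

print(seg_translate(2, 53))    # 4353
```

Unlike paging, the limit differs per segment because segments are variable-length, so every reference must be checked against the segment's own limit.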

ADVANTAGES –
1. Allows dynamic segment growth.
2. Assists dynamic linking.
3. Facilitates shared segments.
Dis-advantages-
1. Considerable compaction overhead.
2. It is difficult to manage variable-size segments on secondary
storage.
3. The maximum size of a segment is limited by the size of main memory.

Demand Paging-
According to the concept of virtual memory, the entire process doesn't need
to be loaded into the main memory to execute it. A process can be executed
efficiently if only some of its pages are present in the main memory at a
particular time. But the problem is how we decide, beforehand, which pages
should be loaded into the main memory for execution of a process; that is,
which page should be present in the main memory at a particular time and
which one should not.

To resolve this problem, operating systems use the concept of demand
paging. This concept says we should not load any page into the main memory
until it is required, and keep all pages in secondary memory until they are
demanded.

Demand paging is a technique used in virtual memory systems where pages
are brought into the main memory only when required or demanded by
the CPU. Hence, the pager is also called a lazy swapper because the swapping
of pages is done only when the CPU requires it. Virtual memory is commonly
implemented using demand paging.

In demand paging, the pager brings only those necessary pages into the
memory instead of swapping in a whole process. Thus, demand paging avoids
reading into memory pages that will not be used anyways, decreasing the
swap time and the amount of physical memory needed.

Advantages of Demand Paging


Here are the following advantages of demand paging in the operating system,
such as:
o It increases the degree of multiprogramming, as many processes can be
present in the main memory simultaneously.
o There is more efficient use of memory, as processes larger than the
main memory can also be executed using this mechanism, because
we are not loading the whole process at a time.
o It allows easy scaling of virtual memory.
o If a program is larger than physical memory, it helps run the program
without compaction.
o Partition management is simpler.
o It is more useful in a time-sharing system.
o It has no limitations on the level of multiprogramming.
o It discards external fragmentation.
o It is easy to swap all pages.

Disadvantages of Demand Paging


Below are some disadvantages of demand paging in an operating system,
such as:

o The amount of processor overhead and the number of tables used for
handling page faults are greater than in simple page management
techniques.
o It has a higher probability of internal fragmentation.
o Its memory access time is longer.
o The Page Table Length Register (PTLR) has a limit for virtual memory.
o The page map table needs additional memory and registers.
DEMAND SEGMENTATION-

According to the concept of virtual memory, the entire process doesn't need
to be loaded into the main memory to execute it. A process can be executed
efficiently if only some of its segments are present in the main memory at a
particular time. But the problem is how we decide, beforehand, which
segments should be loaded into the main memory for execution of a process;
that is, which segments should be present in the main memory at a particular
time and which ones should not.

To resolve this problem, operating systems use the concept of demand
segmentation. This concept says we should not load any segment into the
main memory until it is required, and keep all segments in secondary memory
until they are demanded.

Demand segmentation is a technique used in virtual memory systems where
segments are brought into the main memory only when required or demanded
by the CPU.

In other words, the process is divided into variable-sized segments and loaded
into main memory in parts, not fully at one time; the other segments are
loaded when demanded.

Page Fault-
A page fault occurs when a program attempts to access data or code that is
in its address space, but is not currently located in the system RAM.

In other words-
When a process references a page that is not present in main memory, a page
fault occurs, and page replacement is then used to bring in the required page.
Steps to handle a page fault-

1. If a process refers to a page which is not in physical memory, an
internal table kept with the process control block is checked to verify whether
the memory reference to the page was valid or invalid.
2. If the memory reference to the page was valid but the page is missing, the
process of bringing the page into physical memory starts.
3. A free memory location is identified to bring in the missing page.
4. The instruction that was interrupted due to the missing page is restarted.
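The steps above can be sketched as a hypothetical fault handler; `valid_pages`, `memory` and `disk` are stand-ins (my names, not from the notes) for the PCB's internal table, physical memory and the backing store.

```python
# Sketch of the four page-fault handling steps.

def handle_fault(page, valid_pages, memory, disk):
    # 1. Check the internal table: was the reference valid?
    if page not in valid_pages:
        raise MemoryError("invalid reference: abort process")
    # 2-3. Valid but missing: find a free frame and read the page in.
    memory[page] = disk[page]
    # 4. The instruction that faulted can now be restarted.
    return "restart instruction"

disk = {3: "page-3 data"}
memory = {}
print(handle_fault(3, valid_pages={3}, memory=memory, disk=disk))
print(memory)   # {3: 'page-3 data'}
```

A real handler would also pick a victim frame when none is free, which is where the page replacement algorithms below come in.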

Page replacement algorithm-


The algorithms used to choose which page will be replaced are referred to as
page replacement algorithms or policies.

1. First-in, First-out (FIFO) page replacement algorithm-


The FIFO method selects for removal the page which has been resident in
memory for the longest time. However, the performance of the FIFO
method is often poor, due to the probability that some heavily used
routines are in constant use throughout the life of the process. This algorithm
is implemented with the help of a FIFO queue to hold the pages in memory. A
page is inserted at the rear of the queue and is replaced at the front of the
queue.

In the FIFO policy, BELADY’S ANOMALY may exist. Belady’s anomaly reflects
the fact that for some page-replacement algorithms, the page-fault rate may
increase as the number of allocated frames increases. But this is not
always true.
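A short FIFO simulation makes both points concrete; the reference string below is the classic one that exhibits Belady's anomaly (it is not taken from these notes).

```python
# FIFO page replacement: count faults for a reference string and a
# given number of frames. Adding a frame can *increase* the faults.
from collections import deque

def fifo_faults(refs, frames):
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()        # evict the longest-resident page
            queue.append(page)         # newest page joins at the rear
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: Belady's anomaly
```

With 3 frames this string causes 9 faults, but with 4 frames it causes 10, which is exactly the anomalous behaviour described above.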
2. Least recently used (LRU) page replacement algorithm-
In this method, we replace the page that has not been used for the longest
period of time; that is why we call it ‘least recently used’. LRU is
implemented by using a stack.
Whenever a page is referenced, it is removed from the stack and put on the
top. In this way, the top of the stack is always the most recently used page
and the bottom is the LRU page.

As you can see in the above example, entries may be removed from the
middle of the stack. It is implemented by a doubly linked list with a head
and tail pointer.
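The stack scheme just described can be sketched with a plain list standing in for the doubly linked list (the reference string is illustrative, not from the notes).

```python
# LRU via the stack scheme: on each reference the page moves to the
# top, so the bottom of the stack is always the LRU victim.

def lru_faults(refs, frames):
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            stack.remove(page)         # pull the entry out of the middle
        else:
            faults += 1
            if len(stack) == frames:
                stack.pop(0)           # bottom of stack = least recently used
        stack.append(page)             # most recently used goes on top
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3))   # 9 faults
```

A list's `remove` is O(n), which is why a real implementation prefers the doubly linked list with head and tail pointers mentioned above.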
3. Optimal page replacement algorithm-
An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. An optimal algorithm will never suffer from Belady’s anomaly. The
optimal page-replacement algorithm says:
Replace the page that will not be used for the longest period of time.
Unfortunately, the optimal page-replacement algorithm is difficult to
implement, because it requires future knowledge of the reference string.
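In a simulation, where the whole reference string is known in advance, the optimal policy can be sketched as follows (using the same illustrative string as the FIFO example above).

```python
# Optimal replacement: evict the page whose next use is farthest in the
# future (or never). Only feasible offline, as a benchmark.

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                   # hit
        faults += 1
        if len(memory) == frames:
            future = refs[i + 1:]
            # victim: page never used again, else the one used farthest away
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else len(future))
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(optimal_faults(refs, 3))   # 7 faults
```

For this string with 3 frames, optimal incurs 7 faults versus FIFO's 9, which illustrates why it serves as the lower-bound benchmark for the other policies.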
Principle of localization of references-

If at any time “t” an address “a” is used/referenced/accessed, then it is
highly likely that addresses around “a” (a ± δa) have been or will be
referenced around time “t” (t ± δt).

THRASHING-

There may be a situation when the memory is full of frequently used
pages. If a process needs a page that is not in memory, a page fault
occurs. In order to make space for swapping in the required page, one
of the frequently accessed pages is swapped out from the main memory.
Soon, the page that was swapped out is required for execution, and this
again results in a page fault. Page faults occur again and again, and
swapping becomes a large overhead. This high paging activity is called
thrashing.

Thrashing generally occurs in multiprogramming or multitasking
operating systems. As many processes are running at the same time in
these systems, memory becomes over-committed and this leads to
thrashing.

Dis-advantage-

Thrashing degrades the performance of a system.

Methods to handle thrashing-

1. By using a local page replacement algorithm.
2. By allocating a process as many frames as it needs.
3. By controlling the page-fault frequency.
BELADY’S ANOMALY-

It arises in the first-in first-out page replacement algorithm.

Clearly we can assume that if we increase the number of frames, the
number of page faults decreases. But an anomaly occurs when
increasing the number of frames also increases the number of page
faults.
OPERATING SYSTEM UNIT -3
PROCESS MANAGEMENT-

A process can be viewed as a program in execution, and is used as a unit of work
for a processor. To put this another way, the processor has to manage several
activities simultaneously, and each activity corresponds to one process. A
process will need several resources such as CPU time, memory, files and I/O
devices to accomplish its task. These resources are allocated to the process
either when it is created, or while it is executing.

Process management manages the allocation of processes (tasks or jobs) to a
processor.

JOB: a job is a collection of activities needed to accomplish the required task.

PROCESS: a process is a computation that may be carried out concurrently with
other computations.

DIFFERENCE BETWEEN PROCESS AND PROGRAM:-

A program is defined as a sequenced set of instructions, whereas a process is
more than the program code, which is known as the text section.

A program by itself is not a process. A program is a passive entity, such as the
contents of a file stored on disk, whereas a process is an active entity, with a
program counter specifying the next instruction to execute and a set of
associated resources.

A program is a static entity whose state doesn’t change with time.

A process is a dynamic entity whose state changes with time.

PROCESS STATES AND THEIR TRANSITIONS:-

When a process is born, its life in the system begins. During its lifetime, a
process goes through a series of discrete states; in other words, as a process
executes, it changes state.
Each process may be in one of the following states-

1. NEW-(the process is being created)

Whenever a user types a program to execute, the O/S attempts to find this
program and, if successful, the program code will be loaded and a system
call used to generate a process corresponding to the execution of the
program.
2. READY-(the process is waiting to be assigned to a processor)
There are many processes in memory which may want to execute, but
there is one processor which can execute only one process at a time. So
other processes must wait for the allocation of the processor. A process
which is ready to be executed by the processor is said to be in the ready state.
3. RUNNING-(instructions are being executed)
The C.P.U is currently allocated to the process and the process is in
execution.
4. WAITING-(the process is waiting for an event)
The process is waiting for some event to occur (such as an I/O completion
or reception of a signal).
5. BLOCKED-
A process comes into the blocked state if the last event of interest to the
process was a request made by it to the system. The request is yet to be
fulfilled, hence the process can’t execute even if the C.P.U is available to it.
6. TERMINATED-(the process has finished execution)

PCB (process control block)-

The PCB is a data structure containing certain important information about the
process. Each process has its own PCB to represent it in the operating system.

Each process is represented in the operating system by a process control block
(PCB)—also called a task control block. It contains many pieces of information
associated with a specific process, including these-
1. Process state-the state may be new, ready, running, waiting, halted and so
on.
2. Program counter-the counter indicates the address of the next instruction
to be executed for this process.
3. CPU registers- the registers vary in number and type, depending on the
computer architecture. They include accumulator, index-registers, stack
pointers, and general-purpose registers, plus any condition- code
information.
4. CPU-scheduling information- this information includes a process priority,
pointers to scheduling queues and any other scheduling parameters.
5. Memory – management information- this information may include such
information as the value of the base and limit registers, the page tables, or
the segment tables, depending on the memory system used by the
operating system.
6. Accounting information-this information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers and
so on.
7. I/O status information- the information includes the list of I/O devices
allocated to this process, a list of open files and so on.

The PCB simply serves as the repository for any information that may vary
from process to process.

PROCESS SWITCH AND MODE SWITCH-

A COMPUTER operates in two modes-

1. User mode- the computer directly interacts with the user.

2. System/kernel mode- the computer directly interacts with the system.

Process switch (switch from one process to another process)-

A process switch occurs when the processor switches from one thread/process to
another. The contents of the CPU registers and the instruction pointer are
saved.
For the new task, the registers and instruction pointer are loaded into the
processor, and then execution of the new process may start/resume.
The old program will not execute further, but the state of that process is saved
in memory for when the kernel decides it is ready to execute it again. This
concept resembles multitasking, but in reality only a single process can run at a
time on a CPU.
Features −
 Frequent process switches affect performance.
 Process switching increases the load on the CPU.

Mode switch (change from system mode to user mode and vice-
versa)-
A mode switch is used when the CPU changes privilege levels, for example when
a system call is made or a fault occurs. The kernel works in a more privileged
mode than a standard user task. If a user process wants to access things that
are only accessible to the kernel, a mode switch must occur. The currently
executing process need not be changed during a mode switch.
A mode switch changes the privilege level between modes such as user mode
and kernel mode.
Example- to switch from p1 to p2, p1 will hand control to the O/S and the O/S
will hand control to p2. One process switch involves two mode switches.

STATE SPACE- the set of states of all the processes constitutes the state space;
that is, the state space is the combination of the states of all processes.
STATE TRANSITION DIAGRAM-

SCHEDULING CRITERIA- different CPU-scheduling algorithms have different
properties and may favor one class of processes over another.

Let us consider various scheduling criteria-

1. Waiting time- in a multiprogramming operating system, several jobs reside
in memory at a time, and the CPU executes only one job at a time while the
rest of the jobs wait for the CPU. Waiting time is the time a process spends
waiting for resource allocation due to contention with other processes in a
multiprogramming system.
It is calculated by W(x) = T(x) - x
W(x) = waiting time
T(x) = turnaround time
x = units of service.
OR

Waiting time = t2 - t1

Average waiting time = total of all waiting times / n

Where t1 = when a process enters the ready state,

t2 = when a process enters the running state,

n = number of processes.

For better performance, average waiting time should be less.

2. Turnaround time- it may be defined as the interval from the time of
submission of a process to the time of its completion. It is the sum of the
periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU and doing I/O. It should be as small as possible.

Turnaround time = |t2 - t1|

Where t1 = readying time,

t2 = completion of the process.

3. Response time- in an interactive system, response time is the best metric. It
is defined as the time interval between the job submission and the first
response produced by the job. Response time should be minimized in an
interactive system.
Response time = |t2 - t1|
Where t1 = when you finish giving the input,
t2 = when the first character of output appears.
4. Throughput: it refers to the amount of work completed in a unit of time.
One way to measure throughput is by means of the number of processes
that are completed in a unit of time. The higher the number of processes,
the more work is apparently being done by the system. But this approach is
not very useful for comparison, because it is dependent on the
characteristics and resource requirements of the processes being executed.

Example- t1=ready state

t2=start running

t3=completes

t4=input completes

t5=output appears/produces.

|t1-t2|=waiting time

|t1-t3|=turn around time

|t4-t5|=response time
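The waiting-time formula W(x) = T(x) - x can be applied to a tiny first-come-first-served example; the three service times below are made up for illustration and all jobs are assumed to arrive together at time 0.

```python
# Scheduling criteria for a small FCFS run: three jobs arrive at time 0
# with the CPU bursts below and execute in list order.

service = [24, 3, 3]           # x: units of service for each job

finish, clock = [], 0
for burst in service:
    clock += burst
    finish.append(clock)       # completion time = turnaround (arrival at 0)

turnaround = finish                                      # T(x): [24, 27, 30]
waiting = [t - x for t, x in zip(turnaround, service)]   # W(x) = T(x) - x
print(waiting)                           # [0, 24, 27]
print(sum(waiting) / len(waiting))       # average waiting time = 17.0
```

Running the long 24-unit job first makes the average waiting time 17; scheduling the two short jobs first would reduce it, which is why the scheduling order matters for these criteria.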

TYPES OF SCHEDULING-

There are two types of scheduling algorithms-

1. Non-preemptive scheduling
2. Preemptive scheduling
1. Non-preemptive scheduling: In non-preemptive scheduling, a scheduling decision is made every time some job in the system finishes its execution, and at system initialization time. It means that once a process has been given the CPU, the CPU cannot be taken away from that process. In non-preemptive scheduling, short jobs are made to wait by longer jobs, but the treatment of all processes is fairer. Response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.

2. Pre-emptive scheduling: On the other hand, in preemptive scheduling, a scheduling decision can be made even while the execution of a job is in progress. Consequently, a job in execution may be forced to release the processor so that the execution of some other job can be undertaken. Preemptive scheduling is useful in systems in which high-priority processes require rapid attention. To make preemption effective, many processes must be kept in main storage so that the next process is normally ready for the CPU when it becomes available. Keeping non-running programs in main storage also involves overhead.

Which process will get the CPU/resource, when, and for how much time is scheduling.
A scheduler is an operating system module that makes the decision of admitting the next job into the system for execution.

LEVELS OF SCHEDULING- Scheduling in a system can be exercised at three distinct levels, which we will refer to as high level (long-term scheduling), medium level (mid-term scheduling) and low level (short-term scheduling).

1. Long term scheduling-

The long-term scheduler is also known as the job scheduler. This scheduler selects processes from the job queue and loads them into memory for execution. It also regulates the degree of multiprogramming. The main goal of this type of scheduler is to offer a balanced mix of jobs, like processor-bound and I/O-bound jobs, which makes multiprogramming manageable.

In simple words, it schedules processes/tasks to optimize resource utilization by preparing a balanced process load (mixing CPU-intensive and input-intensive jobs) so as to achieve the 2nd objective of the operating system: proper utilization of resources.

2. Mid term scheduling-

Medium-term scheduling is an important part of swapping. It enables you to handle the swapped-out processes. A running process can become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In order to remove the process from memory and make space for other processes, the suspended process should be moved to secondary storage.
In simple words, the process transition diagram has a state called blocked or suspended or swapped. Whenever a process is blocked, it will be taken back to the ready state by the mid-term scheduler, which allocates memory to the processes which are blocked or swapped.

3. Short-term scheduling-

Short-term scheduling is also known as CPU scheduling. The main goal of this scheduler is to boost system performance according to a set of criteria. It selects from a group of processes that are ready to execute and allocates the CPU to one of them. The dispatcher gives control of the CPU to the process selected by the short-term scheduler.

In simple words, it picks/selects a process out of several in the ready state, moves it to the running state, and allocates the CPU to that process.

STARVATION-

Starvation or indefinite blocking is a phenomenon associated with priority scheduling algorithms. A process that is present in the ready state and has low priority keeps waiting for CPU allocation because processes with higher priority keep arriving. Higher-priority processes can prevent a low-priority process from ever getting the CPU.
For example, we can think of a scenario in which one process has a very low priority (for example, 127) and we keep giving other processes high priority. The low-priority process can then wait for the CPU indefinitely, which leads to starvation.

PROCESS SCHEDULING ALGORITHM-

1. FIRST-COME, FIRST –SERVED ALGORITHM


2. SHORTEST JOB FIRST
3. SHORTEST REMAINING TIME FIRST
4. ROUND ROBIN
5. PRIORITY BASED SCHEDULING
6. Multi-level queue scheduling
7. Biased round robin
8. Multi-level queue with feedback

1. FIRST-COME, FIRST-SERVED ALGORITHM-

As the name implies, the FCFS policy simply assigns the processor to the process which is first in the ready queue.

The key concept of this algorithm is: "allocate the processor (CPU) in the order in which the processes arrive".

It is also known as First In First Out (FIFO). FCFS is a non-preemptive discipline. It is fair in a formal sense but somewhat unfair in that long jobs make short jobs wait, and unimportant jobs make important jobs wait.

Example 1
Consider the following set of processes that arrive at time 0, with the
length of CPU Burst time (or run time) given in milliseconds. CPU - burst
time indicates that for how much time, the process needs the CPU.

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result in the "GANTT CHART".

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3.

Thus the Average Waiting Time = (0 + 24 + 27)/3 = 17 ms

Average waiting time = (total waiting time)/(no. of processes)

Thus the average waiting time under the FCFS policy is generally very long. FIFO is rarely used on its own but it is often embedded within other schemes.

Advantages-

1.) It is simple to understand and code.


2.) Suitable for Batch Systems.

Disadvantages-

1.) Waiting time can be large if short requests wait behind long ones.
2.) It is not suitable for time-sharing systems where it is important that each user should get the CPU for an equal amount of time.
3.) A proper mix of jobs (I/O-bound and CPU-bound jobs) is needed to achieve good results from FCFS scheduling.
4.) Jobs are executed on a first come, first served basis.
5.) It is a non-preemptive scheduling algorithm.
6.) Easy to understand and implement.
7.) Its implementation is based on a FIFO queue.
8.) Poor in performance as the average waiting time is high.
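As a sketch, the waiting times from Example 1 can be reproduced in a few lines; the burst times 24, 3 and 3 ms are assumed here, since they yield the quoted result:

```python
# Sketch of FCFS waiting-time calculation (burst times assumed: 24, 3, 3 ms).
def fcfs_waiting_times(burst_times):
    """Each process waits for the sum of the bursts of all processes before it."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # waiting time = time already consumed
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # 17.0
```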

2. Shortest Job Next (SJN)


key concept of algorithm is: “CPU is allocated to the process with least CPU-burst
time”.

Amongst the processes in the ready queue, CPU is always assigned to the process
with least CPU burst requirement. SJF reduces average waiting time over FCFS.

When the CPU is available it is assigned to the process that has the smallest run time.

If two processes have the same run time, FCFS is used to break the tie. The shorter the job, the better service it will receive. This tends to reduce the number of waiting jobs, and also reduces the number of jobs waiting behind large ones. As a result, SJF can minimize the average waiting time of jobs. The obvious problem with SJF is that it requires precise knowledge of how long a job or process will run. This is also known as shortest job first, or SJF.

 This is a non-preemptive scheduling algorithm (its preemptive version is shortest remaining time first).
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The scheduler should know in advance how much time a process will take.

Example 2

Consider the set of processes, with the length of CPU burst time given in
milliseconds:
"GRANTT CHART"

Waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 ms for process P3 and 0 milliseconds for process P4.

Average Waiting Time = (3 + 16 + 9 + 0)/4 = 28/4 = 7 ms

If we were using the FCFS policy, the average waiting time would be 10.25 ms.

Thus the SJF algorithm is optimal because it gives the minimum average waiting time.

Advantages- (i) Minimum average waiting time.

Disadvantages- (i) The problem is to know the length of time for which the CPU is needed by a process. A prediction formula can be used to predict the amount of time for which the CPU may be required by a process.
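A minimal sketch of non-preemptive SJF; the burst times 6, 8, 7 and 3 ms for P1–P4 are assumed here, chosen because they reproduce the waiting times quoted in Example 2:

```python
# Sketch of non-preemptive SJF (burst times assumed: 6, 8, 7, 3 ms for P1..P4).
def sjf_waiting_times(burst_times):
    """Run jobs shortest-first; a stable sort breaks ties in FCFS order.
    Returns the waiting times in the original process order."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, elapsed = [0] * len(burst_times), 0
    for i in order:
        waits[i] = elapsed      # everything run so far is this job's wait
        elapsed += burst_times[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])
print(waits)                    # [3, 16, 9, 0]
print(sum(waits) / len(waits))  # 7.0
```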

3. Priority Based Scheduling-

A priority is associated with each process and the CPU is allocated to the process
with highest priority. Equal priorities again are scheduled in FCFS order.

Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantities, e.g. time limits, memory requirement, the number of open files, etc.

External priorities are set by criteria that are external to Operating system, such
as importance of process, type of process etc. Priority scheduling can be either
pre-emptive or non-preemptive. When a process arrives at ready queue, its
priority is compared with the priority of currently running process. The CPU will
be allocated to the new process if the priority of the newly arrived process is
higher than the priority of the currently running process. On the other hand non-
preemptive priority scheduling will simply put the new process at the head of
ready Queue.

 Priority scheduling is most often used as a non-preemptive algorithm and is one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed first, and so on.
 Processes with the same priority are executed on a first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource requirement.
 The problem of starvation (a condition in which a process doesn't get the CPU and its age keeps on increasing) occurs in priority-based scheduling. To avoid the starvation problem we use aging of priority.
 Aging says that the priority of a process is directly proportional to the time spent by the process waiting for the CPU.

EXAMPLE-
consider the processes:

“Gantt chart”

The average waiting time under the priority scheduling algorithm is

(6 + 0 + 16 + 18 + 1)/5 = 41/5 = 8.2 ms

Problem with priority scheduling: Preemptive priority scheduling sometimes becomes the biggest cause of indefinite blocking or starvation of a process. If a process is in the ready state but its execution is almost always preempted due to the arrival of higher-priority processes, it will starve for its execution. Therefore, a mechanism like aging has to be built into the system so that almost every process gets the CPU within a fixed interval of time. This can be done by increasing the priority of a low-priority process after a fixed time interval, so that at some moment of time it becomes a high-priority job as compared to the others and thus finally gets the CPU for its execution.
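A rough sketch of aging (the function name and aging step are illustrative, not from the notes): each time a process is passed over, its priority number is lowered, i.e. its effective priority rises, so even a very low-priority process eventually gets picked despite a constant stream of high-priority arrivals:

```python
# Sketch of priority scheduling with aging (lower number = higher priority).
def pick_next(ready, aging_step=40):
    """ready: list of [name, priority]. Picks the highest-priority process
    and ages (raises the priority of) every process that was passed over."""
    ready.sort(key=lambda p: p[1])
    chosen = ready.pop(0)
    for p in ready:
        p[1] -= aging_step      # the longer a process waits, the higher it climbs
    return chosen[0]

ready = [["low", 127]]
order = []
for i in range(5):
    ready.append([f"hi{i}", 1])   # a fresh high-priority job keeps arriving
    order.append(pick_next(ready))
print(order)                      # ['hi0', 'hi1', 'hi2', 'hi3', 'low']
```

Without the aging step, "low" would never be scheduled in this arrival pattern; with it, the starving process eventually outranks the newcomers.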

4. Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN
algorithm.
 The processor is allocated to the job closest to completion but it can be
preempted by a newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is
not known.
 It is often used in batch environments where short jobs need to be given preference.
 It is much better than SJF but there is extra CPU overhead (on every arrival the CPU has to re-evaluate the remaining times of all processes).

Suppose there are two processes P1 and P2 with CPU burst times of 50 and 5 minutes respectively. P1 arrives at 09:00 and P2 arrives at 09:05, so first the processor is taken by process P1; but since the scheduling is preemptive, the CPU is taken over by process P2 at 09:05 (because the burst time of P2 is less than the remaining time of the executing process P1), stopping the execution of P1 in between. This is shortest remaining time scheduling.

5. Round Robin Scheduling-


In round robin scheduling, processes are dispatched FIFO but are given a limited
amount of CPU time called a Time -slice or a quantum. If a process does not
complete before its CPU time expires, the CPU is preempted & allocated to next
waiting process. The preempted process is then placed at the back of ready
Queue. RR scheduling is effective in time-sharing environments in which the
system needs to guarantee reasonable response times for interactive users.
 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process executes for a given time period.
 Context switching is used to save the states of preempted processes.
 The CPU is provided to the processes for a given time in a cyclic manner; a process which completes leaves the cycle, and the cycle continues with the remaining processes.

Example 4

Consider the processes:

Set Time slice = 4ms

then "GRANTT CHART"


Waiting time for P1 = 0 + (10 - 4) = 6 ms

Waiting time for P2 = 4 ms

Waiting time for P3 = 7 ms

Average waiting time = (6 + 4 + 7)/3 = 5.66 ms

Advantages-

1. it is simple to understand
2. Suitable for Interactive Systems or time sharing systems

Disadvantages-

1. Performance depends heavily on the size of the time quantum


2. Number of context switches - the number of context switches should not be too many, or they slow down the overall execution of all the processes. The time quantum should be large with respect to the context-switch time. This ensures that a process keeps the CPU for a considerable amount of time compared to the time spent in context switching.
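The Gantt-chart arithmetic of Example 4 can be reproduced with a small round-robin sketch; the burst times 24, 3 and 3 ms are assumed here, since they yield the quoted waiting times:

```python
from collections import deque

# Sketch of round robin (burst times assumed: 24, 3, 3 ms; quantum 4 ms).
def rr_waiting_times(bursts, quantum):
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_left = [0] * len(bursts)        # time each process last left the CPU
    queue, clock = deque(range(len(bursts))), 0
    while queue:
        i = queue.popleft()
        waits[i] += clock - last_left[i]  # time spent waiting in the ready queue
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_left[i] = clock
        if remaining[i] > 0:
            queue.append(i)               # preempted: back of the ready queue
    return waits

waits = rr_waiting_times([24, 3, 3], 4)
print(waits)                              # [6, 4, 7]
print(sum(waits) / len(waits))            # ≈ 5.66 ms
```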

6. Multiple-Level Queues Scheduling-


Multilevel queue scheduling was created for situations in which processes are easily classified into different groups. It has the following features.

1. A multi-level-queue scheduling algorithm partitions the ready queue into


separate queues. Processes are permanently assigned to each queue,
based upon properties such as, interactive jobs, batch jobs, memory sizes,
and so on.
2. Each queue has its own scheduling algorithm. One may be using FCFS, the
other round-robin and so on.
3. There must be scheduling algorithm between the queues. Usually this is a
fixed-priority preemptive scheduling. For example, the foreground queue
may have absolute priority over the background queue. A percentage-
based time sliced approach can also be used. Each queue gets a certain
portion of the CPU time, which it can then schedule among the various
processes in its queue.

Multiple-level queues are not an independent scheduling algorithm. They make


use of other existing algorithms to group and schedule jobs with common
characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.

Reneging – joining the queue, then leaving it after watching the sluggish behaviour of the queue.

Balking – leaving after watching the queue, without joining it, is called balking.

A single queue of ready processes is maintained easily, but processes can never be homogeneous in nature. Processes with different priorities and different CPU bursts are not easy to maintain in a single queue, so we divide the processes by type and create a queue for each type.

All ready processes are trifurcated into three types-

1. System jobs - these are the most important jobs/tasks: mostly the jobs which are used to operate the computer, and sometimes jobs on which someone's life depends.
2. Interactive jobs - these jobs are important, but not that much.
3. Batch jobs - these are the least important jobs/tasks.

Between-queue scheduling can be of any type; it decides which of these queues gets the CPU.

When between-queue scheduling is priority based and the system or interactive queues always have jobs, the batch jobs/processes will suffer starvation; then aging of priority is used, by which we increase the priority of the batch jobs so that they eventually get the CPU.

Example 5

Consider an example of multilevel queue scheduling having five queues:

1.) System processes

2.) Interactive processes.

3.) Interactive editing processes.

4.) Batch processes


5.) User processes

Each queue has absolute priority over the lower-priority queues. No process in the batch queue can run before the system processes, and so on.

Another possibility is to time slice between the queues. Each queue runs for a
particular time slice.

Advantages-

In a Multilevel Queue algorithm, processes are permanently assigned to a queue


on entry to the system. Since processes do not change their interactive
foreground or batch (background) nature, this set up has the advantage of low
scheduling overhead.

Disadvantages-

It is inflexible, as processes can never change their queues and thus may have to starve for the CPU if one or other of the higher-priority queues never becomes empty.
7. BIASED ROUND ROBIN-
In simple words, a process which has a low CPU burst time is added in between the high-priority processes.

A process with a low time requirement is placed with the system jobs/high priority.

A process with a high time requirement is placed with the batch jobs/low priority.

8. Multi-level queue with feedback-


If the operating system makes a mistake in placing a job in the high- or low-priority queue, we can upgrade/swap queues; this is called a multi-level queue with feedback.
In simple words, in a multi-level queue with feedback every job can give feedback to the system if it is placed in the wrong queue.
For example, if a very important process is placed among the batch jobs, it will send a feedback message to exchange its queue.

Inter-process communication-
Interprocess communication is the mechanism provided by the operating
system that allows processes to communicate with each other. This
communication could involve a process letting another process know that
some event has occurred or the transferring of data from one process to
another.

COOPERATION- process P2 wants some input from process P1:
P1's output is P2's input.
COMPETITION- the same resource, of which there is a single instance, is required by more than one process.
UNRELATED PROCESSES- two processes that have no relation between each other.
Both cooperating and competing processes are communicating processes. Cooperating processes are communicating processes because both processes have to communicate with each other for sharing outputs and inputs. Competing processes are also communicating processes because the process which acquired the resource has to communicate with the other processes when it releases the resource, so that another process can acquire it.
CRITICAL CODE SECTION-
It is a part of a computer program that needs to be executed in a single go. Once the code starts, it must run to completion without losing control in between. If it loses control, inconsistency will arise in the code and it will start giving undesired outputs.

Solution to the Critical Section Problem


The critical section problem needs a solution to synchronize the different
processes. The solution to the critical section problem must satisfy the following
conditions −
 Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical
section at any time. If any other processes require the critical section, they
must wait until it is free.
 Progress
Progress means that if a process is not using the critical section, then it
should not stop any other process from accessing it. In other words, any
process can enter a critical section if it is free.
 Bounded Waiting
Bounded waiting means that each process must have a limited waiting time.
It should not wait endlessly to access the critical section.

Buffer - a temporary memory between the input and the computer. When input arrives at high speed and the output/display device is busy, a buffer is used: the data is stored in the temporary memory and waits until the device is free.

COMPUTING RESOURCES-
1. Non-shared resources
2. Shared resources
i. Serially sharable
ii. Parallel sharable
Non-shared resources-these are the computing resources which cannot be
shared in processes these are used only in the defined process.
Example-a variable used in a function(local variable)

Shared resources- these are the computing resources which can be shared in a
processes these can be used by many process depending on their requirement
and the type of the resource.
There are two types of shared resources-
1. Serially sharable - these resources are shared sequentially, i.e. they are provided to the processes serially, one by one.
Example- P1 requires the printer and P2 requires the printer, so the resource is the printer and it is given to the processes serially: first to process P1, then to process P2.
2. Parallelly sharable - these are the resources which can be shared in parallel by more than one process.
Example- global variables are resources which can be shared in parallel by processes.

We will discuss serially sharable resources - these resources must be used in a mutually exclusive way. Mutually exclusive means that if one process is using the resource, then no other process can use the resource until it is released.

Protocol of any mutual exclusion solution-


Process p1-
i.) negotiation protocol
ii.) CCS of p1
iii.) Release protocol
Process p2-
i.) Negotiation protocol
ii.) CCS of p2
iii.) Release protocol
Negotiation means the negotiation done by a process to get the resource.
In the CCS part the process uses the resource.
Release means releasing the resource after use by the process.

SEMAPHORE-
Semaphores are integer variables that are used to solve the critical section
problem by using two atomic operations, wait and signal that are used for
process synchronization.
The definitions of wait and signal are as follows −
 Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process performing the wait is made to wait until S becomes positive.
 Signal
The signal operation increments the value of its argument S.

Noted in class- Semaphores are variables used to indicate the busyness/freeness of a resource (a serially sharable resource).
A semaphore provides two operations: P (negotiation/wait) and V (release/signal).
There are two types of semaphore-
1. Binary semaphore (0/1)- this semaphore is used when we have only one instance of the resource and we have to tell whether it is free or not: 0 means not free and 1 means free.
2. Counting semaphore- this semaphore is used to tell how many instances of a resource are free.

Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −
 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain.
These semaphores are used to coordinate the resource access, where the
semaphore count is the number of available resources. If the resources are
added, semaphore count automatically incremented and if the resources are
removed, the count is decremented.

 Binary Semaphores
The binary semaphores are like counting semaphores but their value is
restricted to 0 and 1. The wait operation only works when the semaphore is
1 and the signal operation succeeds when semaphore is 0. It is sometimes
easier to implement binary semaphores than counting semaphores
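A minimal sketch of the P/V protocol using Python's built-in semaphore (the counter and loop count are illustrative): two threads enter their critical code sections one at a time, so no updates are lost:

```python
import threading

# Binary semaphore: 1 = resource free, 0 = resource busy.
sem = threading.Semaphore(1)
counter = 0   # the shared, serially sharable resource

def use_resource():
    global counter
    for _ in range(100_000):
        sem.acquire()      # wait / P: negotiation protocol
        counter += 1       # critical code section
        sem.release()      # signal / V: release protocol

threads = [threading.Thread(target=use_resource) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)             # 200000: mutual exclusion preserved every update
```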

Advantages of Semaphores
Some of the advantages of semaphores are as follows −
 Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some
other methods of synchronization.
 There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled
to allow a process to access the critical section.
 Semaphores are implemented in the machine independent code of the
microkernel. So they are machine independent.

Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
 Semaphores are complicated so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
 Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.
 Semaphores may lead to a priority inversion where low priority processes
may access the critical section first and high priority processes later.
UNIT-4(OPERATING SYSTEM)
DEADLOCK-
Every process needs some resources to complete its execution. However, the resource is granted
in a sequential order.

1. The process requests for some resource.


2. OS grant the resource if it is available otherwise let the process waits.
3. The process uses it and release on the completion.

A deadlock is a situation where each of the computer processes waits for a resource which is assigned to some other process. In this situation, none of the processes gets executed since the resource it needs is held by some other process which is also waiting for some other resource to be released.

In other words, a deadlock is a situation in which two or more processes are unable to proceed because each is waiting for the other. Deadlock can occur in a variety of situations, including when two or more processes request the same resource simultaneously, and when a process holds a resource and requests another resource that is held by another process.

Let us assume that there are three processes P1, P2 and P3. There are three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete without R2. P2 then demands R3, which is being used by P3. P2 also stops its execution because it can't continue without R3. P3 in turn demands R1, which is being used by P1, therefore P3 also stops its execution.

In this scenario, a cycle is being formed among the three processes. None of the process is
progressing and they are all waiting. The computer becomes unresponsive since all the
processes got blocked.
Necessary conditions for Deadlocks
Four conditions that must be met for a deadlock to occur in a system.

1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait

1. Mutual Exclusion

A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same resource at the same time. At least one resource must be non-sharable, meaning that only one process can use it at a time. This ensures that multiple processes cannot simultaneously access the same resource.

2. Hold and Wait

A process waits for some resources while holding another resource at the
same time.

3. No pre-emption

A resource, once allocated, cannot be forcibly taken away from the process holding it. No other process can snatch the resource; a process that needs it has to wait until the holding process releases it voluntarily.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so
that the last process is waiting for the resource which is being held by the
first process.

A set of processes must exist such that each process is waiting for a
resource that is held by another process in the set this creates a cycle of
dependency where no process can proceed, and the system is in deadlock.

It's worth noting that these conditions are necessary but not always sufficient. All four conditions must hold simultaneously for a deadlock; if any one of them fails, the system cannot be in deadlock.

ADDITIONAL TOPIC- single resource entry and multiple resource entry are two
different approaches to managing shared resources in operating system.

SINGLE RESOURCE INSTANCES- single resource entry is a method in which a process can only access one resource at a time. This means that when a process acquires one resource, it must release it before it acquires another resource. This approach provides a simple and straightforward way to manage resources, but it can be inefficient, as a process may have to wait a long time to acquire a resource that it needs.

MULTIPLE RESOURCE INSTANCES- a process can simultaneously acquire multiple resources. This allows a process to work on multiple tasks at the same time and can lead to more efficient resource utilization. However, this approach can also increase the risk of deadlock, especially if proper synchronization techniques are not implemented.

SITUATIONS-

1. Single resource instances – no resource type has more than one instance.

2. Multiple resource instances – resource types can have more than one instance.

Suppose all four deadlock conditions are true at some time:

a.) Situation 1 (single resource instances) - then there will be a deadlock for sure; the four conditions are necessary as well as sufficient.

b.) Situation 2 (multiple resource instances) - then a deadlock may or may not occur; the four conditions are necessary but not sufficient (all four being true does not guarantee a deadlock).

APPROACHES TO HANDLE DEADLOCK-

There are two main approaches for dealing with deadlock in operating system.

1. Algorithm based approaches.


2. Graphical approaches.

1. Algorithm-based approaches- these approaches use algorithms to detect and resolve deadlock.
Example- the Banker's algorithm, which uses resource allocation and release information to avoid deadlocks: it postpones or denies a request if granting the resource would lead towards a deadlock.
2. Graphical approaches- these approaches use graphical representations, such as resource allocation graphs, to detect and resolve deadlock.
In this representation, both processes and resources are represented as vertices, and request/assignment relations are represented as edges.
If a cycle is detected in the graph, a deadlock may exist. By finding the cycle, one of the processes can be terminated or rolled back to release its resources and resolve the deadlock.
The algorithmic approach is more optimized but can be difficult to implement and maintain, whereas the graphical approach is easy to understand but may have higher overhead.

There are several ways to manage deadlocks in a system-

1. Deadlock prevention.
2. Deadlock avoidance.
3. Deadlock detection and recovery.
4. Ostrich approach.

1. Deadlock prevention- Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait hold simultaneously. If it is possible to violate one of the four conditions at any time, then deadlock can never occur in the system.

The idea behind the approach is very simple that we have to fail one of the four
conditions but there can be a big argument on its physical implementation in the
system.

We cannot falsify the mutual exclusion and no-preemption conditions, because execution of a CCS (critical code section) must happen in a single go; if we break that, inconsistency occurs.

We can falsify the hold and wait condition, but doing so affects the 2nd objective of the operating system (proper utilization of resources). It means a process must not hold some resources while waiting for others: it should acquire all the resources it needs at once, so that after acquiring them it does not have to wait and can directly execute.

Suppose a process needs 95 units of a resource for its execution but only 90 are available; to falsify the hold and wait condition the process has to wait without holding anything, but this wastes the 90 units that could already have been allocated to it.

We can falsify the circular wait condition by adding a rule: assign every resource type a number - RT1, RT2, RT3, ... - and then impose the condition that a process can request resources only in increasing resource-type number.

This means that if a process has acquired a resource of type RT3, it cannot then demand RT1 or RT2; hence there is no chance of creating a cycle, and so no circular wait condition.

By this we can prevent deadlock but this is unnatural.
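A small sketch of this resource-ordering rule (the names RT1/RT2 and the worker function are illustrative): both threads want both resources in opposite orders, but both acquire them in the agreed numbering, so no circular wait can arise:

```python
import threading

# Circular-wait prevention: acquire resources only in increasing type number.
RT1, RT2 = threading.Lock(), threading.Lock()
ORDER = [RT1, RT2]          # the globally agreed resource ordering
done = []

def worker(name, needed):
    # Acquire in the global order, regardless of the order requested.
    held = [lock for lock in ORDER if lock in needed]
    for lock in held:
        lock.acquire()
    done.append(name)       # ... critical work using both resources ...
    for lock in reversed(held):
        lock.release()

# P2 requests the resources in the opposite order, which without the rule
# could deadlock against P1; the ordering makes a wait-for cycle impossible.
t1 = threading.Thread(target=worker, args=("p1", [RT1, RT2]))
t2 = threading.Thread(target=worker, args=("p2", [RT2, RT1]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))         # ['p1', 'p2'] - both finished, no deadlock
```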

2. Deadlock avoidance- In deadlock avoidance, the operating system checks
whether the system is in a safe state or an unsafe state at every step it performs.
The process continues as long as the system remains in a safe state. Once the
system would move to an unsafe state, the OS has to backtrack one step.

In simple words, the OS reviews each allocation so that the allocation doesn't
cause deadlock in the system.

STATE SPACE- the collection/set of all possible states of a computer system.

In the case of multiple resources, deadlock may or may not occur.

There are two kinds of states, safe and unsafe. To avoid deadlock, the system
should remain in a safe state; in an unsafe state, deadlock may or may not occur.

If any process requests a resource, we first check whether the resulting state
would be safe. If the state is safe, we actually grant the resource we pretended to
give to the process; if the state is unsafe, we do not grant it.

Pretending to grant a request decreases the availability of the resource and
increases its allocation; undoing the pretended grant increases the availability
and decreases the allocation.

In deadlock avoidance, the Banker's algorithm is used, given by Edsger Dijkstra.

The Banker's algorithm has two parts-


1. Resource request part.
2. Safety part.

Terms used in the Banker's algorithm-

1. n = no. of processes.
2. m = no. of resource types.
3. Available- a 1-D array keeping information about the current availability
of each resource type. Size = m.
4. Max-demand (n*m)- a 2-D array indicating the maximum demand each
process can raise for each resource type.
5. Allocation- a 2-D array indicating the current allocation of each resource
type to all the processes.

Allocation(i, j) = k means that k instances of resource type j are allocated
to process i.

6. Need- a 2-D array indicating the pending need of each process for each
resource type.

Need(i, j) = k means process i needs k instances of resource type j.

Need[i, j] = Max-demand[i, j] - Allocation[i, j].

7. Request(i, j) = k means process i is making a request for k instances of
resource type j.

RESOURCE REQUEST PART-

1. If Request(i) <= Need(i), go to step 2.

Else throw an exception and exit:

“request cannot be fulfilled”.

2. If Request(i) <= Available, go to step 3.

Else “wait”.
3. Pretend the resources requested by process P(i) have been allocated by
modifying the data structures:

Available = Available - Request(i).

Need(i) = Need(i) - Request(i).

Allocation(i) = Allocation(i) + Request(i).

4. Call SAFETY PART.


5. If state = SAFE, allocate the resources physically to P(i).

Else “wait” AND undo step 3.

Undo of step 3:

Available = Available + Request(i).

Need(i) = Need(i) + Request(i).

Allocation(i) = Allocation(i) - Request(i).

How to solve a numerical on the resource request part- suppose the request
matrix is:

      R1  R2
P1     2   3
P2     5   1
P3     0   2

All values of Request, Need and Available are given; we just have to check the
steps of the algorithm and find whether the state is safe or unsafe.

SAFETY PART-

1. Define two local vectors, work (of size m, integer type) and finish (of size n,
Boolean type).
2. Initialize work = Available AND finish(i) = false for all processes:
For i = 1 to n: finish(i) = false.
For i = 1 to m: work(i) = Available(i).
3. Find an i such that finish(i) = false AND Need(i) <= work.
If no such i exists, go to step 5.
4. work = work + Allocation(i).
finish(i) = true.
Go to step 3.
5. If finish(i) = true for all i,
then declare the state SAFE,
else UNSAFE.
How to solve a question on the safety part-
Allocation, Max-demand, Need and Available are given. We put Available into
work, set finish to false for all i, and repeatedly apply step 3 until finish
becomes true for all i. If it does, declare the state safe; otherwise the state is
unsafe.
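The safety part above can be turned into a short program. This is a minimal sketch, not an OS implementation; the five-process, three-resource snapshot below is assumed for illustration (its totals are consistent with A = 10, B = 5, C = 7 instances), not taken from this document's tables.

```python
def is_safe(available, allocation, need):
    """Safety part: returns (safe?, one safe sequence of process indices)."""
    n, m = len(allocation), len(available)
    work = list(available)        # step 2: work = available
    finish = [False] * n          #         finish(i) = false for all i
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        # step 3: find an i with finish(i) = false and need(i) <= work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 4: work = work + allocation(i); finish(i) = true
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
    # step 5: the state is safe iff finish(i) = true for all i
    return all(finish), sequence

# Snapshot values assumed for illustration (5 processes, resources A, B, C).
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True [1, 3, 4, 0, 2]
```

The same `is_safe` check is what step 4 of the resource request part calls after pretending to grant a request; if it returns unsafe, the pretended allocation is undone.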
Example:
Consider a system with five processes P0 through P4 and three resource
types A, B, C. Resource type A has 10 instances, B has 5 instances and C has
7 instances. Suppose at time t0 the following snapshot of the system has been
taken:
Question. What will be the content of the Need matrix?
Need [i, j] = Max [i, j] – Allocation [i, j]
So, the content of Need Matrix is:

Is the system in a safe state? If Yes, then what is the safe sequence?
Applying the Safety algorithm on the given system,

What will happen if process P1 requests one additional instance of resource
type A and two instances of resource type C?
We use resource request part algorithm-

We must determine whether this new system state is safe. To do so, we again
execute Safety algorithm on the above data structures.
3. Deadlock detection and recovery: If deadlock prevention or avoidance is not
applied in the software, then we can handle deadlock by detection and
recovery, which consists of two phases:
1. In the first phase, we examine the state of the process and check whether
there is a deadlock or not in the system.
2. If found deadlock in the first phase then we apply the algorithm for recovery
of the deadlock.
In Deadlock detection and recovery, we get the correctness of data but
performance decreases.
4. Ostrich approach-

Deadlock ignorance is the most widely used approach among all the mechanisms.
It is used by many operating systems, mainly for end-user systems. In this
approach, the operating system assumes that deadlock never occurs; it simply
ignores deadlock. This approach is best suited to a single end-user system where
the user uses the system only for browsing and other normal tasks.

There is always a tradeoff between correctness and performance. Operating
systems like Windows and Linux mainly focus on performance. The performance
of the system decreases if it uses a deadlock handling mechanism all the time; if
deadlock happens 1 time out of 100, it is completely unnecessary to run the
deadlock handling mechanism constantly.

In these types of systems, the user simply has to restart the computer in the
case of deadlock. Windows and Linux mainly use this approach.

File Access Methods in Operating System


When a file is used, its information is read into computer memory, and there
are several ways to access this information. Some systems provide only one
access method for files.

There are three ways to access a file in a computer system: sequential access,
direct access, and the index sequential method.

Sequential Access-
It is the simplest access method. Information in the file is processed in order,
one record after the other. This mode of access is by far the most common.
For example, if we want to access the 5th record, we must first access all the
previous records.

Examples are paper tape, punch cards, magnetic tape.

Advantages of Sequential Access Method :


 It is simple to implement this file access mechanism.
 It uses lexicographic order to quickly access the next entry.
Disadvantages of Sequential Access Method :
 If the file record that needs to be accessed next is not present next to the
current record, this type of file access method is slow.
 Moving a sizable chunk of the file may be necessary to insert a new record.
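The record-by-record behaviour can be sketched in a few lines. A minimal sketch, assuming fixed-length 4-byte records on an in-memory "tape"; the data and record size are made up for the example.

```python
import io

RECORD_SIZE = 4  # fixed-length records (size assumed for the example)
tape = io.BytesIO(b"AAAABBBBCCCCDDDDEEEE")  # five records on a "tape"

def read_next(f):
    """Read the next record in order; there is no way to jump ahead."""
    return f.read(RECORD_SIZE)

# To reach the 5th record we must first pass over the previous four.
for _ in range(4):
    read_next(tape)
print(read_next(tape))  # b'EEEE'
```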

Direct access storage devices-

Another method is the direct access method, also known as the relative access
method. A file is made up of fixed-length logical records that allow the program
to read and write records rapidly, in no particular order. Direct access is based on
the disk model of a file, since a disk allows random access to any file block. For
direct access, the file is viewed as a numbered sequence of blocks or records.
Thus, we may read block 14, then block 59, and then write block 17. There is no
restriction on the order of reading and writing for a direct access file.
A block number provided by the user to the operating system is normally
a relative block number, the first relative block of the file is 0 and then 1 and so
on.
In simple words, we can access the file directly. Direct access allows sequential
as well as direct access.
Example- magnetic disk, magnetic drum, optical disc, hard disk, floppy disk.

Advantages of Direct Access Method :


 The files can be immediately accessed decreasing the average access time.
 In the direct access method, in order to access a block, there is no need of
traversing all the blocks present before it.
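The relative block addressing described above can be sketched with a seek on an in-memory "disk file". The block size and contents are made up for the example.

```python
import io

BLOCK_SIZE = 4  # fixed-length blocks (size assumed for the example)
# A toy "disk file" of 60 blocks; block k is filled with one repeated byte.
disk_file = io.BytesIO(b"".join(bytes([65 + k]) * BLOCK_SIZE for k in range(60)))

def read_block(f, k):
    """Jump straight to relative block k -- no traversal of earlier blocks."""
    f.seek(k * BLOCK_SIZE)
    return f.read(BLOCK_SIZE)

# Read block 14, then block 59, then block 17 -- any order works.
print(read_block(disk_file, 14))  # b'OOOO'
```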
Index sequential method / random access storage devices-

It is the third method of accessing a file and is built on top of the sequential
access method. This method constructs an index for the file. The index, like an
index in the back of a book, contains pointers to the various blocks. To find a
record in the file, we first search the index, and then with the help of the
pointer we access the file directly.

In simple words, all electronic storage devices are random access storage
devices, meaning we can access any file present in the system in equal time.
Random access allows sequential as well as direct access.

Example-pen drive, bubble memory devices, ROM, memory stick.

Random access storage devices are the fastest and sequential access storage
devices are the slowest. To access all files, use sequential access storage devices;
to access selected files, use direct or random access storage devices.

Directory Structure in OS (Operating System)


A directory can be defined as a listing of the related files on the disk. The
directory may store some or all of the file attributes.

To get the benefit of different file systems on different operating systems, a
hard disk can be divided into a number of partitions of different sizes. The
partitions are also called volumes or minidisks.

Each partition must have at least one directory in which all the files of the
partition can be listed. A directory entry is maintained for each file in the
directory, which stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of
files.

Every Directory supports a number of common operations on the file:

1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files

Single Level Directory


The simplest method is to have one big list of all the files on the disk. The entire
system will contain only one directory which is supposed to mention all the files
present in the file system. The directory contains one entry per each file present
on the file system.
This type of directory can be used for a simple system.

Advantages-

1. Implementation is very simple.


2. If the number of files is small, then searching becomes faster.
3. File creation, searching, deletion is very simple since we have only one
directory.

Disadvantages-

1. We cannot have two files with the same name.


2. The directory may be very big, therefore searching for a file may take
much time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group the same kind of files.
5. Choosing a unique name for every file is a bit complex and limits the
number of files in the system, because most operating systems limit the
number of characters used to construct the file name.
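The single big list can be sketched as one flat table. A minimal sketch with made-up file names and attributes, showing why no two files can share a name.

```python
# One flat directory for the whole disk: every file lives in this one table.
directory = {}

def create(name, attrs):
    """Create a file entry; names must be unique across the whole system."""
    if name in directory:
        raise FileExistsError("name already taken: " + name)
    directory[name] = attrs

create("notes.txt", {"size": 120})
create("a.out", {"size": 4096})
# A second create("notes.txt", ...) would raise FileExistsError.
```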

Multi-level/Tree Structured Directory-


In a tree-structured directory system, any directory entry can either be a file or a
subdirectory. The tree-structured directory system overcomes the drawbacks of
the two-level directory system. Similar kinds of files can now be grouped in one
directory. Each user has its own directory and cannot enter another user's
directory. However, a user has permission to read the root's data but cannot
write or modify it. Only the administrator of the system has complete access to
the root directory.

Searching is more efficient in this directory structure. The concept of current


working directory is used. A file can be accessed by two types of path, either
relative or absolute.

Absolute path is the path of the file with respect to the root directory of the
system while relative path is the path with respect to the current working
directory of the system. In tree structured directory systems, the user is given the
privilege to create the files as well as directories.
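The two path types can be illustrated with the standard library's posixpath module; the directory names below are made up.

```python
import posixpath

cwd = "/home/user"                        # current working directory
relative = "docs/a.txt"                   # path w.r.t. the cwd
absolute = posixpath.join(cwd, relative)  # the same file w.r.t. the root

print(absolute)  # /home/user/docs/a.txt
print(posixpath.isabs(absolute), posixpath.isabs(relative))  # True False
```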

Acyclic-Graph Structured Directories -


The tree-structured directory system doesn't allow the same file to exist in
multiple directories, therefore sharing is a major concern in the tree-structured
directory system. We can provide sharing by making the directory an acyclic
graph. In this system, two or more directory entries can point to the same file or
subdirectory. That file or subdirectory is shared between the two directory
entries.

These kinds of directory graphs can be made using links or aliases. We can have
multiple paths for the same file. Links can either be symbolic (logical) or hard
(physical).

If a file gets deleted in acyclic graph structured directory system, then

1. In the case of soft link, the file just gets deleted and we are left with a dangling
pointer.

2. In the case of hard link, the actual file will be deleted only if all the references
to it gets deleted.
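Both deletion cases can be demonstrated with the standard library on a POSIX system; the file names here are made up, and the whole thing runs in a throwaway temporary directory.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "data.txt")
    with open(target, "w") as f:
        f.write("shared")

    soft = os.path.join(d, "soft_link")
    hard = os.path.join(d, "hard_link")
    os.symlink(target, soft)  # symbolic (logical) link
    os.link(target, hard)     # hard (physical) link

    os.remove(target)         # delete the original directory entry

    # Soft link: the link itself survives but now dangles.
    print(os.path.lexists(soft), os.path.exists(soft))  # True False
    # Hard link: a reference remains, so the file data is still reachable.
    with open(hard) as f:
        print(f.read())  # shared
```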

General cyclic graph directory-


In this structure a parent directory can be the child of its own child, so a cycle
can form. For example, suppose a storage space holds two directories, dir.1 and
dir.2; dir.1 holds dir.3, and dir.2 holds dir.4 and dir.5. In this case dir.4 is the
child of dir.2, and dir.2 can in turn be made a child of dir.4, forming a cycle.
File Allocation Methods
The allocation methods define how the files are stored in the disk blocks. There
are three main disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.

All the three methods have their own advantages and disadvantages as
discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For
example, if a file requires n blocks and is given a block b as the starting location,
then the blocks assigned to the file will be: b, b+1, b+2,……b+n-1. This means
that given the starting block address and the length of the file (in terms of blocks
required), we can determine the blocks occupied by the file.

The directory entry for a file with contiguous allocation contains


 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts from the block 19 with length = 6
blocks. Therefore, it occupies 19, 20, 21, 22, 23, 24 blocks.

Advantages:
 Both the Sequential and Direct Accesses are supported by this. For direct
access, the address of the kth block of the file which starts at block b can
easily be obtained as (b+k).
 This is extremely fast since the number of seeks are minimal because of
contiguous allocation of file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This
makes it inefficient in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of
contiguous memory at a particular instance.
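The (b + k) address calculation above fits in a few lines; the start/length values follow the 'mail' example, and the function name is illustrative.

```python
def block_address(start, length, k):
    """Disk block holding the k-th block (0-based) of a contiguous file
    whose directory entry stores (start, length)."""
    if not 0 <= k < length:
        raise IndexError("block index outside the file")
    return start + k  # direct access: just b + k

# The 'mail' file: starts at block 19, length 6 blocks.
print([block_address(19, 6, k) for k in range(6)])  # [19, 20, 21, 22, 23, 24]
```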

2. Linked List Allocation


In this scheme, each file is a linked list of disk blocks which need not
be contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block.
Each block contains a pointer to the next block occupied by the file.

The file ‘jeep’ in the following image shows how the blocks are randomly
distributed. The last block (25) contains -1, indicating a null pointer: it does not
point to any other block.
Advantages:
 This is very flexible in terms of file size. File size can be increased easily since
the system does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it
relatively better in terms of memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number
of seeks are needed to access every block individually. This makes linked
allocation slower.
 It does not support random or direct access. We can not directly access the
blocks of a file. A block k of a file can be accessed by traversing k blocks
sequentially (sequential access ) from the starting block of the file via block
pointers.
 Pointers required in the linked allocation incur some extra overhead.
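The pointer-chasing cost can be sketched with a dict standing in for the disk. The block numbers are assumed for illustration, with the last block holding -1 as the null pointer, as in the figure.

```python
# Each disk block maps to (data, next_block); -1 is the null pointer.
disk = {9: ("j", 16), 16: ("e", 1), 1: ("e", 10), 10: ("p", 25), 25: ("!", -1)}

def file_blocks(start):
    """Follow the next-pointers from the starting block: reaching block k
    of the file requires traversing the k blocks before it."""
    order, b = [], start
    while b != -1:
        _, nxt = disk[b]
        order.append(b)
        b = nxt
    return order

print(file_blocks(9))  # [9, 16, 1, 10, 25]
```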

3. Indexed Allocation-
In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file. Each file has its own index block.
The ith entry in the index block contains the disk address of the ith file block.
The directory entry contains the address of the index block as shown in the
image:
Advantages:
 This supports direct access to the blocks occupied by the file and therefore
provides fast access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation
would keep one entire block (the index block) just for the pointers, which is
inefficient in terms of memory utilization. In linked allocation, by contrast,
we lose the space of only 1 pointer per block.

For files that are very large, a single index block may not be able to hold all the
pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for
holding the pointers. Every index block would then contain a pointer or the
address to the next index block.
2. Multilevel index: In this policy, a first-level index block points to the
second-level index blocks, which in turn point to the disk blocks occupied by
the file. This can be extended to 3 or more levels depending on the maximum
file size.
3. Combined Scheme: In this scheme, a special block called the Inode
(index node) contains all the information about the file such as the name,
size, authority, etc., and the remaining space of the Inode is used to store
the disk block addresses which contain the actual file, as shown in the image
below. The first few of these pointers in the Inode point to the direct blocks,
i.e. the pointers contain the addresses of the disk blocks that contain the
data of the file. The next few pointers point to indirect blocks. Indirect
blocks may be single indirect, double indirect or triple indirect. A single
indirect block is a disk block that does not contain the file data but the disk
addresses of the blocks that contain the file data. Similarly, double indirect
blocks do not contain the file data but the disk addresses of the blocks that
contain the addresses of the blocks containing the file data.
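The basic index-block lookup described in this section can be sketched as follows; all block numbers and the file name are illustrative.

```python
# The directory entry stores only the index block's address; the index
# block's i-th entry stores the disk address of the i-th file block.
index_blocks = {19: [9, 16, 1, 10, 25]}  # index block 19 for one file
directory = {"jeep": 19}

def block_of(name, i):
    """Direct access: one index lookup reaches the i-th file block."""
    return index_blocks[directory[name]][i]

print(block_of("jeep", 3))  # 10
```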
