Operating System Module
Principles of Operating Systems
Memory Management
Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes where each word or byte has its own address. Main memory provides fast storage that can be accessed directly by the CPU, so for a program to be executed, it must be in main memory. The Operating System does the following activities for memory management:
Keeps track of primary memory, i.e. which parts of it are in use and by whom, and which parts are not in use.
In multiprogramming, the OS decides which process will get memory, when, and how much.
Allocates memory when a process requests it.
De-allocates memory when the process no longer needs it or has been terminated.
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. The Operating System does the following activities for processor management:
Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
Allocates the processor (CPU) to a process.
De-allocates the processor when it is no longer required.
Device Management
OS manages device communication via their respective drivers. Operating System does
the following activities for device management.
Keeps tracks of all devices. Program responsible for this task is known as the I/O
controller.
Decides which process gets the device when and for how much time.
Allocates the device in the efficient way.
De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The Operating System does the following activities for file management:
Keeps track of information, location, usage, status, etc. These collective facilities are often known as the file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Other Important Activities
Following are some of the important activities that Operating System does.
Security -- By means of password and similar other techniques, preventing
unauthorized access to programs and data.
Control over system performance -- Recording delays between a request for a service and the response from the system.
Job accounting -- Keeping track of time and resources used by various jobs and
users.
Error detecting aids -- Production of error messages and other debugging and
error detecting aids.
Coordination between other software and users -- Coordination and
assignment of compilers, interpreters, assemblers and other software to the
various users of the computer systems.
1.2 History of operating systems
Advantages
Provides the advantage of quick response.
Avoids duplication of software.
Reduces CPU idle time.
Disadvantages
Problem of reliability.
Question of security and integrity of user programs and data.
Problem of data communication.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides an authentication feature for each user by means of a password.
Batch Processing
Batch processing is a technique in which the Operating System collects programs and data together in a batch before processing starts. The Operating System does the following activities related to batch processing:
The OS defines a job, which has a predefined sequence of commands, programs and data as a single unit.
The OS keeps a number of jobs in memory and executes them without any manual intervention.
Jobs are processed in the order of submission, i.e. in first come, first served fashion.
When a job completes its execution, its memory is released and the output for the job gets copied into an output spool for later printing or processing.
Advantages
Batch processing shifts much of the operator's work to the computer.
Performance increases because a new job starts as soon as the previous job finishes, without any manual intervention.
Disadvantages
Time Sharing
Since interactive I/O typically runs at human speeds, it may take a long time to complete. During this time the CPU can be utilized by another process. The operating system allows the users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system switches the CPU rapidly from one user/program to the next, each user is given the impression that he/she has his/her own CPU, whereas actually one CPU is being shared among many users.
Multiprogramming
When two or more programs reside in memory at the same time, sharing the processor is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute. The operating system does the following activities related to multiprogramming:
The operating system keeps several jobs in memory at a time.
This set of jobs is a subset of the jobs kept in the job pool.
The operating system picks and begins to execute one of the jobs in memory.
A multiprogramming operating system monitors the state of all active programs and system resources using memory management programs, to ensure that the CPU is never idle unless there are no jobs.
Advantages
High and efficient CPU utilization.
User feels that many programs are allotted CPU almost simultaneously.
Disadvantages
CPU scheduling is required.
To accommodate many jobs in memory, memory management is required.
Spooling
Spooling is an acronym for simultaneous peripheral operations on-line. Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on hard disk which is accessible to I/O devices. The operating system does the following activities related to spooling:
The OS handles I/O device data spooling, as devices have different data access rates.
The OS maintains the spooling buffer, which provides a waiting station where data can rest while the slower device catches up.
The OS supports parallel computation through spooling, since a computer can perform I/O in parallel: it becomes possible for the computer to read data from a tape, write data to disk and write output to a printer while it is doing its computing task.
Advantages
The spooling operation uses a disk as a very large buffer.
Spooling is capable of overlapping the I/O operations of one job with the processor operations of another job.
CHAPTER TWO
[Figure: Process states — a process moves between ready, running and waiting; the scheduler dispatches a ready process, an interrupt returns the running process to ready, and an I/O or event wait moves it to waiting until the I/O or event completion returns it to ready.]
Chapter Two:
Part 2: Process Scheduling
Outline
Introduction to process scheduling
Process scheduling queues
Levels of Scheduling
Scheduling Criteria
Scheduling Algorithms
FCFS, Shortest Job First, Priority, Round Robin, Multilevel
Multiple Processor Scheduling
Real-time Scheduling
Algorithm Evaluation
2.2 Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is essential for operating systems that allow more than one process to be loaded into executable memory at a time, where the loaded processes share the CPU using time multiplexing.
Process Scheduling…
We can run many programs at a time on a computer, but there is a single CPU. So, to run all the programs concurrently or simultaneously, we use scheduling. The CPU executes all processes according to some rules or some schedule; under scheduling, each process receives some amount of CPU time.
Process Scheduling Queues
The OS maintains the following important process scheduling queues. Processes migrate between the various queues.
Job Queue – This queue keeps the set of all processes in the system.
Ready Queue – This queue keeps the set of all processes residing in main memory, ready and waiting to execute.
Device Queues – This queue keeps the set of processes waiting for an I/O device.
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in a PCB so that a process execution can be resumed from the same point at a later time. It is the task that switches the CPU from one process to another:
• the CPU must save the PCB state of the old process and load the saved PCB state of the new process.
Levels of Scheduling:
HIGH LEVEL: Scheduling of a complete set of
resources for a job or session. Determines
admission to the system. Also called long-term
scheduling, job scheduling.
INTERMEDIATE LEVEL: Scheduling of main
memory, primarily in a timesharing
environment. Also called medium-term
scheduling, storage scheduling.
LOW LEVEL: Scheduling of the processor or
processors, necessary for any system type. Also
called short-term scheduling, processor
scheduling, CPU scheduling.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
Long-term scheduler (or job scheduler)
• selects which processes should be brought into the ready queue.
• invoked very infrequently (seconds, minutes); may be slow.
• controls the degree of multiprogramming.
Short-term scheduler (or CPU scheduler)
• selects which process should execute next and allocates the CPU.
• performs the change of a process from the ready state to the running state.
• invoked very frequently (milliseconds); must be very fast.
• also known as the dispatcher; makes the decision of which process to execute next.
Medium-term scheduler
• is in charge of swapping processes out temporarily.
• is part of swapping: it removes processes from memory.
CPU scheduling decisions
[Figure: Process state diagram — new (admitted) → ready; ready (scheduler dispatch) → running; running (interrupt) → ready; running (I/O or event wait) → waiting; waiting (I/O or event completion) → ready; running (exit) → terminated.]
Scheduling Criteria
Waiting time
amount of time a process has been waiting in the ready queue.
CPU utilization
keep the CPU and other resources as busy as possible.
Throughput
number of processes that complete their execution per time unit.
Turnaround time
amount of time to execute a particular process from its entry time, i.e. the interval from the time of submission of the process to the time of its completion.
Scheduling Algorithms
1. First Come First Serve (FCFS) Scheduling
Policy: the process that requests the CPU FIRST is allocated the CPU FIRST.
FCFS is a non-preemptive algorithm.
Implementation – using FIFO queues:
• an incoming process is added to the tail of the queue.
• the process selected for execution is taken from the head of the queue.
Example: processes P1, P2 and P3 arrive in that order with burst times 24, 3 and 3. The Gantt chart is P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30, so the average waiting time is (0 + 24 + 27)/3 = 17. A short sketch of this computation follows.
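Under the stated assumption that all three processes arrive at time 0 in the order P1, P2, P3, a minimal C sketch (not part of the original module) reproduces the average waiting time:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* P1, P2, P3, served in arrival order */
    int n = 3, clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += clock;    /* process i waits until the CPU is free */
        clock += burst[i];      /* then runs to completion */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}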
2. Shortest-Job-First (SJF) Scheduling
Non-Preemptive SJF Scheduling
Non-Preemptive SJF Scheduling…
Example 2:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Gantt chart for the schedule: P1 from 0 to 7, P3 from 7 to 8, P2 from 8 to 12, P4 from 12 to 16.
Average waiting time = (0 + 6 + 3 + 7)/4 = 4
Preemptive SJF Scheduling (SRTF)
Preemptive SJF Scheduling (SRTF)…
Example 2:
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Gantt chart for the schedule: P1 from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, P1 from 11 to 16.
Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Determining Length of the Next CPU Burst
The next CPU burst is commonly predicted as an exponential average of the measured lengths of previous bursts: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the nth CPU burst, τ(n) is the stored past prediction, and 0 ≤ α ≤ 1.
Exponential Averaging (cont.)
α = 0: τ(n+1) = τ(n); recent history does not count.
α = 1: τ(n+1) = t(n); only the actual last CPU burst counts.
Similarly, expanding the formula:
τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
• Each successive term has less weight than its predecessor.
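A minimal C sketch of this prediction rule, assuming made-up burst lengths and the common textbook choices α = 1/2 and τ(0) = 10:

#include <stdio.h>

int main(void) {
    double alpha = 0.5, tau = 10.0;             /* tau(0): initial guess */
    double t[] = {6, 4, 6, 4, 13, 13, 13};      /* observed CPU bursts (illustrative) */

    for (int n = 0; n < 7; n++) {
        tau = alpha * t[n] + (1 - alpha) * tau; /* tau(n+1) */
        printf("predicted next burst after t(%d): %.2f\n", n, tau);
    }
    return 0;
}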
3. Priority Scheduling
A priority is assigned to each process.
The process with the highest priority is executed first, and so on.
Processes with the same priority are executed in FCFS manner.
Priority can be decided based on memory requirements, time requirements or any other resource requirement.
4. Round Robin (RR)
Round Robin (RR)…
Each process gets a small unit of CPU time:
• a time quantum, usually 10-100 milliseconds.
• after this time has elapsed, the process is preempted and added to the end of the ready queue.
With n processes and time quantum q:
• each process gets 1/n of the CPU time in chunks of at most q time units at a time.
• no process waits more than (n−1)q time units.
• Performance
– time slice q too large – FIFO behavior.
– time slice q too small – the overhead of context switching becomes too expensive.
– heuristic – 70-80% of jobs block within the timeslice.
Round Robin Example
Time quantum = 20
Process  Burst Time
P1       53
P2       17
P3       68
P4       24
Gantt chart for the schedule: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162).
Typically, RR gives higher average turnaround time than SRTF, but better response.
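Assuming all four processes arrive at time 0, the Gantt chart above can be reproduced with a small round-robin simulation in C (a sketch, not the module's code):

#include <stdio.h>

int main(void) {
    int rem[] = {53, 17, 68, 24};       /* remaining burst times of P1..P4 */
    int n = 4, q = 20, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {   /* fixed cyclic order stands in for the ready queue */
            if (rem[i] == 0) continue;
            int slice = rem[i] < q ? rem[i] : q;
            printf("t=%3d: P%d runs %d\n", t, i + 1, slice);
            t += slice;
            if ((rem[i] -= slice) == 0) done++;
        }
    }
    return 0;
}

With equal arrival times, the fixed cyclic order matches a FIFO ready queue, which is why the printed slices match the Gantt chart.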
5. Multilevel Queue
Multilevel Feedback Queue
Multilevel Feedback Queues Scheduling
• A new job enters Q0. When it gains the CPU, it receives 8 milliseconds. If the job does not finish, it is moved to Q1.
• At Q1, when the job gains the CPU, it receives 16 more milliseconds. If the job still does not complete, it is preempted and moved to queue Q2. A sketch of this demotion policy follows.
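A hedged C sketch of that policy for a single, hypothetical 30 ms job; the 8 ms and 16 ms quanta come from the text, and Q2 is assumed to run jobs to completion (FCFS):

#include <stdio.h>

int main(void) {
    int quantum[] = {8, 16};        /* time slices of Q0 and Q1, in ms */
    int level = 0, remaining = 30;  /* hypothetical job length */

    while (remaining > 0) {
        int slice = (level < 2) ? quantum[level] : remaining; /* Q2: run to end */
        if (slice > remaining) slice = remaining;
        printf("Q%d: job runs %d ms\n", level, slice);
        remaining -= slice;
        if (remaining > 0 && level < 2) level++;  /* demote after exhausting a quantum */
    }
    return 0;
}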
Multiple-Processor Scheduling
Asymmetric multiprocessing
• only one CPU runs kernel code; the others run user programs.
• alleviates the need for data sharing.
Real-Time Scheduling
Issues in Real-time Scheduling
Dispatch latency
• Problem – dispatch latency must be kept small; the OS may force a process to wait for a system call or I/O to complete.
• Solution – make system calls preemptible, and determine safe criteria under which the kernel can be interrupted.
Algorithm Evaluation
Deterministic Modeling
• Takes a particular predetermined workload and defines the performance of each algorithm for that workload. Too specific; it requires exact knowledge to be useful.
Chapter Two:
Part 3: The Threads concept
Outline
Resource Sharing
Economy
Utilization of MP Architectures
Threads (cont.)
Network Servers
Concurrent requests from network
Again, single program, multiple concurrent operations
File server, Web server, and airline reservation systems
[Table: Real operating systems by number of threads and address spaces —
One address space, one thread: MS/DOS, early Macintosh.
Many address spaces, one thread each: traditional UNIX.
One address space, many threads: embedded systems (Geoworks, VxWorks, JavaOS, etc.), JavaOS, Pilot (PC).
Many address spaces, many threads: Mach, OS/2, Linux, Windows 9x?, Windows NT to XP, Solaris, HP-UX, OS X.]
Kernel-supported threads
User-level threads
Hybrid approach implements both user-level
and kernel-supported threads (Solaris 2).
Kernel Threads
Examples
Windows XP/2000, Solaris, Linux,Tru64 UNIX,
Mac OS X, Mach, OS/2
User Threads
Supported above the kernel, via a set of library calls at the user level.
Thread management is done by a user-level threads library.
The user program provides the scheduler and thread package.
May have several user threads per kernel thread.
User threads may be scheduled non-preemptively relative to each other (they only switch on yield()).
Advantages
Cheap and fast: threads do not need to call the OS or cause interrupts to the kernel.
Disadvantage: if the kernel is single-threaded, a system call from any thread can block the entire task.
Example thread libraries: POSIX Pthreads, Win32 threads, Java threads.
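A minimal POSIX Pthreads example (one of the libraries named above); compile with -lpthread:

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {                 /* runs in the new thread */
    printf("hello from %s\n", (char *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, "thread T1");  /* spawn */
    pthread_join(tid, NULL);                          /* wait for it to finish */
    return 0;
}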
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to single
kernel thread
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One
Examples
Windows NT/XP/2000; Linux; Solaris 9 and later
Many-to-Many Model
Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads.
Examples: Solaris prior to version 9; Windows NT/2000 with the ThreadFiber package.
Thread Support in Solaris 2
[Figure: Multiprocessing runs threads A, B and C on separate processors at the same time; multiprogramming interleaves A, B and C on a single processor over time.]
Chapter Two:
Part 4: Inter-process Communication (IPC)
Producer
repeat
…
produce an item in nextp;
…
send(consumer, nextp);
until false;
Consumer
repeat
receive(producer, nextc);
…
consume item from nextc;
…
until false;
Shared data
var n;
type item = ….;
var buffer: array[0..n-1] of item;
in, out: 0..n-1;
in :=0; out:= 0; /* shared buffer = circular array */
/* Buffer empty if in == out */
/* Buffer full if (in+1) mod n == out */
/* noop means ‘do nothing’ */
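A C sketch of the same circular-buffer protocol, valid for one producer and one consumer: the buffer is empty when in == out and full when (in+1) mod n == out, so one slot always stays unused. Busy-waiting stands in for real blocking.

#define N 8
int buffer[N];
int in = 0, out = 0;            /* shared between producer and consumer */

void produce(int item) {
    while ((in + 1) % N == out)
        ;                       /* buffer full: no-op (busy wait) */
    buffer[in] = item;
    in = (in + 1) % N;
}

int consume(void) {
    while (in == out)
        ;                       /* buffer empty: no-op (busy wait) */
    int item = buffer[out];
    out = (out + 1) % N;
    return item;
}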
Problem
• Ensure that when one process is executing in its critical
section, no other process is allowed to execute in its critical
section.
Mutual Exclusion
• If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
Progress
• If no process is executing in its critical section and there exists
some processes that wish to enter their critical section, then
the selection of the processes that will enter the critical section
next cannot be postponed indefinitely.
Bounded Waiting
• A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and
before that request is granted.
Shared Variables:
var turn: (0..1);
initially turn = 0;
turn = i ⇒ Pi can enter its critical section
Process Pi
repeat
while turn <> i do no-op;
critical section
turn := j;
remainder section
until false
Satisfies mutual exclusion, but not progress.
Shared Variables
var flag: array (0..1) of boolean;
initially flag[0] = flag[1] = false;
flag[i] = true ⇒ Pi ready to enter its critical section
Process Pi
repeat
flag[i] := true;
while flag[j] do no-op;
critical section
flag[i]:= false;
remainder section
until false
Can block indefinitely…. Progress requirement not met.
Shared Variables
var flag: array (0..1) of boolean;
initially flag[0] = flag[1] = false;
flag[i] = true ⇒ Pi ready to enter its critical section
Process Pi
repeat
while flag[j] do no-op;
flag[i] := true;
critical section
flag[i]:= false;
remainder section
until false
Does not satisfy mutual exclusion requirement ….
Notation –
Lexicographic order (ticket #, process id #):
(a,b) < (c,d) if (a < c) or if ((a = c) and (b < d))
max(a0, …, an−1) is a number k such that k ≥ ai for i = 0, …, n−1.
Shared Data
var choosing: array[0..n-1] of boolean; (initialized to false)
number: array[0..n-1] of integer; (initialized to 0)
[Figure: Higher-level synchronization APIs — locks, semaphores, monitors, send/receive, conditional critical regions.]
Shared variables
var mutex: semaphore
initially mutex = 1
Process Pi
repeat
wait(mutex);
critical section
signal (mutex);
remainder section
until false
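The same pattern with POSIX semaphores, as a sketch: sem_wait plays the role of wait(mutex) and sem_post the role of signal(mutex); link with -lpthread.

#include <semaphore.h>
#include <stdio.h>

sem_t mutex;
int shared_counter = 0;

void increment(void) {
    sem_wait(&mutex);       /* wait(mutex) */
    shared_counter++;       /* critical section */
    sem_post(&mutex);       /* signal(mutex) */
}

int main(void) {
    sem_init(&mutex, 0, 1); /* initially mutex = 1 */
    increment();
    printf("counter = %d\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}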
Shared data
type item = ….;
var buffer: array[0..n-1] of item;
full, empty, mutex : semaphore;
nextp, nextc :item;
full := 0; empty := n; mutex := 1;
Asymmetry?
The producer does: P(empty), V(full).
The consumer does: P(full), V(empty).
Is the order of the P operations important?
Yes! The wrong order can cause deadlock.
Is the order of the V operations important?
No, except that it might affect scheduling efficiency.
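A hedged C reconstruction of the producer/consumer loops implied by the declarations above (full = 0, empty = n, mutex = 1); the buffer indexing details are assumptions:

#include <semaphore.h>
#define N 8

sem_t full, empty, mutex;       /* init: full = 0, empty = N, mutex = 1 */
int buf[N], in = 0, out = 0;

void producer(int nextp) {
    sem_wait(&empty);           /* P(empty): wait for a free slot */
    sem_wait(&mutex);
    buf[in] = nextp; in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);            /* V(full): one more item available */
}

int consumer(void) {
    sem_wait(&full);            /* P(full): wait for an item */
    sem_wait(&mutex);
    int nextc = buf[out]; out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);           /* V(empty): one more free slot */
    return nextc;
}

Swapping the two P operations in the producer (taking mutex before empty) can deadlock when the buffer is full, which is exactly the point made above.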
Shared Data
var mutex, wrt: semaphore (=1);
readcount: integer (= 0);
Writer Process
wait(wrt);
…
writing is performed
...
signal(wrt);
Reader process
wait(mutex);
readcount := readcount +1;
if readcount = 1 then wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount := readcount - 1;
if readcount = 0 then signal(wrt);
signal(mutex);
Shared Data
var chopstick: array [0..4] of semaphore (=1 initially);
Philosopher i :
repeat
wait (chopstick[i]);
wait (chopstick[(i+1) mod 5]);
…
eat
...
signal (chopstick[i]);
signal (chopstick[(i+1) mod 5]);
…
think
…
until false;
Shared variables
var buffer: shared record
pool:array[0..n-1] of item;
count,in,out: integer;
end;
Producer Process inserts nextp into the shared buffer
region buffer when count < n
do begin
pool[in] := nextp;
in := (in + 1) mod n;
count := count + 1;
end;
Initialization:
begin
for i := 0 to 4
do state[i] := thinking;
end;
Data Structures
var S1 : binary-semaphore;
S2 : binary-semaphore;
S3 : binary-semaphore;
C: integer;
Initialization
S1 = S3 =1;
S2 = 0;
C = initial value of semaphore S;
Wait operation
wait(S3);
wait(S1);
C := C-1;
if C < 0
then begin
signal (S1);
wait(S2);
end
else signal (S1);
signal (S3);
Signal operation
wait(S1);
C := C + 1;
if C <= 0 then signal (S2);
signal (S1);
Region x when B do S
var mutex, first-delay, second-delay: semaphore;
first-count, second-count: integer;
Mutually exclusive access to the critical section
is provided by mutex.
If a process cannot enter the critical section because the
Boolean expression B is false,
it initially waits on the first-delay semaphore;
moved to the second-delay semaphore before it is allowed to
reevaluate B.
Chapter Two:
Part 5: Deadlocks
Outline
System Model
Deadlock Characterization
Methods for handling deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Combined Approach to Deadlock Handling
The Deadlock Problem
Resource
commodity required by a process to execute
Resources can be of several types
Serially Reusable Resources
CPU cycles, memory space, I/O devices, files
acquire -> use -> release
Consumable Resources
Produced by a process, needed by a process - e.g.
Messages, buffers of information, interrupts
create ->acquire ->use
Resource ceases to exist after it has been used
System Model
Resource types
R1, R2,….Rm
Each resource type Ri has Wi instances
Assume serially reusable resources
request -> use -> release
Conditions for Deadlock
The following four conditions are necessary for deadlock; all must hold simultaneously:
Mutual Exclusion:
Only one process at a time can use the resource.
Hold and Wait:
Processes hold resources already allocated to them while
waiting for other resources.
No preemption:
Resources are released by processes holding them only after
that process has completed its task.
Circular wait:
A circular chain of processes exists in which each process
waits for one or more resources held by the next process in the
chain.
Resource Allocation Graph
Processes and resources form a directed graph: an edge Pi → Rj means Pi requests an instance of Rj; an edge Rj → Pi means Pi is holding an instance of Rj.
[Figure: three resource-allocation graphs over processes P1-P4 and resources R1-R4 — one with no cycles (no deadlock), one with cycles but no deadlock, and one with cycles and deadlock.]
Basic facts
Prevention
Design the system in such a way that deadlocks can never
occur
Avoidance
Impose less stringent conditions than for prevention, allowing
the possibility of deadlock but sidestepping it as it occurs.
Detection
Allow possibility of deadlock, determine if deadlock has
occurred and which processes and resources are involved.
Recovery
After detection, clear the problem, allow processes to complete
and resources to be reused. May involve destroying and
restarting processes.
Deadlock Prevention
No Preemption
If a process that is holding some resources requests
another resource that cannot be immediately allocated to it,
the process releases the resources currently being held.
Preempted resources are added to the list of resources for
which the process is waiting.
Process will be restarted only when it can regain its old
resources as well as the new ones that it is requesting.
Circular Wait
Impose a total ordering of all resource types.
Require that processes request resources in increasing order of enumeration; if a resource of type N is held, a process can only request resources of types > N, as the sketch below illustrates.
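An illustrative C fragment of the ordering rule, with two pthread mutexes standing in for resource types 1 and 2; because every thread takes them in increasing order, no circular wait can form:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource type 1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource type 2 */

void task(void) {
    pthread_mutex_lock(&r1);    /* always type 1 before type 2, in every thread */
    pthread_mutex_lock(&r2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}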
Deadlock Avoidance
Banker’s Algorithm
5 processes
P0 - P4;
3 resource types
A(10 instances), B (5 instances), C (7 instances)
Snapshot at time T0
Data Structures
Available: a vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Allocation: an n × m matrix. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
Request: an n × m matrix that indicates the current request of each process. If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
Deadlock Detection Algorithm
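A C sketch of the standard detection loop over the structures above (sizes are illustrative): a process whose Request row fits within Work can finish and return its Allocation; any process left unfinished is deadlocked.

#include <stdbool.h>

#define NPROC 3
#define NRES  2

bool deadlocked(int avail[NRES], int alloc[NPROC][NRES], int req[NPROC][NRES]) {
    int work[NRES];
    bool finish[NPROC] = {false, false, false};
    for (int j = 0; j < NRES; j++) work[j] = avail[j];  /* Work = Available */

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (req[i][j] > work[j]) { fits = false; break; }
            if (fits) {                       /* Pi can run to completion */
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j];   /* it releases what it holds */
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (int i = 0; i < NPROC; i++)
        if (!finish[i]) return true;          /* Pi is deadlocked */
    return false;
}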
Memory Management
■ Background
■ Logical versus Physical Address Space
■ Swapping
■ Contiguous Allocation
■ Paging
■ Segmentation
■ Segmentation with Paging
Background
■ Controlled overlap:
❑ Processes should not collide in physical memory
❑ Conversely, would like the ability to share memory when desired (for
communication)
■ Protection:
❑ Prevent access to private memory of other processes
■ Different pages of memory can be given special behavior (Read Only,
Invisible to user programs, etc)
■ Kernel data protected from user programs
■ Translation:
❑ Ability to translate accesses from one address space (virtual) to a
different one (physical)
❑ When translation exists, process uses virtual addresses, physical
memory uses physical addresses
Names and Binding
❑ Early binding
❑ compiler - produces efficient code
❑ allows checking to be done early
❑ allows estimates of running time and space
❑ Delayed binding
❑ Linker, loader
❑ produces efficient code, allows separate compilation
❑ portability and sharing of object code
❑ Late binding
❑ VM, dynamic linking/loading, overlaying, interpreting
❑ code less efficient, checks done at runtime
❑ flexible, allows dynamic reconfiguration
Multi-step Processing of a Program for Execution
■ Dynamic Libraries
❑ Linking postponed until execution
❑ Small piece of code, stub, used to locate
appropriate memory-resident library routine
❑ Stub replaces itself with the address of the
routine, and executes routine
Dynamic Loading
[Figure: Swapping/contiguous allocation — snapshots of memory over time as processes 5, 8, 9 and 10 are loaded and removed, with the OS resident throughout.]
❑ Effective access time with a TLB: if ε is the TLB lookup time and α the hit ratio (one memory access = 1), then EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α
Memory Protection
Two Level Paging Example
[Figure: Shared segments example — processes P1 and P2 each have segment 0 = editor and segment 1 = data in their logical memories; their segment tables map the shared editor segment to physical address 43062, P1's data 1 to 68348-72773, and P2's data 2 to 90003-98553.]
Segmentation hardware
Segmented Paged Memory
Virtual Memory
■ Background
■ Demand paging
❑ Performance of demand paging
■ Page Replacement
❑ Page Replacement Algorithms
■ Allocation of Frames
■ Thrashing
■ Demand Segmentation
Need for Virtual Memory
■ Virtual Memory
■ Separation of user logical memory from physical
memory.
■ Only PART of the program needs to be in memory for
execution.
■ Logical address space can therefore be much larger
than physical address space.
■ Need to allow pages to be swapped in and out.
■ Virtual Memory can be implemented via
❑ Paging
❑ Segmentation
Paging/Segmentation Policies
■ Fetch Strategies
■ When should a page or segment be brought into primary
memory from secondary (disk) storage?
❑ Demand Fetch
❑ Anticipatory Fetch
■ Placement Strategies
■ When a page or segment is brought into memory, where
is it to be put?
❑ Paging - trivial
❑ Segmentation - significant problem
■ Replacement Strategies
■ Which page/segment should be replaced if there is not
enough room for a required page/segment?
Demand Paging
Page Table
Handling a Page Fault
❑ Page is needed - reference to page
❑ Step 1: Page fault occurs - trap to OS (process suspends).
❑ Step 2: Check if the virtual memory address is valid. Kill
job if invalid reference. If valid reference, and page not in
memory, continue.
❑ Step 3: Bring into memory - Find a free page frame, map
address to disk block and fetch disk block into page frame.
When disk read has completed, add virtual memory
mapping to indicate that page is in memory.
❑ Step 4: Restart instruction interrupted by illegal address
trap. The process will continue as if page had always been
in memory.
What happens if there is no free frame?
■ Page replacement – find some page in memory that is not really in use and swap it out.
■ Need a page replacement algorithm.
■ Performance issue – need an algorithm which will result in the minimum number of page faults.
❑ The same page may be brought into memory many times.
Performance of Demand Paging
FIFO Replacement – with 4 frames, the example reference string incurs 10 page faults. Belady's Anomaly: more frames does not always mean fewer page faults.
Optimal Algorithm – with 4 frames, the same reference string incurs only 6 page faults.
Least Recently Used (LRU) Algorithm
❑ Use the recent past as an approximation of the near future.
❑ Choose the page that has not been used for the longest period of time.
❑ May require hardware assistance to implement.
❑ Reference string: 1,2,3,4,1,2,5,1,2,3,4,5 — with 4 frames, LRU incurs 8 page faults, as the sketch below confirms.
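A C sketch that replays this reference string with 4 frames, using timestamps in the spirit of the counter implementation described below; it prints 8 faults:

#include <stdio.h>

int main(void) {
    int ref[] = {1,2,3,4,1,2,5,1,2,3,4,5};
    int frames[4], stamp[4] = {0}, faults = 0;
    for (int i = 0; i < 4; i++) frames[i] = -1;      /* all frames free */

    for (int k = 0; k < 12; k++) {
        int hit = -1;
        for (int i = 0; i < 4; i++)
            if (frames[i] == ref[k]) hit = i;
        if (hit >= 0) { stamp[hit] = k; continue; }  /* page already resident */
        faults++;
        int victim = 0;                              /* free frame, else LRU page */
        for (int i = 0; i < 4; i++)
            if (frames[i] == -1) { victim = i; break; }
            else if (stamp[i] < stamp[victim]) victim = i;
        frames[victim] = ref[k];
        stamp[victim] = k;
    }
    printf("page faults = %d\n", faults);            /* prints 8 */
    return 0;
}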
Implementation of the LRU algorithm
■ Counter implementation
❑ Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
❑ When a page needs to be replaced, look at the counters to determine which page to replace (the page with the smallest time value).
■ Stack implementation
■ Keeps a stack of page numbers in a doubly linked form.
■ When a page is referenced:
❑ move it to the top.
❑ requires 6 pointers to be changed.
■ No search required for replacement.
LRU Approximation Algorithms
❑ Reference Bit
❑ With each page, associate a bit, initially = 0.
❑ When page is referenced, bit is set to 1.
❑ Replace the one which is 0 (if one exists). Do not know
order however.
❑ Additional Reference Bits Algorithm
❑ Record reference bits at regular intervals.
❑ Keep 8 bits (say) for each page in a table in memory.
❑ Periodically, shift the reference bit into the high-order bit, i.e. shift the other bits to the right, dropping the lowest bit.
❑ During page replacement, interpret the 8 bits as an unsigned integer.
❑ The page with the lowest number is the LRU page; a sketch of this scheme follows.
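A C sketch of this aging scheme; timer_tick would be called at each interval, and ref_bit stands in for the hardware reference bit:

#include <stdint.h>

#define NPAGES 4
uint8_t history[NPAGES];    /* 8 bits of reference history per page */
int ref_bit[NPAGES];        /* set by hardware when the page is referenced */

void timer_tick(void) {     /* called at each regular interval */
    for (int i = 0; i < NPAGES; i++) {
        history[i] = (uint8_t)((history[i] >> 1) | (ref_bit[i] << 7));
        ref_bit[i] = 0;     /* clear for the next interval */
    }
}

int lru_page(void) {        /* page whose 8-bit history is the smallest */
    int victim = 0;
    for (int i = 1; i < NPAGES; i++)
        if (history[i] < history[victim]) victim = i;
    return victim;
}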
LRU Approximation Algorithms
❑ Second Chance
■ FIFO (clock) replacement algorithm
■ Need a reference bit.
■ When a page is selected, inspect the reference bit.
■ If the reference bit = 0, replace the page.
■ If page to be replaced (in clock order) has reference bit
= 1, then
❑ set reference bit to 0
❑ leave page in memory
❑ replace next page (in clock order) subject to same rules.
Allocation of Frames
■ Equal Allocation
❑ E.g. if there are 100 frames and 5 processes, give each process 20 frames.
■ Proportional Allocation
■ Allocate according to the size of process
❑ Sj = size of process Pj
❑ S = ∑Sj
❑ m = total number of frames
❑ aj = allocation for Pj = Sj/S * m
❑ If m = 64, S1 = 10, S2 = 127 then
a1 = 10/137 * 64 ≈ 5
a2 = 127/137 * 64 ≈ 59
Priority Allocation
■ Global Replacement
■ Process selects a replacement frame from the set of all
frames.
■ One process can take a frame from another.
■ Process may not be able to control its page fault rate.
■ Local Replacement
■ Each process selects from only its own set of allocated
frames.
■ Process slowed down even if other less used pages of
memory are available.
■ Global replacement has better throughput
■ Hence more commonly used.
Thrashing
■ Δ ≡ working-set window
■ a fixed number of page references, e.g. 10,000 instructions
❑ WSSj (working set size of process Pj) - total number of
pages referenced in the most recent Δ (varies in time)
■ If Δ too small, will not encompass entire locality.
■ If Δ too large, will encompass several localities.
■ If Δ = ∞, will encompass entire program.
❑ D = ∑ WSSj ≡ total demand frames
■ If D > m (number of available frames) ⇒thrashing
❑ Policy: If D > m, then suspend one of the processes.
Keeping Track of the Working Set
■ Approximate with
■ interval timer + a reference bit
❑ Example: Δ = 10,000
❑ Timer interrupts after every 5000 time units.
❑ Whenever a timer interrupts, copy and set the values of all
reference bits to 0.
❑ Keep in memory 2 bits for each page (indicated if page was used
within last 10,000 to 15,000 references).
❑ If one of the bits in memory = 1 ⇒ page in working set.
■ Not completely accurate - cannot tell where reference
occurred.
■ Improvement - 10 bits and interrupt every 1000 time units.
Page Fault Frequency Scheme
Demand Paging Issues
❑ Prepaging
■ Tries to prevent high level of initial paging.
❑ E.g. If a process is suspended, keep list of pages in
working set and bring entire working set back before
restarting process.
❑ Tradeoff - page fault vs. prepaging - depends on how many
pages brought back are reused.
❑ Page Size Selection
■ fragmentation
■ table size
■ I/O overhead
■ locality
Demand Paging Issues
❑ Program Structure
■ Array A[1024,1024] of integer
■ Assume each row is stored on one page
■ Assume only one frame in memory
■ Program 1 (column-major traversal):
for j := 1 to 1024 do
  for i := 1 to 1024 do
    A[i,j] := 0;
1024 × 1024 page faults
■ Program 2 (row-major traversal):
for i := 1 to 1024 do
  for j := 1 to 1024 do
    A[i,j] := 0;
1024 page faults
File Systems
File Attributes
❑ File Name
❑ File Type
❑ Address or Location
❑ Current Length
❑ Maximum Length
■ Sequential Access
read next
write next
reset
no read after last write (rewrite)
■ Direct Access ( n = relative block number)
read n
write n
position to n
read next
write next
rewrite n
Sequential File Organization
Indexed Sequential or Indexed File Organization
Direct Access File Organization
Protection
■ File Structure
■ Logical Storage Unit with collection of related
information
❑ File System resides on secondary storage (disks).
■ To improve I/O efficiency, I/O transfers between memory
and disk are performed in blocks.
❑ Read/Write/Modify/Access each block on disk.
■ File system organized into layers.
■ File control block - storage structure
consisting of information about a file.
File System Mounting
FAT File Systems
■ Advantages
❑ Advantages of Linked File System
❑ FAT can be cached in memory
❑ Searchable at CPU speeds, pseudo-random access
■ Disadvantages
❑ Limited size, not suitable for very large disks
❑ FAT cache describes entire disk, not just open files!
❑ Not fast enough for large databases
Indexed Allocation
[Figure: Each file has an index block listing its data blocks; in the multilevel form, a first-level index block links to second-level index blocks, which point at the data.]
Combined Scheme: UNIX Inode
[Figure: An inode holds the file's mode, owners, timestamps, size and block count, followed by direct block pointers and single and double indirect pointers to data blocks.]
What is an inode?
[Figure: A directory maps names to inode numbers (Name1 → i1, Name2 → i2, …); the inode table holds the corresponding inodes.]
Free Space Management
❑ Counting
❑ Linked list of contiguous blocks that are free
❑ Free list node contains pointer and number of free blocks
starting from that address.
Free Space Management
■ Need to protect:
■ the pointer to the free list.
■ Bit map
❑ must be kept on disk.
❑ the copy in memory and the copy on disk may differ.
❑ cannot allow a situation for block[i] where bit[i] = 1 in memory and bit[i] = 0 on disk.
■ Solution (see the sketch below):
❑ Set bit[i] = 1 on disk.
❑ Allocate block[i].
❑ Set bit[i] = 1 in memory.
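A C sketch of that update order; set_bit_on_disk is a hypothetical helper standing in for the real disk write, and the point is that the disk copy is updated before the block is handed out:

#include <stdbool.h>

#define NBLOCKS 1024
bool bitmap[NBLOCKS];              /* in-memory copy; true = allocated */

void set_bit_on_disk(int i);       /* hypothetical: persist bit[i] = 1 */

int allocate_block(void) {
    for (int i = 0; i < NBLOCKS; i++) {
        if (!bitmap[i]) {
            set_bit_on_disk(i);    /* 1. bit[i] = 1 on disk first */
            bitmap[i] = true;      /* 2-3. then allocate and update memory */
            return i;              /* caller may now use block i */
        }
    }
    return -1;                     /* no free block */
}

A crash between the two updates can at worst leak a block; it can never hand the same block out twice.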
Directory Implementation
5.1 I/O Hardware
An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories:
Block devices − A block device is one with which the driver communicates by sending entire
blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units
(Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic
component where electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with the Operating System. A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices.
The device is connected directly to certain main memory locations so that I/O device can
transfer block of data to/from memory without going through CPU.
While using memory mapped IO, OS allocates buffer in memory and informs I/O device to use
that buffer to send data to the CPU. I/O device operates asynchronously with CPU, interrupts
CPU when finished.
The advantage to this method is that every instruction which can access memory can be used to
manipulate an I/O device. Memory mapped IO is used for most high-speed I/O devices like
disks, communication interfaces.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is involved only at the beginning and end of the transfer, and is interrupted only after the entire block has been transferred.
Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages
the data transfers and arbitrates access to the system bus. The controllers are programmed with
source and destination pointers (where to read/write the data), counters to track the number of
transferred bytes, and settings, which includes I/O and memory types, interrupts and states for
the CPU cycles.
The operating system uses the DMA hardware as follows −
Step 5: The DMA controller transfers bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor. The process of
periodically checking status of the device to see if it is time for the next I/O operation, is called
polling. The I/O device simply puts the information in a Status register, and the processor must
come and get the information.
Most of the time devices will not require attention, and when one does, it will have to wait until it is next interrogated by the polling program. This is an inefficient method, and much of the processor's time is wasted on unnecessary polls.
Compare this method to a teacher continually asking every student in a class, one after another,
if they need help. Obviously the more efficient method would be for a student to inform the
teacher whenever they require assistance.
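A C sketch of a polling loop; the register addresses and the READY bit are hypothetical, standing in for a memory-mapped status/data register pair:

#include <stdint.h>

#define READY 0x01

volatile uint8_t *status_reg = (uint8_t *)0x40001000; /* hypothetical address */
volatile uint8_t *data_reg   = (uint8_t *)0x40001004; /* hypothetical address */

uint8_t poll_read(void) {
    while ((*status_reg & READY) == 0)
        ;                   /* busy-wait: this is the wasted processor time */
    return *data_reg;       /* device ready: fetch the byte */
}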
Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a
signal to the microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector (the addresses of OS routines that handle various events). When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.
5.2: I/O Softwares
I/O software is often organized in the following layers −
User Level Libraries − This provides simple interface to the user program to perform input and
output. For example, stdio is a library provided by C and C++ programming languages.
Kernel Level Modules − This provides device driver to interact with the device controller and
device independent I/O modules used by the device drivers.
Hardware − This layer includes the actual hardware and the hardware controllers that interact with the device drivers and make the hardware work.
A key concept in the design of I/O software is that it should be device independent where it
should be possible to write programs that can access any I/O device without having to specify
the device in advance. For example, a program that reads a file as input should be able to read a
file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program
for each different device.
Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular device. The Operating System takes help from device drivers to handle all I/O devices. Device drivers encapsulate device-dependent code and implement a standard interface in such a way that the code contains device-specific register reads/writes. A device driver is generally written by the device's manufacturer and delivered along with the device on a CD-ROM.
A driver interacts with the device controller to perform I/O and carries out any required error handling.
How a device driver handles a request (sketched below): suppose a request comes in to read block N. If the driver is idle when the request arrives, it starts carrying out the request immediately. Otherwise, if the driver is already busy with some other request, it places the new request in the queue of pending requests.
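A C sketch of that request-handling logic; the request structure and start_io are illustrative, not a real driver API:

#include <stddef.h>

struct request { int block; struct request *next; };

static struct request *head, *tail;   /* queue of pending requests */
static int busy;                      /* nonzero while a transfer is active */

void start_io(struct request *r);     /* hypothetical: begin the device transfer */

void submit(struct request *r) {
    r->next = NULL;
    if (!busy) {
        busy = 1;
        start_io(r);                  /* driver idle: carry it out immediately */
    } else {                          /* driver busy: append to pending queue */
        if (tail) tail->next = r; else head = r;
        tail = r;
    }
}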
Interrupt handlers
An interrupt handler, also known as an interrupt service routine or ISR, is a piece of software or
more specifically a callback function in an operating system or more specifically in a device
driver, whose execution is triggered by the reception of an interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to do to handle the interrupt, updates data structures and wakes up the process that was waiting for the interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a specific interrupt
handling routine/function from a small set. In most architectures, this address is an offset stored
in a table called the interrupt vector table. This vector contains the memory addresses of
specialized interrupt handlers.
Device-Independent I/O Software
Device naming − mnemonic names mapped to major and minor device numbers.
Device protection.
Buffering − because data coming off a device cannot always be stored directly in its final destination.
Error reporting.
User-Space I/O Software
These are the libraries which provide a richer and simplified interface to access the functionality of the kernel, or ultimately interact with the device drivers. Most of the user-level I/O software consists of library procedures, with some exceptions such as the spooling system, which is a way of dealing with dedicated I/O devices in a multiprogramming system.
I/O libraries (e.g., stdio) are in user space to provide an interface to the OS-resident, device-independent I/O software. For example, putchar(), getchar(), printf() and scanf() are examples from the user-level I/O library stdio available in C programming.
Kernel I/O Subsystem
Scheduling − The kernel schedules a set of I/O requests to determine a good order in which to execute them. When an application issues a blocking I/O system call, the request is placed on the queue for that device. The kernel I/O scheduler rearranges the order of the queue to improve the overall system efficiency and the average response time experienced by applications.
Buffering − The kernel I/O subsystem maintains a memory area known as a buffer that stores data while it is transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the producer and consumer of a data stream, or to adapt between devices that have different data transfer sizes.
Caching − The kernel maintains cache memory, which is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.
Spooling and Device Reservation − A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
Error Handling − An operating system that uses protected memory can guard against many
kinds of hardware and application errors.
6.1 Security
Security refers to providing a protection system for computer system resources such as the CPU, memory, disk, software programs and, most importantly, the data/information stored in the computer system. If a computer program is run by an unauthorized user, then he/she may cause severe damage to the computer or the data stored in it. So a computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms, etc. We are going to discuss the following topics in this chapter.
Authentication
Program Threats
System Threats
Authentication
Authentication refers to identifying each user of the system and associating the executing programs with those users. It is the responsibility of the Operating System to create a protection system which ensures that a user who is running a particular program is authentic. Operating Systems generally identify/authenticate users in the following three ways:
Username / Password − The user needs to enter a registered username and password with the Operating System to log in to the system.
User card/key − The user needs to punch a card into a card slot, or enter a key generated by a key generator in an option provided by the operating system, to log in to the system.
User attribute (fingerprint / eye retina pattern / signature) − The user needs to pass his/her attribute via a designated input device used by the operating system to log in to the system.
One-Time Passwords
One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time a user tries to log in to the system. Once a one-time password is used, it cannot be used again. One-time passwords are implemented in various ways.
Random numbers − Users are provided cards with numbers printed along with corresponding alphabets. The system asks for the numbers corresponding to a few alphabets chosen at random.
Secret key − Users are provided a hardware device which can create a secret id mapped with the user id. The system asks for this secret id, which is to be generated anew every time prior to login.
Program Threats
Operating system processes and the kernel do their designated tasks as instructed. If a user program makes these processes do malicious tasks, this is known as a program threat. One common example of a program threat is a program installed on a computer which can store and send user credentials via the network to some hacker. Following is a list of some well-known program threats.
Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.
Trap Door − If a program which is designed to work as required has a security hole in its code and performs illegal actions without the knowledge of the user, it is said to have a trap door.
Logic Bomb − A logic bomb is a situation in which a program misbehaves only when certain conditions are met; otherwise it works as a genuine program. It is harder to detect.
Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous: they can modify or delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As the user accesses the program, the virus starts getting embedded in other files/programs and can make the system unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put a user in trouble. System threats can be used to launch program threats on a complete network; this is called a program attack. System threats create an environment in which operating system resources and user files are misused. Following is a list of some well-known system threats.
Worm − A worm is a process which can choke down a system's performance by using system resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the required resources. Worm processes can even shut down an entire network.
Port Scanning − Port scanning is a mechanism or means by which a hacker can detect system vulnerabilities in order to make an attack on the system.
Denial of Service − Denial of service attacks normally prevent users from making legitimate use of the system. For example, a user may not be able to use the internet if a denial of service attack targets the browser's content settings.
Computer Security Classifications
1. Type A
Highest level. Uses formal design specifications and verification techniques. Grants a high degree of assurance of process security.
2. Type B
Provides a mandatory protection system. Has all the properties of a class C2 system. Attaches a sensitivity label to each object. It is of three types:
B1 − Maintains the security label of each object in the system. The label is used for making access-control decisions.
B2 − Extends the sensitivity labels to each system resource, such as storage objects; supports covert channels and the auditing of events.
B3 − Allows creating lists or user groups for access control, to grant access to or revoke access from a given named object.
3. Type C
Provides protection and user accountability using audit capabilities. It is of two types:
C1 − Incorporates controls so that users can protect their private information and keep other users from accidentally reading or deleting their data. UNIX versions are mostly C1 class.
4. Type D
Lowest level. Minimum protection. MS-DOS and Windows 3.1 fall in this category.