Chapter 2 Process Management
A program does nothing unless its instructions are executed by a CPU. Process management is an
integral part of any modern operating system (OS). In multiprogramming systems the OS must
allocate resources to processes, enable processes to share and exchange information, protect the
resources of each process from other processes, and enable synchronization among processes. To
meet these requirements, the OS must maintain a data structure for each process which describes
the state and resource ownership of that process and enables the OS to exert control over it.
What is a process?
A process is a sequential program in execution. Its components include the program code, the
current data values, and the execution context (registers, open files and other resources).
A process comes into being, or is created, in response to a user command to the OS. Processes may
also be created by other processes, e.g. in response to exception conditions such as errors or
interrupts.
PROCESS STATES
As a process executes, it changes state. The state of a process is defined in part by the current activity
of that process. Process state determines the effect of the instructions, i.e. everything that can affect,
or be affected by, the process. It usually includes code, particular data values, open files, registers,
memory, signal management information, etc. We can characterize the behavior of an individual
process by listing the sequence of instructions that execute for that process. Such a listing is called
the trace of the process.
By Esther K Page 1
DICT Module 1 Operating System
A transition from one process state to another is triggered by various conditions such as interrupts
and user instructions to the OS. Execution of a program involves creating and running to completion
a set of programs which require varying amounts of CPU, I/O and memory resources.
If the OS supports multiprogramming, then it needs to keep track of all the processes. For each
process, its process control block (PCB) is used to track the process's execution status, including its
state, program counter, registers and allocated resources.
The OS must make sure that processes don't interfere with each other; this means protecting each
process's memory and resources from the others.
The dispatcher (short-term scheduler) is the innermost portion of the OS that runs processes.
When a process is not running, its state must be saved in its process control block. Items saved
include:
i. Program counter
ii. Process status word (condition codes etc.).
iii. General purpose registers
iv. Floating-point registers, etc.
When no longer needed, a process (but not the underlying program) can be deleted via the OS, which
means that all record of the process is obliterated and any resources currently allocated to it are
released.
The principal responsibility of the OS is to control the execution of a process; this includes
determining the interleaving pattern of execution and allocating resources to processes. We can
describe the simplest model by observing that a process is either executing or not, i.e. running or not
running.
Each process must be represented in some way so that the OS can keep track of it, i.e. by its process
control block. Processes that are not running must be kept in some sort of queue, awaiting their turn
to execute. There is a single queue in which each entry is a pointer to the PCB of a particular process.
1. Two state
[Figure: two-state model — Enter → Not Running; Not Running → (Dispatch) → Running;
Running → (Pause) → Not Running; Running → Exit]
2. Three state
[Figure: three-state model — Submit → Ready; Ready → (Dispatch) → Running (Active);
Running → (Suspend/Delay) → Blocked; Blocked → (Resume/Wake up) → Ready;
Running → (Completion) → exit]
i. Ready: The process is waiting to be assigned to a processor, i.e. it can execute as soon as
the CPU is allocated to it.
ii. Running: The process is being executed, i.e. it is actually using the CPU at that instant.
iii. Waiting/blocked: The process is waiting for some event to occur (e.g. I/O completion),
such as completion of another process that provides the first process with necessary data, a
synchronization signal from another process, or an I/O or timer interrupt.
3. Five State
In this model two states have been added to the three-state model, i.e. the new and exit states. The
new state corresponds to a process that has just been defined, e.g. a new user trying to log onto a
time-sharing system. In this instance, any tables needed to manage the process are allocated and
built.
In the new state the OS has performed the necessary action to create the process but has not
committed itself to its execution, i.e. the process is not in main memory.
[Figure: five-state model — New → (Admit) → Ready; Ready → (Dispatch) → Running;
Running → (Time out) → Ready; Running → (Event wait) → Blocked;
Blocked → (Event occurs) → Ready; Running → (Release) → Exit]
i. Running: The process is currently being executed, i.e. it is actually using the CPU at that
instant.
ii. Ready: The process is waiting to be assigned to a processor, i.e. it can execute as soon as
the CPU is allocated to it.
iii. Waiting/blocked: The process is waiting for some event to occur (e.g. I/O completion),
such as completion of another process that provides the first process with necessary data, a
synchronization signal from another process, or an I/O or timer interrupt.
iv. New: The process has just been created but has not yet been admitted to the pool of
executable processes by the OS, i.e. the new process has not been loaded into main memory
although its PCB has been created.
v. Terminated/exit: The process has finished execution or has been released from the pool of
executable processes by the OS, either because it halted or because it was aborted for some
reason.
When a new process is to be added to those concurrently being managed, the OS builds the data
structures that are used to manage the process and allocates address space to the process. A process
may terminate for any of the following reasons:
i. Normal completion
The process executes an OS service call to indicate that it has completed running.
ii. Time limit exceeded
The process has run longer than the specified total time limit.
iii. Memory unavailable
The process requires more memory than the system can provide.
iv. Bounds violation
The process tries to access a memory location that it is not allowed to access.
v. Protection error
The process attempts to use a resource or a file that it is not allowed to use, or tries to use it
in an improper fashion, such as writing to a read-only file.
vi. Arithmetic error
The process attempts a prohibited computation, e.g. division by zero, or tries to store a
number larger than the hardware can accommodate.
vii. Time overrun
The process has waited longer than a specified maximum time for a certain event to occur.
viii. I/O failure
An error occurs during I/O, such as inability to find a file, or failure to read or write after a
specified number of retries.
ix. Invalid instruction
The process attempts to execute a non-existent instruction.
x. Data misuse
A piece of data is of the wrong type or is not initialized.
xi. Operator/OS intervention
For some reason the operator or the OS has terminated the process, e.g. if a deadlock exists.
Think time is the time spent by the user of an interactive system figuring out the next request
(measured in seconds). For such timing measures, the goal is to optimize both the average and the
amount of variation (but beware the ogre Predictability).
INTERPROCESS COMMUNICATION (IPC)
Interprocess communication is a capability supported by some operating systems that allows one
process to communicate with another process. The processes can be running on the same computer
or on different computers connected through a network.
IPC enables one application to control another, and several applications to share the same data
without interfering with one another. IPC is required in all multiprocessing systems.
Definitions of Terms
1. Race Conditions
A race condition is a situation where several processes access and manipulate shared data
concurrently. The final value of the shared data depends upon which process finishes last. To
prevent race conditions, concurrent processes must be synchronized.
Example:
deposit(amount)
{
balance = balance + amount;
}
where we assume that balance is a shared variable. Suppose process P1 calls deposit(10) and process
P2 calls deposit(20). If one completes before the other starts, the combined effect is to add 30 to the
balance. However, the calls may happen at exactly the same time. Suppose the initial balance is 100
and the two processes run on different CPUs. One possible result is that both processes read the
value 100 before either writes back, so the final balance is 110 or 120 and one deposit is lost.
This kind of bug is called a race condition. It only occurs under certain timing conditions, so it is
very difficult to track down, since it may disappear when you try to debug it. It may be nearly
impossible to detect by testing since it may occur very rarely. The only way to deal with race
conditions is through very careful coding. Systems that support processes contain constructs called
synchronization primitives to avoid these problems.
2. Critical Sections
These are sections in a process during which the process must not be interrupted, especially when
the resource it requires is shared. It is necessary to protect critical sections with interlocks which
allow only one thread (process) at a time to traverse them.
3. Sleep and Wakeup
These are inter-process communication primitives that block instead of wasting CPU time when
processes are not allowed to enter their critical sections. One of the simplest is the pair SLEEP and
WAKEUP.
SLEEP is a system call that causes the caller to block, that is, be suspended until another process
wakes it up. The WAKEUP call has one parameter, the process to be awakened.
E.g. the producer-consumer problem, where the producer puts information into a buffer and the
consumer takes it out. The producer goes to sleep if the buffer is already full, to be awakened when
the consumer has removed one or more items. Similarly, if the consumer wants to remove an item
from the buffer and sees that the buffer is empty, it goes to sleep until the producer puts something
in the buffer and wakes it up.
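The producer-consumer behaviour just described can be sketched with a condition variable, whose wait() and notify_all() play the roles of SLEEP and WAKEUP (a hypothetical Python sketch; the buffer size and item count are arbitrary):

```python
import threading, collections

BUF_SIZE = 3
buffer = collections.deque()
cond = threading.Condition()         # wait() = SLEEP, notify_all() = WAKEUP

def producer(items):
    for item in items:
        with cond:
            while len(buffer) == BUF_SIZE:   # buffer full: go to sleep
                cond.wait()
            buffer.append(item)
            cond.notify_all()                # wake any sleeping consumer

def consumer(n, out):
    for _ in range(n):
        with cond:
            while not buffer:                # buffer empty: go to sleep
                cond.wait()
            out.append(buffer.popleft())
            cond.notify_all()                # wake any sleeping producer

out = []
p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5, out))
p.start(); c.start(); p.join(); c.join()
print(out)  # [0, 1, 2, 3, 4]
```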
4. Event counters
An event counter is another data structure that can be used for process management. Like a
semaphore, it has an integer count and a set of waiting process identifications. Unlike semaphores,
the count variable only increases. Three operations are defined on an event counter E: read(E)
returns its current value, advance(E) increments it, and await(E, v) blocks the caller until E reaches
the value v. Before a process can access a resource, it first reads the event counter; if the value is
acceptable it advances it, otherwise it waits until the required value is reached.
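A minimal event-counter sketch in Python (illustrative only; the await operation is renamed await_value because await is a reserved word in Python):

```python
import threading

class EventCounter:
    """Event counter: the count only ever increases (unlike a semaphore)."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def read(self):                      # read(E): return the current value
        with self._cond:
            return self._value

    def advance(self):                   # advance(E): increment the counter
        with self._cond:
            self._value += 1
            self._cond.notify_all()      # wake processes awaiting a value

    def await_value(self, v):            # await(E, v): block until E >= v
        with self._cond:
            while self._value < v:
                self._cond.wait()

ec = EventCounter()
t = threading.Thread(target=lambda: [ec.advance() for _ in range(3)])
t.start(); t.join()
ec.await_value(3)        # returns immediately: the counter has reached 3
print(ec.read())  # 3
```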
5. Message Passing
When processes interact with one another, two fundamental requirements must be satisfied:
synchronization and communication. One approach to providing both of these functions is message
passing. Consider a multiprocessor in which each processor (a combination of a processing element
(PE) and a local main memory, possibly including some external communication (I/O) facilities)
communicates with the others via messages transmitted between their local memories. A process
transmits messages to other processes to indicate its state and the resources it is using.
In message passing, two primitives, SEND and RECEIVE, which are system calls, are used. SEND
sends a message to a given destination and RECEIVE receives a message from a given source.
Synchronization
Definition: the coordination of simultaneous threads or processes to complete a task, in order to get
the correct runtime order and avoid unexpected race conditions.
The communication of a message between two processes demands some level of synchronization,
since there is a need to know what happens after a send or receive primitive is issued. The sender
and the receiver can each be blocking or non-blocking. Three combinations are common, but only
one is usually applied in any particular system:
i. Blocking send, blocking receive. Both the sender and the receiver are blocked until the
message is delivered. This allows for tight synchronization.
ii. Non-blocking send, blocking receive. Although the sender may continue on, the receiver is
blocked until the requested message arrives. This method is effective since it allows a process
to send messages to a variety of destinations as quickly as possible.
iii. Non-blocking send, non-blocking receive. Neither party is required to wait. Useful for
concurrent programming.
Addressing
When a message is to be sent, it is necessary to specify in the send primitive which process is to
receive it. Addressing can be either direct or indirect.
Direct addressing
The send primitive includes a specific identifier of the destination process. There are two ways to
handle the receive primitive:
i. Require that the process explicitly designate a sending process, i.e. a process must know
ahead of time from which process a message is expected.
ii. Use implicit addressing, where the source parameter of the receive primitive possesses
a value returned when a receive operation has been performed.
Indirect addressing
In this case, instead of sending a message directly to the receiver, the message is sent to a shared
data structure consisting of a queue that can temporarily hold messages. Such queues are often
referred to as mailboxes.
[Figure: general message format — the header contains the message type, destination ID, source ID,
message length and control information]
6. Equivalence of primitives
Many new IPC primitives have been proposed, such as sequencers, path expressions and serializers,
but they are similar to the existing ones. One can build new methods or schemes from the four
different inter-process communication primitives: semaphores, monitors, messages and event
counters. The following sections cover the essentially equivalent semaphores, monitors and
messages.
1. Mutual Exclusion
Mutual exclusion is a way of making sure that if one process is using shared modifiable data, the
other processes are excluded from doing the same thing. It is a way of making sure that processes
are not in their critical sections at the same time. There are three broad approaches to providing it:
i. Leave the responsibility with the processes themselves: this is the basis of most software
approaches. These approaches are usually highly error-prone and carry high overheads.
ii. Allow access to shared resources only through special-purpose machine instructions, i.e. a
hardware approach. These approaches are faster but still do not offer a complete solution to
the problem, e.g. they cannot guarantee the absence of deadlock and starvation.
iii. Provide support through the operating system or through the programming language. We
shall outline three approaches in this category: semaphores, monitors, and message passing.
2. Semaphores
A semaphore is a variable with an integer value that is used to manage concurrent processes:
processes use it to send signals to other processes. Semaphores can only be accessed through two
operations:
P (Wait or Down)
V (Signal or Up)
A semaphore is a control or synchronization variable (taking on non-negative integer values)
associated with each critical resource R, indicating when the resource is in use. E.g. with a binary
semaphore S, each process must first read S; if S = 1 (busy) it does not take control of resource R;
if S = 0, it takes control, sets S to 1 and proceeds to use R.
Advantages of semaphores:
i. Machine independent.
ii. Simple
iii. Powerful (embody both exclusion and waiting).
iv. Correctness is easy to determine.
v. Work with many processes.
vi. Can have many different critical sections with different semaphores.
vii. Can acquire many resources simultaneously.
viii. Can permit multiple processes into the critical section at once, if that is desirable.
They do a lot more than just mutual exclusion.
Disadvantages of semaphores:
i. Semaphores do not completely eliminate race conditions and other problems (like deadlock).
ii. Incorrect formulation of solutions, even those using semaphores, can result in problems.
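As a concrete sketch of the P and V operations, Python's threading.Semaphore exposes them as acquire and release (illustrative only; the process names P1 and P2 are arbitrary):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: count 1 means resource free
shared_log = []

def use_resource(name):
    sem.acquire()              # P / Wait / Down: block while the resource is busy
    shared_log.append(f"{name} enters")
    shared_log.append(f"{name} leaves")
    sem.release()              # V / Signal / Up: free the resource

workers = [threading.Thread(target=use_resource, args=(n,)) for n in ("P1", "P2")]
for w in workers:
    w.start()
for w in workers:
    w.join()
# Entries always come in enter/leave pairs: the critical section never interleaves.
print(shared_log)
```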
3. Monitor
A monitor is a higher-level synchronization construct: a collection of procedures, variables and data
structures that are all grouped together in a special kind of module or package, together with
condition variables used to block a thread until a particular condition is true. Thus a monitor has
shared data, a set of atomic operations on the data, and a set of condition variables. Monitors can be
embedded in a programming language, so it is usually the compiler that implements them.
Typical implementation: each monitor has a lock. A process acquires the lock when it begins a
monitor operation and releases it when the operation finishes.
Advantages:
i. Reduces the probability of error; biases the programmer to think about the system in a
certain way.
Disadvantages:
i. Absence of concurrency: since only one process can be active within a monitor at a time,
there is a possibility of deadlock in the case of nested monitor calls.
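A monitor can be approximated in Python as a class in which every operation takes one internal lock, plus a condition variable (a hypothetical sketch; the BoundedCounter class is not from the text):

```python
import threading

class BoundedCounter:
    """Monitor sketch: shared data + operations guarded by one lock + a condition."""
    def __init__(self, limit):
        self._lock = threading.Lock()            # acquired on entry to every operation
        self._room = threading.Condition(self._lock)
        self._count = 0
        self._limit = limit

    def increment(self):
        with self._lock:                         # only one thread active in the monitor
            while self._count >= self._limit:
                self._room.wait()                # condition variable: sleep until room
            self._count += 1

    def decrement(self):
        with self._lock:
            self._count -= 1
            self._room.notify()                  # wake a thread waiting on the condition

m = BoundedCounter(2)
m.increment(); m.increment(); m.decrement()
print(m._count)  # 1
```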
4. Deadlock
A deadlock is a situation in which two or more processes sharing the same resource are effectively
preventing each other from accessing the resource, resulting in those processes ceasing to function.
Resources fall into two classes:
i. A preemptable resource is one that can be taken away from a process with no ill effects.
Memory is an example of a preemptable resource.
ii. A nonpreemptable resource, on the other hand, is one that cannot be taken away from a
process without causing ill effects. For example, CD resources are not preemptable at an
arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.
Four conditions must hold simultaneously for deadlock to occur: mutual exclusion, hold and wait,
no preemption, and circular wait.
Mutual exclusion: the resources involved are non-shareable. At least one resource must be held in a
non-shareable mode, that is, only one process at a time claims exclusive control of the resource. If
another process requests that resource, the requesting process must be delayed until the resource
has been released.
Circular wait: the processes in the system form a circular list or chain where each process in the list
is waiting for a resource held by the next process in the list.
Deadlock Prevention
Havender, in his pioneering work, showed that since all four of the conditions are necessary for
deadlock to occur, deadlock might be prevented by denying any one of them.
The no-preemption condition can be alleviated by forcing a process waiting for a resource
that cannot immediately be allocated to relinquish all of its currently held resources, so that
other processes may use them to finish. Suppose a system does allow processes to hold
resources while requesting additional resources. Consider what happens when a request
cannot be satisfied: one process holds resources a second process may need in order to
proceed, while the second process holds resources needed by the first. This is a deadlock.
This strategy requires that when a process holding some resources is denied a request for
additional resources, it must release its held resources and, if necessary, request them again
together with the additional ones. Implementation of this strategy effectively denies the
no-preemption condition.
High cost: when a process releases resources it may lose all of its work to that point. One
serious consequence of this strategy is the possibility of indefinite postponement
(starvation): a process might be held off indefinitely as it repeatedly requests and releases
the same resources.
The circular-wait condition can be denied by imposing a linear ordering on resource types, e.g.:
1 ≡ Card reader
2 ≡ Printer
3 ≡ Plotter
4 ≡ Tape drive
5 ≡ Card punch
Now the rule is this: processes can request resources whenever they want to, but all requests
must be made in numerical order. A process may request first a printer and then a tape drive
(order: 2, 4), but it may not request first a plotter and then a printer (order: 3, 2). The problem
with this strategy is that it may be impossible to find an ordering that satisfies everyone.
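The numerical-ordering rule can be checked mechanically; this sketch uses the resource numbering above (the function name request_allowed is an assumption, not from the text):

```python
# Resource numbering from the text: 1=card reader, 2=printer, 3=plotter,
# 4=tape drive, 5=card punch.
ORDER = {"card reader": 1, "printer": 2, "plotter": 3,
         "tape drive": 4, "card punch": 5}

def request_allowed(held, wanted):
    """A request is legal only if the wanted resource outranks everything held."""
    return all(ORDER[wanted] > ORDER[r] for r in held)

print(request_allowed(["printer"], "tape drive"))  # True  (order 2, then 4)
print(request_allowed(["plotter"], "printer"))     # False (order 3, then 2)
```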
Deadlock Avoidance
Either: each process provides in advance the maximum number of resources of each type it needs.
With this information, there are algorithms that can ensure the system will never enter a deadlock
state. This is deadlock avoidance.
A sequence of processes <P1, P2, …, Pn> is a safe sequence if, for each process Pi in the sequence,
its resource requests can be satisfied by the remaining resources plus the sum of all resources held
by P1, P2, …, Pi-1. This means we can suspend Pi and run P1, P2, …, Pi-1 until they complete;
then Pi will have all the resources it needs to run.
A state is safe if the system can allocate resources to each process (up to its maximum, of course) in
some order and still avoid a deadlock. In other words, a state is safe if there is a safe sequence.
Otherwise, if no safe sequence exists, the system state is unsafe. An unsafe state is not necessarily a
deadlock state; on the other hand, a deadlock state is always an unsafe state.
Example: a system has 12 tapes and three processes A, B, C, where A holds 5 tapes (maximum need
10), B holds 2 (maximum need 4) and C holds 2 (maximum need 9). The system has
12-(5+2+2)=3 free tapes. Then <B, A, C> is a safe sequence (safe state): since B needs only 2 more
tapes, it can take 2, run, and return its 4. The system then has (3-2)+4=5 free tapes, so A can take
all 5 and run. Finally, A returns its 10 tapes, and C can take the 7 it still needs.
Now suppose that at time t1 C is granted one more tape, leaving 2 free. At this point only B can take
these 2 and run; it returns 4, making 4 free tapes available — enough for neither A (which needs 5)
nor C (which now needs 6). The grant has moved the system to an unsafe state.
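The safe-state test in the tape example can be sketched as a simple banker's-algorithm-style check for a single resource type (illustrative; the figures are those of the example above):

```python
def is_safe(free, holds, max_need):
    """Return True if some safe sequence exists (single resource type).
    holds[i] and max_need[i] describe process i."""
    remaining = list(range(len(holds)))
    while remaining:
        for p in remaining:
            if max_need[p] - holds[p] <= free:   # p can finish with the free pool
                free += holds[p]                 # p runs and returns all it holds
                remaining.remove(p)
                break
        else:
            return False                         # nobody can finish: unsafe
    return True

# 12 tapes: A holds 5 (max 10), B holds 2 (max 4), C holds 2 (max 9) -> 3 free
print(is_safe(3, [5, 2, 2], [10, 4, 9]))   # True: <B, A, C> is a safe sequence
# After granting C one more tape: 2 free, C holds 3
print(is_safe(2, [5, 2, 3], [10, 4, 9]))   # False: unsafe state
```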
Or: a deadlock avoidance algorithm ensures that the system is always in a safe state, so no deadlock
can occur. Resource requests are granted only if, after the grant, the system is still in a safe state.
Consequently, resource utilization may be lower than in systems that do not use a deadlock
avoidance algorithm.
Deadlock Detection
Deadlock detection is the process of actually determining that a deadlock exists and identifying the
processes and resources involved in it. The basic idea is to check allocation against resource
availability for all possible allocation sequences to determine whether the system is in a deadlocked
state. Of course, the detection algorithm is only half of this strategy: once a deadlock is detected,
there needs to be a way to recover, and several alternatives exist (e.g. terminating one or more of the
deadlocked processes, or pre-empting their resources).
These methods are expensive in the sense that each iteration calls the detection algorithm until the
system proves to be deadlock-free. The complexity of the algorithm is O(N²), where N is the number
of processes. Another potential problem is starvation: the same process may be killed repeatedly.
PROCESS SCHEDULING
Introduction
In a multiprogramming computer, several processes will be competing for use of the processor. The
OS has the task of determining the optimum sequence and timing of assigning processes to the
processor. This activity is called scheduling.
The part of the OS that makes this decision (which process runs first) is called the scheduler; the
algorithm it uses is called the scheduling algorithm.
Objectives of Scheduling:
These objectives are in terms of the system's performance and behavior:
i. Maximize system throughput. The number of processes completed per time unit is
called throughput. Higher throughput means more jobs get done.
ii. Be 'fair' to all users. This does not mean all users are treated equally, but rather
consistently, relative to the importance of the work being done.
iii. Provide tolerable response (for on-line users) or turn-around time (for batch users).
Minimize the time batch users must wait for output. (Turnaround time is the total time
taken between the submission of a program for execution and the return of the complete
output to the customer.)
iv. Degrade performance gracefully. If the system becomes overloaded, it should not
'collapse', but should avoid further loading (e.g. by inhibiting any new jobs or users)
and/or temporarily reduce the level of service (e.g. response time).
v. Be consistent and predictable. The response and turn-around time should be relatively
stable from day to day.
vi. Maximize efficiency/CPU utilization: keep the CPU busy 100% of the time.
Low-level scheduling (LLS) is the most complex and significant of the scheduling levels. Whereas
the high-level scheduler (HLS) and medium-level scheduler (MLS) operate over time scales of
seconds or minutes, the LLS makes critical decisions many times every second.
The LLS is invoked whenever the current process relinquishes control, which occurs when the
process calls for an I/O transfer or some other interrupt occurs.
In a non-preemptive scheme the running process retains the processor until it 'voluntarily' gives it
up; it is not affected by external events.
A preemptive scheme incurs greater overheads, since it generates more context switches, but is often
desirable in order to prevent one (possibly long) process from monopolizing the processor and to
guarantee a reasonable level of service for all processes. A preemptive scheme is generally necessary
in an on-line environment and essential in a real-time one.
A disadvantage of cooperative scheduling is that the OS does not have overall control of the
situation: it is possible for an application to incur an error condition that prevents it from
relinquishing the processor, thereby 'freezing' the whole computer.
1. Shortest Job First (SJF)
This non-preemptive policy reverses the bias against short jobs found in the FCFS scheme by
selecting from the READY queue the process which has the shortest estimated run time. A job of
expected short duration will jump ahead of longer jobs in the queue. This is a priority scheme, where
the priority is the inverse of the estimated run time.
This scheme appears to be more equitable, with no process having a large wait-to-run-time ratio.
Disadvantages of SJF:
i. A long job in the queue may be delayed indefinitely by a succession of smaller jobs
arriving in the queue.
ii. The scheduling sketched above assumes that the job list is constant; in practice, however,
before process A is reached (when process C is done), another job of, say, 9 minutes could
arrive and be placed in front of A. This queue jumping could occur many times, effectively
preventing job A from starting at all. This situation is known as starvation.
iii. SJF is more applicable to batch processing, since it requires that an estimate of run time be
available, which could be supplied in the JCL commands for the job.
2. First-Come-First-Served (FCFS)
Also known as FIFO (First In First Out). The process that requests the CPU first is allocated the
CPU first. This can easily be implemented using a queue. FCFS is not preemptive: once a process
has the CPU, it will occupy the CPU until the process completes or voluntarily enters the wait state.
FCFS is a non-preemptive scheme, since it is activated only when the current process relinquishes
control. FCFS favors long jobs over short ones; e.g. if we assume the arrival in the READY queue of
four processes in the sequence numbered, one can calculate how long each process has to wait.
Advantages of FCFS
i. It's a fair scheme, in that processes are served in the order of arrival.
Disadvantages of FCFS
i. It is easy to have the convoy effect: all the processes wait for one big process to get off
the CPU, so CPU utilization may be low. Consider one CPU-bound process running with
many I/O-bound processes.
ii. It is in favor of long processes and may not be fair to short ones. What if your 1-minute
job is behind a 10-hour job?
iii. It is troublesome for time-sharing systems, where each user needs to get a share of the CPU
at regular intervals.
FCFS is rarely used on its own but is often used in conjunction with other methods.
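The contrast between FCFS and SJF waiting times can be computed directly (a sketch with hypothetical burst times; one 10-minute job ahead of three short ones shows the convoy effect):

```python
def waiting_times(burst_times):
    """Non-preemptive FCFS: each job waits for the sum of all earlier bursts."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)
        elapsed += burst
    return waits

jobs = [10, 1, 2, 1]                       # hypothetical bursts, long job first
fcfs = waiting_times(jobs)                 # [0, 10, 11, 13] -> average 8.5
sjf = waiting_times(sorted(jobs))          # [0, 1, 2, 4]    -> average 1.75
print(sum(fcfs) / len(fcfs), sum(sjf) / len(sjf))
```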
3. Round-Robin (RR)
In the round-robin scheme, a process is selected for running from the READY queue in FIFO
sequence. However, if the process runs beyond a certain fixed length of time, called the time
quantum, it is interrupted and returned to the end of the READY queue. That is, each active process
is given a 'time-slice' in rotation.
The timing required by this scheme is obtained by using a hardware timer which generates an
interrupt at pre-set intervals. RR is effective in a time-sharing environment, where it is desirable to
provide an acceptable response time for every user and where the processing demands of each user
will often be relatively low and sporadic.
The RR scheme is preemptive, but preemption occurs only by expiry of the time quantum.
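The RR mechanism can be sketched as a queue of (process, remaining time) pairs (hypothetical process names and burst times):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which processes finish under RR with this quantum."""
    ready = deque(bursts)                 # READY queue in FIFO sequence
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            ready.append((name, remaining - quantum))  # slice expired: requeue
        else:
            finished.append(name)                      # process completes
    return finished

print(round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2))  # ['B', 'C', 'A']
```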
4. Priority Scheduling
Each process is assigned a priority and the process with the highest priority is allowed to run first.
To prevent a high-priority process from running indefinitely, the scheduler may decrease the priority
of the currently running process at each clock tick (i.e. at each clock interrupt).
Internal priority: determined by time limits, memory requirements, the number of open files, and
so on.
External priority: not controlled by the OS (e.g. the importance of the process).
The scheduler always picks the process (in ready queue) with the highest priority to run. FCFS and
SJF are special cases of priority scheduling.
Priority scheduling can be non-preemptive or preemptive. With preemptive priority scheduling, if a
newly arrived process has a higher priority than the running one, the latter is preempted. Indefinite
blocking (or starvation) may occur: a low-priority process may never have a chance to run.
Aging is a technique to overcome the starvation problem: gradually increase the priority of
processes that wait in the system for a long time. Example: if 0 is the highest (resp. lowest) priority,
then we could decrease (resp. increase) the priority of a waiting process by 1 every fixed period
(e.g. every minute).
Advantage
i. Important jobs are attended to first.
Disadvantage
i. A high-priority process may keep a lower-priority process waiting indefinitely (starvation).
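Aging as described — improving the priority of waiting processes by 1 each period, with 0 as the highest priority — can be sketched as follows (the function and parameter names are assumptions):

```python
def age(priorities, waiting, step=1):
    """One aging period: decrease (improve) each waiting process's priority
    by `step`, never going below 0 (the highest priority)."""
    return [max(0, p - step) if w else p
            for p, w in zip(priorities, waiting)]

prios = [7, 3, 0]                  # three processes; the first two are waiting
for _ in range(5):                 # five aging periods pass
    prios = age(prios, waiting=[True, True, False])
print(prios)  # [2, 0, 0] -- long waiters gradually climb toward the top
```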
5. Shortest Remaining Time (SRT)
SRT is a preemptive version of SJF. At the time of dispatching, the shortest queued process, say job
A, will be started; however, if during the running of this process another job arrives whose run time
is shorter than job A's remaining run time, then job A will be preempted to allow the new job to
start. SRT favors short jobs even more than SJF, since a currently running long job can be ousted by
a new shorter one.
The danger of starvation of long jobs still exists in this scheme. Implementation of SRT requires an
estimate of total run time and measurement of elapsed run time.
6. Highest Response Ratio Next (HRN)
This scheme is derived from the SJF method, modified to reduce SJF's bias against long jobs and to
avoid the danger of starvation.
HRN derives a dynamic priority value based on the estimated run time and the incurred waiting
time. The priority for each process is calculated from the formula:

Priority, P = (time waiting + run time) / run time

The process with the highest priority value is selected for running. When processes first appear in
the READY queue the time waiting is zero, and thus P equals 1 for all processes. After a short
period of waiting, shorter jobs will be favored; e.g. consider jobs A and B with run times of 10 and
50 minutes. After each has waited for 5 minutes, their priorities are:

A: P = (5 + 10)/10 = 1.5        B: P = (5 + 50)/50 = 1.1

Note: if A has only just started waiting (wait time = 0), job B could be chosen in preference to A.
As time passes, the wait time becomes more significant; if B has been waiting for, say, 30 minutes,
then its priority would be

B: P = (30 + 50)/50 = 1.6

In this technique a job cannot be starved, since the effect of the wait time in the numerator of the
priority expression will eventually predominate over short jobs with smaller wait times.
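The HRN priority formula and the worked figures above can be checked directly:

```python
def hrn_priority(wait, run):
    # Highest Response Ratio Next: P = (time waiting + run time) / run time
    return (wait + run) / run

print(hrn_priority(5, 10))   # 1.5 -- job A (run time 10) after waiting 5 minutes
print(hrn_priority(5, 50))   # 1.1 -- job B (run time 50) after waiting 5 minutes
print(hrn_priority(30, 50))  # 1.6 -- job B after waiting 30 minutes
```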
7. Multiple queues
The idea is to assign a large quantum once in a while rather than giving processes a small quantum
frequently (to reduce swapping).
8. Guaranteed scheduling
The system keeps track of how much CPU time each process has had since its creation. If there are
n users logged in and working, then each user is entitled to 1/n of the CPU power. The algorithm
runs the process with the lowest ratio of actual CPU time consumed to CPU time entitled.
9. Lottery scheduling
Processes are given lottery tickets for various system resources such as CPU time. Whenever a
scheduling decision has to be made, a lottery ticket is chosen at random and the process holding the
ticket gets the resource.
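A lottery draw over ticket holdings can be sketched as follows (illustrative only; the function name, process names and ticket counts are assumptions; a seeded random generator makes the run repeatable):

```python
import random

def lottery_pick(tickets, rng):
    """tickets maps process name -> number of lottery tickets it holds."""
    draw = rng.randrange(sum(tickets.values()))   # pick one ticket at random
    for name, count in tickets.items():
        if draw < count:
            return name                           # holder of the winning ticket
        draw -= count

wins = {"A": 0, "B": 0}
rng = random.Random(0)
for _ in range(10_000):            # A holds 3x as many tickets as B...
    wins[lottery_pick({"A": 75, "B": 25}, rng)] += 1
print(wins)                        # ...so A wins roughly 75% of the draws
```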
REAL-TIME SCHEDULING
A real-time system is one where one or more physical devices external to the computer generate
stimuli and the computer must react appropriately to them within a fixed amount of time, e.g. a
computer gets bits from the drive of a CD player and converts them into music within a very tight
time interval.
A program is divided into a number of processes. The external events that a real-time system may
have to respond to may be categorized as periodic (occurring at regular intervals) or aperiodic
(occurring unpredictably).
The scheduler assigns to each process a priority proportional to its frequency of occurrence.
Earliest-deadline-first scheduling runs the first process on the list, the one with the closest deadline.
MULTILEVEL QUEUE SCHEDULING
The READY queue is partitioned into several separate queues (e.g. foreground and background).
Each queue has its own scheduling algorithm (e.g. RR for the foreground and FCFS for the
background). A priority is assigned to each queue, and a higher-priority process may preempt a
lower-priority process.
Multilevel queue with feedback scheduling is similar to the multilevel queue; however, it allows
processes to move between queues. If a process uses more (less) CPU time, it is moved to a queue
of lower (higher) priority. As a result, I/O-bound (CPU-bound) processes will end up in higher
(lower) priority queues.
All runnable processes are kept in main memory; if memory is insufficient, some of the runnable
processes are kept on disk, in whole or in part.
When the scheduler has finished its work, it suspends itself by executing a wait operation on a
semaphore. This semaphore is signaled whenever a scheduling event occurs. However, in an OS
which neither prevents nor avoids deadlock, it is possible that no scheduling event ever occurs.