2.2 OS

The document discusses CPU scheduling algorithms, focusing on Shortest-Job-First (SJF), Priority, and Round-Robin (RR) scheduling. SJF is optimal for minimizing average waiting time but is hard to apply to short-term scheduling because the length of the next CPU burst cannot be known in advance. RR is designed for time-sharing systems; it adds preemption and requires careful choice of the time quantum to balance context-switch overhead against turnaround time.

Cont’d

• As an example of SJF scheduling, consider the following set of processes, with the
length of the CPU burst given in milliseconds:

      Process   Burst Time
      P1        6
      P2        8
      P3        7
      P4        3

• Using SJF scheduling, we would schedule these processes according to the following
Gantt chart:

      | P4 |  P1  |  P3  |   P2   |
      0    3      9      16       24

• The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9
milliseconds for process P3, and 0 milliseconds for process P4
• Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds
• By comparison, if we were using the FCFS scheduling scheme, the average waiting
time would be 10.25 milliseconds
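The waiting-time arithmetic above can be checked with a short sketch. The burst lengths (P1 = 6, P2 = 8, P3 = 7, P4 = 3 ms, all arriving at time 0) are the values consistent with the stated waiting times:

```python
# Sketch: non-preemptive SJF vs. FCFS waiting times for the example above.
# Burst lengths are reconstructed from the stated waiting times.

def avg_waiting_time(bursts, order):
    """Waiting time of each process = total burst time run before it."""
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += bursts[p]
    return waits, sum(waits.values()) / len(waits)

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

# SJF: run in order of increasing burst length.
sjf_order = sorted(bursts, key=bursts.get)            # P4, P1, P3, P2
waits, avg = avg_waiting_time(bursts, sjf_order)
print(waits, avg)    # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} 7.0

# FCFS: run in submission order P1..P4.
_, fcfs_avg = avg_waiting_time(bursts, ["P1", "P2", "P3", "P4"])
print(fcfs_avg)      # 10.25
```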
Cont’d
• The SJF scheduling algorithm is provably optimal, in that it gives the minimum average
waiting time for a given set of processes
• Moving a short process before a long one decreases the waiting time of the short
process more than it increases the waiting time of the long process
• Consequently, the average waiting time decreases
• The real difficulty with the SJF algorithm is knowing the length of the next CPU
request
• For long-term (job) scheduling in a batch system, we can use the process time limit that
a user specifies when he submits the job
• In this situation, users are motivated to estimate the process time limit accurately, since
a lower value may mean faster response but too low a value will cause a time-limit-
exceeded error and require resubmission
• SJF scheduling is used frequently in long-term scheduling.
Cont’d
• Although the SJF algorithm is optimal, it cannot be implemented at the level of
short-term CPU scheduling
• With short-term scheduling, there is no way to know the length of the next CPU burst
• One approach to this problem is to try to approximate SJF scheduling
• We may not know the length of the next CPU burst, but we may be able to predict its
value
• We expect that the next CPU burst will be similar in length to the previous ones
• By computing an approximation of the length of the next CPU burst, we can pick the
process with the shortest predicted CPU burst
• The next CPU burst is generally predicted as an exponential average of the measured
lengths of previous CPU bursts
Cont’d
• We can define the exponential average with the following formula
• Let tn be the length of the nth CPU burst, and let τn+1 be our predicted value for the next
CPU burst. Then, for α, 0 ≤ α ≤ 1, define

      τn+1 = α tn + (1 − α) τn

• The value of tn contains our most recent information, while τn stores the past history
• The parameter α controls the relative weight of recent and past history in our prediction
• If α = 0, then τn+1 = τn, and recent history has no effect
• If α = 1, then τn+1 = tn, and only the most recent CPU burst matters (history is assumed
to be old and irrelevant)
• More commonly, α = 1/2, so recent history and past history are equally weighted
• The initial τ0 can be defined as a constant or as an overall system average
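A minimal sketch of the exponential average, τn+1 = α·tn + (1 − α)·τn. The burst history and the initial guess τ0 = 10 are illustrative values, not from the slides:

```python
# Sketch: exponential averaging of measured CPU-burst lengths.

def predict_bursts(history, alpha=0.5, tau0=10.0):
    """Return the prediction made before each measured burst,
    plus the prediction for the burst not yet seen."""
    tau = tau0
    preds = []
    for t in history:
        preds.append(tau)                  # prediction for this burst
        tau = alpha * t + (1 - alpha) * tau
    preds.append(tau)                      # prediction for the next burst
    return preds

# With alpha = 1/2, each new prediction is the mean of the last
# measured burst and the previous prediction.
print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10.0, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

Note how the prediction lags behind real changes: after the bursts jump from 4 to 13, the estimate climbs gradually (5 → 9 → 11 → 12) rather than immediately.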
Cont’d
• The SJF algorithm can be either preemptive or non-preemptive
• The choice arises when a new process arrives at the ready queue while a previous
process is still executing
• The next CPU burst of the newly arrived process may be shorter than what is left of the
currently executing process
• A preemptive SJF algorithm will preempt the currently executing process, whereas a
non-preemptive SJF algorithm will allow the currently running process to finish its CPU
burst
• Preemptive SJF scheduling is sometimes called shortest-remaining-time-first
scheduling
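Shortest-remaining-time-first can be sketched by simulating one millisecond at a time. The arrival times and bursts below are assumed illustration values (processes arriving while another runs, so preemption actually occurs):

```python
# Sketch: shortest-remaining-time-first (preemptive SJF), simulated
# one time unit at a time. Arrival/burst values are assumptions.

def srtf(procs):
    """procs: {name: (arrival, burst)} -> completion time of each process."""
    remaining = {p: b for p, (a, b) in procs.items()}
    done, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        if not ready:                # CPU idle until the next arrival
            t += 1
            continue
        p = min(ready, key=lambda q: remaining[q])   # shortest remaining time
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            done[p] = t
            del remaining[p]
    return done

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
print(srtf(procs))    # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}
```

Here P1 is preempted at time 1 when P2 arrives with a shorter burst (4 ms vs. P1's remaining 7 ms), exactly the situation described above.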
Priority Scheduling
• The SJF algorithm is a special case of the general priority-scheduling algorithm
• A priority is associated with each process, and the CPU is allocated to the process with
the highest priority
• Equal-priority processes are scheduled in FCFS order
• An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of
the (predicted) next CPU burst
• The larger the CPU burst, the lower the priority, and vice versa
• Note that we discuss scheduling in terms of high priority and low priority
• Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to
4,095
• However, there is no general agreement on whether 0 is the highest or lowest
priority.
Cont’d
• Some systems use low numbers to represent low priority; others use low numbers for
high priority
• This difference can lead to confusion
• In this text, we assume that low numbers represent high priority
• As an example, consider the following set of processes, assumed to have arrived at time
0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds:

      Process   Burst Time   Priority
      P1        10           3
      P2        1            1
      P3        2            4
      P4        1            5
      P5        5            2

• Using priority scheduling, we would schedule these processes according to the
following Gantt chart:

      | P2 |  P5  |    P1    | P3 | P4 |
      0    1      6          16   18   19
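A sketch of this non-preemptive priority schedule, assuming the usual burst/priority values for this example (low number = high priority):

```python
# Sketch: non-preemptive priority scheduling; (burst, priority) per process.
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}

order = sorted(procs, key=lambda p: procs[p][1])   # P2, P5, P1, P3, P4
waits, clock = {}, 0
for p in order:
    waits[p] = clock          # waiting time = time before the process starts
    clock += procs[p][0]

print(order)                                # Gantt-chart order
print(sum(waits.values()) / len(waits))     # 8.2
```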
Cont’d
• The average waiting time is 8.2 milliseconds
• Priorities can be defined either internally or externally
• Internally defined priorities use some measurable quantity or quantities to compute the
priority of a process
• For example, time limits, memory requirements, the number of open files, and the ratio
of average I/O burst to average CPU burst have been used in computing priorities
• External priorities are set by criteria outside the operating system, such as the
importance of the process, the type and amount of funds being paid for computer
use, the department sponsoring the work, and other, often political, factors
• Priority scheduling can be either preemptive or non-preemptive
• When a process arrives at the ready queue, its priority is compared with the priority of
the currently running process
Cont’d
• A major problem with priority scheduling algorithms is indefinite blocking, or
starvation
• A process that is ready to run but waiting for the CPU can be considered blocked
• Problem: A priority scheduling algorithm can leave some low-priority processes
waiting indefinitely
• In a heavily loaded computer system, a steady stream of higher-priority processes can
prevent a low-priority process from ever getting the CPU
• Generally, one of two things will happen
• Either the process will eventually be run (at 2 A.M. Sunday, when the system is finally
lightly loaded), or the computer system will eventually crash and lose all unfinished
low-priority processes (Rumor has it that when they shut down the IBM 7094 at MIT in
1973, they found a low-priority process that had been submitted in 1967 and had not yet
been run.)
• A solution to the problem of indefinite blockage of low-priority processes is aging
• Solution: Aging involves gradually increasing the priority of processes that wait in
the system for a long time
Cont’d
• For example, if priorities range from 127 (low) to 0 (high), we could increase the
priority of a waiting process by 1 every 15 minutes
• Eventually, even a process with an initial priority of 127 would have the highest priority
in the system and would be executed
• In fact, it would take no more than 32 hours for a priority-127 process to age to a
priority-0 process
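The aging rule above (priority improves by 1 every 15 minutes, on a 127-low to 0-high scale) can be sketched directly; the process names are made-up:

```python
# Sketch: aging waiting processes. Priorities run 127 (low) to 0 (high);
# every 15 simulated minutes a waiting process's priority improves by 1.

def age(priorities, minutes):
    """Return priorities after `minutes` of waiting (0 is the ceiling)."""
    steps = minutes // 15
    return {p: max(0, pri - steps) for p, pri in priorities.items()}

waiting = {"batch_job": 127, "report": 60}
print(age(waiting, 15 * 127))   # both reach priority 0
# 127 steps * 15 minutes = 1905 minutes, i.e. just under 32 hours
```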
Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for timesharing
systems
• It is similar to FCFS scheduling, but preemption is added to enable the system to switch
between processes
• A small unit of time, called a time quantum or time slice, is defined
• A time quantum is generally from 10 to 100 milliseconds in length
• The ready queue is treated as a circular queue
Cont’d
• The CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum
• To implement RR scheduling, we again treat the ready queue as a FIFO queue of
processes
• New processes are added to the tail of the ready queue
• The CPU scheduler picks the first process from the ready queue, sets a timer to
interrupt after 1 time quantum, and dispatches the process
• One of two things will then happen
• The process may have a CPU burst of less than 1 time quantum
• In this case, the process itself will release the CPU voluntarily
• The scheduler will then proceed to the next process in the ready queue
• If the CPU burst of the currently running process is longer than 1 time quantum, the
timer will go off and will cause an interrupt to the operating system
Cont’d
• A context switch will be executed, and the process will be put at the tail of the ready
queue
• The CPU scheduler will then select the next process in the ready queue
• The average waiting time under the RR policy is often long
• Consider the following set of processes that arrive at time 0, with the length of the CPU
burst given in milliseconds:

      Process   Burst Time
      P1        24
      P2        3
      P3        3

• If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds
• Since it requires another 20 milliseconds, it is preempted after the first time quantum,
and the CPU is given to the next process in the queue, process P2. Process P2 does not
need 4 milliseconds, so it quits before its time quantum expires
Cont’d
• The CPU is then given to the next process, process P3
• Once each process has received 1 time quantum, the CPU is returned to process P1 for
an additional time quantum
• The resulting RR schedule is as follows:

      | P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
      0    4    7    10   14   18   22   26   30
• Let’s calculate the average waiting time for this schedule. P1 waits for 6 milliseconds
(10 - 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds
• Thus, the average waiting time is 17/3 = 5.66 milliseconds
• In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time
quantum in a row (unless it is the only runnable process)
• If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put
back in the ready queue
• The RR scheduling algorithm is thus preemptive.
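The example's waiting times (average 17/3 ≈ 5.66 ms) can be reproduced with a small RR sketch using the bursts above (P1 = 24, P2 = 3, P3 = 3, quantum 4):

```python
# Sketch: round-robin waiting times with quantum q, all processes at time 0.
from collections import deque

def rr_waiting(bursts, q):
    """Waiting time per process: time spent in the ready queue, not running."""
    queue = deque(bursts.items())
    waits = {p: 0 for p in bursts}
    while queue:
        p, rem = queue.popleft()
        run = min(q, rem)
        for other, _ in queue:        # everyone still queued waits while p runs
            waits[other] += run
        if rem > run:                 # quantum expired: back to the tail
            queue.append((p, rem - run))
    return waits

waits = rr_waiting({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(waits)                              # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(waits.values()) / 3)            # 17/3, about 5.66
```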
Cont’d
• If there are n processes in the ready queue and the time quantum is q, then each process
gets 1/n of the CPU time in chunks of at most q time units
• Each process must wait no longer than (n − 1) × q time units until its next time quantum
• For example, with five processes and a time quantum of 20 milliseconds, each process
will get up to 20 milliseconds every 100 milliseconds
• The performance of the RR algorithm depends heavily on the size of the time quantum
• At one extreme, if the time quantum is extremely large, the RR policy is the same as the
FCFS policy
Cont’d
• In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR
approach can result in a large number of context switches. Assume, for example, that
we have only one process of 10 time units
• If the quantum is 12 time units, the process finishes in less than 1 time quantum, with
no overhead
• If the quantum is 6 time units, however, the process requires 2 quanta, resulting in
a context switch
• If the time quantum is 1 time unit, then nine context switches will occur, slowing the
execution of the process accordingly
• Thus, we want the time quantum to be large with respect to the context switch time
• If the context-switch time is approximately 10 percent of the time quantum, then about
10 percent of the CPU time will be spent in context switching
• In practice, most modern systems have time quanta ranging from 10 to 100
milliseconds
• The time required for a context switch is typically less than 10 microseconds; thus, the
context-switch time is a small fraction of the time quantum
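The context-switch counts in the single-process example above follow directly from the number of quanta a burst needs:

```python
# Sketch: timer-driven context switches for one process of `burst` time units.
# A burst needing k quanta causes k - 1 preemptions (none after the last).
import math

def switches(burst, quantum):
    return math.ceil(burst / quantum) - 1

for q in (12, 6, 1):
    print(q, switches(10, q))   # 12 -> 0, 6 -> 1, 1 -> 9
```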
Cont’d
• Turnaround time also depends on the size of the time quantum
• As we can see from the figure of average turnaround time versus time-quantum size, the
average turnaround time of a set of processes does not necessarily improve as the
time-quantum size increases
• In general, the average turnaround time can be improved if most processes finish their
next CPU burst in a single time quantum
• For example, given three processes of 10 time units each and a quantum of 1 time unit,
the average turnaround time is 29
• If the time quantum is 10, however, the average turnaround time drops to 20
• If context-switch time is added in, the average turnaround time increases even more for
a smaller time quantum, since more context switches are required
• Although the time quantum should be large compared with the context switch time, it
should not be too large
• As we pointed out earlier, if the time quantum is too large, RR scheduling degenerates
to an FCFS policy
• A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time
quantum
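The turnaround figures quoted above (29 with quantum 1, 20 with quantum 10, for three 10-unit processes) can be checked with an RR sketch that ignores context-switch cost:

```python
# Sketch: average turnaround time under RR, context-switch cost ignored.
from collections import deque

def avg_turnaround(bursts, q):
    """All processes arrive at time 0; turnaround = completion time."""
    queue = deque(enumerate(bursts))
    clock, finish = 0, {}
    while queue:
        i, rem = queue.popleft()
        run = min(q, rem)
        clock += run
        if rem > run:
            queue.append((i, rem - run))   # quantum expired
        else:
            finish[i] = clock              # process completed
    return sum(finish.values()) / len(finish)

print(avg_turnaround([10, 10, 10], 1))    # 29.0
print(avg_turnaround([10, 10, 10], 10))   # 20.0
```

With quantum 1 the processes finish at times 28, 29, and 30; with quantum 10 each finishes its whole burst in one quantum, at 10, 20, and 30.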
