
Module 2.0

Module 2 covers CPU scheduling concepts, including the importance of scheduling for efficient resource utilization and fairness in multi-user environments. It details various scheduling algorithms such as First Come First Serve, Round Robin, and Shortest Job First, along with performance criteria like CPU utilization, throughput, and turnaround time. Additionally, it discusses process management, process states, and the role of the Process Control Block (PCB) in managing processes within an operating system.

Module 2 CPU Scheduling

Outline
• Scheduling Concepts
• Performance Criteria
• Process Concept Process States
• Process Transition Diagram
• Schedulers
• Process Control Block (PCB)
• Process address space
• Process identification information
• Threads and their management
• Scheduling Algorithms
• Multiprocessor Scheduling.

What is Scheduling?
• Scheduling decides which process or piece of work runs at a given time so that the work finishes on time.
• CPU Scheduling is the process of allowing one process to use the CPU while another is delayed (on standby) because a resource such as I/O is unavailable, thus making full use of the CPU.
• The purpose of CPU Scheduling is to make the system more efficient, faster, and fairer.
Why do we need Scheduling?
• In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then most of the time the CPU remains idle.
• The task of the operating system is to optimize the utilization of resources.
• If most of the running processes change their state from running to waiting, there may always be a possibility of deadlock in the system.
• To reduce this overhead, the OS needs to schedule the jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
Purpose of Scheduling
• Maximum CPU utilization
• Fair allocation of CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time

Types of Scheduling Algorithms
• The following algorithms can be used to schedule jobs.
1. First Come First Serve
• It is the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Round Robin
• In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes are executed in a cyclic way. Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
3. Shortest Job First
• The job with the shortest burst time gets the CPU first: the shorter the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
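To make the FCFS behaviour concrete, here is a minimal Python sketch (an illustration, not a definitive implementation) that serves hypothetical processes in arrival order and derives completion, turnaround, and waiting times from the formulas used later in this module:

```python
# Minimal non-preemptive FCFS sketch; all process data below is hypothetical.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time) tuples."""
    order = sorted(processes, key=lambda p: p[1])  # serve in arrival order
    clock = 0
    results = {}
    for name, arrival, burst in order:
        start = max(clock, arrival)        # CPU may sit idle until arrival
        completion = start + burst
        turnaround = completion - arrival  # Turnaround = Completion - Arrival
        waiting = turnaround - burst       # Waiting = Turnaround - Burst
        results[name] = (completion, turnaround, waiting)
        clock = completion
    return results

# (completion, turnaround, waiting) for each process
res = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

Note that P3, despite its 1-unit burst, waits behind the longer jobs that arrived earlier; this is the "convoy" behaviour that motivates Shortest Job First.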
Types of Scheduling Algorithms
4. Shortest remaining time first
• It is the preemptive form of SJF. In this algorithm, the OS schedules the job according to its remaining execution time.
5. Priority based scheduling
• In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If the priorities of two processes are the same, they are scheduled according to their arrival time.
6. Highest Response Ratio Next
• In this scheduling algorithm, the process with the highest response ratio is scheduled next. This reduces starvation in the system.
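As a sketch of how HRRN picks the next process, the helper below computes the response ratio, (waiting time + burst time) / burst time, at a single scheduling decision point; the process data and the current clock value are hypothetical.

```python
# Hedged HRRN sketch: pick the ready process with the highest response ratio.

def hrrn_pick(ready, now):
    """ready: list of (name, arrival_time, burst_time); now: current clock.
    Response Ratio = (waiting_time + burst_time) / burst_time."""
    def ratio(proc):
        name, arrival, burst = proc
        waiting = now - arrival
        return (waiting + burst) / burst
    return max(ready, key=ratio)[0]

# At time 10: P1 has waited 10 with burst 5 -> ratio (10+5)/5 = 3.0;
#             P2 has waited 4 with burst 1  -> ratio (4+1)/1  = 5.0
choice = hrrn_pick([("P1", 0, 5), ("P2", 6, 1)], now=10)
```

Because the ratio grows as a process waits, even a long job eventually overtakes shorter newcomers, which is why HRRN limits starvation.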
CPU Scheduling Criteria
• CPU scheduling is the method of deciding which process or task the CPU will run at any given moment.
• It is an essential part of modern operating systems, as it enables multiple processes to share the same processor.
• In short, the CPU scheduler decides the order and priority of the processes to run and allocates CPU time based on parameters such as CPU utilization, throughput, turnaround time, waiting time, and response time.
• CPU scheduling is essential for the system's performance and ensures that processes are executed correctly and on time.

CPU Scheduling Criteria
1. CPU Utilization: The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
2. Throughput: A measure of the work done by the CPU; the number of processes executed and completed per unit of time.
3. Turnaround Time: Turnaround time measures the time it takes for a task or process to complete, from the moment it is submitted to the system until it is fully processed and ready for output.
Turnaround Time = Completion Time - Arrival Time
4. Waiting Time: Waiting time measures the amount of time a task or process waits in the ready queue before it is processed by the CPU.
Waiting Time = Turnaround Time - Burst Time
5. Response Time: Response time measures the time it takes for the system to respond to a user's request or input.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
6. Completion Time: The completion time is the time when the process stops executing, which means the process has completed its burst time and is fully executed.
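The formulas above can be checked with a small worked example (all the times below are hypothetical):

```python
# Worked example of the scheduling-time formulas; the numbers are made up.
arrival, burst = 2, 5            # process arrives at t=2, needs 5 units of CPU
first_cpu_allocation = 4         # first gets the CPU at t=4
completion = 11                  # finishes at t=11

turnaround = completion - arrival            # 11 - 2 = 9
waiting = turnaround - burst                 # 9 - 5  = 4
response = first_cpu_allocation - arrival    # 4 - 2  = 2
```

Note that waiting time here (4 units) exceeds response time (2 units) because, after first getting the CPU, the process was preempted and waited again before completing.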
Times Related to Process

Times Related to Process
• 1. Arrival Time: The time at which the process enters into the ready queue is called the arrival time.
• 2. Burst Time: The total amount of CPU time required to execute the whole process is called the burst time. This does not include waiting time. It is difficult to know the execution time of a process before actually executing it; hence scheduling algorithms based purely on burst time cannot be implemented exactly in practice.
• 3. Completion Time: The Time at which the process enters into the completion state or the time at which
the process completes its execution, is called completion time.
• 4. Turnaround time: The total amount of time spent by the process from its arrival to its completion, is
called Turnaround time.
• 5. Waiting Time: The Total amount of time for which the process waits for the CPU to be assigned is
called waiting time.
• 6. Response Time: The difference between the arrival time and the time at which the process first gets the
CPU is called Response Time.

Importance of CPU Scheduling Criteria
• Efficient resource utilization − By maximizing CPU utilization and throughput, CPU
scheduling ensures that the processor is being used to its full potential. This leads to
increased productivity and efficient use of system resources.
• Fairness − CPU scheduling algorithms that prioritize waiting time and response time
help ensure that all processes have a fair chance to access the CPU. This is important in
multi-user environments where multiple users are competing for the same resources.
• Responsiveness − CPU scheduling algorithms that prioritize response time ensure that
processes that require immediate attention (such as user input or real-time systems) are
executed quickly, improving the overall responsiveness of the system.
• Predictability − CPU scheduling algorithms that prioritize turnaround time provide a
predictable execution time for processes, which is important for meeting deadlines and
ensuring that critical tasks are completed on time.

Process Management in OS
• The operating system is responsible for the following activities in
connection with Process Management:
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization.
• Providing mechanisms for process communication.

What is Process in OS
• A process is a program in execution.
• The execution of a process progresses in a sequential fashion.
• A program is a passive entity while a process is an active entity.
• A process includes much more than just the program code.
• A process is the unit of work in modern time-sharing systems.
• A system has a collection of processes – user processes as well as system processes.
• A process includes the text section, stack, data section, program counter, register
contents and so on.

Process in Operating System
• A process is a running program that serves as the foundation for all computation.
• A process is essentially running software.
• The execution of any process must occur in a specific order.
• A process refers to an entity that helps in representing the fundamental unit of work that
must be implemented in any system.
• A program can be segregated into four pieces when put into memory to become a
process: stack, heap, text, and data.

Components of a Process
Stack  Temporary data like method or function parameters, return addresses, and local variables are stored in the process stack.

Heap  This is the memory that is dynamically allocated to a process during its execution.

Text  This comprises the contents present in the processor's registers as well as the current activity reflected by the value of the program counter.

Data  The global as well as static variables are included in this section.

Process States
• new: The process is being created.

• running: Instructions are being executed.

• waiting: The process is waiting for some event to occur.

• ready: The process is waiting to be assigned to a processor.

• terminated: The process has finished execution.
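One way to make the state model concrete is a small transition table; the sketch below encodes the legal moves implied by the usual five-state diagram (a simplified assumption, since real kernels have additional states such as suspended).

```python
# Legal transitions of the five-state process model, as a lookup table.
TRANSITIONS = {
    "new":        {"ready"},                            # admitted
    "ready":      {"running"},                          # dispatched
    "running":    {"ready", "waiting", "terminated"},   # preempt / I/O wait / exit
    "waiting":    {"ready"},                            # I/O or event completes
    "terminated": set(),                                # no way out
}

def can_move(src, dst):
    """Return True if the transition src -> dst is allowed by the model."""
    return dst in TRANSITIONS.get(src, set())
```

For instance, a waiting process cannot jump straight back to running; it must first re-enter the ready queue and be dispatched.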

Process States

Attributes of a Process
The attributes of a process are used by the operating system to create the process control block (PCB) for each process. This is also called the context of the process. The attributes stored in the PCB are described below.

Attributes of a Process
1. Process ID: When a process is created, a unique ID is assigned to it, which is used to uniquely identify the process in the system.

2. Program counter: The program counter stores the address of the instruction at which the process was suspended. The CPU uses this address when the execution of the process is resumed.

3. Process State: The process, from its creation to its completion, goes through various states: new, ready, running, and waiting.

Attributes of a Process
4. Priority: Every process has its own priority. The process with the highest priority among
the processes gets the CPU first. This is also stored on the process control block.

5. General Purpose Registers: Every process has its own set of registers which are used to
hold the data which is generated during the execution of the process.

6. List of open files: During the Execution, Every process uses some files which need to be
present in the main memory. OS also maintains a list of open files in the PCB.

7. List of open devices: The OS also maintains a list of all open devices used during the execution of the process.

Schedulers

The process manager’s activity is process scheduling, which involves removing the
running process from the CPU and selecting another process based on a specific
strategy.
The scheduler’s purpose is to implement the virtual machine so that each process
appears to be running on its own computer to the user.

Schedulers

Schedulers
1. Long term scheduler
•The job scheduler is another name for Long-Term scheduler.
• It selects processes from the pool (or the secondary memory) and then maintains them in
the primary memory’s ready queue.
•The Multiprogramming degree is mostly controlled by the Long-Term Scheduler.
•The goal of the Long-Term scheduler is to select the best mix of IO and CPU bound
processes from the pool of jobs.
•If the job scheduler selects more IO bound processes, all of the jobs may become stuck, the
CPU will be idle for the majority of the time, and multiprogramming will be reduced as a
result. Hence, the Long-Term scheduler’s job is crucial and could have a Long-Term impact
on the system.

Schedulers
Short term scheduler:
•CPU scheduler is another name for Short-Term scheduler. It chooses one job from the
ready queue and then sends it to the CPU for processing.
•To determine which work will be dispatched for execution, a scheduling method is
utilized.
•The Short-Term scheduler’s task can be essential in the sense that if it chooses a job with a
long CPU burst time, all subsequent jobs will have to wait in a ready queue for a long
period.

Schedulers
Medium term scheduler:
•The switched-out processes are handled by the Medium-Term scheduler.
•If the running state processes require some IO time to complete, the state must be changed
from running to waiting.
•It stops the process from executing in order to make space for other processes.
•Swapped out processes are examples of this, and the operation is known as swapping.
•The Medium-Term scheduler here is in charge of stopping and starting processes.
•Swapping reduces the degree of multiprogramming; it is required to maintain a good mix of processes in the ready queue.

Process Queues
1. Job Queue: Initially, all processes are stored in the job queue. It is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue: The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue: When a process needs some I/O operation to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process is kept on the waiting queue and is used by the processor when the process finishes its I/O.
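The movement of a process between these queues can be sketched with simple FIFO queues; the process names and the number of jobs the long-term scheduler admits are hypothetical.

```python
# Toy model of the three queues: the long-term scheduler admits jobs,
# the short-term scheduler dispatches, and an I/O request moves a
# process to the waiting queue.
from collections import deque

job_queue = deque(["P1", "P2", "P3"])   # secondary memory
ready_queue = deque()                   # primary memory
waiting_queue = deque()

# Long-term scheduler admits two jobs into the ready queue.
for _ in range(2):
    ready_queue.append(job_queue.popleft())

# Short-term scheduler dispatches the head of the ready queue.
running = ready_queue.popleft()

# The running process requests I/O: running -> waiting.
waiting_queue.append(running)
```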

Process Control Block (PCB)
• A Process Control Block is a data structure maintained by
the Operating System for every process.

Process Control Block (PCB)
Process State  The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
Process privileges  Required to allow or disallow access to system resources.
Process ID  Unique identification for each process in the operating system.
Pointer  A pointer to the parent process.
Program Counter  A pointer to the address of the next instruction to be executed for this process.
CPU registers  The various CPU registers whose contents must be saved for the process so it can resume in the running state.
CPU Scheduling Information  Process priority and other scheduling information required to schedule the process.
Memory management information  Information such as the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.
Accounting information  The amount of CPU time used for process execution, time limits, execution ID, etc.
I/O status information  A list of I/O devices allocated to the process.
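As a rough illustration (not any real kernel's layout), the PCB fields listed above can be sketched as a data structure; all field names and defaults here are simplified assumptions.

```python
# Simplified Process Control Block sketch; real kernels store far more.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                        # Process ID
    state: str = "new"                              # Process state
    program_counter: int = 0                        # Resume address
    priority: int = 0                               # Scheduling information
    registers: dict = field(default_factory=dict)   # Saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information
    memory_limits: tuple = (0, 0)                   # Memory management info
    cpu_time_used: float = 0.0                      # Accounting information

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"   # OS updates the PCB as the process changes state
```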

Process Address Space
 Address space may also denote a range of physical or virtual addresses which can be accessed by a processor.
 A process address space is the set of logical addresses that a process references in its code. For example, with 32-bit addresses, the addresses can range from 0 to 0x7fffffff; that is, 2^31 possible numbers.
 The OS also has the additional job of mapping the logical addresses to the actual physical addresses.

Process Address Space
Components of a Process Address Space
•The total amount of shared memory a system can allocate
depends on several factors.
•The overall space may include sections such as stack
space, program size required, memory mapped files, shared
libraries, as well as memory allocated from the heap.
•Memory allocation policies and address spaces used by the
varied operating systems are complicated.
•They may also differ from one operating system to another.

Thread in Operating System
 A thread is a single sequence stream within a process.
 Threads are also called lightweight processes as they possess some of the
properties of processes.
 Each thread belongs to exactly one process.
 In an operating system that supports multithreading, the process can consist
of many threads.
 But threads can run truly in parallel only if there is more than one CPU; otherwise, threads must context-switch on the single CPU.
 A thread is an execution unit within a process that has its own program counter, stack, and set of registers.
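The idea of threads as lightweight execution units sharing one process's data can be sketched in a few lines; here two threads of the same process update a single shared counter, guarded by a lock (the iteration count is arbitrary).

```python
# Two threads in one process sharing the same address space: both
# update the same counter, with a lock protecting the shared data.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # shared data, so guard each update
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for both threads to finish
# counter is now 2000: both threads wrote to the same process memory.
```

Each thread gets its own stack and program counter, yet both see the same `counter` variable, which is exactly the resource-sharing property that distinguishes threads from separate processes.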

Thread in Operating System
 A thread is a single sequential flow of
execution of tasks of a process so it is also
known as thread of execution or thread of
control.
 There is a way of thread execution inside the
process of any operating system.
 Each thread of the same process makes use of
a separate program counter and a stack of
activation records and control blocks.
 Thread is often referred to as a lightweight process.
Components of Threads
1. Stack space
2. Register set
3. Program counter
Types of Thread
In the operating system, there are two types of threads:
1. Kernel-level thread
2. User-level thread

User Level Thread

User-Level Thread (ULT) – Implemented in a user-level library; these threads are not created using system calls. Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel does not know about user-level threads and manages them as if they were single-threaded processes.
1. Advantages of ULT –
1. Can be implemented on an OS that does not support multithreading.
2. Simple representation, since a thread has only a program counter, register set, and stack space.
3. Simple to create, since no intervention of the kernel is needed.
4. Thread switching is fast, since no OS calls need to be made.
2. Limitations of ULT –
1. Little or no coordination between the threads and the kernel.
2. If one thread causes a page fault, the entire process blocks.
Kernel Level Thread
Kernel-Level Thread (KLT) – The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself has a master thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The OS kernel provides system calls to create and manage threads.
1. Advantages of KLT –
1. Since the kernel has full knowledge of all threads in the system, the scheduler may decide to give more time to processes having a large number of threads.
2. Good for applications that frequently block.
2. Limitations of KLT –
1. Slow and inefficient.
2. Each thread requires a thread control block, so it is an overhead.
Advantages of Threading
 Responsiveness: A multithreaded application increases
responsiveness to the user.
 Resource Sharing: Resources like code and data are shared
between threads, thus allowing a multithreaded application to
have several threads of activity within the same address space.
 Increased concurrency: Threads may be running parallelly on
different processors, increasing concurrency in a multiprocessor
machine.
 Lesser cost: It costs less to create and context-switch threads than processes.
 Lesser context-switch time: Threads take less context-switch time than processes.
Differences between Threads and Processes
• Resources: Processes have their own address space and resources, such as memory and
file handles, whereas threads share memory and resources with the program that created
them.
• Scheduling: Processes are scheduled to use the processor by the operating system,
whereas threads are scheduled to use the processor by the operating system or the
program itself.
• Creation: The operating system creates and manages processes, whereas the program or
the operating system creates and manages threads.
• Communication: Because processes are isolated from one another and must rely on inter-process communication mechanisms, they generally have more difficulty communicating with one another than threads do. Threads, on the other hand, can interact directly with other threads within the same program.

Multithreading Models
 Some operating systems provide a combined user-level thread and kernel-level thread facility.
 Solaris is a good example of this combined approach.
 In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
There are three types of multithreading models:
 Many-to-many relationship.
 Many-to-one relationship.
 One-to-one relationship.

Many to Many Model
• In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a particular machine. The figure shows the many-to-many model.
• In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Many to One Model
 The many-to-one model maps many user-level threads to one kernel-level thread.
 Thread management is done in user space by the thread library.
 When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors. If the user-level thread libraries are implemented in the operating system in such a way that the system does not support them, then the kernel threads use the many-to-one relationship mode.

One to One Model
• There is a one-to-one relationship between user-level threads and kernel-level threads.
• This model provides more concurrency than the many-to-one model.
• It also allows another thread to run when a thread makes a blocking system call.
• It supports multiple threads executing in parallel on multiprocessors.
• The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.

Difference between User-Level & Kernel-Level Thread

Difference between Process and Thread

