
Operating Systems

By
Prof. Sagar D. Korde
Department of Information Technology
K J Somaiya College of Engineering, Mumbai-77
(Constituent college of Somaiya Vidyavihar University)

Course Outcomes:
At the end of successful completion of the module a student will be able to
CO2: Demonstrate use of inter-process communication.
Module 2: Process Management
2.1 Processes: Process Concept, process creation, suspension and termination, Process States: 2, 5, 7
state models, Process Description, Process Control Block.
2.2 Threads: Multithreading models, Thread implementations – user-level and kernel-level threads,
Symmetric Multiprocessing.
2.3 Uniprocessor Scheduling: Scheduling Criteria, Types of Scheduling: Preemptive, Non-preemptive,
Long-term, Medium-term, Short-term schedulers. Scheduling Algorithms: FCFS, SJF, SRTF, RR, Priority.
2.4 Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling, Thread Scheduling,
Real-Time Scheduling
2.5 Process Security
Classifications of Multiprocessor Systems

Loosely coupled or distributed multiprocessor, or cluster
• consists of a collection of relatively autonomous systems, each processor having its
own main memory and I/O channels

Functionally specialized processors
• there is a master, general-purpose processor; specialized processors are controlled
by the master processor and provide services to it

Tightly coupled multiprocessor
• consists of a set of processors that share a common main memory and are under
the integrated control of an operating system
Synchronization Granularity and Processes
Independent Parallelism

• No explicit synchronization among processes
• each process represents a separate, independent application or job
• Typical use is in a time-sharing system
• each user is performing a particular application
• the multiprocessor provides the same service as a multiprogrammed uniprocessor
• because more than one processor is available, average response time to the users will be less
Coarse and Very Coarse-Grained Parallelism

• Synchronization among processes, but at a very gross level
• Good for concurrent processes running on a multiprogrammed uniprocessor
• can be supported on a multiprocessor with little or no change to user software
Medium-Grained Parallelism

• A single application can be effectively implemented as a collection of threads within a single process
• the programmer must explicitly specify the potential parallelism of an application
• there needs to be a high degree of coordination and interaction among the threads of an
application, leading to a medium-grain level of synchronization
• Because the various threads of an application interact so frequently, scheduling decisions
concerning one thread may affect the performance of the entire application
Fine-Grained Parallelism

• Represents a much more complex use of parallelism than is found in the use of threads
• Is a specialized and fragmented area with many different approaches
Design Issues

• Scheduling on a multiprocessor involves three interrelated issues:
• the assignment of processes to processors
• the use of multiprogramming on individual processors
• the actual dispatching of a process
• The approach taken will depend on the degree of granularity of applications and
the number of processors available
Assignment of Processes to Processors

• Assuming all processors are equal, it is simplest to treat processors as a pooled resource
and assign processes to processors on demand
• whether the assignment is static or dynamic needs to be determined
• If a process is permanently assigned to one processor from activation until its completion,
then a dedicated short-term queue is maintained for each processor
• advantage is that there may be less overhead in the scheduling function
• allows group or gang scheduling
• A disadvantage of static assignment is that one processor can be idle, with an empty queue,
while another processor has a backlog
• to prevent this situation, a common queue can be used
• another option is dynamic load balancing
Assignment of Processes to Processors

• Both dynamic and static methods require some way of assigning a process to a processor
• Approaches:
• Master/Slave
• Peer
Master/Slave Architecture
• Key kernel functions always run on a particular processor
• Master is responsible for scheduling
• Slave sends service request to the master
• Is simple and requires little enhancement to a uniprocessor
multiprogramming operating system
• Conflict resolution is simplified because one processor has control of
all memory and I/O resources

Disadvantages:
• failure of the master brings down the whole system
• master can become a performance bottleneck
Peer Architecture
• Kernel can execute on any processor
• Each processor does self-scheduling from the pool of available
processes

Complicates the operating system

• operating system must ensure that two processors do not choose the same
process and that the processes are not somehow lost from the queue

Process Scheduling
• Usually processes are not dedicated to processors
• A single queue is used for all processors
• if some sort of priority scheme is used, there are multiple queues based on priority
• System is viewed as being a multi-server queuing architecture

Process Scheduling
• With static assignment: should individual processors be multiprogrammed or should each be dedicated to a
single process?
• Often it is best to have one process per processor, particularly in the case of multithreaded programs where
it is advantageous to have all threads of a single process executing at the same time
Thread Scheduling
• Thread execution is separated from the rest of the definition of a process
• An application can be a set of threads that cooperate and execute concurrently in the same address
space
• On a uniprocessor, threads can be used as a program structuring aid and to overlap I/O with processing
• In a multiprocessor system kernel-level threads can be used to exploit true parallelism in an application
• Dramatic gains in performance are possible in multi-processor systems
• Small differences in thread management and scheduling can have an impact on applications that
require significant interaction among threads

Approaches to Thread Scheduling

Four approaches for multiprocessor thread scheduling and processor assignment are:
• Load Sharing: processes are not assigned to a particular processor
• Gang Scheduling: a set of related threads is scheduled to run on a set of processors at the
same time, on a one-to-one basis
• Dedicated Processor Assignment: provides implicit scheduling defined by the assignment
of threads to processors
• Dynamic Scheduling: the number of threads in a process can be altered during the course
of execution
Load Sharing
• Simplest approach and carries over most directly from a uniprocessor environment

Advantages:
• load is distributed evenly across the processors
• no centralized scheduler is required
• the global queue can be organized and accessed using any of the uniprocessor
scheduling schemes discussed earlier (Module 2.3)

Versions of load sharing:
• first-come-first-served
• smallest number of threads first
• preemptive smallest number of threads first
Disadvantages of Load Sharing
• Central queue occupies a region of memory that must be accessed in a manner that enforces mutual
exclusion (a sketch of such a queue follows below)
• can lead to bottlenecks
• Preempted threads are unlikely to resume execution on the same processor
• caching can become less efficient
• If all threads are treated as a common pool of threads, it is unlikely that all of the threads of a program will
gain access to processors at the same time
• the process switches involved may seriously compromise performance
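To make the mutual-exclusion bottleneck concrete, the sketch below models a global run queue protected by a single lock: every processor must acquire that lock before it can dequeue work, so the lock serializes all dispatching. This is an illustrative sketch in C with POSIX threads, not the data structure of any particular kernel; the names run_queue, thread_ctx, and rq_pop are invented for the example.

#include <pthread.h>
#include <stddef.h>

/* Illustrative thread descriptor and global run queue (invented names). */
struct thread_ctx { struct thread_ctx *next; /* plus saved registers, stack, ... */ };

struct run_queue {
    pthread_mutex_t lock;       /* every CPU contends on this one lock */
    struct thread_ctx *head;    /* FCFS: dequeue from the head         */
    struct thread_ctx *tail;    /* enqueue at the tail                 */
};

/* Called by a processor looking for work; the single lock is the bottleneck. */
static struct thread_ctx *rq_pop(struct run_queue *rq)
{
    pthread_mutex_lock(&rq->lock);
    struct thread_ctx *t = rq->head;
    if (t) {
        rq->head = t->next;
        if (rq->head == NULL)
            rq->tail = NULL;
    }
    pthread_mutex_unlock(&rq->lock);
    return t;                   /* NULL means no ready thread was found */
}

With many processors, much of the time spent in rq_pop can be spent waiting for the lock, which is why per-processor queues (and the affinity and migration mechanisms discussed later) are attractive.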
Gang Scheduling
• Simultaneous scheduling of the threads that make up a single process

Benefits:
• synchronization blocking may be reduced, less process switching may be necessary,
and performance will increase
• scheduling overhead may be reduced

• Useful for medium-grained to fine-grained parallel applications whose performance severely
degrades when any part of the application is not running while other parts are ready to run
• Also beneficial for any parallel application
Example of Scheduling Groups With Four and One Threads

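The figure itself is not reproduced here. A standard worked version of this example assumes four processors, application A with four threads, and application B with a single thread. If processor time is divided uniformly between the two applications, each holds all four processors for half of the time; during B's half, three of the four processors are idle, so the idle fraction is (1/2) × (3/4) = 37.5%. If instead time is allocated in proportion to thread count (4/5 to A, 1/5 to B), the idle fraction falls to (1/5) × (3/4) = 15%, which is the usual argument for weighting gang-scheduled time slices by group size.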
Dedicated Processor Assignment
• When an application is scheduled, each of its threads is assigned to a
processor that remains dedicated to that thread until the application
runs to completion
• If a thread of an application is blocked waiting for I/O or for
synchronization with another thread, then that thread’s processor
remains idle
• there is no multiprogramming of processors
• Defense of this strategy:
• in a highly parallel system, with tens or hundreds of processors, processor utilization is no
longer so important as a metric for effectiveness or performance
• the total avoidance of process switching during the lifetime of a program should result in
a substantial speedup of that program

Dynamic Scheduling
• For some applications it is possible to provide language and system
tools that permit the number of threads in the process to be altered
dynamically
• this would allow the operating system to adjust the load to improve utilization
• Both the operating system and the application are involved in
making scheduling decisions
• The scheduling responsibility of the operating system is primarily limited to processor allocation
• This approach is superior to gang scheduling or dedicated processor assignment for applications
that can take advantage of it

Real-Time Systems
• The operating system, and in particular the scheduler, is perhaps the most important component

• Examples:
• control of laboratory experiments
• process control in industrial plants
• robotics
• air traffic control
• telecommunications
• military command and control systems

• Correctness of the system depends not only on the logical result of the computation but also on the time
at which the results are produced
• Tasks or processes attempt to control or react to events that take place in the outside world
• These events occur in “real time” and tasks must be able to keep up with them
Hard and Soft Real-Time Tasks

Hard real-time task
• one that must meet its deadline
• otherwise it will cause unacceptable damage or a fatal error to the system

Soft real-time task
• has an associated deadline that is desirable but not mandatory
• it still makes sense to schedule and complete the task even if it has passed its deadline
Periodic and Aperiodic Tasks
• Periodic tasks
• requirement may be stated as:
• once per period T
• exactly T units apart
• Aperiodic tasks
• have a deadline by which they must finish or start
• may have a constraint on both start and finish time
Characteristics of Real-Time Systems

Real-time operating systems have requirements in five general areas:
• Determinism
• Responsiveness
• User control
• Reliability
• Fail-soft operation
Determinism
• Concerned with how long an operating system delays before acknowledging an interrupt
• Operations are performed at fixed, predetermined times or within predetermined time intervals
• when multiple processes are competing for resources and processor time, no system will
be fully deterministic
• The extent to which an operating system can deterministically satisfy requests depends on:
• the speed with which it can respond to interrupts
• whether the system has sufficient capacity to handle all requests within the required time
Responsiveness

• Together with determinism, responsiveness makes up the response time to external events
• critical for real-time systems that must meet timing requirements imposed by
individuals, devices, and data flows external to the system
• Concerned with how long, after acknowledgment, it takes an operating system to service the interrupt

Responsiveness includes:
• amount of time required to initially handle the interrupt and begin execution of the
interrupt service routine (ISR)
• amount of time required to perform the ISR
• effect of interrupt nesting
User Control
• Generally much broader in a real-time operating system than in ordinary operating systems
• It is essential to allow the user fine-grained control over task priority
• User should be able to distinguish between hard and soft tasks and to specify relative
priorities within each class
• May allow user to specify such characteristics as:
• paging or process swapping
• what processes must always be resident in main memory
• what disk transfer algorithms are to be used
• what rights the processes in various priority bands have
Reliability
• More important for real-time systems than for non-real-time systems
• Real-time systems respond to and control events in real time so
loss or degradation of performance may have catastrophic
consequences such as:
• financial loss
• major equipment damage
• loss of life

Fail-Soft Operation
• A characteristic that refers to the ability of a system to fail in such a way as to preserve as much capability
and data as possible
• Important aspect is stability
• a real-time system is stable if the system will meet the deadlines of its most critical, highest-
priority tasks even if some less critical task deadlines are not always met

Scheduling of Real-Time Processes
Real-Time Scheduling

Scheduling approaches depend on:
• whether a system performs schedulability analysis
• if it does, whether it is done statically or dynamically
• whether the result of the analysis itself produces a schedule or plan according to
which tasks are dispatched at run time
Classes of Real-Time Scheduling Algorithms

Static table-driven approaches
• performs a static analysis of feasible schedules of dispatching
• result is a schedule that determines, at run time, when a task must begin execution

Static priority-driven preemptive approaches
• a static analysis is performed but no schedule is drawn up
• analysis is used to assign priorities to tasks so that a traditional priority-driven preemptive scheduler can be used

Dynamic planning-based approaches
• feasibility is determined at run time rather than offline prior to the start of execution
• one result of the analysis is a schedule or plan that is used to decide when to dispatch the task

Dynamic best-effort approaches
• no feasibility analysis is performed
• system tries to meet all deadlines and aborts any started process whose deadline is missed
Deadline Scheduling
• Real-time operating systems are designed with the objective of
starting real-time tasks as rapidly as possible and emphasize rapid
interrupt handling and task dispatching
• Real-time applications are generally not concerned with sheer speed
but rather with completing (or starting) tasks at the most valuable
times
• Priorities provide a crude tool and do not capture the requirement of
completion (or initiation) at the most valuable time

Information Used for Deadline Scheduling

• Ready time: time the task becomes ready for execution
• Starting deadline: time the task must begin
• Completion deadline: time the task must be completed
• Processing time: time required to execute the task to completion
• Resource requirements: resources required by the task while it is executing
• Priority: measures the relative importance of the task
• Subtask scheduler: a task may be decomposed into a mandatory subtask and an optional subtask
These parameters are collected into a single task descriptor in the sketch that follows.
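A hedged sketch of how these parameters might be gathered into a task descriptor, written in C. The struct and field names are invented for illustration (they are not taken from any particular real-time kernel), and the selection routine simply implements earliest-completion-deadline-first over a ready list.

#include <stddef.h>

/* Illustrative descriptor collecting the deadline-scheduling parameters above. */
struct rt_task {
    struct rt_task *next;        /* link in the ready list                     */
    long ready_time;             /* time the task becomes ready for execution  */
    long starting_deadline;      /* time by which the task must begin          */
    long completion_deadline;    /* time by which the task must be completed   */
    long processing_time;        /* execution time required to completion      */
    unsigned resource_mask;      /* resources required while executing         */
    int priority;                /* relative importance of the task            */
    int has_optional_subtask;    /* mandatory subtask plus an optional subtask */
};

/* Earliest-deadline selection: return the ready task whose completion
 * deadline is nearest, or NULL if the ready list is empty. */
static struct rt_task *pick_earliest_deadline(struct rt_task *ready)
{
    struct rt_task *best = ready;
    for (struct rt_task *t = ready; t != NULL; t = t->next)
        if (t->completion_deadline < best->completion_deadline)
            best = t;
    return best;
}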
Execution Profile of Two Periodic Tasks

Execution Profile of Five Aperiodic Tasks

Rate Monotonic Scheduling

Periodic Task Timing Diagram

Value of the RMS Upper Bound

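The table of bound values is not reproduced here, but the underlying result is the classic Liu and Layland bound, stated here for reference: a set of n periodic tasks with computation times C_i and periods T_i is guaranteed to meet all deadlines under rate monotonic scheduling if

\sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right)

The right-hand side equals 1.0 for n = 1, is about 0.828 for n = 2, and decreases monotonically toward ln 2 ≈ 0.693 as n grows, so keeping total utilization below roughly 69% is a conservative sufficient condition for any number of tasks.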
Priority Inversion
• Can occur in any priority-based preemptive scheduling scheme
• Particularly relevant in the context of real-time scheduling
• Best-known instance involved the Mars Pathfinder mission
• Occurs when circumstances within the system force a higher priority
task to wait for a lower priority task
Unbounded Priority Inversion
• the duration of a priority inversion depends not only on the time required to handle
a shared resource, but also on the unpredictable actions of other unrelated tasks
Unbounded Priority Inversion

Priority Inheritance

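The figure is not reproduced here. On POSIX systems, priority inheritance is typically requested per mutex; the sketch below shows the standard pthread calls that enable it (a real API, though support for PTHREAD_PRIO_INHERIT is platform-dependent), so that a low-priority thread holding the lock temporarily runs at the priority of the highest-priority thread blocked on it.

#include <pthread.h>

/* Create a mutex that uses the priority-inheritance protocol. */
static int make_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}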
Linux Scheduling
• Linux defines three scheduling classes:
• SCHED_FIFO: First-in-first-out real-time threads
• SCHED_RR: Round-robin real-time threads
• SCHED_OTHER: Other, non-real-time threads
• Within each class multiple priorities may be used

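As a concrete illustration, a process can request the SCHED_FIFO class through the standard POSIX call shown below. This is a minimal sketch: the priority value 50 is an arbitrary choice within the real-time range, and the call normally requires appropriate privileges (for example, root or CAP_SYS_NICE on Linux).

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };   /* arbitrary real-time priority */

    /* Place the calling process into the SCHED_FIFO real-time class. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}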
Linux Real-Time Scheduling

Non-Real-Time Scheduling

• The Linux 2.4 scheduler for the SCHED_OTHER class did not scale well with an increasing
number of processors and processes
• Linux 2.6 uses a new priority scheduler known as the O(1) scheduler
• time to select the appropriate process and assign it to a processor is constant, regardless
of the load on the system or the number of processors
• the kernel maintains two scheduling data structures for each processor in the system
Linux Scheduling Data Structures

UNIX SVR4 Scheduling
• A complete overhaul of the scheduling algorithm used in earlier UNIX systems

The new algorithm is designed to give:
• highest preference to real-time processes
• next-highest preference to kernel-mode processes
• lowest preference to other user-mode processes

• Major modifications:
• addition of a preemptable static priority scheduler and the introduction of a set of 160
priority levels divided into three priority classes
• insertion of preemption points
SVR4 Priority Classes

Real time (159 – 100)
• guaranteed to be selected to run before any kernel or time-sharing process
• can preempt kernel and user processes

Kernel (99 – 60)
• guaranteed to be selected to run before any time-sharing process, but must defer to
real-time processes

Time-shared (59 – 0)
• lowest-priority processes, intended for user applications other than real-time applications
SVR4 Dispatch Queues

UNIX FreeBSD Scheduler

SMP and Multicore Support
• The FreeBSD scheduler was designed to provide effective scheduling for an SMP or multicore system
• Design goals:
• address the need for processor affinity in SMP and multicore systems
• processor affinity – a scheduler that only migrates a thread when necessary to avoid
having an idle processor (see the affinity sketch below)
• provide better support for multithreading on multicore systems
• improve the performance of the scheduling algorithm so that it is no longer a function of the
number of threads in the system
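Processor affinity can also be requested explicitly by an application. The sketch below uses the Linux call sched_setaffinity as an analogous illustration (FreeBSD offers a similar cpuset interface); pinning a process to one CPU keeps its cached data warm, which is the benefit the design goal above is pursuing.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                    /* restrict the calling process to CPU 0 */

    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    /* From here on, the scheduler will run this process only on CPU 0. */
    return 0;
}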
Windows Thread Dispatching Priorities
Interactivity Scoring
• A thread is considered to be interactive if the ratio of its voluntary sleep time versus its runtime is below a
certain threshold
• Interactivity threshold is defined in the scheduler code and is not configurable
• Threads whose sleep time exceeds their run time score in the lower half of the range of interactivity scores
• Threads whose run time exceeds their sleep time score in the upper half of the range of interactivity scores

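A minimal sketch of this scoring rule follows. It is illustrative only, not the actual FreeBSD ULE code: the function name, the 0–100 range, and the threshold value are invented for the example, but the mapping follows the description above (more voluntary sleep than run time lands in the lower, more interactive half of the range).

/* Illustrative interactivity score in the range 0..100. */
#define SCORE_MAX             100
#define SCORE_HALF            (SCORE_MAX / 2)
#define INTERACTIVE_THRESHOLD 30        /* invented threshold for the example */

static int interactivity_score(unsigned long sleep_ticks, unsigned long run_ticks)
{
    if (run_ticks == 0)
        return 0;                                   /* never ran: treat as fully interactive */
    if (sleep_ticks >= run_ticks)                   /* mostly sleeping: lower half of range  */
        return (int)((SCORE_HALF * run_ticks) / sleep_ticks);
    return (int)(SCORE_MAX - (SCORE_HALF * sleep_ticks) / run_ticks);  /* mostly running */
}

static int is_interactive(unsigned long sleep_ticks, unsigned long run_ticks)
{
    return interactivity_score(sleep_ticks, run_ticks) < INTERACTIVE_THRESHOLD;
}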
Thread Migration
• Processor affinity is when a Ready thread is scheduled onto the last processor that it ran on
• significant because of local caches dedicated to a single processor
• The FreeBSD scheduler supports two mechanisms for thread migration to balance load
(a sketch of the pull mechanism follows below):

Pull mechanism
• an idle processor steals a thread from a nonidle processor
• primarily useful when there is a light or sporadic load, or in situations where
processes are starting and exiting very frequently

Push mechanism
• a periodic scheduler task evaluates the current load situation and evens it out
• ensures fairness among the runnable threads
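A hedged sketch of the pull mechanism under simple assumptions: per-processor run queues, each protected by its own lock, and an idle CPU that scans the other queues and steals the first queued thread it finds. The types and helpers (cpu_rq, steal_one) are invented for illustration and are not FreeBSD's data structures.

#include <pthread.h>
#include <stddef.h>

#define NCPU 4

struct thread_ctx { struct thread_ctx *next; };

/* One run queue per processor; locks assumed initialized at startup. */
struct cpu_rq {
    pthread_mutex_t lock;
    struct thread_ctx *head;
} rq[NCPU];

/* Pull mechanism: an idle CPU walks the other queues and steals one
 * queued thread; returns NULL if every other queue is empty. */
static struct thread_ctx *steal_one(int idle_cpu)
{
    for (int victim = 0; victim < NCPU; victim++) {
        if (victim == idle_cpu)
            continue;
        pthread_mutex_lock(&rq[victim].lock);
        struct thread_ctx *t = rq[victim].head;
        if (t)
            rq[victim].head = t->next;   /* migrate this thread to idle_cpu */
        pthread_mutex_unlock(&rq[victim].lock);
        if (t)
            return t;
    }
    return NULL;
}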
Windows Scheduling
• Priorities in Windows are organized into two bands, or classes:

real-time priority class
• all threads have a fixed priority that never changes
• all of the active threads at a given priority level are in a round-robin queue

variable priority class
• a thread’s priority begins at an initial priority value and then may be temporarily boosted during the thread’s
lifetime

• Each band consists of 16 priority levels
• Threads requiring immediate attention are in the real-time class
• include functions such as communications and real-time tasks
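For illustration, the Win32 calls below place the current process in the real-time priority class and raise the current thread within that band. This is a minimal sketch with error handling omitted; REALTIME_PRIORITY_CLASS normally requires administrative privileges and should be used sparingly, since it can starve system threads.

#include <windows.h>

int main(void)
{
    /* Move this process into the real-time priority band. */
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);

    /* Raise this thread's priority within that band. */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);

    /* ... time-critical work ... */
    return 0;
}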
Windows Priority Relationship

Linux Virtual Machine Process Scheduling

Summary
• With a tightly coupled multiprocessor, multiple processors have access to the same main memory
• Performance studies suggest that the differences among various scheduling algorithms are less significant in
a multiprocessor system
• A real-time process is one that is executed in connection with some process or function or set of events
external to the computer system and that must meet one or more deadlines to interact effectively and
correctly with the external environment
• A real-time operating system is one that is capable of managing real-time processes
• Key factor is the meeting of deadlines
• Algorithms that rely heavily on preemption and on reacting to relative deadlines are appropriate in this
context

THANK YOU

Reference: Silberschatz A., Galvin P., Gagne G., "Operating System Concepts", 8th Edition, Wiley, 2011.
