
Chapter Two

Process and Thread Management

Part Two
Threads and Multithreading

Operating Systems (SEng 2043)
Objective
At the end of this session students will be able to:
 Understand the basic concepts, principles and notions of a thread, a fundamental unit of CPU utilization that forms the basis of multithreaded computer systems.
 Understand the differences and similarities of threads and processes.
 Identify the basic thread libraries and threading issues.
 Examine issues related to multithreaded programming, multithreading models, threading issues and operating system examples.


Threads
 Inter-process communication is simple and easy when used occasionally.
If there are many processes sharing many resources, then the mechanism becomes cumbersome and difficult to handle.
Threads are created to make this kind of resource sharing simple and efficient.
 Each process has the following two characteristics:
1. Unit of resource ownership:- When a new process is created, an address space containing program text and data, as well as other resources (files, child processes, pending alarms, signal handlers, accounting information), is allocated.
The unit of resource ownership is usually referred to as a Task or a Process.
2. Unit of dispatching:- is a thread of execution, usually shortened to just thread.
The thread has a program counter that keeps track of which instruction to execute next.
It has registers, which hold its current working variables.
Contd.
It has a stack, which contains the execution history, with one frame for each
procedure called but not yet returned from.
 The unit of dispatching is usually referred to as a Thread or a Light-Weight Process (LWP).
 A thread is a basic unit of CPU utilization that consists of:
 Thread id
 Execution State
 Program counter
 Register set
 Stack
 Threads belonging to the same process share:

 its code
 its data section
 other OS resources such as open files and signals
Process Vs. Thread
 A thread of execution is the smallest unit of processing that can be scheduled by an OS.
 The implementation of threads and processes differs from one OS to another, but in
most cases, a thread is contained inside a process.
 Multiple threads can exist within the same process and share resources such as
memory, while different processes do not share these resources.
 Like process states, threads also have states:
New, Ready, Running, Waiting and Terminated
 Like processes, the OS will switch between threads (even though they belong to a
single process) for CPU usage.
 Like process creation, thread creation is supported by APIs
 Creating threads is inexpensive (cheaper) compared to processes
They do not need new address space, global data, program code or
operating system resources
Context switching is faster as the only things to save/restore are program counters, registers and stacks
Contd.

Similarities
 Both share the CPU, and only one thread/process is active (running) at a time.
 Like processes, threads within a process execute sequentially.
 Like processes, threads can create children.
 Like a process, if one thread is blocked, another thread can run.

Differences
 Unlike processes, threads are not independent of one another.
 Unlike processes, all threads can access every address in the task.
 Unlike processes, threads are designed to assist one another.
Contd.

Fig. 2.2.1 Each thread has its own Stack


Thread Libraries
 Thread libraries provide programmers an API to create and manage threads

 They can be implemented in either user space or kernel space


User space libraries
All code and data structures of the library are in user space.
Invoking a function in the library results in a local function call in user space and not a system call.
Kernel space libraries
The code and data structures of the library are in kernel space and are directly supported by the OS.
Invoking an API function for the library results in a system call to the kernel.
 There are three basic libraries used today:
1. POSIX pthreads
2. WIN32 threads
3. Java threads
Contd. Read More

1. POSIX pthreads
They may be provided as either a user or kernel library, as an extension to
the POSIX standard
Systems like Solaris, Linux and Mac OS implement the Pthreads specification.
2. WIN32 threads
These are provided as a kernel-level library on Windows systems.
3. Java threads
Since Java generally runs on a Java Virtual Machine (JVM), the
implementation of threads is based upon whatever OS and hardware the
JVM is running on i.e. either Pthreads or Win32 threads depending on the
system.
On Windows systems, Java threads are typically implemented using the Win32 API, whereas UNIX and Linux systems often use Pthreads.
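As a minimal sketch of the Pthreads API described above (not taken from the slides), the following C program creates two threads that increment a shared global counter under a mutex and then waits for both to finish; the variable and function names are illustrative. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared data: all threads of the process see the same counter. */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Thread routine: each thread has its own stack and registers,
       but updates the shared counter under a mutex. */
    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);   /* create two threads  */
        pthread_create(&t2, NULL, worker, NULL);

        pthread_join(t1, NULL);                    /* wait for completion */
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter);        /* prints 200000       */
        return 0;
    }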
Examples of Threads
In a word processor,
A background thread may check spelling and grammar, while a
foreground thread processes user input (keystrokes), while yet a third
thread loads images from the hard drive, and a fourth does periodic
automatic backups of the file being edited

In a spreadsheet program,
one thread could display menus and read user input, while another thread
executes user commands and updates the spreadsheet

In a web server,
Multiple threads allow for multiple requests to be satisfied simultaneously,
without having to service requests sequentially or to fork off separate
processes for every incoming request
Thread usage example: Web Server

Fig. 2.2.2a Web server process and its threads


Contd.
    // Dispatcher Thread
    while (true) {
        get_next_request(&buf);
        handoff_work(&buf);
    }

    // Worker Thread
    while (true) {
        wait_for_work(&buf);
        look_for_page_in_cache(&buf, &page);
        if (page_not_in_cache(&page))
            read_page_from_disk(&buf, &page);
        return_page(&page);
    }

Fig. 2.2.2b Multithreaded Server Architecture
Multithreading

 Multithreading refers to the ability of an operating system to support multiple threads of execution within a single process.
 A traditional (heavyweight) process has a single thread of control
There is one program counter and one set of instructions carried out at a time
 If a process has multiple threads of control, it can perform more than one task at a time
Each thread has its own program counter, stack and registers
But they share common code, data and some operating system data structures like files
Multitasking Vs. Multithreading
 Multitasking is the ability of an OS to execute more than one program
simultaneously.
Though we say so, in reality no two programs on a single-processor machine can be executed at the same time.
 The CPU switches from one program to the next so quickly that it appears as if all of the programs are executing at the same time.
 Multithreading is the ability of an OS to execute different parts of the same program, called threads, simultaneously.
 The program has to be designed well so that the different threads do not interfere with each other.
 Individual programs are all isolated from each other in terms of their memory and data, but individual threads are not, as they all share the same memory and data variables.
Hence, implementing multitasking is relatively easier in an operating system than implementing multithreading.
Contd.
 Traditionally there is a single thread of execution per process.
Example: MS-DOS supports a single user process and a single thread.
Older UNIX supports multiple user processes but only one thread per process.
Multithreading
The Java run-time environment is an example of one process with multiple threads.
Examples of OSs supporting multiple processes, with each process supporting multiple threads: Windows 2000, Solaris, Linux, Mac OS, and OS/2
Contd.

Fig. 2.2.3 Combinations of Threads and Processes


Single and Multithreaded Processes
 In a single-threaded process model, the representation of a process includes its PCB, user address space, as well as user and kernel stacks
 When the process is running, it controls the processor registers; the contents of these registers are saved when the process is not running.
 In a multithreaded environment
There is a single PCB and address space,
However, there are separate stacks for each thread, as well as a separate control block for each thread containing register values, priority, and other thread-related state information.
Contd.

Fig. 2.2.4 Single and Multithreaded processes


Benefits of Multithreading
1. Responsiveness
A program is allowed to continue running even if part of it is blocked or is
performing a lengthy operation, thereby increasing responsiveness to the user.
One thread can respond to the user while other threads are blocked or slowed down doing computations
2. Resource Sharing
Threads share common code, data and resources of the process to which
they belong.
This allows multiple tasks to be performed within the same address space
3. Economy
Creating and allocating memory and resources to processes is expensive, while
creating threads is cheaper as they share the resources of the process to which
they belong.
Hence, it’s more economical to create and context-switch threads.
Contd.
4. Scalability/utilization of multi-processor architectures
The benefits of multithreading are increased in a multiprocessor architecture, where threads may be executed in parallel on different processors.
A single-threaded process can run on only one CPU, no matter how many are available.
Multithreading on a multi-CPU machine increases parallelism.

Fig. 2.2.5 Concurrent execution of the threads of a process on a single-core system vs. parallel execution on a multicore system
Challenges in programming for multicore systems

 There are five challenges in programming for multicore systems:


1. Dividing Activities:- involves examining applications to find areas that can be

divided into separate, concurrent tasks and thus can run in parallel on
individual cores
2. Balancing:- While identifying tasks that can run in parallel, programmers must

also ensure that the tasks perform equal work of equal value.
In some instances, a certain task may not contribute as much value to the
overall process as other tasks; using a separate execution core to run that
task may not be worth the cost.
3. Data Splitting:- Just as applications are divided into separate tasks, the data

accessed and manipulated by the tasks must be divided to run on separate cores.
Contd.
4. Data Dependency:- The data accessed by the tasks must be examined for

dependencies between two or more tasks.

In instances where one task depends on data from another, programmers

must ensure that the execution of the tasks is synchronized to

accommodate the data dependency.

5. Testing and Debugging:-When a program is running in parallel on multiple

cores, there are many different execution paths.

Testing and debugging such concurrent programs is inherently more

difficult than testing and debugging single-threaded applications.
Multithreading Models
 There are three types of multithreading models in modern operating systems:
1. Kernel Level threads:- are supported by the OS kernel itself.
All modern OSs support kernel threads
A user/kernel mode switch is needed to change threads
2. User Level threads:- are threads application programmers put in their programs.
They are managed without kernel support
They have problems with blocking system calls
They cannot support multiprocessing
3. Hybrid Level threads:- the combination of both Kernel and User level threads.
 There must be a relationship between the kernel threads and the user threads.
 There are three common ways to establish this relationship:
A. Many-to-One
B. One-to-One
C. Many-to-Many
1. User level Threads (ULTs)
 The kernel is not aware of the existence of threads.
 All thread management is done by the application by using a thread library.
 Thread switching does not require kernel mode privileges (no mode switch).
 Scheduling is application specific.

Fig. 2.2.6 ULT and its implementation


ULT Idea
 Thread management done by user-level threads library.

 Threads library contains code for:

creating and destroying threads.

passing messages and data between threads.

scheduling thread execution.

saving and restoring thread contexts.

Kernel activity for ULTs


 The kernel is not aware of thread activity but it is still managing process activity.

 When a thread makes a system call, the whole task will be blocked.

But for the thread library that thread is still in the running state.
So thread states are independent of process states.
Advantages and inconveniences of ULT

Advantages
 Thread switching does not involve the kernel: no mode switching.
 Scheduling can be application specific: choose the best algorithm.
 ULTs can run on any OS. Only needs a thread library.

Inconveniences
 Most system calls are blocking and the kernel blocks processes, so all threads within the process will be blocked.
 The kernel can only assign processes to processors. Two threads within the same process cannot run simultaneously on two processors.

Examples of ULT Threads: POSIX Pthreads, Win32 threads, Java threads


2. Kernel Level Threads (KLTs)
 All thread management is done by the kernel, which maintains context information for the process and its threads.
 There is no thread library, but an API to the kernel thread facility.
 Switching between threads requires the kernel, i.e. threads are supported by the kernel.
 Scheduling is done on a thread basis.

Fig. 2.2.7 KLT and its implementation


Advantages and inconveniences of KLT

Advantages
 The kernel can simultaneously schedule many threads of the same process on many processors.
 Blocking is done on a thread level.
 Kernel routines can be multithreaded.

Inconveniences
 Thread switching within the same process involves the kernel.
 We have 2 mode switches per thread switch.
 This results in a significant slowdown.

Examples of KLT OSs: Windows NT/2000/XP, Linux, Solaris, Tru64 UNIX, Mac OS X
3. Hybrid ULT/KLT Approaches

 Thread creation is done in the user

space.

 Bulk of scheduling and synchronization

of threads done in the user space.

 The programmer may adjust the

number of KLTs.

 May combine the best of both

approaches.

Example is Solaris prior to version 9.


Fig. 2.2.8 Hybrid ULT/KLT Approach
Contd.

Fig. 2.2.9 Multiplexing user-level threads onto kernel-level threads.


Multithreading Models: Many-to-One Model
 It maps many user-level threads to one kernel thread
 Thread management is done by the thread library in user space.
Hence, it is efficient because there is no mode switch, but if a thread makes a blocking system call, the entire process blocks
 Only one thread can access the kernel at a time, so multiple threads cannot run on multiprocessor systems
 Used by ULT libraries on systems that do not support kernel threads (KLTs).
Examples:- Solaris Green Threads, GNU Portable Threads

Fig. 2.2.10 Many-to-One Model
Multithreading Models: One-to-One Model
 Each user-level thread maps to a kernel thread.
 A separate kernel thread is created to handle each user-level thread
 It provides more concurrency and solves the problem of blocking system calls
Allows another thread to run when a thread is blocked.
Allows multiple threads to run in parallel on multiprocessors.
 Managing the one-to-one model involves more overhead and slows down the system
 Drawback:- creating a user thread requires creating the corresponding kernel thread.
 Most implementations of this model put restrictions on the number of threads created.
Examples:- Windows NT/XP/2000, Linux, Solaris 9 and later

Fig. 2.2.11 One-to-One Model
Multithreading Models: Many-to-Many Model
 Allows the mapping of many user-level threads to many (a smaller or equal number of) kernel threads
 Allows the OS to create a sufficient number of kernel threads (specific to either a particular application or a particular machine), i.e. it is the most flexible model
 It combines the best features of the one-to-one and many-to-one models
 Users have no restrictions on the number of threads created
 Blocking kernel system calls do not block the entire process.
Examples:- Solaris prior to version 9, Windows NT/2000 with the ThreadFiber package

Fig. 2.2.12 Many-to-Many Model
Multithreading Models: Many-to-Many Model
 Processes can be split across multiple processors

 Individual processes may be allocated variable

numbers of kernel threads, depending on the number

of CPUs present and other factors

 One popular variation of the many-to-many model is the two-tier model, which allows either many-to-many or one-to-one operation.
 It is similar to the many-to-many model, except that it allows a user thread to be bound to a kernel thread
Examples:- Solaris 8 and earlier, IRIX, HP-UX, Tru64 UNIX

Fig. 2.2.13 Two-tier Many-to-Many Model
Threading Issues
 Some of the issues to be considered for multithreaded programs:
Semantics of fork() and exec() system calls.
Thread cancellation
Signal handling
Thread pools
Thread-specific data
1. Semantics of fork() and exec() system calls
 In a multithreaded program, the semantics of the fork() and exec() system calls change.
 If one thread calls fork(), there are two options:
The new process can duplicate all the threads, or the new process contains only a single thread (the one that called fork()).
Some systems have chosen to provide two versions of fork().
The exec() system call works as usual: the program specified as its parameter replaces the entire process, including all of its threads.
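A minimal, hedged C sketch of these semantics, assuming POSIX behaviour (fork() duplicates only the calling thread, and a subsequent exec() replaces the whole process image); the background routine and the program passed to execlp() are illustrative choices, not part of the slides.

    #include <pthread.h>
    #include <unistd.h>

    /* A second thread that just keeps running in the parent. */
    static void *background(void *arg)
    {
        for (;;)
            sleep(1);          /* stands in for ongoing background work */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, background, NULL);

        pid_t pid = fork();    /* POSIX: the child gets a copy of the calling
                                  thread only, not of the background thread  */
        if (pid == 0) {
            /* Child: exec() replaces the entire process image, so
               duplicating the other threads would have been pointless. */
            execlp("ls", "ls", (char *)NULL);
        }
        /* Parent continues here with both of its threads. */
        return 0;
    }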
Contd.
2. Thread Cancellation:- is the task of terminating a thread before it has completed.
Example: if multiple threads are searching a database and one gets the result, the others should be cancelled.
 A thread that is to be canceled is often referred to as the target thread.
 Cancellation of a target thread may occur in two different scenarios:
A. Asynchronous cancellation
One thread immediately terminates the target thread
B. Deferred cancellation
The target thread periodically checks whether it should terminate, allowing it an opportunity to terminate itself in an orderly fashion.
Contd.
The difficulty with cancellation occurs in situations where resources have been
allocated to a canceled thread or where a thread is canceled while in the
midst of updating data it is sharing with other threads.
 This becomes especially troublesome with asynchronous cancellation.
Often, the OS will reclaim system resources from a canceled thread but will not
reclaim all resources.
Therefore, canceling a thread asynchronously may not free a necessary system-
wide resource.
 In the case of deferred cancellation, one thread indicates that a target thread is to be canceled, but cancellation occurs only after the target thread has checked a flag to determine whether or not it should be canceled.
The thread can perform this check at a point at which it can be canceled safely.
Pthreads refers to such points as cancellation points.
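A hedged Pthreads sketch of deferred cancellation (the searcher routine and its loop are illustrative, not from the slides): the target thread polls pthread_testcancel() at points where it is safe to be cancelled.

    #include <pthread.h>
    #include <unistd.h>

    /* Worker that checks for a pending cancel at a safe point. */
    static void *searcher(void *arg)
    {
        unsigned long i = 0;
        for (;;) {
            i++;                          /* stands in for real search work   */
            if (i % 1000000 == 0)
                pthread_testcancel();     /* explicit cancellation point:
                                             exits here if a cancel is pending */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, searcher, NULL);

        sleep(1);                 /* pretend another thread found the answer */
        pthread_cancel(tid);      /* request (deferred) cancellation         */
        pthread_join(tid, NULL);  /* wait until the target actually exits    */
        return 0;
    }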
Contd.
3. Signal Handling:- Signals are used in UNIX systems to notify a process that a
particular event has occurred
 A signal may be received either synchronously or asynchronously, depending on
the source of and the reason for the event being signaled.
 All signals, whether synchronous (e.g. illegal memory access or division by zero) or asynchronous (generated by an event external to the running process), are handled by a signal handler following the pattern below:
a. The signal is generated by a particular event
b. The signal is delivered to a process
c. The signal is handled

 A signal may be handled by one of two possible handlers:


1. A default signal handler
2. A user-defined signal handler
Contd.
Every signal has a default signal handler that is run by the kernel when handling that signal.
This default action can be overridden by a user-defined signal handler that is called to handle the signal.
 Signals are handled in different ways.
Some signals (such as changing the size of a window) are simply ignored; others (such as
an illegal memory access) are handled by terminating the program.
Handling signals in single-threaded programs is straightforward: signals are always
delivered to a process.
However, delivering signals is more complicated in multithreaded programs, where a
process may have several threads.
Where, then, should a signal be delivered?
Signal Handling Options:
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
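As a hedged sketch of the last option (a dedicated thread receives all signals for the process), using standard POSIX calls; the chosen signals and the handling logic are illustrative.

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    /* Dedicated signal-handling thread: waits synchronously for signals. */
    static void *signal_thread(void *arg)
    {
        sigset_t *set = arg;
        int sig;
        for (;;) {
            sigwait(set, &sig);            /* block until a signal arrives */
            printf("got signal %d\n", sig);
        }
        return NULL;
    }

    int main(void)
    {
        static sigset_t set;
        pthread_t tid;

        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        sigaddset(&set, SIGTERM);

        /* Block these signals in the main thread (and in threads it creates),
           so only the dedicated thread receives them via sigwait().          */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_create(&tid, NULL, signal_thread, &set);
        pthread_join(tid, NULL);           /* never returns in this sketch  */
        return 0;
    }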
Contd.
4. Thread Pools:- Creating new threads every time one is needed and then deleting it when
it is done can be inefficient, and can also lead to a very large (unlimited) number of threads
being created.
 Two potential problems: the amount of time required to create a thread prior to servicing the request, and not setting a bound on the number of concurrently running threads.
 An alternative solution is to create a number of threads when the process first starts, and
put those threads into a thread pool.
Threads are allocated from the pool as needed, and returned to the pool when no
longer needed.
When no threads are available in the pool, the process may have to wait until one
becomes available.
The (maximum) number of threads available in a thread pool may be determined by
adjustable parameters, possibly dynamically in response to changing system loads.
Win32 provides thread pools through the "PoolFunction" function.
Java also provides support for thread pools.
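A minimal, hedged sketch of the thread-pool idea in C with Pthreads: a fixed number of worker threads service tasks from a bounded circular queue. Names such as pool_submit, NUM_WORKERS and say_hello are illustrative and do not correspond to any real library API.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_WORKERS 4
    #define QUEUE_SIZE  16

    typedef void (*task_fn)(void *);

    /* A bounded circular queue of pending tasks, protected by a mutex. */
    static struct { task_fn fn; void *arg; } queue[QUEUE_SIZE];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

    /* Each worker repeatedly takes a task from the queue and runs it. */
    static void *worker(void *unused)
    {
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)
                pthread_cond_wait(&not_empty, &lock);
            task_fn fn = queue[head].fn;
            void *arg  = queue[head].arg;
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&lock);
            fn(arg);                      /* run the task outside the lock */
        }
        return NULL;
    }

    /* Submit a task; blocks if the queue is full (caller may have to wait). */
    static void pool_submit(task_fn fn, void *arg)
    {
        pthread_mutex_lock(&lock);
        while (count == QUEUE_SIZE)
            pthread_cond_wait(&not_full, &lock);
        queue[tail].fn = fn;
        queue[tail].arg = arg;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    static void say_hello(void *arg)
    {
        printf("request %ld handled\n", (long)arg);
    }

    int main(void)
    {
        pthread_t workers[NUM_WORKERS];

        /* Create the pool once, when the process starts. */
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker, NULL);

        /* Reuse the same threads for many requests. */
        for (long r = 0; r < 20; r++)
            pool_submit(say_hello, (void *)r);

        sleep(1);       /* crude: let workers drain the queue (sketch only) */
        return 0;
    }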
Contd.
5. Thread-Specific Data:- Most data is shared among threads, and this is one of the major benefits of using threads in the first place.
However, in some circumstances each thread may need its own copy of certain data; such data is called thread-specific data.
For example, in a transaction-processing system, we might service each
transaction in a separate thread.
Furthermore, each transaction might be assigned a unique identifier.
To associate each thread with its unique identifier, we could use thread-
specific data.
Most major thread libraries (Pthreads, Win32, Java) provide support for thread-specific data.
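A hedged Pthreads sketch of thread-specific data, mirroring the transaction-identifier example above; the key and routine names are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    /* One key, shared by all threads; each thread stores its own value under it. */
    static pthread_key_t txn_id_key;

    static void *service_transaction(void *arg)
    {
        /* Each thread associates its own transaction id with the same key. */
        pthread_setspecific(txn_id_key, arg);

        /* ... later, deep inside the transaction-handling code ... */
        long id = (long)pthread_getspecific(txn_id_key);
        printf("servicing transaction %ld\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_key_create(&txn_id_key, NULL);   /* no destructor in this sketch */

        pthread_create(&t1, NULL, service_transaction, (void *)1001L);
        pthread_create(&t2, NULL, service_transaction, (void *)1002L);

        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }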
Contd.
6. Scheduler Activations:- Both the many-to-many and two-level models require

communication to maintain the appropriate number of kernel threads

allocated to the application

Scheduler activations provide upcalls, a communication mechanism from

the kernel to the user thread library

Upcalls are handled by the thread library with an upcall handler

This communication allows an application to maintain the correct number

of kernel threads

Reading Assignment
 Windows XP Threads Vs. Linux Threads
