CS0051 - M2-Threads, Processes and Mutual Exclusion

MODULE 2

Threads, Processes and Mutual Exclusion


Module 2A

Threads and Processes

CCS0049
CS Elective 1
Learning Objectives

Differentiate threads from processes


Apply execution scheduling, thread lifecycle and daemon thread in Java

Threads vs. Processes
Process
• When a computer runs an application, that instance of the program executing is referred to
as a process.
• A process consists of the program's code, its data, and information about its state.
• Each process is independent, and has its own separate address space in memory. A
computer can have hundreds of active processes at once, and an operating system's job is to
manage all of them.
Threads vs. Processes
Threads
• Within every process, there are one or more smaller sub-elements called threads.
• Each of those threads is an independent path of execution through the program, a different
sequence of instructions, and it can only exist as part of a process.
• Threads are the basic units that the operating system manages; it allocates time on a
processor to actually execute them.
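The relationship above can be seen in a short Java sketch (class and thread names are illustrative): the main thread of a process spawns a child thread, and both execute independently within the same process.

```java
public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        // The main thread spawns a child thread within the same process.
        Thread child = new Thread(() -> {
            System.out.println("child: running in " + Thread.currentThread().getName());
        }, "child-thread");

        child.start();  // the OS can now schedule the child independently
        child.join();   // main waits for the child to terminate
        System.out.println("main: done");
    }
}
```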
Threads vs. Processes
Processes and Threads
• Threads that belong to the same process share the process's address space, which gives
them access to the same resources in memory, including the program's executable code and
data.

• Sharing resources between separate processes is not as easy as sharing between threads in
the same process, because every process exists in its own address space.
Threads vs. Processes
Processes and Threads
• There are ways to communicate and share data between processes, but it requires a bit more
work than communicating between threads.

• You have to use system-provided inter-process communication mechanisms like sockets and
pipes, allocate special inter-process shared memory, or use remote procedure calls.

• It's possible to write parallel programs that use multiple processes working together towards a
common goal, or using multiple threads within a single process.
Threads vs. Processes
Which is better, using multiple threads or multiple processes?
• It depends on what you're doing and the environment it's running in, because the
implementation of threads and processes differs between operating systems and
programming languages.

• If your application is going to be distributed across multiple computers, you most likely need
separate processes for that.

• But, as a rule of thumb, if you can structure your program to take advantage of multiple
threads, stick with using threads rather than multiple processes.
Threads vs. Processes
Which is better, using multiple threads or multiple processes?
• Threads are considered lightweight compared to processes, which are more resource
intensive.

• A thread requires less overhead to create and terminate than a process, and it's usually
faster for an operating system to switch between executing threads from the same process
than to switch between different processes.
Concurrent vs. Parallel Execution
Concurrency
• Just because a program is structured to have multiple threads or processes does not mean
they'll necessarily execute in parallel.
• A concept that's closely related to parallel execution but often gets confused with it is
concurrency.
• Concurrency refers to the ability of an algorithm or program to be broken into different
parts that can be executed out of order or partially out of order without affecting the end
result.
• Concurrency is about how a program is structured and the composition of independently
executing processes.
Concurrent vs. Parallel Execution
Parallel Execution
• To actually execute in parallel, we need parallel hardware.
• In computers, parallel hardware comes in a variety of forms.
Concurrent vs. Parallel Execution
Parallel Execution
• Most modern processors used in things like desktop computers and cellphones have multiple
processing cores.
• Graphics processing units, or GPUs, contain hundreds, or even thousands, of specialized
cores working in parallel to make amazing graphics that you see on the screen.
• Computer clusters distribute their processing across multiple systems.
Concurrent vs. Parallel Execution
Concurrency vs. Parallelism
• Concurrency: program structure; dealing with multiple things at once.
• Parallelism: simultaneous execution; doing multiple things at once.

Note: Concurrency enables a program to execute in parallel, given the
necessary hardware. But a concurrent program is not inherently parallel.
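Whether the hardware can actually run concurrent threads in parallel can be checked at runtime. This minimal Java sketch queries the number of logical processors available to the JVM:

```java
public class CoreCount {
    public static void main(String[] args) {
        // Logical processors available to the JVM; parallel speedup
        // is only possible when this is greater than one.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("available cores: " + cores);
    }
}
```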
Concurrent vs. Parallel Execution
Programs may not always benefit from parallel execution
• For example, the software drivers that handle I/O devices, like a mouse, keyboard, and hard
drive, need to execute concurrently. They're managed by the operating system as
independent things that get executed, as needed.
• In a multi-core system, the execution of those drivers might get split amongst the available
processors. However, since I/O operations occur rather infrequently relative to the speed at
which a computer operates, we don't really gain anything from parallel execution.
• Those sparse independent tasks can run just fine on a single processor, and we wouldn't feel
a difference.
Concurrent vs. Parallel Execution
Programs may not always benefit from parallel execution
• Concurrent programming is useful for I/O-dependent tasks like graphical user interfaces.
When the user clicks a button to execute an operation, that might take a while.
• To avoid locking up the user interface until it's completed, we can run the operation in a
separate concurrent thread.
• This leaves the thread that's running the UI free to accept new inputs. That sort of I/O-
dependent task is a good use case for concurrency.
Concurrent vs. Parallel Execution
Programs may not always benefit from parallel execution
• Parallel processing really becomes useful for computationally intensive tasks, such as
calculating the result of multiplying two matrices together.
• When large math operations can be divided into independent subparts, executing those parts
in parallel on separate processors can really speed things up.
Execution Scheduling
Deciding which goes first
• Threads don't just execute whenever they want to. A computer might have hundreds of
processes, with thousands of threads, that all want their turn to run on just a handful of
processors.
• It's the operating system's job to decide which goes first.
• The OS includes a scheduler that controls when different threads and processes get their
turn to execute on the CPU.
• The scheduler makes it possible for multiple programs to run concurrently on a single
processor.
Execution Scheduling
Deciding which goes first
• When a process is created and ready to run, it gets loaded into memory and placed in the
ready queue.
• The scheduler cycles through the ready processes so they each get a chance to execute on the processor.
• If there are multiple processors, then the OS will schedule processes to run on each of them,
to make the most use of the additional resources.
Execution Scheduling
Deciding which goes first
• A process will run until it finishes, and then the scheduler will assign another process to
execute on that processor.
• Or, a process might get blocked and have to wait for an I/O event, in which case it will go
into a separate I/O waiting queue, so another process can run.
• Or, the scheduler might determine that a process has spent its fair share of time on the
processor, and swap it out for another process from the ready queue.
Execution Scheduling
Context Switch
• The operating system has to save the state, or context, of the process that was running, so it
can be resumed later, and it has to load the context of the new process that's about to run.
• For the new process that just got scheduled, its state information is loaded and it begins
executing.
• Context switches are not instantaneous. It takes time to save and restore the registers and
memory state, so the scheduler needs a strategy for how frequently it switches between
processes.
Execution Scheduling
Algorithms
• There's a wide variety of algorithms that different operating system schedulers implement.
– Preemptive, which means they may pause, or preempt, a running, low-priority task when a higher priority
task enters the ready state.
– In non-preemptive algorithms, once a process enters the running state, it'll be allowed to run for its allotted
time.

• Which algorithm a scheduler chooses to implement will depend on its goals. Some schedulers
might try to maximize throughput, or the amount of work they complete in a given time,
whereas others might aim to minimize latency, to improve the system's responsiveness.
Execution Scheduling
Scheduling
• Different operating systems have different purposes, and a desktop OS like Windows will
have a different set of goals and use a different type of scheduler than a real-time OS for
embedded systems.

• Avoid running programs expecting that multiple threads or processes will execute in a certain
order, or for an equal amount of time, because the operating system may choose to schedule
them differently from run to run.
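That last point can be observed directly. In this small Java sketch (thread names are arbitrary), the interleaving of output lines from the two threads is decided by the OS scheduler and may differ from run to run:

```java
public class SchedulingDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };
        Thread a = new Thread(worker, "A");
        Thread b = new Thread(worker, "B");
        a.start();
        b.start();
        // Both threads finish, but the order their lines appear in is
        // up to the scheduler and can change between runs.
        a.join();
        b.join();
    }
}
```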
Thread Lifecycle
Four Phases of the
Thread Lifecycle:
1. New
2. Runnable
3. Blocked
4. Terminated
Thread Lifecycle
• When a new process or program begins running, it will start with just one thread, which is
called the main thread, because it's the main one that runs when the program begins.
• That main thread can then start or spawn additional threads to help out, referred to as its child
threads, which are part of the same process but execute independently to do other tasks.
• Those threads can spawn their own children if needed, and as each of those threads
finish executing, they'll notify their parent and terminate, with the main thread usually being
the last to finish execution.
Thread Lifecycle
• Over the life cycle of a thread, from creation through execution and finally termination, threads
will usually be in one of four states.
• Part of creating a new thread is assigning it a function, the code it's going to execute.
• Some programming languages require you to explicitly start a thread after creating it.
• In the runnable state, a thread is ready for the operating system to schedule it for execution.
• Through context switches, a thread will get swapped out with other threads to run on one of
the available processors.
Thread Lifecycle
• When a thread needs to wait for an event to occur, like an external input or a timer, it goes
into a blocked state while it waits.
• A blocked thread is not using any CPU resources; once the event it was waiting on occurs, the
OS returns it to the runnable state.
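Java exposes these lifecycle phases through `Thread.getState()`. The sketch below (timings are illustrative) walks one thread from new, through a timed wait while it's blocked on a sleep, to terminated:

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100);  // thread blocks on a timer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println(t.getState());  // NEW: created but not yet started
        t.start();
        Thread.sleep(50);                  // give it time to reach the sleep
        System.out.println(t.getState());  // usually TIMED_WAITING at this point
        t.join();                          // wait for it to finish
        System.out.println(t.getState());  // TERMINATED
    }
}
```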
Daemon Thread
Garbage Collector
• We often create threads to provide some sort of service, or perform a periodic task in support
of the main program.
• A garbage collector is a form of automatic memory management that runs in the
background and attempts to reclaim garbage, or memory that's no longer being used by the
program.
• Threads that are performing background tasks, like garbage collection, can be detached from
the main program by making them what's called a daemon thread.
Daemon Thread
• A daemon thread is a thread that will not prevent the program from exiting if it's still running.
• By default, new threads are usually spawned as non-daemon or normal threads, and you
have to explicitly turn a thread into a daemon or background thread.
• When the main thread is finished executing and there aren't any non-daemon threads left
running, the process can terminate.
Daemon Thread
• That's fine in the case of a garbage collection routine, because all of the memory this process
was using will get cleared as part of terminating it.
• But if it was doing some sort of I/O operation, like writing to a file, then terminating in the
middle of that operation could end up corrupting data.
• If you detach a thread to make it a background task, make sure it won't have any negative
side-effects if it prematurely exits.
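In Java, a thread is marked as a daemon with `setDaemon(true)`, which must be called before `start()`. A small sketch (the background task here is a stand-in for something like garbage collection):

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread background = new Thread(() -> {
            while (true) {
                // periodic background work, e.g. a cleanup task
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        background.setDaemon(true);  // must be set before start()
        background.start();
        System.out.println("main exiting");
        // Run standalone, the JVM exits here even though the
        // daemon thread is still looping: daemons don't keep it alive.
    }
}
```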
Module 2B

Mutual Exclusion

Learning Objectives

Understand the issue of data race and how it can be overcome

Data Race
• One of the main challenges of writing concurrent programs is identifying the possible
dependencies between threads to make sure they don't interfere with each other and cause
problems.

• Data races are a common problem that can occur when two or more threads are
concurrently accessing the same location in memory, and at least one of those threads
is writing to that location to modify its value.

• Fortunately, you can protect your program against data races by using synchronization
techniques.
Data Race
• With concurrent threads, it's up to the operating system to schedule when each of them gets to
execute.

• The unpredictability of when threads get scheduled means sometimes the data race will occur
and cause problems, but other times everything might work just fine. That inconsistency
makes data races a real challenge to recognize and debug.
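A minimal Java sketch of a data race (the counter and iteration counts are illustrative): two threads increment a shared variable without synchronization, so `counter++`, which is really a read-modify-write sequence, can lose updates and the final value varies per run, often less than the expected 2,000,000.

```java
public class DataRace {
    static int counter = 0;  // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++;  // read-modify-write: not atomic!
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        // Expected 2,000,000, but lost updates make the result unpredictable.
        System.out.println("counter = " + counter);
    }
}
```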
Mutual Exclusion
• Anytime multiple threads are concurrently reading and writing a shared resource, it creates
the potential for incorrect behavior, like a data race.
• But we can defend against that by identifying and protecting critical sections of code.
• A critical section, or critical region, is part of a program that accesses a shared resource,
such as a data structure in memory or an external device, and it may not operate correctly if
multiple threads concurrently access it.
• The critical section needs to be protected so that it only allows one thread or process to
execute in it at a time.
Mutual Exclusion
Mutex
• A mutex, short for mutual exclusion, is a lock. Only one thread or process can have possession
of the lock at a time, so it can be used to prevent multiple threads from simultaneously
accessing a shared resource, forcing them to take turns.

• The operation to acquire the lock is an atomic operation, which means it's always executed
as a single, indivisible action.

• To the rest of the system, an atomic operation appears to happen instantaneously, even if
under the hood, it really takes multiple steps.
Mutual Exclusion
Mutex
• The key here is that an atomic operation is uninterruptible.

• Threads that try to acquire a lock that's currently possessed by another thread can pause
and wait until it's available.

• Since threads can get blocked and stuck waiting for a thread in the critical section to finish
executing, it's important to keep the section of code protected with the mutex as short as
possible.
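In Java, one way to apply this is a `ReentrantLock` from `java.util.concurrent.locks` (a sketch; `synchronized` blocks are another option). Acquiring the lock around the short critical section fixes the racy counter, so only one thread increments at a time:

```java
import java.util.concurrent.locks.ReentrantLock;

public class MutexDemo {
    static int counter = 0;
    static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                lock.lock();        // acquire: blocks if another thread holds it
                try {
                    counter++;      // critical section: kept as short as possible
                } finally {
                    lock.unlock();  // always release, even on exceptions
                }
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("counter = " + counter);  // always 2000000
    }
}
```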
References
• Kirk, D. (2016). Programming Massively Parallel Processors: A Hands-On Approach. USA:
Morgan Kaufmann.
• Balaji, P. (2015). Programming Models for Parallel Computing (Scientific and Engineering
Computation). Massachusetts: The MIT Press.
• Barlas, G. (2015). Multicore and GPU Programming (An Integrated Approach). USA: Morgan
Kaufmann.
• Stone, B. (2019). Parallel and Concurrent Programming with Java 1, LinkedIn Learning, viewed
31 March 2020, <https://www.linkedin.com/learning/parallel-and-concurrent-programming-
with-java-1>.
