Parallel Algorithms Complete Notes
An algorithm is a sequence of steps that takes inputs from the user and, after some computation,
produces an output. A parallel algorithm is an algorithm that can execute several instructions
simultaneously on different processing devices and then combine the individual outputs to
produce the final result.
Concurrent Processing
The easy availability of computers along with the growth of the Internet has changed the way we store
and process data. We live in an age where data is available in abundance. Every day
we deal with huge volumes of data that require complex computation, and that too in quick time.
Sometimes we need to fetch data from similar or interrelated events that occur simultaneously. This
is where we require concurrent processing, which can divide a complex task and process it on multiple
systems to produce the output in quick time.
Concurrent processing is essential where the task involves processing a huge bulk of complex data.
Examples include − accessing large databases, aircraft testing, astronomical calculations, atomic
and nuclear physics, biomedical analysis, economic planning, image processing, robotics, weather
forecasting, web-based services, etc.
What is Parallelism?
Parallelism is the process of executing several sets of instructions simultaneously. It reduces the
total computational time. Parallelism can be implemented by using parallel computers, i.e.
computers with many processors. Parallel computers require parallel algorithms, programming
languages, compilers, and operating systems that support multitasking.
What is an Algorithm?
An algorithm is a sequence of instructions followed to solve a problem. While designing an
algorithm, we should consider the architecture of the computer on which the algorithm will be executed.
As per the architecture, there are two types of computers −
● Sequential Computer
● Parallel Computer
Depending on the architecture of computers, we have two types of algorithms −
● Sequential Algorithm − An algorithm in which the instructions are executed one after another,
in chronological order, to solve a problem.
● Parallel Algorithm − An algorithm in which the problem is divided into sub-problems that are
executed in parallel to get individual outputs. Later on, these individual outputs are combined
to get the final desired output.
It is not easy to divide a large problem into sub-problems. Sub-problems may have data
dependencies among them. Therefore, the processors have to communicate with each other to solve
the problem.
It is often found that the time the processors spend communicating with each other exceeds the
actual processing time. So, while designing a parallel algorithm, proper CPU utilization
should be considered to get an efficient algorithm.
To design an algorithm properly, we must have a clear idea of the basic model of computation in a parallel computer.
Time Complexity
The main reason behind developing parallel algorithms was to reduce the computation time of an
algorithm. Thus, evaluating the execution time of an algorithm is extremely important in analyzing its
efficiency.
Execution time is measured on the basis of the time taken by the algorithm to solve a problem. The
total execution time is calculated from the moment the algorithm starts executing to the
moment it stops. If all the processors do not start or end execution at the same time, then the total
execution time of the algorithm spans from the moment the first processor starts its execution to the
moment the last processor stops its execution.
Time complexity of an algorithm can be classified into three categories −
● Worst-case complexity − When the amount of time required by an algorithm for a given
input is maximum.
● Average-case complexity − When the amount of time required by an algorithm for a given
input is average.
● Best-case complexity − When the amount of time required by an algorithm for a given input
is minimum.
Asymptotic Analysis
The complexity or efficiency of an algorithm is the number of steps executed by the algorithm to get
the desired output. Asymptotic analysis is carried out to calculate the complexity of an algorithm in its
theoretical analysis. In asymptotic analysis, a large input length is used to calculate the complexity
function of the algorithm.
Note − An asymptote is a line that a curve approaches ever more closely but never meets. Here
the line and the curve are asymptotic to each other.
Asymptotic notation is the easiest way to describe the fastest and slowest possible execution times
of an algorithm using upper and lower bounds. For this, we use the following
notations −
● Big O notation
● Omega notation
● Theta notation
Big O notation
In mathematics, Big O notation is used to represent the asymptotic characteristics of functions. It
describes the behavior of a function for large inputs in a simple and accurate way. It is a method
of representing the upper bound of an algorithm's execution time, i.e., the longest amount of
time the algorithm could take to complete its execution. We write
f(n) = O(g(n))
if and only if there exist positive constants c and n0 such that f(n) ≤ c * g(n) for all n ≥ n0.
For example, f(n) = 3n² + 2n = O(n²), since 3n² + 2n ≤ 5n² for all n ≥ 1 (take c = 5 and n0 = 1).
Omega notation
Omega notation is a method of representing the lower bound of an algorithm's execution time. We write
f(n) = Ω(g(n))
if and only if there exist positive constants c and n0 such that f(n) ≥ c * g(n) for all n ≥ n0.
Theta Notation
Theta notation is a method of representing both the lower bound and the upper bound of an
algorithm's execution time. We write
f(n) = θ(g(n))
if and only if there exist positive constants c1, c2, and n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
Speedup of an Algorithm
The performance of a parallel algorithm is determined by calculating its speedup. Speedup is
defined as the ratio of the worst-case execution time of the fastest known sequential algorithm for a
particular problem to the worst-case execution time of the parallel algorithm −
Speedup = Worst-case execution time of the fastest known sequential algorithm for the problem /
Worst-case execution time of the parallel algorithm
Total Cost
The total cost of a parallel algorithm is the product of its time complexity and the number of processors
used in that particular algorithm −
Total Cost = Time complexity × Number of processors used
Therefore, the efficiency of a parallel algorithm, i.e. the speedup obtained per processor, is −
Efficiency = Speedup / Number of processors used
= Worst-case execution time of the fastest known sequential algorithm /
(Number of processors used × Worst-case execution time of the parallel algorithm)
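As a quick illustration (the timings below are hypothetical, chosen only for the example), the following sketch computes these metrics for a problem that takes 100 seconds sequentially and 25 seconds in parallel on 8 processors:

# Hypothetical timings, for illustration only
t_sequential = 100.0   # worst-case time of the fastest known sequential algorithm (seconds)
t_parallel = 25.0      # worst-case time of the parallel algorithm (seconds)
processors = 8

speedup = t_sequential / t_parallel      # 4.0
total_cost = t_parallel * processors     # 200.0 processor-seconds
efficiency = speedup / processors        # 0.5, i.e. half of the ideal speedup per processor

print(speedup, total_cost, efficiency)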
Data Parallel Model
● It is a simple model.
● Tasks are statically assigned to processes, and each task performs similar types of
operations on different data.
● Data parallelism is the result of identical operations being applied concurrently to different
data items, e.g. SIMD.
● Work may be done in phases.
● The data-parallel model can be applied to both shared-address-space and message-passing
paradigms.
● Interaction overheads can be reduced by selecting a locality-preserving decomposition, by
using optimized collective interaction routines, or by overlapping computation and interaction.
● The primary characteristic of data-parallel problems is that the intensity of data
parallelism increases with the size of the problem, which in turn makes it possible to use more
processes to solve larger problems.
● This makes the model effective for solving large problems.
● Example − Dense matrix multiplication (a minimal data-parallel sketch follows this list).
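The following is a minimal sketch of the data-parallel model in Python (the chunking scheme and the use of multiprocessing.Pool are illustrative choices, not part of the notes): each process applies the same operation to a different slice of the data, and the partial results are combined at the end.

from multiprocessing import Pool

def square_chunk(chunk):
    # Identical operation applied to a different data item (chunk) in each process
    return [x * x for x in chunk]

def split(data, parts):
    # Statically assign roughly equal slices of the data to each task
    size, extra = divmod(len(data), parts)
    chunks, start = [], 0
    for i in range(parts):
        end = start + size + (1 if i < extra else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

if __name__ == "__main__":
    data = list(range(16))
    with Pool(processes=4) as pool:
        partial_results = pool.map(square_chunk, split(data, 4))
    # Combine the individual outputs to produce the final result
    result = [y for chunk in partial_results for y in chunk]
    print(result)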
Task Graph Model
In the task graph model, parallelism is expressed by a task graph, which can be either trivial
or nontrivial. In this model, the correlation among the tasks is exploited to promote locality or to
minimize interaction costs. This model is used to solve problems in which the amount of data
associated with the tasks is large compared to the amount of computation associated with them. The
tasks are assigned to processes so as to reduce the cost of data movement among them.
Examples − Parallel quick sort, sparse matrix factorization, and parallel algorithms derived via the
divide-and-conquer approach.
Here, the problem is divided into atomic tasks and represented as a graph. Each task is an
independent unit of work that may depend on one or more antecedent tasks. After an antecedent
task completes, its output is passed to the dependent tasks. A task with antecedent
tasks starts execution only when all of its antecedent tasks have completed. The final output of the graph
is obtained when the last dependent task completes.
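A minimal sketch of the task graph model in Python follows (the particular graph, the task functions, and the use of concurrent.futures are illustrative assumptions): a task runs as soon as all of its antecedent tasks have finished, and the output of each antecedent task is passed to its dependents.

from concurrent.futures import ThreadPoolExecutor

def load(_):          return [3, 1, 2]           # Task 1: produce data
def sort_data(deps):  return sorted(deps[0])     # Task 2: depends on Task 1
def total(deps):      return sum(deps[0])        # Task 3: also depends on Task 1
def report(deps):     return (deps[0], deps[1])  # Task 4: depends on Tasks 2 and 3

# Task graph: task name -> (function, list of antecedent tasks)
graph = {
    "load":   (load, []),
    "sort":   (sort_data, ["load"]),
    "total":  (total, ["load"]),
    "report": (report, ["sort", "total"]),
}

def run(graph):
    results = {}
    remaining = dict(graph)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Tasks whose antecedents are all complete can run in parallel
            ready = [t for t, (_, deps) in remaining.items()
                     if all(d in results for d in deps)]
            futures = {t: pool.submit(remaining[t][0],
                                      [results[d] for d in remaining[t][1]])
                       for t in ready}
            for t, f in futures.items():
                results[t] = f.result()
                del remaining[t]
    return results

if __name__ == "__main__":
    print(run(graph)["report"])   # ([1, 2, 3], 6)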
Pipeline Model
It is also known as the producer-consumer model. Here a stream of data is passed through a
series of processes, each of which performs some task on it. The arrival of new data triggers
the execution of a new task by a process in the queue. The processes could form a queue in the
shape of linear or multidimensional arrays, trees, or general graphs with or without cycles.
This model is a chain of producers and consumers. Each process in the queue can be considered a
consumer of a sequence of data items for the process preceding it in the queue and a producer
of data for the process following it in the queue. The queue does not need to be a linear chain; it can
be a directed graph. The most common interaction minimization technique applicable to this model is
overlapping interaction with computation.
Example − Parallel LU factorization algorithm.
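A minimal sketch of a two-stage pipeline in Python follows (the stages, the sentinel convention, and the use of threading and queue are illustrative assumptions, not the LU factorization example itself): each stage consumes items from the stage before it and produces items for the stage after it.

import threading
import queue

SENTINEL = None  # marks the end of the data stream

def producer(out_q):
    for item in range(5):      # produce a stream of data items
        out_q.put(item)
    out_q.put(SENTINEL)

def stage(in_q, out_q, func):
    # Consume from the preceding process, produce for the following one
    while True:
        item = in_q.get()
        if item is SENTINEL:
            out_q.put(SENTINEL)
            break
        out_q.put(func(item))

if __name__ == "__main__":
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=producer, args=(q1,)),
        threading.Thread(target=stage, args=(q1, q2, lambda x: x * x)),  # stage 1: square
        threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)),  # stage 2: add one
    ]
    for t in threads:
        t.start()
    results = []
    while True:
        item = q3.get()
        if item is SENTINEL:
            break
        results.append(item)
    for t in threads:
        t.join()
    print(results)   # [1, 2, 5, 10, 17]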
Hybrid Models
A hybrid algorithm model is required when more than one model may be needed to solve a problem.
A hybrid model may be composed of either multiple models applied hierarchically or multiple models
applied sequentially to different phases of a parallel algorithm.
Example − Parallel quick sort
The following data structures are commonly used in parallel algorithms and are discussed below −
● Linked List
● Arrays
● Hypercube Network
Linked List
A linked list is a data structure having zero or more nodes connected by pointers. Nodes may or may
not occupy consecutive memory locations. Each node has two or three parts − one data part that
stores the data, and the other one or two are link fields that store the address of the previous and/or next node.
The first node's address is stored in an external pointer called head. The last node, known
as tail, generally does not contain the address of any node; its link field is null.
There are three types of linked lists −
● Singly linked list
● Doubly linked list
● Circular linked list
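A minimal sketch of a singly linked list in Python (the names Node and head are illustrative):

class Node:
    def __init__(self, data):
        self.data = data   # data part
        self.next = None   # link field storing the reference to the next node

if __name__ == "__main__":
    head = Node(1)             # external pointer to the first node
    head.next = Node(2)
    head.next.next = Node(3)   # tail node; its link field stays None
    node = head
    while node is not None:    # traverse the list by following the links
        print(node.data)
        node = node.next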
Arrays
An array is a data structure in which we can store data of the same type. It can be one-dimensional or
multi-dimensional. Arrays can be created statically or dynamically.
● In statically declared arrays, the dimension and size of the array are known at compile time.
● In dynamically declared arrays, the dimension and size of the array are known at runtime.
For shared-memory programming, an array can be used as common memory, and for data-parallel
programming, it can be used by partitioning it into sub-arrays, as in the sketch below.
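A minimal sketch of partitioning an array into sub-arrays for data-parallel work (the block decomposition below is one common scheme, assumed here for illustration): each process owns one contiguous block of the array.

def block_range(n, num_procs, rank):
    # Indices [lo, hi) of the sub-array owned by process `rank`
    # when an array of length n is split into num_procs blocks.
    base, extra = divmod(n, num_procs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

if __name__ == "__main__":
    data = list(range(10))
    for rank in range(3):
        lo, hi = block_range(len(data), 3, rank)
        print(rank, data[lo:hi])
    # 0 [0, 1, 2, 3]
    # 1 [4, 5, 6]
    # 2 [7, 8, 9]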
Hypercube Network
Hypercube architecture is helpful for those parallel algorithms where each task has to communicate
with other tasks. Hypercube topology can easily embed other topologies such as ring and mesh. A
hypercube is also known as an n-cube, where n is the number of dimensions. A hypercube can be
constructed recursively: an (n+1)-dimensional hypercube is obtained by connecting the corresponding
nodes of two n-dimensional hypercubes.
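In a hypercube, each of the 2^n nodes can be labelled with an n-bit number, and two nodes are connected exactly when their labels differ in a single bit. The following sketch (using this standard labelling, assumed here) lists each node's neighbours by flipping one bit at a time:

def hypercube_neighbors(node, n):
    # Neighbours of `node` in an n-dimensional hypercube: flip each of the n bits
    return [node ^ (1 << bit) for bit in range(n)]

if __name__ == "__main__":
    n = 3   # 3-dimensional hypercube with 2**3 = 8 nodes
    for node in range(2 ** n):
        neighbors = [format(v, "03b") for v in hypercube_neighbors(node, n)]
        print(format(node, "03b"), neighbors)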
Parallel Algorithm - Design Techniques
Selecting a proper design technique for a parallel algorithm is the most difficult and important
task. Most parallel programming problems may have more than one solution. In this chapter,
we will discuss the following design techniques for parallel algorithms −
● Divide and conquer
● Greedy method
● Dynamic programming
● Backtracking
● Linear programming
Divide and Conquer Method
In the divide and conquer approach, the problem is divided into several small sub-problems, the
sub-problems are solved (often recursively and in parallel), and their results are combined to get the
final solution. Typical problems solved with this approach include −
● Binary search
● Quick sort
● Merge sort
● Integer multiplication
● Matrix inversion
● Matrix multiplication
A minimal parallel divide-and-conquer sketch follows this list.
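The sketch below illustrates divide and conquer in parallel, using merge sort as the example (the two-way split and the use of multiprocessing are illustrative assumptions): the two halves are sorted as independent sub-problems in parallel, and their outputs are merged to form the final result.

from multiprocessing import Pool

def merge(left, right):
    # Combine two sorted sub-results into one sorted list
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(data):
    # Sequential divide and conquer used inside each worker
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    return merge(merge_sort(data[:mid]), merge_sort(data[mid:]))

if __name__ == "__main__":
    data = [5, 2, 9, 1, 7, 3, 8, 6]
    mid = len(data) // 2
    with Pool(processes=2) as pool:
        # Divide: solve the two halves as independent sub-problems in parallel
        left, right = pool.map(merge_sort, [data[:mid], data[mid:]])
    # Combine the individual outputs into the final result
    print(merge(left, right))   # [1, 2, 3, 5, 6, 7, 8, 9]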
Greedy Method
In a greedy algorithm for an optimization problem, the best available choice is made at each moment. A
greedy algorithm is very easy to apply to complex problems: at each step it picks the option that looks
best right now, without worrying about its effect on later steps.
The algorithm is called greedy because, once the optimal choice for the smaller instance is made,
it does not reconsider the problem as a whole. Once a choice is made, the greedy algorithm never
revisits it.
A greedy algorithm builds a solution step by step from the smallest possible
component parts. Recursion is a procedure to solve a problem in which the solution to a specific
problem is dependent on the solution of a smaller instance of that problem.
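A minimal greedy sketch, using coin change as the example (the coin set and the goal of minimizing the number of coins are illustrative assumptions; note that the greedy choice is only optimal for certain coin systems):

def greedy_change(amount, coins):
    # At every step pick the largest coin that still fits, and never reconsider it
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

if __name__ == "__main__":
    print(greedy_change(67, [1, 2, 5, 10, 20, 50]))   # [50, 10, 5, 2]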
Dynamic Programming
Dynamic programming is an optimization technique which divides the problem into smaller
sub-problems and, after solving each sub-problem, combines their solutions to get the ultimate
solution. Unlike the divide and conquer method, dynamic programming reuses the solutions of the
sub-problems many times.
Computing the Fibonacci series with memoization, i.e. storing each sub-problem's result so it is
computed only once, is a simple example of dynamic programming.
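A minimal dynamic programming sketch for the Fibonacci numbers (the cache provided by lru_cache plays the role of the table of sub-problem solutions that gets reused):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each sub-problem is solved once; its solution is stored and reused
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]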
Backtracking Algorithm
Backtracking is a technique for solving combinatorial problems. It is applied to both
programmatic and real-life problems. The eight queens problem, Sudoku puzzles, and finding a way
through a maze are popular examples where the backtracking algorithm is used.
In backtracking, we start with a partial solution that satisfies all the required conditions so far. Then we
move to the next level, and if that level does not produce a satisfactory solution, we return one level
back and continue with a new option.
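A minimal backtracking sketch for the eight queens problem (the column-by-column placement below is one common formulation, assumed here for illustration): queens are placed one column at a time, and whenever a placement cannot be extended, the algorithm returns one level back and tries the next row.

def solve_queens(n, placed=()):
    # placed[c] is the row of the queen already placed in column c
    col = len(placed)
    if col == n:                        # all columns filled: a complete solution
        return placed
    for row in range(n):
        # Check the new queen against every queen placed so far
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placed)):
            solution = solve_queens(n, placed + (row,))
            if solution:                # satisfactory: keep going deeper
                return solution
            # otherwise backtrack: return one level up and try the next row
    return None

if __name__ == "__main__":
    print(solve_queens(8))   # e.g. (0, 4, 7, 5, 2, 6, 1, 3)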
Linear Programming
Linear programming describes a wide class of optimization jobs in which both the optimization criterion
and the constraints are linear functions. It is a technique used to get the best outcome, such as maximum
profit, shortest path, or lowest cost.
In this kind of programming, we have a set of variables and we have to assign values to them so as to
satisfy a set of linear constraints and to maximize or minimize a given linear objective function.
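A minimal linear programming sketch using SciPy (the objective and constraints below are made-up numbers, purely for illustration): maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0.

from scipy.optimize import linprog

# linprog minimizes, so we minimize -(3x + 2y) in order to maximize 3x + 2y
c = [-3, -2]
A_ub = [[1, 1],    # x +  y <= 4
        [1, 3]]    # x + 3y <= 6
b_ub = [4, 6]
bounds = [(0, None), (0, None)]   # x >= 0, y >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)   # optimal point [4. 0.] and maximum objective value 12.0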