Parallel and Distributed Computing
Lecture 02
Spring 2021
FAST – NUCES, Faisalabad Campus
Agenda
Karp-Flatt Metric
Types of Parallelism
Data-parallelism
Functional-parallelism
Pipelining
Multi-processor vs Multi-computer
Derivation
Suppose you have sequential code for a problem that executes in total time T(s), and let T(p) be the execution time of the parallel version of the same algorithm on p processors.
Then speedup can be calculated as:
Speedup(P) = T(s) / T(p)
T(p) is the serial computation time plus the parallel computation time. If F is the fraction of the computation that is inherently serial, then:
T(p) = F · T(s) + (1 − F) · T(s) / P
Derivation
Again,
Speedup(P) = T(s) / T(p) = T(s) / ( F · T(s) + (1 − F) · T(s) / P )
⇒ Speedup(P) = 1 / ( F + (1 − F) / P )
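The closed form above is easy to evaluate directly. The sketch below (function name `amdahl_speedup` is my own, not from the slides) computes the predicted speedup for a given serial fraction F and processor count P:

```python
def amdahl_speedup(F, P):
    """Speedup predicted by the derivation above:
    Speedup(P) = 1 / (F + (1 - F) / P),
    where F is the serial fraction and P the processor count."""
    return 1.0 / (F + (1.0 - F) / P)

# With a 10% serial fraction, 8 processors give well under 8x speedup:
print(round(amdahl_speedup(0.10, 8), 2))  # → 4.71
```

Note how the serial fraction dominates: even as P grows without bound, the speedup is capped at 1/F.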
1. Data-parallelism
When there are independent sub-tasks applying the same operation to different elements of a data set, i.e., when just distributing the data provides sufficient parallelism.
Example code:
for i = 0 to 99 do
    a[i] = b[i] + c[i]
endfor
Here the same operation (i.e., addition) is performed on the first 100 elements of 'b' and 'c', so all 100 iterations of the loop could be executed simultaneously.
CS416 - Spring 2021
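The loop above can be sketched in Python by splitting the index range into chunks and handing each chunk to a worker; every worker applies the same operation (addition) to its own slice of the data. The helper names (`add_chunk`, `parallel_add`) are illustrative, not from the slides:

```python
from concurrent.futures import ThreadPoolExecutor

def add_chunk(args):
    # Each worker applies the same operation (addition) to its slice.
    b_chunk, c_chunk = args
    return [x + y for x, y in zip(b_chunk, c_chunk)]

def parallel_add(b, c, workers=4):
    # Split the index range into chunks; each chunk is an independent
    # sub-task, so just distributing the data provides the parallelism.
    n = len(b)
    step = (n + workers - 1) // workers
    chunks = [(b[i:i + step], c[i:i + step]) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(add_chunk, chunks)
    return [x for chunk in results for x in chunk]

b = list(range(100))
c = list(range(100))
a = parallel_add(b, c)
print(a[:5])  # → [0, 2, 4, 6, 8]
```

(In CPython, threads mainly illustrate the structure; for CPU-bound numeric work a process pool or a vectorized library would give real speedup.)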
Types of Parallelism
2. Functional-parallelism
When there are independent tasks applying different operations to different data elements.
Example code:
1) a = 2
2) b = 3
3) m = (a + b) / 2
4) s = (a² + b²) / 2
5) v = s − m²
Here statements (1, 2) could be performed concurrently, and likewise statements (3, 4), since each pair is mutually independent; statement 5 must wait for both 3 and 4 to finish.
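The independent statements above can be sketched with two concurrent tasks, one per operation; this is a minimal illustration of the dependency structure, not a claim that threading these tiny computations is worthwhile:

```python
from concurrent.futures import ThreadPoolExecutor

a, b = 2, 3                                       # statements 1 and 2

# Statements 3 and 4 apply different operations and are independent,
# so they can run concurrently:
with ThreadPoolExecutor(max_workers=2) as pool:
    f_m = pool.submit(lambda: (a + b) / 2)        # statement 3: mean
    f_s = pool.submit(lambda: (a**2 + b**2) / 2)  # statement 4: mean of squares
    m, s = f_m.result(), f_s.result()

v = s - m**2                                      # statement 5 needs both results
print(m, s, v)  # → 2.5 6.5 0.25
```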
3. Pipelining
Usually used for problems where a single instance of the problem cannot be parallelized.
The whole computation of each instance is divided into multiple stages, and the output of one stage is the input of the next; this pays off when there are multiple instances of the problem to process.
An effective method of attaining parallelism (concurrency) even on uniprocessor architectures; its effectiveness also depends on the pipelining capabilities of the processor.
[Figure: Sequential Execution vs. Pipelining]
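The stage structure can be sketched with chained Python generators (stage names and the square/offset operations are invented for illustration). Each stage consumes the previous stage's output; conceptually, while stage 3 handles instance i, stage 2 can work on instance i+1 and stage 1 on instance i+2:

```python
def read_stage(items):
    # Stage 1: produce raw instances of the problem.
    for x in items:
        yield x

def square_stage(source):
    # Stage 2: its input is the output of stage 1.
    for x in source:
        yield x * x

def offset_stage(source):
    # Stage 3: its input is the output of stage 2.
    for x in source:
        yield x + 1

# Chain the stages into a pipeline and drain it:
pipeline = offset_stage(square_stage(read_stage(range(5))))
print(list(pipeline))  # → [1, 2, 5, 10, 17]
```

Generators model the stage structure on a single processor (items flow through the stages interleaved); real pipeline hardware or a multi-threaded pipeline would additionally overlap the stages in time.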