Parallel and Distributed Computing Lecture 02

The document discusses parallel computing concepts including Amdahl's law, the Karp-Flatt metric, and types of parallelism such as data, functional, and pipelining parallelism. Amdahl's law defines the theoretical maximum speedup that can be achieved from parallel processing based on the fraction of a program that can be parallelized. The Karp-Flatt metric calculates the serial fraction of a parallel program based on its speedup on a given number of processors. Different types of parallelism are described including data parallelism where the same operation is applied to different data elements, functional parallelism where different operations are applied to different data, and pipelining parallelism which divides a problem into stages.


CS416 – Parallel and Distributed Computing
Lecture # 02
Spring 2021
FAST – NUCES, Faisalabad Campus
Agenda
- Amdahl’s Law of Parallel Speedup
- Karp-Flatt Metric
- Types of Parallelism
  - Data-parallelism
  - Functional-parallelism
  - Pipelining
- Multi-processor vs Multi-computer
- Cluster vs Network of workstations



Amdahl’s Law
- Amdahl’s law was formulated in 1967.
- It gives an upper bound on the maximum speedup that can be achieved.
- Suppose you are going to design a parallel algorithm for a problem.
- Further suppose that the fraction of the total time that the algorithm must spend in serial execution is F.
- This implies that the fraction of the parallel portion is (1 − F).
- Now, Amdahl’s law states that

      Speedup(P) = 1 / (F + (1 − F) / P)

- Here P is the total number of available processing nodes.
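To make the formula concrete, here is a minimal Python sketch; the function name amdahl_speedup and its signature are illustrative, not part of the lecture:

    def amdahl_speedup(serial_fraction: float, processors: float) -> float:
        """Maximum theoretical speedup per Amdahl's law.

        serial_fraction -- F, the fraction of time that must run serially
        processors      -- P, the number of available processing nodes
        """
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    if __name__ == "__main__":
        # Speedup grows with P but is always bounded above by 1 / F.
        for p in (1, 2, 4, 8, float("inf")):
            print(f"P = {p}: speedup = {amdahl_speedup(0.3, p):.3f}")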



Amdahl’s Law
Derivation
- Suppose you have sequential code for a problem that can be executed in total time T(s).
- Let T(p) be the parallel time for the same algorithm over P processors.
- Then the speedup can be calculated as:

      Speedup(P) = T(s) / T(p)

- T(p) can be calculated as:

      T(p) = serial computation time + parallel computation time
      T(p) = F · T(s) + (1 − F) · T(s) / P


Amdahl’s Law
Derivation
- Again:

      Speedup(P) = T(s) / T(p) = T(s) / (F · T(s) + (1 − F) · T(s) / P)

      ⇒ Speedup(P) = 1 / (F + (1 − F) / P)

- What if you have an infinite number of processors?
- What do you have to do for further speedup?



Amdahl’s Law
- Example 1: Suppose 70% of a sequential algorithm is the parallelizable portion. The remaining part must be calculated sequentially. Calculate the maximum theoretical speedup for a parallel variant of this algorithm using (i) 4 processors and (ii) an infinite number of processors.
- F = 0.30 and 1 − F = 0.70; use Amdahl’s law to calculate the theoretical speedups (a worked check follows below).
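A worked check for Example 1 using the formula above; the printed values are rounded:

    # Example 1: F = 0.30, 1 - F = 0.70
    F = 0.30
    print(1 / (F + (1 - F) / 4))   # i)  1 / 0.475 ≈ 2.105 with 4 processors
    print(1 / F)                   # ii) ≈ 3.333, the limit as P → ∞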



Amdahl’s Law
- Example 2: Suppose 25% of a sequential algorithm is the parallelizable portion. The remaining part must be calculated sequentially. Calculate the maximum theoretical speedup for a parallel variant of this algorithm using 5 processors and an infinite number of processors.
  - ???
- Little challenge: Determine, according to Amdahl’s law, how many processors are needed to achieve the maximum theoretical speedup while the sequential portion remains the same.
  - The answer may be surprising. (The sweep sketch below is one way to explore it.)
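One way to explore Example 2 and the little challenge is to sweep P and watch how the speedup behaves. A minimal sketch, assuming F = 0.75 as in Example 2:

    F = 0.75  # serial fraction for Example 2 (only 25% is parallelizable)

    for p in (1, 2, 5, 10, 100, 1_000_000):
        speedup = 1 / (F + (1 - F) / p)
        print(f"P = {p:>9}: speedup = {speedup:.4f}")

    # No finite P reaches the bound; the speedup only approaches 1 / F.
    print(f"Upper bound as P → ∞: {1 / F:.4f}")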
Karp-Flatt Metric



Karp-Flatt Metric
- The metric is used to calculate the serial fraction for a given parallel configuration.
- i.e., if a parallel program exhibits a speedup S while using P processing units, then the serial fraction e is given by:

      e = (1/S − 1/P) / (1 − 1/P)

- Example task: Suppose that in a parallel program you gained a speedup of 1.25x on 5 processors; determine the sequential fraction of your program (a small sketch for this calculation follows below).
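A minimal Python sketch of the Karp-Flatt calculation; the function name karp_flatt_serial_fraction is illustrative, and the example values are taken from the task above:

    def karp_flatt_serial_fraction(speedup: float, processors: int) -> float:
        """Experimentally determined serial fraction e = (1/S - 1/P) / (1 - 1/P)."""
        return (1 / speedup - 1 / processors) / (1 - 1 / processors)

    # Example task: speedup of 1.25x on 5 processors
    print(karp_flatt_serial_fraction(1.25, 5))  # (0.8 - 0.2) / 0.8 = 0.75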



Types of Parallelism



Types of Parallelism
1. Data-parallelism
- When there are independent sub-tasks applying the same operation to different elements of a data set
- When just distributing the data provides sufficient parallelism
- Example code:

      for i = 0 to 99 do
          a[i] = b[i] + c[i]
      endfor

- Here the same operation (i.e., addition) is performed on the first 100 elements of ‘b’ and ‘c’.
- All 100 iterations of the loop could be executed simultaneously (a runnable sketch follows below).
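A minimal runnable sketch of the same element-wise addition in Python. The use of multiprocessing.Pool here is illustrative; any mechanism that applies the same operation to different pieces of the data would demonstrate data parallelism:

    from multiprocessing import Pool

    def add_pair(pair):
        """Apply the same operation (addition) to one pair of elements."""
        b_i, c_i = pair
        return b_i + c_i

    if __name__ == "__main__":
        b = list(range(100))
        c = list(range(100, 200))

        # All 100 additions are independent; the pool distributes them
        # across worker processes so they can run simultaneously.
        with Pool() as pool:
            a = pool.map(add_pair, zip(b, c))

        print(a[:5])  # [100, 102, 104, 106, 108]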
Types of Parallelism
2. Functional-parallelism
- When there are independent tasks applying different operations to different data elements
- Example code:

      1) a = 2
      2) b = 3
      3) m = (a + b) / 2
      4) s = (a² + b²) / 2
      5) v = s − m²

- Here statements (1, 2) and statements (3, 4) could be performed concurrently (a sketch follows below).
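A minimal sketch of the same computation with Python's concurrent.futures; the helper names are illustrative. Note that CPython threads illustrate the concurrency structure only; for CPU-bound work a process pool would be needed for true parallel execution:

    from concurrent.futures import ThreadPoolExecutor

    a, b = 2, 3  # statements 1 and 2 are independent of each other

    def mean():             # statement 3
        return (a + b) / 2

    def mean_of_squares():  # statement 4
        return (a**2 + b**2) / 2

    with ThreadPoolExecutor(max_workers=2) as pool:
        # Statements 3 and 4 apply different operations to the same inputs
        # and do not depend on each other, so they can run concurrently.
        f_m = pool.submit(mean)
        f_s = pool.submit(mean_of_squares)
        m, s = f_m.result(), f_s.result()

    v = s - m**2  # statement 5 depends on both results
    print(m, s, v)  # 2.5 6.5 0.25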



Types of Parallelism
3. Pipelining
- Usually used for problems where a single instance of the problem cannot be parallelized
- The output of one stage is the input of the next stage
- Divide the whole computation of each instance into multiple stages (when there are multiple instances of the problem)
- An effective method of attaining parallelism (concurrency) on uniprocessor architectures
- Also depends on the pipelining abilities of the processor (a software-pipeline sketch follows below)
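A minimal sketch of a software pipeline built from Python generators; the three stage functions are illustrative. This shows the pipeline structure, where each stage consumes the previous stage's output; real overlap of stages would come from threads, processes, or the processor's hardware pipeline:

    def stage1(items):
        """First stage: prepare each problem instance."""
        for x in items:
            yield x * 2

    def stage2(items):
        """Second stage: its input is the output of stage 1."""
        for x in items:
            yield x + 1

    def stage3(items):
        """Final stage: consume the output of stage 2."""
        for x in items:
            print(x)

    # Instances flow through the stages one after another.
    stage3(stage2(stage1(range(5))))  # prints 1, 3, 5, 7, 9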



Types of Parallelism
3. Pipelining
- Example: Assembly line analogy
  [Figure: sequential execution of the assembly-line analogy]
  [Figure: pipelined execution of the assembly-line analogy]
- Example: Overlap instructions in a single instruction cycle to achieve parallelism
  [Figure: 4-stage pipelining]


Questions