Parallel Programming Design Strategies

The document discusses parallel programming concepts, emphasizing the importance of task agglomeration to enhance efficiency by reducing overhead in task coordination. It outlines different parallel programming models, including SISD, SIMD, MISD, MIMD, and various communication methods like shared memory and message passing. Additionally, it highlights the significance of task dependency graphs and the critical path length in optimizing parallel performance.


Parallel Program Design
Pacheco, Peter, and Matthew Malensek. An Introduction to Parallel Programming, 2nd ed. Morgan Kaufmann, 2021. (See Chapter 2 on parallel programming models.)
Agenda

1. Parallel Programming
2. PCAM
3. Granularity
4. Task Dependency
Parallel Programming
Problem Understanding
Parallel Program Design Process
Agglomeration in Parallel Computing
Scenario: Distributing work among students

Without Agglomeration (Fine-Grained Tasks):


1. Task: Collect 100 books from the library.
2. Each student is asked to collect 1 book at a time.
3. 100 students needed → lots of coordination and confusion.
4. Overhead: too much communication and waiting.

With Agglomeration (Coarse-Grained Tasks):


1. Task: Collect 100 books.
2. Each student is asked to collect 10 books at a time.
3. Only 10 students needed → less coordination.
4. Benefit: Reduces communication and makes work efficient.

Key Idea:
Combine small tasks into bigger tasks to reduce overhead and improve efficiency.
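The key idea above can be sketched in code. This is a minimal, illustrative Python example (the chunk helper and task counts are assumptions mirroring the classroom scenario): grouping fine-grained tasks into coarse-grained chunks shrinks the number of tasks to coordinate.

```python
# A minimal sketch of agglomeration: grouping fine-grained tasks
# (1 book per assignment) into coarse-grained ones (10 books per
# assignment) reduces the number of coordination steps from 100 to 10.

def make_chunks(tasks, chunk_size):
    """Group a flat task list into chunks of chunk_size."""
    return [tasks[i:i + chunk_size] for i in range(0, len(tasks), chunk_size)]

books = list(range(100))          # 100 books to collect

fine = make_chunks(books, 1)      # 100 tasks -> 100 students, heavy coordination
coarse = make_chunks(books, 10)   # 10 tasks  -> 10 students, light coordination

print(len(fine), len(coarse))     # 100 10
```

The trade-off is the one the slides describe: larger chunks mean fewer messages and less waiting, at the cost of fewer opportunities for parallelism.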
PCAM Design Methodology (Partitioning, Communication, Agglomeration, Mapping)
Granularity
Fine-grain Parallelism
Coarse-grain Parallelism
Which is Best?
Multiplying a Matrix with a Vector
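The matrix-vector example can be sketched as a row-wise decomposition. This is a minimal illustration (the function name and sample values are assumptions): each entry of the result is an independent task, so an n×n matrix yields n concurrent tasks.

```python
# A minimal sketch of row-wise task decomposition for y = A @ x:
# each row dot-product is an independent task with no dependencies
# on the others, so all n tasks could run concurrently.

def matvec_rowwise(A, x):
    # Task i computes y[i] = sum_j A[i][j] * x[j].
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4]]
x = [5, 6]

print(matvec_rowwise(A, x))  # [17, 39]
```

Choosing one task per row (rather than one per element, or one per block of rows) is exactly the granularity decision discussed above.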
Granularity of Task Decompositions
Task Dependency
Task Dependency Graph

A task dependency graph can be represented as a directed graph or as an adjacency matrix.


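Both representations can be sketched briefly. This is an illustrative example (the task names are assumptions loosely modeled on the query-processing example that follows): the same dependency information stored as an adjacency list and as an adjacency matrix.

```python
# A minimal sketch of a task dependency graph stored two ways.
# deps[t] lists the tasks that must finish before task t can start.
tasks = ["scan", "filter", "join", "aggregate"]
deps = {
    "scan": [],
    "filter": ["scan"],
    "join": ["filter"],
    "aggregate": ["join"],
}

# Adjacency matrix: M[i][j] == 1 means tasks[j] depends on tasks[i],
# i.e. there is an edge tasks[i] -> tasks[j] in the dependency graph.
index = {t: i for i, t in enumerate(tasks)}
M = [[0] * len(tasks) for _ in tasks]
for t, preds in deps.items():
    for p in preds:
        M[index[p]][index[t]] = 1

print(M[index["scan"]][index["filter"]])  # 1: filter depends on scan
```

The list form is compact for sparse graphs; the matrix form makes "does j depend on i?" a constant-time lookup.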
Database Query Processing Example
Database Query Processing

An alternate decomposition of the given problem into subtasks, along with their data dependencies.
Database Query Processing
Alternate Decomposition
Task Dependency Graph

The task dependency graphs of the two database query decompositions.
Critical Path Length
Degree of Concurrency
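The two measures can be computed directly from a dependency graph. This is a minimal sketch (the task names, costs, and graph shape are illustrative): the critical path length is the longest weighted path through the graph, and the average degree of concurrency is total work divided by that length.

```python
# A minimal sketch: critical path length of a task dependency graph.
# deps[t] lists tasks that must finish before t; cost[t] is t's work.
# Tasks a and b can run concurrently; c needs both; d needs c.
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
cost = {"a": 10, "b": 6, "c": 11, "d": 7}

def critical_path_length(deps, cost):
    memo = {}
    def finish(t):
        # Earliest finish time of t: its own cost plus the latest
        # finish time among its predecessors.
        if t not in memo:
            memo[t] = cost[t] + max((finish(p) for p in deps[t]), default=0)
        return memo[t]
    return max(finish(t) for t in deps)

cpl = critical_path_length(deps, cost)
print(cpl)                        # 28  (path a -> c -> d: 10 + 11 + 7)

total_work = sum(cost.values())   # 34
print(total_work / cpl)           # average degree of concurrency, ~1.21
```

The critical path bounds parallel performance: no matter how many processors are available, execution cannot finish in less time than the critical path length.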
Limits on Parallel Performance
• SISD: Single instruction, single data (sequential processing).
• SIMD: Single instruction, multiple data (parallel data streams).
• MISD: Multiple instructions, single data (rare, specialized use).
• MIMD: Multiple instructions, multiple data (general parallel systems).
• Shared Memory Model: Multiple processors access common memory.
• Message Passing Model: Processors communicate by exchanging messages.
• Data Parallel Model: Same operation applied across different data sets.
• Task Parallel Model: Different tasks executed concurrently across processors.
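The data parallel model in the summary above can be sketched in a few lines. This is a minimal illustration (the squaring operation and pool size are assumptions): the same operation is applied element-wise across a data set, with the work distributed over a pool of workers.

```python
# A minimal sketch of the data parallel model: the same operation
# (squaring) is applied to every element of the data set, and the
# independent applications are spread across a pool of threads.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order even though workers run concurrently.
    results = list(pool.map(square, data))

print(results)  # [1, 4, 9, 16]
```

A task parallel version would instead submit *different* functions to the pool; the shared memory and message passing models differ in how the workers exchange data, not in what they compute.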

Summary (Week 03)


Thank you
