
PARALLEL AND DISTRIBUTED COMPUTING
CLOUD COMPUTING
DEFINITION
•Parallel computing (processing):
• The use of two or more processors (computers), usually within a single system, working simultaneously to solve a single problem.
•Distributed computing (processing):
• Any computing that involves multiple computers, remote from each other, that each play a role in a computation problem or information processing.
•Parallel programming:
• The human process of developing programs that express what computations should be executed in parallel.
WHAT IS PARALLEL COMPUTING?

Serial Computing:
•Traditionally, software has been written for serial computation:
•A problem is broken into a discrete series of instructions
•Instructions are executed sequentially, one after another
•Execution occurs on a single processor
•Only one instruction may execute at any moment in time
SERIAL EXECUTION
PARALLEL COMPUTING

•In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
•A problem is broken into discrete parts that can be solved concurrently
•Each part is further broken down to a series of instructions
•Instructions from each part execute simultaneously on different processors
•An overall control/coordination mechanism is employed
PARALLEL COMPUTING
THE INHERENT NEED FOR SPEED

•We want things done fast. If we can get it by the end of the
week, we actually want it tomorrow. If we can get it tomorrow, we
would really like it today. Let's face it, we're a society that doesn't
like to wait.

•Just think about the last time you stood in line at a fast food
restaurant and had to wait for more than a couple of minutes for
your order.
THE INHERENT NEED FOR SPEED

•This idea extends to other things, like the weather. We routinely check the hourly forecast to see what the weather will be like on our commute to and from work. We expect that there is a computer, behind the scenes, providing this information.

•But did you know that a single computer is often not up to the
task? That is where the idea of parallel computing comes in.
PARALLEL COMPUTING

•In simple terms, parallel computing is breaking up a task into smaller pieces and executing those pieces at the same time, each on its own processor or on a set of computers that have been networked together. Let's look at a simple example. Say we have the following equation:
•Y = (4 x 5) + (1 x 6) + (5 x 3)
•On a single processor, the steps needed to calculate a value for Y might look
like:
•Step 1: Y = 20 + (1 x 6) + (5 x 3)
•Step 2: Y = 20 + 6 + (5 x 3)
•Step 3: Y = 20 + 6 + 15
•Step 4: Y = 41
PARALLEL COMPUTING

•But in a parallel computing scenario, with three processors or computers, the steps look something like:
•Step 1: Y = 20 + 6 + 15
•Step 2: Y = 41
•Now, this is a simple example, but the idea is clear. Break the
task down into pieces and execute those pieces simultaneously.
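
A minimal sketch of this idea in Python, using the standard library's concurrent.futures (the three-worker setup here is purely illustrative):

```python
# Sketch of the example above: the three multiplications run in
# parallel, each in its own worker process.
from concurrent.futures import ProcessPoolExecutor
from operator import mul

pairs = [(4, 5), (1, 6), (5, 3)]

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=3) as pool:
        # Step 1: all three products are computed simultaneously.
        products = list(pool.map(mul, *zip(*pairs)))  # [20, 6, 15]
    # Step 2: one coordination step combines the partial results.
    y = sum(products)
    print(y)  # 41
```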
MORE EXAMPLES

•Suppose you have a lot of work to be done and want to get it done much faster, so you hire 100 workers.

•For example, if the job is to build a house, it can be broken up into plumbing, electrical, etc. However, while many jobs can be done at the same time, some have specific orderings, such as putting in the foundation before the walls can go up. If all of the workers are there all of the time, then there will be periods when most of them are just waiting around for some task (such as the foundation) to be finished.
PARALLEL VS DISTRIBUTED COMPUTING
•Parallel computing is a computation type in which multiple
processors execute multiple tasks simultaneously.
•Distributed computing is a computation type in which networked
computers communicate and coordinate the work through
message passing to achieve a common goal.

Number of Computers Required
•Parallel computing occurs on one system.
•Distributed computing occurs between multiple systems.
PARALLEL VS DISTRIBUTED COMPUTING

Processing Mechanism
•In parallel computing, multiple processors perform the processing.
•In distributed computing, computers rely on message passing.

Memory
•In parallel computing, computers can have shared memory or distributed memory.
•In distributed computing, each computer has its own memory.
PARALLEL VS DISTRIBUTED COMPUTING

Usage
•Parallel computing is used to increase performance and
for scientific computing.
•Distributed computing is used to share resources and to
increase scalability.
Synchronization
•In parallel computing, all processors share a single master clock for synchronization.
•In distributed computing, there is no global clock; synchronization algorithms are used instead.
HARDWARE ARCHITECTURES FOR PARALLEL PROCESSING
•The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:
•Single-instruction, Single-data (SISD) systems
•Single-instruction, Multiple-data (SIMD) systems
•Multiple-instruction, Single-data (MISD) systems
•Multiple-instruction, Multiple-data (MIMD) systems
SINGLE-INSTRUCTION, SINGLE-DATA (SISD) SYSTEMS
•An SISD computing system is a uniprocessor machine capable of executing a single instruction operating on a single data stream.
•Machine instructions are processed sequentially; hence, computers adopting this model are popularly called sequential computers.
•Most conventional computers are built using the SISD model.
•All the instructions and data to be processed have to be stored in primary memory.
•The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally.
•Dominant representative SISD systems are the IBM PC, the Macintosh, and workstations.
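
For reference, here is the earlier arithmetic done SISD-style in Python: one instruction stream, one data stream, everything strictly in order (an illustrative sketch, not a model of any particular machine):

```python
# SISD-style execution: one instruction stream, one data stream,
# every operation performed one after another on a single processor.
pairs = [(4, 5), (1, 6), (5, 3)]

y = 0
for a, b in pairs:   # instructions are processed strictly in sequence
    y += a * b       # only one operation executes at any moment
print(y)  # 41
```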
SINGLE-INSTRUCTION, SINGLE-DATA (SISD) SYSTEMS
[Diagram: a single processor with one instruction stream, one data input, and one data output.]
SINGLE-INSTRUCTION, MULTIPLE-DATA (SIMD) SYSTEMS
•An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams.
•Machines based on this model are well suited for scientific computing, since it involves lots of vector and matrix operations.
•For instance, the statement Ci = Ai * Bi can be passed to all the processing elements (PEs): the data elements of vectors A and B can be divided into multiple sets (N sets for an N-PE system), and each PE can process one data set.
•Dominant representative SIMD systems are Cray's vector processing machines, Thinking Machines' Connection Machine, and GPGPU accelerators.
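
A minimal sketch of this pattern, assuming NumPy is installed; its element-wise multiply applies one logical instruction across all data elements, and on most builds it maps down to the CPU's SIMD instructions:

```python
# SIMD-style data parallelism: one multiply instruction applied
# element-wise across whole vectors at once.
import numpy as np

A = np.array([4.0, 1.0, 5.0])
B = np.array([5.0, 6.0, 3.0])

C = A * B   # Ci = Ai * Bi for every i, as a single vectorized operation
print(C)    # [20.  6. 15.]
```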
SINGLE-INSTRUCTION, MULTIPLE-DATA (SIMD) SYSTEMS
[Diagram: a single instruction stream driving Processors 1 through N, each with its own data input and data output.]
MULTIPLE-INSTRUCTION, SINGLE-DATA (MISD) SYSTEMS
•An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, all of them operating on the same data set.
•For example:
•y = sin(x) + cos(x) + tan(x)
•Machines built using the MISD model are not useful in most applications.
•A few machines have been built, but none of them are available commercially.
•This type of system is more of an intellectual exercise than a practical configuration.
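
No mainstream MISD hardware exists, but the idea can be imitated in software. A purely illustrative Python sketch, where three different instruction streams operate concurrently on the same single datum:

```python
# MISD-style sketch: three different instruction streams (sin, cos, tan)
# operate concurrently on the same single data item x.
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5  # the single shared data stream

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(f, x) for f in (math.sin, math.cos, math.tan)]
    y = sum(f.result() for f in futures)  # y = sin(x) + cos(x) + tan(x)

print(y)
```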
MULTIPLE-INSTRUCTION, SINGLE-DATA (MISD) SYSTEMS
[Diagram: instruction streams 1 through N feed Processors 1 through N, which share a single data input stream and produce a single data output stream.]
MULTIPLE-INSTRUCTION, MULTIPLE-DATA (MIMD) SYSTEMS
•An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets.
•Each PE in the MIMD model has separate instruction and data streams; hence, machines built using this model are well suited to any kind of application.
•Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously.
•MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD based on the way PEs are coupled to the main memory.
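
A small illustrative sketch of the MIMD idea in Python (the two worker functions are hypothetical examples): each process executes its own instruction stream on its own data, asynchronously:

```python
# MIMD-style sketch: each worker runs a different instruction stream
# on a different data set, asynchronously.
from concurrent.futures import ProcessPoolExecutor

def count_words(text):       # instruction stream 1, data set 1
    return len(text.split())

def sum_squares(numbers):    # instruction stream 2, data set 2
    return sum(n * n for n in numbers)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        f1 = pool.submit(count_words, "parallel and distributed computing")
        f2 = pool.submit(sum_squares, [1, 2, 3, 4])
        print(f1.result(), f2.result())  # 4 30
```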
MULTIPLE-INSTRUCTION, MULTIPLE-DATA (MIMD) SYSTEMS
[Diagram: instruction streams 1 through N feed Processors 1 through N, each with its own data input and data output.]
SHARED-MEMORY MIMD MACHINES
•All the PEs are connected to a single global memory, and they all have access to it.
•Systems based on this model are also called tightly coupled multiprocessor systems.
•The communication between PEs in this model takes place through the shared memory.
•Modification of the data stored in the global memory by one PE is visible to all other PEs.
•Dominant representative shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-Processing) systems.
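
A minimal shared-memory sketch using Python's multiprocessing.Value: several workers update one counter that lives in memory visible to all of them (the counter and worker names are illustrative):

```python
# Shared-memory sketch: every worker reads and writes one value that
# lives in memory shared by all of them.
from multiprocessing import Process, Value

def worker(counter, n):
    for _ in range(n):
        with counter.get_lock():  # serialize access to the shared location
            counter.value += 1    # the update is visible to all other PEs

if __name__ == "__main__":
    counter = Value("i", 0)       # one integer in shared global memory
    procs = [Process(target=worker, args=(counter, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)          # 4000
```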
SHARED-MEMORY MIMD MACHINES
[Diagram: Processors 1 through N connected by a memory bus to a global system memory.]
DISTRIBUTED MEMORY MIMD MACHINES

•All PEs have a local memory. Systems based on this model are also called loosely coupled multiprocessor systems.
•The communication between PEs in this model takes place through the interconnection network: the interprocess communication (IPC) channel.
•The network connecting the PEs can be configured as a tree, mesh, cube, and so on.
•Each PE operates asynchronously, and if communication/synchronization among tasks is necessary, they can do so by exchanging messages.
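
A minimal message-passing sketch using Python's multiprocessing.Pipe as a stand-in for the interconnection network; each worker keeps its data in local memory and communicates only by sending messages (names here are illustrative):

```python
# Distributed-memory sketch: workers have only local state and
# cooperate purely by sending messages over IPC channels.
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    local_sum = sum(chunk)  # computed entirely in this PE's local memory
    conn.send(local_sum)    # communicate the result via message passing
    conn.close()

if __name__ == "__main__":
    chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    channels, procs = [], []
    for chunk in chunks:
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end, chunk))
        p.start()
        channels.append(parent_end)
        procs.append(p)
    total = sum(conn.recv() for conn in channels)  # gather partial sums
    for p in procs:
        p.join()
    print(total)  # 45
```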
DISTRIBUTED-MEMORY MIMD MACHINES
[Diagram: Processors 1 through N, each with its own memory bus and local memory, connected to one another by IPC channels.]