CS-687 Parallel and Distributed Computing

This document outlines a course on parallel and distributed computing. The course covers topics like asynchronous/synchronous computation, concurrency control, fault tolerance, GPU programming, heterogeneity, interconnection topologies, load balancing, memory consistency models, message passing interface, parallel algorithms, programming models, performance analysis, and tools. The course objectives are to learn about parallel and distributed systems, write MPI programs, perform analytical modeling and performance analysis of parallel programs, and analyze problems with OpenMP. Students will be assessed through exams, assignments, and quizzes.


PMAS Arid Agriculture University Rawalpindi

University Institute of Information Technology

CS-687 Parallel and Distributed Computing

Credit Hours: 3(2-3)    Prerequisites: None
Teacher:

Course Description:
Asynchronous/synchronous computation and communication, concurrency control, fault
tolerance, GPU architecture and programming, heterogeneity, interconnection
topologies, load balancing, memory consistency models, memory hierarchies, Message
Passing Interface (MPI), MIMD/SIMD, multithreaded programming, parallel algorithms
and architectures, parallel I/O, performance analysis and tuning, power, programming
models (data parallel, task parallel, process-centric, shared/distributed memory),
scalability and performance studies, scheduling, storage systems, synchronization, and
tools (CUDA, Swift, Globus, Condor, Amazon AWS, OpenStack, Cilk, gdb, threads,
MPICH, OpenMP, Hadoop, FUSE).
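The performance-analysis and scalability topics listed above typically begin with Amdahl's law, which bounds the speedup when a fraction of the work is inherently serial. A minimal illustrative sketch (not part of the official outline; the helper name `amdahl_speedup` is hypothetical):

```python
# Amdahl's law: speedup achievable on p processors when a fraction f
# of the work is inherently serial. Illustrative helper, not course code.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Return the predicted speedup S(p) = 1 / (f + (1 - f) / p)."""
    if not 0.0 <= serial_fraction <= 1.0 or processors < 1:
        raise ValueError("need 0 <= serial_fraction <= 1 and processors >= 1")
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

if __name__ == "__main__":
    # With 10% serial work, 8 processors give well under the ideal 8x.
    print(round(amdahl_speedup(0.10, 8), 2))       # 4.71
    # Even with vast processor counts the speedup is capped near 1/f = 10x.
    print(round(amdahl_speedup(0.10, 10**9), 2))   # 10.0
```

The cap at 1/f is why the course pairs raw parallelization with scalability and performance studies: adding processors stops helping once the serial fraction dominates.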
Course Objectives:
– Learn about parallel and distributed computers.
– Write portable programs for parallel or distributed architectures using the
Message-Passing Interface (MPI) library.
– Perform analytical modeling and performance analysis of parallel programs.
– Analyze complex problems using shared-memory programming with OpenMP.
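The MPI objective above centers on explicit send/receive between ranks. Since an MPI toolchain (mpicc, mpi4py) cannot be assumed here, the following hedged sketch uses Python threads and queues to stand in for two MPI ranks; it shows the message-passing pattern, not the MPI API itself:

```python
# Message-passing sketch: two "ranks" exchange data through explicit
# send/receive, the pattern MPI_Send/MPI_Recv expresses. Python threads
# and queues stand in for MPI processes; illustration only.
import threading
import queue

def rank0(to_rank1: queue.Queue, from_rank1: queue.Queue, result: dict) -> None:
    to_rank1.put([1, 2, 3, 4])          # analogous to MPI_Send to rank 1
    result["total"] = from_rank1.get()  # analogous to MPI_Recv from rank 1

def rank1(inbox: queue.Queue, outbox: queue.Queue) -> None:
    data = inbox.get()                  # analogous to MPI_Recv from rank 0
    outbox.put(sum(data))               # analogous to MPI_Send back

def run() -> int:
    q01, q10, result = queue.Queue(), queue.Queue(), {}
    t0 = threading.Thread(target=rank0, args=(q01, q10, result))
    t1 = threading.Thread(target=rank1, args=(q01, q10))
    t0.start(); t1.start(); t0.join(); t1.join()
    return result["total"]

if __name__ == "__main__":
    print(run())  # 10
```

In real MPI the two workers would be separate processes on separate nodes with no shared memory, which is exactly why all data movement must go through explicit messages.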
Teaching Methodology:
Lectures, assignments, and presentations. The major component of the course will be
delivered through conventional lectures.
Course Assessment:
Exams, assignments, and quizzes. The course will be assessed through a combination of
written examinations.
Reference Materials:
– Distributed Systems: Principles and Paradigms, A. S. Tanenbaum and M. V. Steen,
Prentice Hall, 2nd Edition, 2007.
– Distributed and Cloud Computing: Clusters, Grids, Clouds, and the Future Internet,
K. Hwang, J. Dongarra, and G. C. Fox, Elsevier, 1st Edition.

Course Learning Outcomes (CLOs):

At the end of the course the students will be able to:                  Domain   BT Level*
1. Learn about parallel and distributed computers.                      C        2
2. Write portable programs for parallel or distributed
   architectures using the Message-Passing Interface (MPI) library.     C        3
3. Perform analytical modelling and performance analysis of
   parallel programs.                                                   C        3
4. Analyze complex problems with shared-memory programming
   with OpenMP.                                                         C        4
* BT = Bloom's Taxonomy, C = Cognitive domain, P = Psychomotor domain, A = Affective domain
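CLO 4 targets shared-memory parallelism in the OpenMP style, where threads compute private partial results that are then combined, as in `#pragma omp parallel for reduction(+:s)`. A hedged stdlib sketch of that reduction pattern (Python threads stand in for OpenMP threads; the helper `parallel_sum` is hypothetical):

```python
# Shared-memory reduction sketch: each worker sums one chunk of shared
# data, then the partial sums are combined, mirroring OpenMP's
# parallel-for reduction. Illustration only, not OpenMP itself.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data: list, workers: int = 4) -> int:
    chunk = (len(data) + workers - 1) // workers   # ceiling division
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, slices)           # private partial sums
    return sum(partials)                           # final reduction step

if __name__ == "__main__":
    print(parallel_sum(list(range(101))))  # 5050
```

The key design point the reduction hides is synchronization: because each worker writes only its own partial result, no lock is needed until the cheap final combine, avoiding the contention a single shared accumulator would create.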

Week/Lecture #                Theory
Week 1   Lect-I & Lect-II     Introduction, Parallel and Distributed Computing
Week 2   Lect-I & Lect-II     Parallel and Distributed Architectures, Socket programming,
                              Flynn's Taxonomy, Introduction to Multi-Threading
Week 3   Lect-I & Lect-II     Parallel Algorithms & Architectures, Parallel I/O
Week 4   Lect-I & Lect-II     Parallel algorithms (data-parallel, task-parallel,
                              process-centric, shared/distributed memory), performance
                              analysis and tuning, scalability and performance studies
Week 5   Lect-I & Lect-II     Scalable Algorithms, Message Passing
Week 6   Lect-I & Lect-II     MPI and TeraGrid
Week 7   Lect-I & Lect-II     Scheduling, load balancing, memory consistency models,
                              memory hierarchies, Distributed Systems, MapReduce, Clusters
Week 8   Lect-I & Lect-II     GPU architecture and programming, heterogeneity,
                              Introduction to OpenCL, Case Studies: from problem
                              specification to a parallelized solution, Distributed
                              Coordination, Security
Mid Term Exam
Week 9   Lect-I & Lect-II     Distributed File Systems, Security
Week 10  Lect-I & Lect-II     Distributed File Systems (continued)
Week 11  Lect-I & Lect-II     Distributed Shared Memory, Peer-to-Peer
Week 12  Lect-I & Lect-II     Power and energy consumption, storage systems, and
                              synchronization
Week 13  Lect-I & Lect-II     Message Passing Interface (MPI), concurrency control
Week 14  Lect-I & Lect-II     Fault tolerance, interconnection topologies,
                              asynchronous/synchronous computation/communication,
                              concurrency control
Week 15  Lect-I & Lect-II     Advanced topics in parallel and distributed computing,
                              Cloud Computing
Week 16  Lect-I & Lect-II     Final Project Presentations
Final Term Exam
