DAT2 / SW8
Aalborg University, Department of Computer Science and Department of Control Engineering

Schedule

See Calendar

Lecture  Date  Time (exercise/lecture)  Room    Title
1        14/2  8:15-10:00/10:15-12:00   0.2.13  Welcome, Introduction to Parallel Programming
2        18/2  8:15-10:00/10:15-12:00   0.2.13  Parallel Computers
3        21/2  12:30-14:15/14:30-16:15  0.2.13  1st Steps Towards Parallel Programming
4        28/2  12:30-14:15/14:30-16:15  0.2.13  Multithread Programming
5        7/3   12:30-14:15/14:30-16:15  0.2.13  Multithread Programming (cont.), The PRAM Model and Optimality
6        14/3  12:30-14:15/14:30-16:15  0.2.13  Scalable Algorithm Techniques, Decompositions and Mapping
7        18/3  12:30-14:15/14:30-16:15  0.2.13  Scalable Algorithm Techniques (cont.), Writing Parallel Programs, Reasoning About Performance
8        21/3  12:30-14:15/14:30-16:15  0.2.13  Programming Using MPI
9        1/4   12:30-14:15/14:30-16:15  0.2.13  Guest Lecture
10       11/4  12:30-14:15/14:30-16:15  0.2.13  Programming Using MPI (cont.), Case Study
11       15/4  12:30-14:15/14:30-16:15  0.2.13  OpenMP, ZPL and Other Global View Languages
12       18/4  12:30-14:15/14:30-16:15  0.2.13  (Implementation of) Communication Operations
13       29/4  12:30-14:15/14:30-16:15  0.2.13  Non-Blocking Algorithms, Assessing the State of the Art, Future Directions in Parallel Computing
14       2/5   12:30-14:15/14:30-16:15  0.2.13  Distributed Termination Detection, Graph Algorithms
15       9/5   12:30-14:15/14:30-16:15  0.2.13  Intel Threading Building Blocks, Google's Go Language

The schedule is preliminary and may change.
Since the exercises count towards your assignments, no solutions will be provided until you have all finished them. Assignment 5 is optional for DAT2 and compulsory for SW8. The directions given here for working on the assignments are only guidelines: when you finish one assignment, do not sit idle waiting but move on to the next one. The assignments are designed to follow the lectures, including assignment 5.

When handing in your assignment, please make sure that you have commented and uncommented the right LaTeX definitions at the beginning of the .tex file, as the comments there explain. Please fill in your group/room number as well.

Lecture 1: Welcome, Introduction to Parallel Programming

Abstract: In this lecture, I present the course, its goals, the textbook, etc. I start on the basics of "thinking parallel" and present assignment 1.

Reading: Chapter 1 and the first section of Chapter 3.

Slides: Welcome, Introduction to Parallel Programming, and Assignment 1.

Exercises: Start assignment 1.

Lecture 2: Parallel Computers

Abstract: I finish Chapter 1 and treat Chapter 2 on parallel computers. You will get some supplementary material on interconnects between computers. This course runs in parallel with, and is related to, DNA, so this material will only be covered in MVP.

Reading: Chapter 2. Chapters 12, 17, and 18 of Essentials of Computer Architecture (your DNA book) -- chapter 12 may help with assignment 1. I will not cover the chapters from your DNA book in detail, but I recommend you read them since they are mostly related to MVP.

Slides: Parallel Computers (notes).

Exercises: Continue on assignment 1.

Lecture 3: 1st Steps Towards Parallel Programming

Abstract: I will introduce the Peril-L notation, which we will use as a pseudo-code syntax, and use sorting as an example to illustrate how to formulate parallelism.
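
To make "formulating parallelism" concrete before we get to Peril-L, here is a small plain-C sketch (not Peril-L, and not from the slides) of odd-even transposition sort: within one phase, the compare-exchange operations touch disjoint pairs, so nothing stops them from running in parallel.

    #include <stdio.h>

    /* Odd-even transposition sort: n phases alternating over even and
       odd pairs.  All compare-exchanges within one phase are independent,
       so a parallel formulation can run them concurrently. */
    static void odd_even_sort(int a[], int n)
    {
        for (int phase = 0; phase < n; phase++) {
            int start = phase % 2;                  /* even or odd pairs */
            for (int i = start; i + 1 < n; i += 2)  /* independent pairs */
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                }
        }
    }

    int main(void)
    {
        int a[] = { 5, 1, 4, 2, 3 };
        odd_even_sort(a, 5);
        for (int i = 0; i < 5; i++)
            printf("%d ", a[i]);                    /* prints: 1 2 3 4 5 */
        printf("\n");
        return 0;
    }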

Reading: Chapter 4 and the copied additional material (Chapter 2 from Intel Threading Building Blocks).

Slides: 1st Steps Towards Parallel Programming.

Exercises: Continue on assignment 1.

Lecture 4: Multithread Programming

Abstract: I will present POSIX threads and different synchronization mechanisms, such as mutexes and condition variables. I will also introduce assignment 2.
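
As a first taste of these mechanisms, here is a minimal pthreads sketch (compile with -pthread; illustrative only, not from the slides) combining a mutex with a condition variable. Note the while loop, which guards against spurious wakeups.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static int value, has_value = 0;

    static void *producer(void *arg)
    {
        pthread_mutex_lock(&lock);
        value = 42;                       /* the "result" of some work */
        has_value = 1;
        pthread_cond_signal(&ready);      /* wake the waiting thread */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        pthread_mutex_lock(&lock);
        while (!has_value)                /* guard against spurious wakeups */
            pthread_cond_wait(&ready, &lock);
        pthread_mutex_unlock(&lock);
        printf("got %d\n", value);
        pthread_join(t, NULL);
        return 0;
    }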

Reading: Chapter 6 - POSIX threads.

Slides: Multithread programming, Assignment 2.

Exercises: Finish assignment 1. Deadline 14/3.
If you are using the VirtualBox image, you will need to run sudo apt-get install texlive-full before compiling your assignment. Please ignore the (wrong) date in the assignment.

Lecture 5: Multithread Programming (cont.), The PRAM Model and Optimality

Abstract: I will finish with pthreads, present a model for parallel RAM (PRAM), and introduce what optimality means in our context.
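
To preview the optimality notion on the classic example of summing n numbers (T_1 is the best sequential time, T_p the parallel time on p processors), a tree-based sum with p = n/lg n processors gives

    \[
      T_1(n) = \Theta(n), \qquad
      T_p(n) = \Theta(\log n), \qquad
      p \cdot T_p(n) = \frac{n}{\log n} \cdot \Theta(\log n) = \Theta(T_1(n)),
    \]

so the processor-time product matches the sequential work and the algorithm is cost-optimal. This is also where the corrected "O(n/lg n) processors" figure in the notes below comes from.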

Reading: Chapter 6, Sven Skyum's lecture notes (typo on p. 6: read "O(n/lg n) processors" instead of "O(n lg n) processors"; typo on p. 9: read "Time O(log(n + m))" instead of "Time O(1)").

Slides: Multithread programming, The PRAM Model and Optimality.

Exercises: Start assignment 2.

Lecture 6: Scalable Algorithm Techniques, Decompositions and Mapping

Abstract: I will present notions of scalability, different techniques to decompose and map computations to tasks, task dependency graphs, reduce and scan operations, and some examples.
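
As a tiny concrete example of mapping, here is a C sketch (illustrative, not from the slides) of the standard block decomposition of n items over p tasks: each task gets a contiguous range, and the range sizes differ by at most one, which keeps the load balanced.

    #include <stdio.h>

    /* Block decomposition: task i of p gets items [lo, hi) out of n.
       Range sizes differ by at most one item. */
    static void block_range(int i, int p, int n, int *lo, int *hi)
    {
        *lo = (int)((long long)i * n / p);
        *hi = (int)((long long)(i + 1) * n / p);
    }

    int main(void)
    {
        int lo, hi;
        for (int i = 0; i < 4; i++) {            /* 4 tasks over 10 items */
            block_range(i, 4, 10, &lo, &hi);
            printf("task %d: [%d, %d)\n", i, lo, hi);
        }
        return 0;                  /* prints [0,2) [2,5) [5,7) [7,10) */
    }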

Reading: Chapter 5.

Slides: Scalable Algorithm Techniques.

Exercises: Continue on assignment 2.

Lecture 7: Scalable Algorithm Techniques (cont.), Writing Parallel Programs, Reasoning about Performance

Abstract: I will finish with scalability, give you some hints on writing parallel programs, and start presenting how to reason about performance.
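
One formula worth keeping in mind when reasoning about performance is Amdahl's law: if a fraction f of the execution is inherently sequential, the speedup on p processors is bounded by

    \[
      S(p) = \frac{T_1}{T_p} \le \frac{1}{f + (1 - f)/p} < \frac{1}{f}.
    \]

For example, with f = 0.1 the speedup can never exceed 10, no matter how many processors you add.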

Reading: Chapter 5, Chapter 11, and Chapter 3. Chapter 19 of Essentials of Computer Architecture (your DNA book).

Slides: Scalable Algorithm Techniques, Writing Parallel Programs, Reasoning About Performance.

Exercises: Finish assignment 2. Deadline 1/4.

Lecture 8: Programming Using MPI

Abstract: MPI is the Message Passing Interface. It is a standard API implemented by different libraries such as Open MPI or LAM/MPI.
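
As a preview of the programming style, here is a minimal MPI sketch (build with mpicc, run with mpirun; not from the slides): every rank except 0 sends its rank number to rank 0 with plain point-to-point calls.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I?      */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many ranks? */

        if (rank == 0) {
            int msg;
            for (int i = 1; i < size; i++) {    /* one message per rank */
                MPI_Recv(&msg, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 0 received %d\n", msg);
            }
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }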

Reading: Chapter 3 and Chapter 7.

Slides: Programming Using MPI.

Exercises: Start assignment 3.

Lecture 9: Guest Lecture

Abstract: Andreas Dalsgaard will give a guest lecture on clusters.

Slides: cluster.

Exercises: Andreas will help you with MPI, so work on the MPI part of assignment 4.

Lecture 10: Programming Using MPI (cont.)

Abstract: I will finish with MPI and present a small case study.
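
In the same spirit, here is a small sketch of why collectives matter: one MPI_Reduce call replaces the whole receive loop from the earlier point-to-point example. The local value here is just a stand-in for real per-rank work.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, total;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        local = rank + 1;                    /* stand-in for local work */
        /* combine all local values into a single sum at rank 0 */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %d\n", total);
        MPI_Finalize();
        return 0;
    }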

Reading: Chapter 7. Tutorial page of LAM/MPI.

Slides: Programming Using MPI, Case-study.

Exercises: Continue on assignment 3.

Lecture 11: OpenMP, ZPL and Other Global View Languages

Abstract: I will treat OpenMP, a directive-based extension built on top of C/Fortran. Then I will present global view languages using ZPL, the example from the book. What is important in the lecture and the chapter is not ZPL itself but the concepts it uses.
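
To preview the directive style, here is a minimal OpenMP sketch (compile with -fopenmp or your compiler's equivalent; not from the slides): the pragma splits the loop iterations over the available threads, and the reduction clause gives each thread a private sum that is combined at the end. Without the flag, the pragma is ignored and the program simply runs sequentially.

    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* iterations are divided among the threads; "reduction" gives
           each thread a private sum and combines them at the end */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;

        printf("harmonic(%d) = %f\n", n, sum);
        return 0;
    }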

Reading: Chapter 6 (OpenMP) and Chapter 8.

Slides: OpenMP, ZPL and Other Global View Languages.

Exercises: Finish assignment 3. Deadline 29/4.

Lecture 12: (Implementation of) Communication Operations

Abstract: I will present different algorithms used to implement global communication operations, in particular for MPI.
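
To give the flavor of these algorithms, here is a sketch of one classic scheme, a binomial-tree broadcast built from point-to-point calls, roughly what an MPI_Bcast implementation might do internally: the set of ranks holding the value doubles each round, so the broadcast finishes in about log2(p) steps.

    #include <mpi.h>
    #include <stdio.h>

    /* Binomial-tree broadcast from rank 0 using point-to-point calls. */
    static void my_bcast(int *value, MPI_Comm comm)
    {
        int rank, p;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &p);
        for (int step = 1; step < p; step *= 2) {
            if (rank < step && rank + step < p)        /* I have it: pass on */
                MPI_Send(value, 1, MPI_INT, rank + step, 0, comm);
            else if (rank >= step && rank < 2 * step)  /* my turn to receive */
                MPI_Recv(value, 1, MPI_INT, rank - step, 0, comm,
                         MPI_STATUS_IGNORE);
        }
    }

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) value = 123;                    /* the root's data */
        my_bcast(&value, MPI_COMM_WORLD);
        printf("rank %d has %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }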

Slides: Communication Operations.

Exercises: Start assignment 4.

Lecture 13: Non-Blocking Algorithms, Assessing the State of the Art, Future Directions in Parallel Computing

Abstract: I have already told you about assignment 5, and I may recap it if time allows. Its goal is to perform a finer data-dependency analysis of the matrix inversion algorithm and to use split-barriers to improve performance. I will then present some non-blocking algorithms.
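
The lecture centers on the Michael & Scott queue; as a simpler warm-up, here is a sketch of a lock-free stack push using a C11 compare-and-swap retry loop. Pop is deliberately omitted: a correct pop also has to handle the ABA problem and memory reclamation, which is the kind of subtlety non-blocking algorithms have to deal with.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Treiber-style lock-free stack: push retries a compare-and-swap
       until it succeeds instead of taking a lock. */
    struct node { int value; struct node *next; };
    static _Atomic(struct node *) top = NULL;

    static void push(int v)
    {
        struct node *n = malloc(sizeof *n);
        n->value = v;
        n->next = atomic_load(&top);
        /* if another thread changed top since we read it, the CAS fails
           and reloads the current top into n->next; then we retry */
        while (!atomic_compare_exchange_weak(&top, &n->next, n))
            ;
    }

    int main(void)
    {
        push(1); push(2); push(3);
        for (struct node *n = atomic_load(&top); n; n = n->next)
            printf("%d ", n->value);            /* prints: 3 2 1 */
        printf("\n");
        return 0;
    }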

Reading: Chapter 9 and Chapter 10.
The Michael & Scott paper, an interesting blog on Java with some links you should check, and, to finish Chapter 10, MapReduce.

Slides: Assignment 5, Non-Blocking Algorithms, State of the Art, Future Directions.

Exercises: Continue assignment 4.

Lecture 14: Distributed Termination Detection, Graph Algorithms

Abstract: I will present a distributed termination detection algorithm that can be used in parallel distributed systems when the end of the computation is not known when the algorithm starts. Then I will give you some parallel graph algorithms.
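
To fix the idea, here is a heavily simplified MPI sketch of the white/black token scheme on a ring (run with at least two processes): rank 0 circulates a token and declares termination when it comes back white. In the full algorithm, a process that sends work to a lower-ranked process turns black and blackens the token, forcing another round; that part is only indicated in comments here.

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch of white/black token termination detection on a ring.
       Run with at least two processes (e.g. mpirun -np 4 ...). */
    int main(int argc, char **argv)
    {
        int rank, size, token;           /* token: 0 = white, 1 = black */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* ... local computation would run here; a process that sends
           work to a lower-ranked process must turn black ... */

        if (rank == 0) {
            token = 0;                   /* initiate a white token */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (token == 0)
                printf("token returned white: computation has terminated\n");
            else
                printf("token returned black: another round is needed\n");
        } else {
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* a black process would set token = 1 here and turn white */
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }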

Slides: Distributed Termination Detection, Graph Algorithms.

Exercises: Finish assignment 4. Deadline 16/5.

Lecture 15: Intel Threading Building Blocks, Google's Go Language

Abstract: I will present the Intel Threading Building Blocks library, a C++ library for concurrent programming. I will not focus on the C++ aspects but rather on the concepts implemented in the library and the nice constructs it makes available. Then I will give a short introduction to Google's Go language and its support for concurrent programming.

Reading: Read again the copied chapter 2 from Intel Threading Building Blocks. Have a look at the Go homepage.

Slides: TBB, Go.

Exercises: Finish any assignments you have left. Assignment 5 (SW8): deadline 30/5.