Doctoral course - High Performance Computing: Programming Parallel Supercomputers

This course aims to provide an intense tutorial and knowledge exchange to graduate students and researchers on key topics regarding the main HPC architectures (tightly versus loosely coupled architectures), distributed memory and hybrid (distributed + shared memory) programming models, message-passing interface, threading, and massive parallelism with graphics processing units (GPUs).

Event information

Time: -

Location: Linnanmaa


Guest Lecturers

Main Lecturer

  1. Maarit Korpi-Lagg, Associate Professor, Department of Computer Science, Aalto University, Finland, Email: maarit.korpi-lagg@aalto.fi; Web: https://research.aalto.fi/fi/persons/maarit-korpi-lagg

Other Teaching Staff Members

  1. Matthias Rheinhardt, Research Fellow, Department of Computer Science, Aalto University, Finland, Email: matthias.rheinhardt@aalto.fi; Web: https://research.aalto.fi/en/persons/matthias-rheinhardt
  2. Touko Puro, Doctoral Researcher, Department of Computer Science, Aalto University, Finland, Email: touko.puro@aalto.fi

Local organizers:

  1. Abhishek Kumar, Postdoctoral Researcher, Center for Ubiquitous Computing, Faculty of Information Technology and Electrical Engineering, Email: abhishek.kumar@oulu.fi, Web: https://ubicomp.oulu.fi/staff-members/abhishek-kumar
  2. Aarne Pohjonen, Postdoctoral Researcher, Centre for Advanced Steels Research, Faculty of Technology, Email: aarne.pohjonen@oulu.fi, Web: https://www.researchgate.net/profile/Aarne-Pohjonen
  3. Miguel Bordallo Lopez, Associate Professor, Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, Email: miguel.bordallo@oulu.fi, Web: https://sites.google.com/view/miguelbordallo

Planned Dates: 18 March 2024 – 28 March 2024 (Intensive teaching period), and four weeks after the intensive teaching period to complete the coding exercises.

Assessment: The evaluation for this course is based exclusively on programming assignments. Participants will have four weeks after the teaching period to submit the programming exercises. 1) To pass the course with a numerical grade, participants must achieve at least 50% of the total points, which corresponds to a grade of 1. Attaining the highest grade (5) requires securing over 90% of the total points; the grading scale is linear between these two benchmarks. 2) Alternatively, participants may take the course with a PASS/FAIL grade. To receive a PASS, participants must achieve at least 50% of the total points and attend at least 6 sessions.

Registration: https://forms.office.com/e/63rYpaSYBv (Requires login through University account before the form can be filled)

Credit Units: 3 ECTS (1 ECTS = 27 hours of studies)

  • Contact teaching hours = 10 hours (given by Prof. Korpi-Lagg in person)
  • Contact exercise hours = 10 hours (coding sessions managed by the other teaching staff members, in person and over Zoom)
  • Independent self-study = 10 hours (5 hours of recorded lectures, to be studied before the contact teaching, and 5 hours of reading materials)
  • Lab work and assignments = 51 hours

Pre-requisites/Qualifications: The course 521288S Multiprocessor Programming is recommended. A good understanding of computer programming, algorithms, and data structures is necessary. Basic programming skills in C are required, as C is used in the assignments. Knowledge of Unix, shell scripting, and/or supercomputer architectures is an advantage. Assignments may also be returned in C++.

Learning objectives:

The topic of this course is scientific computing, also known as high-performance computing (HPC): heavy-duty computing on clusters or supercomputers with thousands to millions of cores.

Upon completion of the course, the learners will be able to:

  • Navigate the current HPC landscape and choose the appropriate framework for their large-scale problem
  • Apply basic concepts for building efficient applications on clusters or supercomputers with thousands to millions of cores
  • Use distributed-memory and hybrid (distributed + shared memory) programming models
  • Apply the essentials of the message-passing interface (MPI)
  • Apply the essentials of HPC on hybrid architectures with graphics processing units (GPUs)

Course Contents:

  • Introduction to the current HPC landscape and supercomputing architectures
  • Introduction to and recap of different parallel programming models and parallel program design
  • Deeper dive into theory and practice of distributed memory and hybrid computing models
  • Message passing interface, from basics to advanced topics
  • Hybrid computing with MPI + OpenMP
  • Hybrid computing on GPUs with MPI + CUDA
  • Parallel I/O

Target Audience: An advanced graduate course primarily intended for doctoral researchers; master's students with the pre-requisite background may also join, as may interested senior university staff members (teachers, postdocs, lecturers, and professors).

Logistic Information:

  1. The course program spans two weeks, encompassing 9 working days. On each of these working days (except 25 March and 28 March), a two-hour lecture will be conducted. The sessions on 25 March and 28 March will be three hours long.
  2. The initial week of the course is dedicated to exploring the fundamentals and theoretical aspects of High-Performance Computing (HPC).
  3. The following week will primarily concentrate on engaging in hands-on practical exercises utilizing CSC's supercomputer.

The current tentative program comprises 5 contact teaching sessions (the first week of the course) in March 2024.

Week 01 (Instructor: Maarit Korpi-Lagg)

Teaching Session 1 on 18 March 2024 at 10-12 in TS101:

  • Introduction to the current HPC landscape
  • Learning basic definitions and taxonomies
  • Understanding the importance of the “network”
  • Learning basic performance models

Teaching Session 2, 19 March 2024 at 10-12 in TS101:

  • Becoming knowledgeable of the modern landscape of distributed memory programming
  • Understanding why in this course we will concentrate on low-level programming models
  • Getting acquainted with MPI: basics and synchronous and asynchronous point-to-point communication

Teaching Session 3, 20 March 2024 at 10-12 in AT115B:

  • Learning more about MPI
  • One-sided point-to-point communications
  • Collective communications

Teaching Session 4, 21 March 2024 at 10-12 in L9:

  • Programming message-passing (MP) hybrid architectures
  • Becoming knowledgeable of the spectrum of options
  • Understanding efficiency issues

Teaching Session 5, 22 March 2024 at 10-12 in AT115B:

  • Programming hybrid architectures with accelerators
  • Acquiring knowledge of the CUDA-MPI programming model

Week 02 (Instructors: Matthias Rheinhardt, Touko Puro, Abhishek Kumar, Aarne Pohjonen)

During the second week, which comprises four lab sessions, the emphasis will be entirely on coding exercises. These exercises are designed to build upon the lessons given by the principal instructor in the first week.

Session 6 on 25 March 2024 at 9-12 in TS101:

  • Introduction to the supercomputing environment
  • Command line usage
  • Compiling
  • Batch jobs, Git, VS Code
  • Basic MPI, Example codes
  • First coding exercise: going through the material, hints.
  • Practical work and questions.

Session 7 on 26 March 2024 at 10-12 in TS101:

  • Advanced MPI and collectives
  • Example codes
  • 2nd coding exercise: going through the material, hints.
  • Practical work and questions.

Session 8 on 27 March 2024 at 10-12 in AT115B:

  • Hybrid computing with OpenMP
  • Example codes
  • 3rd coding exercise: going through the materials, hints.
  • Practical work and questions.

Session 9 on 28 March 2024 at 12-15 in AT128:

  • GPU computing with CUDA and MPI
  • Example codes
  • 4th coding exercise: going through the materials, hints.
  • Practical work and questions.
Last updated: 23.1.2024