
NHR Tutorial: 3-Day Workshop - From Zero to Multi-Node GPU Programming

Date
Sep 18, 2024 - Oct 2, 2024
Time
9:00 AM - 5:00 PM
Speaker
Dr. Sebastian Kuckuk and Markus Velten
Language
en
Main Topic
Computer Science
Other Topics
Computer Science
Description

This weekly workshop series is jointly organized by NHR@FAU (https://hpc.fau.de/teaching/tutorials-and-courses/), NHR@TUD (https://tu-dresden.de/zih/hochleistungsrechnen/nhr-training) and NVIDIA DLI (http://www.nvidia.com/dli). It covers the following DLI courses:

  • Fundamentals of Accelerated Computing with CUDA C/C++ (https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+C-AC-01+V1)
  • Accelerating CUDA C++ Applications with Multiple GPUs (https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+C-AC-04+V1)
  • Scaling CUDA C++ Applications to Multiple Nodes (https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+C-AC-07+V1)

Please indicate which parts you want to attend when registering.

Date and Time

The courses will be held online on September 18th, September 25th and October 2nd, from 9 am to 5 pm.

Prerequisites

A free NVIDIA developer account is required to access the course material. Please register before the training at https://learn.nvidia.com/join.

Part 1

  • Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
  • No previous knowledge of CUDA programming is assumed

Parts 2 and 3

  • Successful attendance of Part 1 (Fundamentals of Accelerated Computing with CUDA C/C++ (https://hpc.fau.de/teaching/tutorials-and-courses/#collapse_3)) or equivalent experience implementing CUDA C/C++ applications, including
    • memory allocation, host-to-device and device-to-host memory transfers,
    • kernel launches, grid-stride loops, and
    • CUDA error handling (a minimal sketch of these basics follows this list). 
  • Familiarity with the Linux command line.
  • Experience using Makefiles to compile C/C++ code.
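
For orientation, the following minimal sketch (not part of the official course material) pulls the listed basics together: device memory allocation, host-to-device and device-to-host transfers, a kernel launch with a grid-stride loop, and CUDA error handling. The array size and the scaling kernel are illustrative choices only.

  #include <cstdio>
  #include <cstdlib>

  // Abort with a message if a CUDA runtime call fails.
  #define CUDA_CHECK(call)                                          \
      do {                                                          \
          cudaError_t err = (call);                                 \
          if (err != cudaSuccess) {                                 \
              fprintf(stderr, "CUDA error: %s (%s:%d)\n",           \
                      cudaGetErrorString(err), __FILE__, __LINE__); \
              exit(EXIT_FAILURE);                                   \
          }                                                         \
      } while (0)

  // Grid-stride loop: any launch configuration covers the whole array.
  __global__ void scale(float *data, float factor, int n) {
      for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
           i += gridDim.x * blockDim.x)
          data[i] *= factor;
  }

  int main() {
      const int n = 1 << 20;
      float *h = (float *)malloc(n * sizeof(float));
      for (int i = 0; i < n; ++i) h[i] = 1.0f;

      float *d;
      CUDA_CHECK(cudaMalloc(&d, n * sizeof(float)));        // device allocation
      CUDA_CHECK(cudaMemcpy(d, h, n * sizeof(float),
                            cudaMemcpyHostToDevice));       // host-to-device copy

      scale<<<256, 256>>>(d, 2.0f, n);                      // kernel launch
      CUDA_CHECK(cudaGetLastError());                       // catch launch errors
      CUDA_CHECK(cudaDeviceSynchronize());                  // catch asynchronous errors

      CUDA_CHECK(cudaMemcpy(h, d, n * sizeof(float),
                            cudaMemcpyDeviceToHost));       // device-to-host copy
      printf("h[0] = %f\n", h[0]);                          // expect 2.0

      CUDA_CHECK(cudaFree(d));
      free(h);
      return 0;
  }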

Learning Objectives

Day 1

At the conclusion of the workshop, participants will have an understanding of the fundamental tools and techniques for GPU-accelerating C/C++ applications with CUDA and be able to:

  • Write code to be executed by a GPU accelerator
  • Expose and express data and instruction-level parallelism in C/C++ applications using CUDA
  • Utilize CUDA-managed memory and optimize memory migration using asynchronous prefetching (see the sketch after this list)
  • Leverage command-line and visual profilers to guide your work
  • Utilize concurrent streams for instruction-level parallelism
  • Write GPU-accelerated CUDA C/C++ applications, or refactor existing CPU-only applications, using a profile-driven approach
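
As a taste of two of these techniques, here is a minimal sketch, assuming a single GPU and with error checking omitted for brevity, of CUDA-managed memory combined with asynchronous prefetching; the kernel and array size are illustrative only.

  #include <cstdio>

  __global__ void increment(float *data, int n) {
      for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
           i += gridDim.x * blockDim.x)
          data[i] += 1.0f;
  }

  int main() {
      const int n = 1 << 20;
      int device;
      cudaGetDevice(&device);

      float *data;
      cudaMallocManaged(&data, n * sizeof(float));   // accessible from host and device
      for (int i = 0; i < n; ++i) data[i] = 0.0f;    // first touched on the host

      // Prefetch to the GPU so the kernel does not page-fault on access.
      cudaMemPrefetchAsync(data, n * sizeof(float), device);
      increment<<<256, 256>>>(data, n);

      // Prefetch back to the host before reading the result.
      cudaMemPrefetchAsync(data, n * sizeof(float), cudaCpuDeviceId);
      cudaDeviceSynchronize();

      printf("data[0] = %f\n", data[0]);             // expect 1.0
      cudaFree(data);
      return 0;
  }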

Day 2

At the conclusion of the workshop, you will be able to:

  • Use concurrent CUDA streams to overlap memory transfers with GPU computation,
  • Utilize all GPUs on a single node to scale workloads across available GPUs,
  • Combine the use of copy/compute overlap with multiple GPUs (see the sketch after this list), and
  • Rely on the NVIDIA Nsight Systems timeline to observe improvement opportunities and the impact of the techniques covered in the workshop.
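
The sketch below illustrates the multi-GPU half of this pattern under simplifying assumptions (one stream per GPU, equal-sized chunks, error checking omitted): each visible GPU gets its own stream and pipeline of copy-in, compute, and copy-out, and the per-GPU pipelines run concurrently with each other.

  #include <cstdio>
  #include <vector>

  __global__ void work(float *chunk, int n) {
      for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
           i += gridDim.x * blockDim.x)
          chunk[i] *= 2.0f;
  }

  int main() {
      const int n = 1 << 22;
      float *host;
      cudaMallocHost(&host, n * sizeof(float));      // pinned memory enables async copies
      for (int i = 0; i < n; ++i) host[i] = 1.0f;

      int numGpus;
      cudaGetDeviceCount(&numGpus);
      const int chunk = n / numGpus;                 // assumes n divides evenly

      std::vector<cudaStream_t> streams(numGpus);
      std::vector<float *> dev(numGpus);

      for (int g = 0; g < numGpus; ++g) {
          cudaSetDevice(g);
          cudaStreamCreate(&streams[g]);
          cudaMalloc(&dev[g], chunk * sizeof(float));
          // Enqueue copy-in, compute, and copy-out in one stream per GPU.
          cudaMemcpyAsync(dev[g], host + g * chunk, chunk * sizeof(float),
                          cudaMemcpyHostToDevice, streams[g]);
          work<<<128, 256, 0, streams[g]>>>(dev[g], chunk);
          cudaMemcpyAsync(host + g * chunk, dev[g], chunk * sizeof(float),
                          cudaMemcpyDeviceToHost, streams[g]);
      }

      for (int g = 0; g < numGpus; ++g) {
          cudaSetDevice(g);
          cudaStreamSynchronize(streams[g]);
          cudaFree(dev[g]);
          cudaStreamDestroy(streams[g]);
      }
      printf("host[0] = %f\n", host[0]);             // expect 2.0
      cudaFreeHost(host);
      return 0;
  }

A full copy/compute-overlap treatment would additionally split each GPU's share across several streams so that transfers and kernels also overlap on the same device; the Nsight Systems timeline makes both effects visible.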

Day 3

At the conclusion of the workshop, you will be able to:

  • Use several methods for writing multi-GPU CUDA C++ applications,
  • Use a variety of multi-GPU communication patterns and understand their tradeoffs,
  • Write portable, scalable CUDA code with the single-program multiple-data (SPMD) paradigm using CUDA-aware MPI and NVSHMEM (a halo-exchange sketch follows this list),
  • Improve multi-GPU SPMD code with NVSHMEM’s symmetric memory model and its ability to perform GPU-initiated data transfers, and
  • Get practice with common multi-GPU coding paradigms like domain decomposition and halo exchanges.
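
To give a flavor of the SPMD pattern, below is a hedged sketch of a halo exchange with CUDA-aware MPI, where device pointers are passed directly to MPI calls. The 1D decomposition, the four-row layout, the tags, and the rank-to-GPU mapping are illustrative assumptions, and error checking is omitted; the NVSHMEM alternatives are covered in the course itself.

  #include <mpi.h>
  #include <cuda_runtime.h>
  #include <cstdio>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      int numGpus;
      cudaGetDeviceCount(&numGpus);
      cudaSetDevice(rank % numGpus);  // simplistic mapping; real codes use the node-local rank

      // Local slab: owned rows plus one halo row above and below. Only four
      // rows of 'width' floats are kept here to keep the sketch small.
      const int width = 1024;
      float *d_grid;
      cudaMalloc(&d_grid, 4 * width * sizeof(float));
      cudaMemset(d_grid, 0, 4 * width * sizeof(float));

      float *top_halo = d_grid;              // row 0: halo received from 'up'
      float *top_row  = d_grid + width;      // row 1: first owned row
      float *bot_row  = d_grid + 2 * width;  // row 2: last owned row
      float *bot_halo = d_grid + 3 * width;  // row 3: halo received from 'down'

      int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
      int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

      // With a CUDA-aware MPI build, device pointers go directly into MPI
      // calls; the library stages the transfer (e.g. via GPUDirect RDMA).
      MPI_Sendrecv(top_row,  width, MPI_FLOAT, up,   0,
                   bot_halo, width, MPI_FLOAT, down, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Sendrecv(bot_row,  width, MPI_FLOAT, down, 1,
                   top_halo, width, MPI_FLOAT, up,   1,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      if (rank == 0) printf("halo exchange completed on %d ranks\n", size);
      cudaFree(d_grid);
      MPI_Finalize();
      return 0;
  }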

Language

The courses will be held in English.

Instructors

Dr. Sebastian Kuckuk (https://hpc.fau.de/person/dr-sebastian-kuckuk/) and Markus Velten (https://fis.tu-dresden.de/portal/de/researchers/markus-velten(717c6bee-bb44-46de-8ae1-59f7236a38e2).html), both certified NVIDIA DLI Ambassadors.

The course is co-organized by NHR@FAU (https://hpc.fau.de/teaching/tutorials-and-courses/), NHR@TUD (https://tu-dresden.de/zih/hochleistungsrechnen/nhr-training) and the NVIDIA Deep Learning Institute (DLI) (http://www.nvidia.com/dli).

Prices and Eligibility

The course is open and free of charge for participants from academia in European Union (EU) member states and countries associated under Horizon 2020 (https://ec.europa.eu/info/research-and-innovation/statistics/framework-programme-facts-and-figures/horizon-2020-country-profiles_en).

Withdrawal Policy

Please only register for the course if you actually intend to attend. No-shows will be blacklisted and excluded from future events. If you want to withdraw your registration, please send an e-mail to sebastian.kuckuk@fau.de.

Wait List

To be added to the wait list after the course has reached its maximum number of registrations, send an e-mail to sebastian.kuckuk@fau.de with your name, university affiliation, and the days you want to attend.

Location

Center for Information Services and High Performance Computing (ZIH) (online), Zellescher Weg 12-14, 01069 Dresden
E-Mail
ZIH
Homepage
http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih

Organizer

Center for Information Services and High Performance Computing, Zellescher Weg 12-14, 01069 Dresden
Phone
+49 351 463-35450
Fax
+49 351 463-37773
E-Mail
TUD ZIH
Homepage
http://tu-dresden.de/zih