PARALLEL PROGRAMMING ON GPU ARCHITECTURES

Academic Year 2018/2019 - 3rd Year - Curriculum A
Teaching Staff: Giuseppe BILOTTA
Credit Value: 6
Scientific field: INF/01 - Informatics
Taught classes: 24 hours
Exercises: 24 hours
Term / Semester:

Learning Objectives

Knowledge and understanding: acquire the fundamentals of massively parallel computing on modern hardware (GPU, multicore CPU, accelerators) based on the stream computing paradigm.

Applying knowledge and understanding: acquire the competence to apply this knowledge to the development of parallel computing software using the main frameworks (CUDA and OpenCL).

Making judgements: acquire the capacity to identify fundamental parallel coding paradigms (embarrassingly parallel problems, reductions, scans) and the corresponding opportunities for parallelization; a brief illustration is sketched at the end of this section.

Communication skills: acquire the capacity to describe with proper language both the theoretical and practical aspects of parallel computing on modern architectures.

Learning skills: develop the ability to understand specialized texts on the topic.
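
As an informal illustration of the paradigms named under "Making judgements", the following minimal CUDA sketch contrasts an embarrassingly parallel per-element kernel with a block-level shared-memory reduction. It assumes the CUDA runtime API with managed memory; kernel names and problem sizes are arbitrary choices made for this example, not prescribed course material.

    // Illustrative sketch only: an embarrassingly parallel kernel and a
    // block-level reduction, using the CUDA runtime API with managed memory.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Embarrassingly parallel pattern: each thread handles one element,
    // with no communication between threads.
    __global__ void scale(float *out, const float *in, float k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = k * in[i];
    }

    // Reduction pattern: the threads of a block cooperate through shared
    // memory to sum their elements; one partial sum per block is written out.
    __global__ void block_sum(float *partial, const float *in, int n)
    {
        extern __shared__ float cache[];
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride)
                cache[threadIdx.x] += cache[threadIdx.x + stride];
            __syncthreads();
        }
        if (threadIdx.x == 0)
            partial[blockIdx.x] = cache[0];
    }

    int main()
    {
        const int n = 1 << 20, block = 256, grid = (n + block - 1) / block;
        float *in, *out, *partial;
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&out, n * sizeof(float));
        cudaMallocManaged(&partial, grid * sizeof(float));
        for (int i = 0; i < n; ++i) in[i] = 1.0f;

        scale<<<grid, block>>>(out, in, 2.0f, n);                            // map
        block_sum<<<grid, block, block * sizeof(float)>>>(partial, out, n);  // reduce
        cudaDeviceSynchronize();

        float total = 0.0f;
        for (int b = 0; b < grid; ++b) total += partial[b];
        printf("sum = %g (expected %g)\n", total, 2.0f * n);

        cudaFree(in); cudaFree(out); cudaFree(partial);
        return 0;
    }

The essential difference is that the reduction requires cooperation and synchronization among the threads of a block, whereas the embarrassingly parallel kernel needs no inter-thread communication at all.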


Course Structure

The course combines lectures and practical sessions, with real-time development of sample code for each topic discussed, to give students the necessary familiarity with both the theoretical and practical aspects of the material.


Detailed Course Content

  • History of graphics cards and GPGPU
  • GPGPU programming basics; introduction to CUDA
  • High-level CUDA programming: the CUDA runtime
  • Benchmarking, optimization and debugging (see the timing sketch after this list)
  • Low-level CUDA programming: the CUDA driver interface
  • OpenCL basics and heterogeneous GPGPU programming
  • Introduction to multi-GPU
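
As an informal illustration of the benchmarking topic listed above, the following sketch times a simple memory-bound kernel with CUDA events and derives an effective bandwidth figure. The kernel, the problem size and the bandwidth formula are illustrative assumptions, not material prescribed by the course.

    // Illustrative sketch only: timing a kernel with CUDA events through
    // the CUDA runtime API.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Simple memory-bound kernel used as the timing target.
    __global__ void saxpy(float *y, const float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 24, block = 256, grid = (n + block - 1) / block;
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));
        cudaMemset(y, 0, n * sizeof(float));

        // Events are recorded in the stream and timed on the device itself.
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        saxpy<<<grid, block>>>(y, x, 2.0f, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // Three float accesses per element: read x, read y, write y.
        double gbps = 3.0 * n * sizeof(float) / (ms * 1.0e6);
        printf("kernel time: %.3f ms, effective bandwidth: %.1f GB/s\n", ms, gbps);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

Timing with device-side events helps exclude host-side scheduling noise from the measurement, which is why they are generally preferred over wall-clock timers for kernel benchmarks.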

Textbook Information

  • NVIDIA CUDA Programming Guide
  • The OpenCL Specification (Khronos Group)