School of Computer Science

Module 06-22755 (2010)

Programming Massively Parallel Architectures

Level 4/M

Dan Ghica, Semester 2, 10 credits
Co-ordinator: Dan Ghica
Reviewer: Georgios Theodoropoulos

The Module Description is a strict subset of this Syllabus Page.


The aims of this module are to:

  • understand GPU architecture
  • understand which application areas are most suitable for parallel programming
  • develop a practical understanding of parallel algorithms
  • develop a practical understanding of parallel programming techniques
  • develop concrete skills in programming Nvidia GPUs using the CUDA framework
  • develop skills in using the software tools provided by the CUDA framework

Learning Outcomes

On successful completion of this module, the student should be able to:

  • describe and explain modern GPU architecture
  • describe and explain applications of parallel programming
  • describe and explain the CUDA programming model
  • design simple parallel algorithms
  • implement more advanced parallel algorithms using CUDA
  • use CUDA tools to debug and profile programs

Teaching methods

2 hrs/week lectures; 2 hrs/week practical sessions


Assessment

  • Sessional: 1.5 hr examination (80%), continuous assessment (20%)
  • Supplementary: by examination only (100%)

Detailed Syllabus

The detailed syllabus is subject to change. There will be a set of weekly practicals.

  1. Contrasting CPU and GPU architectures
  2. The CUDA programming model
  3. A first example: matrix multiplication
  4. The CUDA memory model
  5. GPU as part of the PC architecture
  6. Detailed threading model
  7. Detailed memory model
  8. Control flow on the GPU
  9. Floating-point aspects
  10. Parallel programming concepts
  11. Parallel algorithms
  12. Reduction
  13. Advanced features
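To give a flavour of the early material (the CUDA programming model and the matrix multiplication example), the following is a minimal sketch of a naive CUDA kernel. It is illustrative only; the kernel name, variable names, and launch configuration are hypothetical and not taken from the module materials.

```cuda
// Naive matrix multiplication kernel: each thread computes one
// element of C = A * B for square n x n matrices stored row-major.
// (Illustrative sketch; names and block size are hypothetical.)
__global__ void matMul(const float *A, const float *B, float *C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

// Host-side launch for an n x n problem, assuming dA, dB, dC are
// device pointers allocated with cudaMalloc:
//     dim3 block(16, 16);
//     dim3 grid((n + block.x - 1) / block.x,
//               (n + block.y - 1) / block.y);
//     matMul<<<grid, block>>>(dA, dB, dC, n);
```

Later weeks refine this kind of kernel using the CUDA memory model (e.g. tiling through shared memory) and analyse its control flow and floating-point behaviour.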