School of Computer Science

Module 06-22755 (2011)

Parallel Programming (Extended)

Level 4/M

Dan Ghica
Semester 2, 10 credits
Co-ordinator: Dan Ghica
Reviewer: Behzad Bordbar

The Module Description is a strict subset of this Syllabus Page.


The aims of this module are to:

  • understand GPU architecture
  • understand which application areas are most suitable for parallel programming
  • gain a practical understanding of parallel algorithms
  • gain a practical understanding of parallel programming techniques
  • develop concrete skills in programming Nvidia GPUs using the CUDA framework
  • develop skills in using the software tools provided by the CUDA framework

Learning Outcomes

On successful completion of this module, the student should be able to:

  • describe and explain modern parallel architectures
  • describe and explain applications of parallel programming
  • describe and explain parallel programming models
  • design simple parallel algorithms
  • implement more advanced parallel algorithms
  • demonstrate a research-informed critical understanding of parallel architectures and parallelization techniques

Teaching methods

2 hrs/week lectures; 2 hrs/week practical sessions


Assessment

  • Sessional: 1.5 hr examination (50%), continuous assessment (coursework and report) (50%). Both the examination and the continuous assessment are internal hurdles; students must pass both in order to pass the module
  • Supplementary: By examination only (100%)

Detailed Syllabus

The detailed syllabus is subject to change. There will be a set of weekly practicals.

  1. Contrasting CPU and GPU architectures
  2. The CUDA programming model
  3. A first example: matrix multiplication
  4. The CUDA memory model
  5. GPU as part of the PC architecture
  6. Detailed threading model
  7. Detailed memory model
  8. Control flow on the GPU
  9. Floating-point aspects
  10. Parallel programming concepts
  11. Parallel algorithms
  12. Reduction
  13. Advanced features
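To indicate the level of the practical work, a kernel along the lines of topic 3 (matrix multiplication) might look like the following. This is an illustrative sketch only, not course material: the kernel name, grid/block configuration, and the assumption of square n×n row-major matrices are all choices made for this example.

```cuda
#include <cuda_runtime.h>

// Naive dense matrix multiply C = A * B for square n x n matrices
// stored in row-major order. One thread computes one element of C.
__global__ void matMul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

// Host-side launch: 16x16 thread blocks, enough blocks to cover the matrix.
void launchMatMul(const float *dA, const float *dB, float *dC, int n)
{
    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x,
              (n + block.y - 1) / block.y);
    matMul<<<grid, block>>>(dA, dB, dC, n);
}
```

Later topics (the detailed memory model and reduction) revisit this kind of kernel to exploit shared memory and coalesced access, which the naive version above deliberately ignores.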