School of Computer Science

Module 06-32212 (2022)

Neural Computation (Extended)

Level 4/M

Lecturers: Jinming Duan, Konstantinos Kamnitsas, Yunwen Lei
Semester: 1
Credits: 20
Co-ordinator: Konstantinos Kamnitsas
Reviewer: Jinming Duan

The Module Description is a strict subset of this Syllabus Page.

Outline

This module focuses on artificial neural networks and their use in machine learning. It covers the fundamental underlying theory as well as methodologies for constructing modern deep neural networks, which now have practical applications across a wide range of industrial and research domains. The module also provides practical experience of designing and implementing a neural network for a real-world application.


Aims

The aims of this module are to:

  • Introduce some of the fundamental techniques and principles of neural networks
  • Investigate some common neural-network architectures and their applications
  • Present neural networks in the wider context of state-of-the-art machine learning techniques

Learning Outcomes

On successful completion of this module, the student should be able to:

  • Describe and explain some of the principal architectures and learning algorithms of neural computation
  • Explain the learning and generalisation aspects of neural networks
  • Demonstrate an understanding of the benefits and limitations of neural networks in comparison to other machine learning methods
  • Develop and apply neural network models to specific technical and scientific problems

Restrictions

Students must have A-Level Mathematics or equivalent


Assessment

  • Main Assessments: Examination (80%) and continuous assessment via coursework (20%)
  • Supplementary Assessments: Examination (100%)

Detailed Syllabus

  1. Introduction to Neural Networks and their History
  2. Linear Regression and Learning with Gradient Descent
  3. Single Layer Perceptrons
  4. Learning and Generalization in Single Layer Perceptrons
  5. The Generalized Delta Rule, Practical Considerations
  6. Learning in Multi-Layer Perceptrons - Back-Propagation
  7. Modern Optimization Algorithms
  8. Under-Fitting and Over-Fitting
  9. Convolutional Neural Networks
  10. Recurrent Neural Networks
  11. Auto-encoders
  12. Variational Auto-encoders
  13. Generative Adversarial Networks
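
As an illustration of the kind of material covered in items 2 and 6 above, the short Python sketch below fits a straight line to toy data by gradient descent on a mean-squared-error loss. It is provided for orientation only; the data, learning rate, and number of steps are illustrative choices and not part of the module's coursework.

    # Illustrative sketch: linear regression trained by gradient descent
    # (toy data and hyper-parameters chosen for demonstration only).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=100)            # toy inputs
    y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 100)   # toy targets with noise

    w, b = 0.0, 0.0                                  # model parameters
    lr = 0.1                                         # learning rate
    for step in range(500):
        y_hat = w * x + b                            # forward pass
        error = y_hat - y
        grad_w = 2.0 * np.mean(error * x)            # dL/dw for the MSE loss
        grad_b = 2.0 * np.mean(error)                # dL/db for the MSE loss
        w -= lr * grad_w                             # gradient-descent update
        b -= lr * grad_b

    print(f"learned w = {w:.2f}, b = {b:.2f}")       # recovers roughly 3.0 and 0.5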
