School of Computer Science

Module 06-20416 (2013)

Neural Computation

Level 3/H

John Bullinaria
Semester 1, 10 credits
Co-ordinator: John Bullinaria
Reviewer: Peter Tino

The Module Description is a strict subset of this Syllabus Page.


This module introduces the basic concepts and techniques of neural computation, and its relation to automated learning in computing machines more generally. It covers the main types of formal neuron and their relation to neurobiology, and shows how to construct large neural networks and study their learning and generalization abilities in the context of practical applications.


The aims of this module are to:

  • Introduce some of the fundamental techniques and principles of neural computation
  • Investigate some common neural-based models and their applications
  • Present neural network models in the larger context of state-of-the-art techniques of automated learning

Learning Outcomes

On successful completion of this module, the student should be able to:

1. understand the relationship between real brains and simple artificial neural network models
2. describe and explain some of the principal architectures and learning algorithms of neural computation
3. explain the learning and generalization aspects of neural computation
4. demonstrate an understanding of the benefits and limitations of neural-based learning techniques in the context of other state-of-the-art methods of automated learning

Cannot be taken with

  • 06-12412 - Introduction to Neural Computation

Teaching methods

2 hrs/week of lectures

Contact Hours: 23


Assessment

Sessional: 1.5 hr examination (100%)

Supplementary (where allowed): Same as the normal assessment.

Detailed Syllabus

1. Introduction to Neural Networks and their History
2. Biological Neurons and Neural Networks, Artificial Neurons
3. Networks of Artificial Neurons, Single Layer Perceptrons
4. Learning and Generalization in Single Layer Perceptrons
5. Hebbian Learning, Gradient Descent Learning
6. The Generalized Delta Rule, Practical Considerations
7. Learning in Multi-Layer Perceptrons - Back-Propagation
8. Learning with Momentum, Conjugate Gradient Learning
9. Bias and Variance - Under-Fitting and Over-Fitting
10. Improving Generalization
11. Applications of Multi-Layer Perceptrons
12. Recurrent Neural Networks
13. Radial Basis Function Networks: Introduction
14. Radial Basis Function Networks: Algorithms
15. Radial Basis Function Networks: Applications
16. Self Organizing Maps: Fundamentals
17. Self Organizing Maps: Properties and Applications
18. Learning Vector Quantization
19. Committee Machines
20. Model Selection and Evolutionary Optimization
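By way of illustration, topics 4-6 cover gradient descent learning in a single-layer perceptron (the delta rule). The following minimal sketch, in plain Python, trains a single sigmoid unit on the logical AND function, which is linearly separable and therefore learnable by a single-layer network; the function names and hyperparameters here are illustrative only and not part of the module materials.

```python
import math
import random

def train_unit(samples, epochs=2000, eta=0.5, seed=0):
    """Train one sigmoid unit by gradient descent on squared error
    (the delta rule). `samples` is a list of (inputs, target) pairs."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]  # small random weights
    b = rng.uniform(-0.5, 0.5)                      # bias weight
    for _ in range(epochs):
        for x, t in samples:
            y = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            delta = (t - y) * y * (1.0 - y)  # error times sigmoid derivative
            w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
            b += eta * delta
    return w, b

def predict(w, b, x):
    """Threshold the unit's sigmoid output at 0.5 to get a class label."""
    y = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    return 1 if y >= 0.5 else 0

# Logical AND: linearly separable, so a single-layer perceptron suffices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_unit(data)
print([predict(w, b, x) for x, _ in data])
```

After training, the unit classifies all four AND patterns correctly; replacing the target column with XOR would fail, motivating the multi-layer networks and back-propagation of topic 7.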

Programmes containing this module