# Module 06-12412 (2013)

## Introduction to Neural Computation

## Level 4/M

John Bullinaria | Semester 1 | 10 credits

### Outline

This module introduces the basic concepts and techniques of neural computation, and its relation to automated learning in computing machines more generally. It covers the main types of formal neuron and their relation to neurobiology, showing how to construct large neural networks and study their learning and generalization abilities in the context of practical applications. It also provides practical experience of designing and implementing a neural network for a real-world application.

### Aims

The aims of this module are to:

- Introduce some of the fundamental techniques and principles of neural computation
- Investigate some common neural-based models and their applications
- Present neural network models in the larger context of state-of-the-art techniques of automated learning

### Learning Outcomes

On successful completion of this module, the student should be able to:

1. Understand the relationship between real brains and simple artificial neural network models
2. Describe and explain some of the principal architectures and learning algorithms of neural computation
3. Explain the learning and generalization aspects of neural computation
4. Apply neural computation algorithms to specific technical and scientific problems
5. Demonstrate an understanding of the benefits and limitations of neural-based learning techniques in the context of other state-of-the-art methods of automated learning

### Restrictions

#### Cannot be taken with

- 06-20416 - Neural Computation

### Teaching methods

2 hrs/week of lectures, plus assigned coursework

Contact Hours: 23

### Assessment

Sessional: 1.5 hr examination (80%), continuous assessment (20%)

Supplementary (where allowed): 1.5 hr examination (80%), continuous assessment (20%)

### Detailed Syllabus

- Introduction to Neural Networks and their History
- Biological Neurons and Neural Networks, Artificial Neurons
- Networks of Artificial Neurons, Single Layer Perceptrons
- Learning and Generalization in Single Layer Perceptrons
- Hebbian Learning, Gradient Descent Learning
- The Generalized Delta Rule, Practical Considerations
- Learning in Multi-Layer Perceptrons - Back-Propagation
- Learning with Momentum, Conjugate Gradient Learning
- Bias and Variance - Under-Fitting and Over-Fitting
- Improving Generalization
- Applications of MLPs
- Recurrent Neural Networks
- Radial Basis Function Networks: Introduction
- Radial Basis Function Networks: Algorithms
- Radial Basis Function Networks: Applications
- Self Organizing Maps: Fundamentals
- Self Organizing Maps: Algorithms and Applications
- Learning Vector Quantization
- Committee Machines
- Mixture Models
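
As a small illustration of the kind of material listed above (single-layer perceptrons trained by gradient descent with the generalized delta rule), here is a minimal sketch in Python/NumPy. The toy data set (the logical AND function), learning rate, and epoch count are illustrative choices, not part of the module specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable problem: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])       # target outputs

X_b = np.hstack([X, np.ones((4, 1))])     # append a constant bias input

w = rng.normal(scale=0.1, size=3)         # weights, including bias weight
eta = 0.5                                 # learning rate (illustrative)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for epoch in range(5000):
    y = sigmoid(X_b @ w)                  # network outputs
    delta = (t - y) * y * (1 - y)         # generalized delta rule error term
    w += eta * X_b.T @ delta              # batch gradient descent update

predictions = (sigmoid(X_b @ w) > 0.5).astype(int)
print(predictions)
```

After training, the thresholded outputs reproduce the AND truth table. A multi-layer perceptron with back-propagation (also covered in the syllabus) extends this same update rule through hidden layers via the chain rule.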

### Programmes containing this module

- MEng Computer Science/Software Engineering [4754]
- MEng Computer Science/Software Engineering with an Industrial Year [9501]
- MRes Natural Computation [9048]
- MSc Advanced Computer Science [0014]
- MSc Computer Science [0008]
- MSc Human-Computer Interaction [9151]
- MSc Multidisciplinary Optimisation [9150]
- MSc Robotics [9889]
- MSci Computer Science [4443]
- MSci Computer Science with an Industrial Year [9509]
- MSci Computer Science with Study Abroad [5576]
- MSci Mathematics and Computer Science [5197]
- MSci Mathematics and Computer Science with an Industrial Year [9496]
- MSci Pure Mathematics and Computer Science [5256]
- MSci Pure Mathematics and Computer Science with an Industrial Year [9498]