Module 12417 (2005)
Syllabus page 2005/2006
06-12417
Nature Inspired Learning
Level 4/M
Links | Outline | Aims | Outcomes | Prerequisites | Teaching | Assessment | Books | Detailed Syllabus
The Module Description is a strict subset of this Syllabus Page. (The University module description has not yet been checked against the School's.)
Relevant Links
See the Nature Inspired Learning Teaching Page for module material and further useful links.
Outline
This module gives an introduction to advanced learning techniques inspired by natural, physical and biological systems as well as some state-of-the-art machine learning techniques. Examples of such topics include: co-evolutionary learning, interaction between learning and evolution, the Baldwin effect, Lamarckian evolution, adaptive resonance theory neural networks, Boltzmann machines, active learning in natural computation, reinforcement learning, decision tree learning, Bayesian learning, support vector machines, mixture-model based learning.
Aims
The aims of this module are to:
- present a range of nature inspired and state-of-the-art learning techniques
- show how these techniques can be applied to particular problems
Learning Outcomes
| | On successful completion of this module, the student should be able to: | Assessed by: |
| 1 | demonstrate a good understanding of the nature inspired and state-of-the-art learning techniques presented in the module, as well as the relationships between them | Examination |
| 2 | demonstrate an understanding of the benefits and limitations of nature inspired learning techniques in contrast to state-of-the-art machine learning techniques | Examination |
| 3 | apply nature inspired and state-of-the-art learning algorithms to specific technical and scientific problems | Examination |
Restrictions, Prerequisites and Corequisites
Restrictions:
None
Prerequisites:
None
Co-requisites:
- 06-12412 (Introduction to Neural Computation), unless 06-19341 (Introduction to Natural Computation) or equivalent has been taken previously
- 06-12414 (Introduction to Evolutionary Computation), unless 06-02411 (Evolutionary Computation) has been taken previously
Teaching
Teaching Methods:
2 hrs/week lectures/tutorials
Contact Hours:
Assessment
- 1.5 hr open book examination (100%)
- Supplementary (where allowed): as the sessional assessment
Recommended Books
| Title | Author(s) | Publisher, Date |
| Machine Learning | Tom M. Mitchell | McGraw-Hill, 1997 |
| Neural Networks: A Comprehensive Foundation (Second Edition) | Simon Haykin | Prentice Hall, 1999 |
| Evolutionary Computation | Thomas Bäck, David B. Fogel, and Zbigniew Michalewicz (Eds.) | IOP Publishing, 2000 |
| Reinforcement Learning: An Introduction | Richard S. Sutton and Andrew G. Barto | MIT Press, 1998 |
| Adaptive Individuals in Evolving Populations: Models and Algorithms | R. K. Belew and M. Mitchell (Eds.) | Addison-Wesley, 1996 |
| Fundamentals of Neural Networks: Architectures, Algorithms, and Applications | Laurene Fausett | Prentice Hall, 1994 |
| An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods | Nello Cristianini and John Shawe-Taylor | Cambridge University Press, 2000 |
| Pattern Classification (Second Edition) | Richard O. Duda, Peter E. Hart, and David G. Stork | John Wiley & Sons, 2001 |
| The Elements of Statistical Learning: Data Mining, Inference, and Prediction | Trevor Hastie, Robert Tibshirani, and Jerome Friedman | Springer, 2001 |
| The Handbook of Brain Theory and Neural Networks (2nd Edition) | M.A. Arbib (Ed.) | MIT Press, 2002 |
Detailed Syllabus
- Overview of what Nature Inspired Learning covers. Lessons learnt in the Neural Computation module.
- Neural models for processing temporal data. Recurrent neural networks and their generalisations (recursive networks). Learning in recurrent networks: back-propagation through time and real-time recurrent learning.
- Biologically more accurate neural models - spiking neurons. Data representation and learning in networks of spiking neurons.
- Introduction to probabilities and statistics. Probabilistic models of data. Classification as a probabilistic inference problem. Discriminative vs. generative approaches to classification.
- Bayesian Learning -- Bayes' theorem, maximum a posteriori and maximum likelihood learning, Bayes optimal classifier, naive Bayesian classifier, Bayesian decision theory, Bayesian belief networks, the relationship to classical neural network models.
- Mixture-Model Based Learning -- mixture-model based unsupervised and supervised learning as well as EM learning algorithms, the relationship to modular neural network models. Latent-space models.
- Latent-space reformulations of neural topographic maps. Generative topographic mapping. Information theoretic formulations of topographic maps - channel noise models. Latent Trait models.
- Support Vector Machines -- generalisation in linear learning machines, large margin classifiers and learning algorithms, nonlinear support vector machines and other related issues. Kernel machines.
- Learning Theory -- basic concepts. Complexity measures for learning machines and their relation to bounds on generalisation error.
- Co-evolutionary learning -- Competitive and cooperative co-evolutionary strategies and algorithms for solving learning problems and evolving artificial neural networks.
- Interaction between Learning and Evolution -- the Baldwin effect, Lamarckian evolution, how learning can guide evolution, illustrative examples of the effect of learning on evolution.
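The naive Bayesian classifier listed under Bayesian Learning can be sketched in a few lines: estimate class priors and per-class feature likelihoods by counting, then predict the class maximising the log-posterior. The toy weather data, feature names, and labels below are invented for illustration and are not taken from the module materials:

```python
from collections import defaultdict
import math

# Toy training data: each example is ({feature: value}, label).
train = [
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "sunny", "windy": "yes"}, "stay"),
    ({"outlook": "rainy", "windy": "yes"}, "stay"),
    ({"outlook": "sunny", "windy": "no"}, "play"),
    ({"outlook": "overcast", "windy": "no"}, "play"),
]

def train_nb(data):
    """Estimate class priors and per-class feature likelihoods by counting."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(int)   # (label, feature, value) -> count
    values = defaultdict(set)        # feature -> set of observed values
    for x, y in data:
        class_counts[y] += 1
        for f, v in x.items():
            feat_counts[(y, f, v)] += 1
            values[f].add(v)
    return class_counts, feat_counts, values

def predict(x, class_counts, feat_counts, values):
    """Pick argmax_y log P(y) + sum_f log P(x_f | y), with add-one smoothing."""
    n = sum(class_counts.values())
    best, best_score = None, -math.inf
    for y, cy in class_counts.items():
        score = math.log(cy / n)
        for f, v in x.items():
            num = feat_counts[(y, f, v)] + 1          # Laplace smoothing
            den = cy + len(values[f])
            score += math.log(num / den)
        if score > best_score:
            best, best_score = y, score
    return best

model = train_nb(train)
print(predict({"outlook": "sunny", "windy": "no"}, *model))  # -> "play"
```

The "naive" conditional-independence assumption is what lets the likelihood factorise over features, so training reduces to counting.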
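The EM algorithm mentioned under Mixture-Model Based Learning can be sketched for a two-component one-dimensional Gaussian mixture. The synthetic data, true means, and all hyperparameters below are invented for illustration:

```python
import math
import random

random.seed(0)
# Synthetic 1-D data from two Gaussians with means -2 and 3, unit variance.
data = [random.gauss(-2, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Initial guesses for mixing weights, means, and variances.
pi = [0.5, 0.5]
mu = [-1.0, 1.0]
var = [1.0, 1.0]

for _ in range(50):
    # E-step: responsibility resp[i][k] = P(component k | x_i).
    resp = []
    for x in data:
        p = [pi[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate parameters from the soft assignments.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk

print(sorted(round(m, 2) for m in mu))  # estimated means, close to -2 and 3
```

Each iteration provably does not decrease the data log-likelihood, which is why EM is the standard fitting procedure for mixture models.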
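Reinforcement learning, named in the module outline, can be illustrated with tabular Q-learning on a tiny deterministic corridor; the environment, reward scheme, and learning parameters below are all invented for this sketch:

```python
import random

random.seed(1)

# A 5-state corridor: states 0..4, actions 0 = left, 1 = right.
# Reward 1 for stepping into state 4 (terminal), 0 otherwise.
N_STATES, ACTIONS = 5, (0, 1)
GOAL = N_STATES - 1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):                  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best action in the next state.
        target = r if done else r + gamma * max(q[s2])
        q[s][a] += alpha * (target - q[s][a])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES - 1)]
print(policy)  # greedy policy: move right in every non-terminal state
```

Because the update bootstraps from `max(q[s2])` regardless of the action actually taken, Q-learning is off-policy: it learns the greedy policy while following an exploratory one.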
Last updated: 13 May 2005
Source file: /internal/modules/COMSCI/2005/xml/12417.xml