Genetics-Based Machine Learning: Current Applications, Future Directions
This talk presents a progression of genetics-based machine learning (GBML) systems and ideas. It begins with a project in which a GBML system acquired rules for novel fighter combat maneuvers through simulation. In this project, a genetic machine learning system was implemented to generate high angle-of-attack air combat tactics for the NASA X-31 research aircraft. The system, based on a learning classifier system (LCS) approach, employed a digital simulation model of 1-versus-1 air combat and a genetic algorithm to develop effective tactics for the X-31. The resulting maneuvers allowed the X-31 to successfully exploit its post-stall capabilities against a conventional fighter opponent, demonstrating the ability of the GBML system to discover novel tactics in a dynamic air combat environment. The talk will include previously unpresented results in which two planes co-evolve via genetics-based machine learning.
This project demonstrates how a GBML system can acquire, through experience, cooperative rule structures that jointly implement novel approaches to unforeseen problems. The talk will then extend the notions behind this GBML system to a new, general framework that is currently being designed. This framework will extend the adaptive abilities of GBML to broader, agent-based systems. Final comments will discuss motivations for GBML as an adaptive paradigm in this new, broader setting.
Dr. Robert E. Smith is Director of the Intelligent Computing Systems Centre at the University of the West of England. He was previously an Associate Professor of Aerospace Engineering and Mechanics at the University of Alabama. He received his B.S. in Mechanical Engineering in 1985, his M.S. in Engineering Mechanics in 1988, and his Ph.D. in Engineering Mechanics in 1991. Dr. Smith is an active researcher in the application of adaptive and intelligent systems to engineering problems. His chief interests are genetic algorithms for machine learning and optimization. He is the author of 15 journal articles and more than 20 conference articles on these subjects, and serves as Associate Editor of the international journal Evolutionary Computation.
Adaptive Processing of Data Structures
In recent years, an alternative method has been proposed for learning from data that can be expressed in terms of a graph structure, e.g., a tree or a list. In this talk we will present a number of results obtained by placing a multilayer perceptron in each node of the graph, and we will show how the weights of the node models can be learned from a set of input-output training examples. The work will be illustrated using a number of examples.
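The flavour of the approach can be sketched in a toy example. The following is my own minimal construction, not Prof Tsoi's actual formulation: the same small perceptron-style node model is applied at every node of a binary tree, encoding it bottom-up from the leaves, and the shared weights are fitted to input-output pairs. Gradients are taken by finite differences to keep the sketch dependency-free; a real system would use backpropagation through structure.

```python
# Toy sketch (my own construction): a recursive network over trees.
# One shared node model encodes each (label, left, right) node from its
# label and its children's encodings; weights are learned from examples.
import math, random

random.seed(0)
H = 2                                  # encoding size per node (a choice)
# Flat parameter vector: node weights W (H x (1+2H)), node biases (H),
# and output weights v (H).
params = [random.uniform(-0.1, 0.1) for _ in range(H * (1 + 2 * H) + H + H)]

def encode(tree, p):
    """Bottom-up H-dim encoding of a (label, left, right) tree."""
    label, left, right = tree
    lv = encode(left, p) if left else [0.0] * H   # missing child -> zeros
    rv = encode(right, p) if right else [0.0] * H
    x = [float(label)] + lv + rv       # node input: label + child codes
    out = []
    for i in range(H):
        w = p[i * (1 + 2 * H):(i + 1) * (1 + 2 * H)]
        s = sum(wi * xi for wi, xi in zip(w, x)) + p[H * (1 + 2 * H) + i]
        out.append(math.tanh(s))
    return out

def predict(tree, p):
    """Scalar output: linear readout of the root encoding."""
    enc = encode(tree, p)
    return sum(vi * ei for vi, ei in zip(p[-H:], enc))

# Toy task: predict the number of 1-labelled nodes in the tree.
data = [
    ((1, (1, None, None), (0, None, None)), 2.0),
    ((0, (1, None, None), (1, None, None)), 2.0),
    ((1, (1, None, None), (1, None, None)), 3.0),
    ((0, (0, None, None), None), 0.0),
]

def loss(p):
    return sum((predict(t, p) - y) ** 2 for t, y in data) / len(data)

initial = loss(params)
for _ in range(300):                   # finite-difference gradient descent
    eps, lr = 1e-4, 0.05
    grad = []
    for i in range(len(params)):
        params[i] += eps; up = loss(params)
        params[i] -= 2 * eps; dn = loss(params)
        params[i] += eps
        grad.append((up - dn) / (2 * eps))
    for i in range(len(params)):
        params[i] -= lr * grad[i]
print(initial, loss(params))           # training error falls
```

The key design point the talk addresses is that the same node model is reused at every position in the structure, so the learned weights generalize across graphs of different shapes and sizes.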
Prof Tsoi is the Director of Information Services at the University of Wollongong in Australia. He was the Dean of the Faculty of Informatics in the same university and a professor at the University of Queensland before that. His major research interests include neural networks, machine learning and data mining. He has served on various panels in the Australian Research Council and published numerous refereed papers.
Genetic Algorithms as Markov Processes: An Introduction
Genetic algorithms are stochastic processes that move from one state (or population) to another. The fact that each generation depends only on the previous one means that the genetic algorithm is a Markov process. Applying some simple ideas from the theory of Markov processes enables us to address a number of general questions about genetic algorithms: Under what conditions will they converge to the optimum? How long might this take? What is the effect of adapting operators according to some annealing schedule? This is the most general level of genetic algorithm theory, and it forms the foundation for further theoretical developments.
It should be noted that this talk is pedagogical in nature, and will not contain any new research results!
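In that pedagogical spirit, the Markov view can be made concrete on a toy problem. The sketch below is my own illustration, not material from the talk: an elitist (1+1)-style GA on 3-bit OneMax has only 8 states, so we can write down its transition matrix explicitly, verify it is stochastic, and watch the probability mass accumulate on the absorbing optimum.

```python
# Toy illustration (my own, not from the talk): an elitist (1+1)-style GA
# on 3-bit OneMax as a Markov chain over the 8 possible bit strings.
from itertools import product
import numpy as np

N_BITS = 3
P_FLIP = 1.0 / N_BITS                  # per-bit mutation rate (a choice)
states = list(product([0, 1], repeat=N_BITS))
fitness = {s: sum(s) for s in states}  # OneMax: count of ones

def mutation_prob(x, y):
    """Probability that mutating x yields exactly y."""
    p = 1.0
    for xb, yb in zip(x, y):
        p *= P_FLIP if xb != yb else (1.0 - P_FLIP)
    return p

# Transition matrix: accept the mutant only if strictly fitter (elitism),
# so the optimum (1,1,1) is an absorbing state.
P = np.zeros((len(states), len(states)))
for i, x in enumerate(states):
    for j, y in enumerate(states):
        if fitness[y] > fitness[x]:
            P[i, j] = mutation_prob(x, y)
    P[i, i] = 1.0 - P[i].sum()         # otherwise stay where we are

# Start from all-zeros and iterate the chain for 100 generations.
dist = np.zeros(len(states))
dist[states.index((0, 0, 0))] = 1.0
for _ in range(100):
    dist = dist @ P

print(dist[states.index((1, 1, 1))])   # mass on the optimum -> near 1
```

Because every non-optimal state has positive probability of an improving mutation and the optimum is absorbing, the chain converges to the optimum with probability 1; questions of convergence and expected hitting time reduce to properties of `P`.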
GP Schema theory as a tool for understanding code growth in Genetic Programming
The seminar will be a review of recent work I've done with Riccardo Poli, some fairly polished and some "in progress". In the past two years Riccardo has extended Stephens and Waelbroeck's schema theory for Genetic Algorithms (GAs) to a schema theory for Genetic Programming (GP). In this talk I'll discuss applications of this theory to the long-standing problem of understanding GP code growth, or "bloat".
We have applied the theory to the case of a flat fitness landscape, with a surprising result: we get, in some sense, the opposite of bloat. We then extended that work by developing a simple notion of fitness for which the theory predicts bloat, which is then observed in real GP runs.
The theory has also suggested some surprising hypotheses, including the idea that a small number of "glitches" in an otherwise flat fitness landscape may be sufficient to drive growth in average program length. This observation may be useful as a new tool for controlling program size in GP runs.
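The flat-landscape case can be illustrated numerically. The following is a crude simulation of my own, not the schema theory itself: under subtree-swapping crossover with no selection pressure, the population's total node count is conserved exactly at every generation, yet the size distribution drifts toward many very small programs, a rough analogue of the "opposite of bloat" effect mentioned above.

```python
# Crude illustration (my construction): subtree-swapping crossover on a
# flat fitness landscape. Total size is conserved exactly, but the size
# distribution drifts toward small programs.
import random, sys

random.seed(1)
sys.setrecursionlimit(10000)           # trees can get tall over time

def full_tree(depth):
    """Full binary tree of the given depth, as ('x', left, right) tuples."""
    if depth == 0:
        return ('x', None, None)
    return ('x', full_tree(depth - 1), full_tree(depth - 1))

def size(t):
    return 0 if t is None else 1 + size(t[1]) + size(t[2])

def get_subtree(t, i):
    """Subtree rooted at the i-th node, in preorder."""
    if i == 0:
        return t
    ls = size(t[1])
    return get_subtree(t[1], i - 1) if i <= ls else get_subtree(t[2], i - 1 - ls)

def replace_subtree(t, i, s):
    if i == 0:
        return s
    ls = size(t[1])
    if i <= ls:
        return (t[0], replace_subtree(t[1], i - 1, s), t[2])
    return (t[0], t[1], replace_subtree(t[2], i - 1 - ls, s))

pop = [full_tree(3) for _ in range(100)]       # everyone starts at size 15
initial_mean = sum(size(t) for t in pop) / len(pop)

for _ in range(50):                            # flat fitness: no selection
    random.shuffle(pop)
    nxt = []
    for a, b in zip(pop[::2], pop[1::2]):
        i = random.randrange(size(a))
        j = random.randrange(size(b))
        sa, sb = get_subtree(a, i), get_subtree(b, j)
        nxt.append(replace_subtree(a, i, sb))  # swap subtrees: total node
        nxt.append(replace_subtree(b, j, sa))  # count is conserved exactly
    pop = nxt

sizes = sorted(size(t) for t in pop)
print(initial_mean, sum(sizes) / len(sizes), sizes[len(sizes) // 2])
```

Because each crossover event swaps subtrees between the two offspring, the mean size is an exact invariant; only the shape of the distribution changes, which is what makes the flat-landscape case analytically tractable.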
The Perception of Time Intervals and Reinforcement Learning
An important component of animal behaviour is the ability to wait an appropriate time for a reward. For example, an animal should forage for food for a time related to the expected time to find food. In order to do this, the animal requires an internal clock to measure the time interval and a reinforcement learning system to learn the appropriate waiting time.
Experiments on animals and humans in which a time interval must be estimated reveal a very striking property of the internal clock. In a wide range of experiments, the experimental curves are found to scale with the length of the time interval. For example, the standard deviation of the estimate of a time interval grows linearly with the length of the interval. This is known as scalar timing, and it suggests that the perception of time intervals has a characteristic form across a wide range of animals.
It has been difficult to produce a model of scalar timing which explains the scalar property rather than assuming it a priori. We have produced a simple connectionist model which exhibits the scalar property. The model consists of a collection of probabilistic spiking neurons. Time is encoded as the amount of activation in the network. The neurons are probabilistic; the expected number of spikes emitted by a neuron is proportional to the number of spikes it receives. When the loss of neural activity due to the probabilistic nature of the neurons balances the increase of neural activity due to synaptic fan-out, the scalar property approximately holds. When this is coupled to a simple reinforcement learning mechanism, the system simulates the results of experiments.
A problem with the model described above is that it requires the balancing of the spike loss and generation mechanisms. I will present a game-theoretic argument which suggests that such a balance could arise from the optimization of performance in animal contests.
Estimating the number of local optima in a fitness landscape
Neighbourhood search (NS) is a popular method for finding good solutions to combinatorial optimization problems (COPs). Different neighbourhoods induce different fitness landscapes, replete with "hills", "valleys", "ridges" etc., but simple NS only climbs one hill - a local optimum. While at least one hill will be globally optimal, experience shows that most of the time we don't find it, and the fitness of the hill we do climb may be quite poor.
It would be helpful to have some idea of how many local optima there really are in a particular fitness landscape. In this talk I shall demonstrate how this problem can be formulated as an analogue of a well-known problem in ecology - estimating the number of animals of a given species in a well-defined area. Some theoretical results will be explained, and experimental evidence used to assess the plausibility of this approach.
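The ecological analogy can be sketched concretely. In the example below (the specific estimator and the toy landscape are my own choices, not necessarily the speaker's), repeated hill-climbs from random starts "capture" local optima, and a capture-recapture estimator such as Chao1 extrapolates from the capture frequencies to the total number of optima in the landscape.

```python
# Sketch of the analogy (estimator and landscape are my own choices):
# hill-climbs "capture" local optima; Chao1 estimates how many exist.
import random
from collections import Counter

random.seed(3)
N = 10                                         # bit-string length
fitness = {}                                   # random landscape, cached

def f(x):
    if x not in fitness:
        fitness[x] = random.random()           # i.i.d. random fitness
    return fitness[x]

def hill_climb(x):
    """Best-improvement climb under the one-bit-flip neighbourhood."""
    while True:
        best = max((x ^ (1 << b) for b in range(N)), key=f)
        if f(best) <= f(x):
            return x                           # local optimum reached
        x = best

captures = Counter()
for _ in range(500):                           # 500 random restarts
    captures[hill_climb(random.getrandbits(N))] += 1

observed = len(captures)                       # distinct optima "captured"
f1 = sum(1 for c in captures.values() if c == 1)   # optima seen once
f2 = sum(1 for c in captures.values() if c == 2)   # optima seen twice
chao1 = observed + f1 * (f1 - 1) / (2 * (f2 + 1))  # bias-corrected Chao1
print(observed, round(chao1, 1))
```

The many singleton captures play the role of rarely-sighted animals: a large `f1` relative to `f2` signals that many optima remain unseen, pushing the estimate well above the observed count.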
Benefits and Applications of Self-Configurable Hardware: The Cell Matrix Architecture
This talk will provide an introduction to the cell matrix architecture. I will give a high-level description of its basic characteristics, followed by a brief overview of its cell-level behavior. I'll then discuss the benefits of self-configurable hardware, and explain how self-configurability can be used to support evolvable hardware, fault tolerance and massively parallel configuration. I'll then present some applications for the cell matrix architecture, including massively-parallel computation. I'll conclude with a summary of this research's current status and a discussion of future work.