Axel Großmann, School of Computer Science, The University of Birmingham.
Recently, our school has decided to buy a mobile robot for teaching
and research purposes. In this EEBIC seminar, we have the opportunity
(1) to introduce this robot (hardware components, sensing abilities, and
other features) and (2) to give an impression of the state of the art
in mobile robotics. The robot and its control software, which was
developed at SRI International, won the AAAI'96 robot competition.
In particular, we want to discuss approaches for navigation in office
environments.
The talk has the following main topics:
Inman Harvey, COGS, University of Sussex.
Most Genetic Algorithm work, and GA folklore, is concerned under some guise or another with function optimisation in a pre-specified search space. While all very worthy and useful for many purposes, this is a rather dull and constipated view of what GAs can do; as with natural Darwinian evolution in the real world, GAs can be used for incremental adaptive evolution exploring an open-ended search space. I will give a few examples from evolutionary robotics, silicon hardware evolution and pharmaceutical molecular evolution, but I will concentrate on how GAs-as-adaptive-improvers differ from GAs-as-function-optimisers: e.g. worries about premature convergence are irrelevant; Genetic Programming as currently used is a non-starter; and the hot research topic of the moment is junk DNA and neutral networks.
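To make the contrast concrete, here is a minimal sketch (my own illustration, not Harvey's code) of a GA run as an adaptive improver rather than an optimiser: a steady-state tournament on a largely converged population, with a mutation operator that can lengthen the genotype so the search space stays open-ended. The bit-counting fitness is a stand-in; in real work it would be a robot controller, circuit, or molecule under selection.

    import random

    def evaluate(genome):
        # Stand-in fitness; real work would measure a behaving system.
        return sum(genome)

    def mutate(genome, rate=0.02, grow=0.05):
        child = [b ^ (random.random() < rate) for b in genome]
        if random.random() < grow:   # open-ended: genotype may lengthen
            child.append(random.randint(0, 1))
        return child

    def steady_state_step(pop):
        # Tournament: pick two, copy a mutated winner over the loser.
        a, b = random.sample(range(len(pop)), 2)
        if evaluate(pop[a]) < evaluate(pop[b]):
            a, b = b, a
        pop[b] = mutate(pop[a])

    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
    for _ in range(5000):
        steady_state_step(pop)
    best = max(pop, key=evaluate)
    print(len(best), evaluate(best))  # length drifts; fitness tracks the task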
Prof. Terry Fogarty, Department of Computer Studies, Napier University.
A basic approach to solving a complex problem with genetic algorithms is to decompose the problem into relatively independent subproblems and solve each of these with a genetic algorithm, making use of local fitness measures where possible while satisfying the dependencies between the subproblems at the same time. Methods based on symbiosis, speciation, fitness sharing and the evolution of communication have been developed to do this and are being applied in areas such as network routing, evolutionary robotics, genetic programming and hardware evolution.
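As a deliberately toy illustration of this decomposition (my construction, not Fogarty's), the sketch below evolves two bit-string subproblems in separate populations, each scored by matching its own target (the local fitness measure) plus a coupling term that represents the dependency between them. The targets and the coupling term are invented for the example.

    import random

    TARGET_A = [1, 0, 1, 1, 0, 1, 0, 1]
    TARGET_B = [0, 1, 1, 0, 1, 0, 1, 1]

    def match(genome, target):
        return sum(g == t for g, t in zip(genome, target))

    def joint_fitness(a, b):
        # Local measures plus a dependency term: the halves must also
        # agree on their boundary bits (a stand-in for coupling).
        return match(a, TARGET_A) + match(b, TARGET_B) + (a[-1] == b[0])

    def evolve(pop, partner, fit):
        # Rank against the best partner from the other population.
        scored = sorted(pop, key=lambda g: fit(g, partner), reverse=True)
        parents = scored[: len(pop) // 2]
        children = [[b ^ (random.random() < 0.05)
                     for b in random.choice(parents)]
                    for _ in range(len(pop) - len(parents))]
        return parents + children

    pop_a = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
    pop_b = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
    for gen in range(100):
        best_a = max(pop_a, key=lambda g: joint_fitness(g, pop_b[0]))
        best_b = max(pop_b, key=lambda g: joint_fitness(pop_a[0], g))
        pop_a = evolve(pop_a, best_b, lambda g, p: joint_fitness(g, p))
        pop_b = evolve(pop_b, best_a, lambda g, p: joint_fitness(p, g))
    print(joint_fitness(pop_a[0], pop_b[0]))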
Stephen Grand, Millennium Interactive Ltd, Cambridge, UK.
In November last year my company published a computer game called Creatures. This was the result of a four-year project, in which I attempted to synthesise complete, believable, dynamically rich and vaguely intelligent artificial life forms. Roughly a million of these creatures now inhabit computers across Europe, and their genetic material is regularly exchanged across the Internet. This talk will describe the thought processes I went through, the practical problems I faced and the solutions I devised as I attempted to combine neural structures, biochemistry and genetics into a single, coherent organism. I would also like to draw some conclusions about holism as an approach to AI, and the lessons that Biology can teach Computer Science. For some marketing hype about the program, and a lay description of my philosophical approach, see http://www.cyberlife.co.uk
A Schema Theory for GP. Riccardo Poli, School of Computer Science, The University of Birmingham.
Joao Pujol, School of Computer Science, The University of Birmingham
The automatic design of neural networks is still a largely unsolved problem. Recently, methods based on evolutionary computation have been applied to the design of neural networks. However, most of them lack efficient tools to evolve the weights and the architecture simultaneously. The main reason for this is that most of these methods do not deal with neural networks as two-dimensional structures whose natural building blocks are the neurons. In this talk, I present a new approach based on evolutionary computation which combines a linear and a two-dimensional representation to evolve the weights and the architecture simultaneously. I also report results of applying the method to binary classification and control problems.
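The sketch below is a rough gloss on the general idea, not Pujol's actual representation: a network genotype is stored as a weight matrix plus a connectivity mask, so that whole rows (neurons) act as building blocks, and mutation touches weights and architecture together. The fixed 3-hidden-unit topology and XOR task are my assumptions for the example.

    import math, random

    def forward(weights, mask, x):
        # One hidden layer of 3 tanh neurons, one tanh output, bias inputs.
        hidden = [math.tanh(sum(w * m * xi
                                for w, m, xi in zip(wr, mr, x + [1.0])))
                  for wr, mr in zip(weights[:3], mask[:3])]
        out_w, out_m = weights[3], mask[3]
        return math.tanh(sum(w * m * h
                             for w, m, h in zip(out_w, out_m, hidden + [1.0])))

    XOR = [([0.0, 0.0], -1), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], -1)]

    def fitness(ind):
        weights, mask = ind
        return -sum((forward(weights, mask, x) - y) ** 2 for x, y in XOR)

    def mutate(ind):
        # Weights drift continuously; the connectivity mask flips rarely.
        weights = [[w + random.gauss(0, 0.3) for w in row] for row in ind[0]]
        mask = [[m ^ (random.random() < 0.02) for m in row] for row in ind[1]]
        return (weights, mask)

    def random_ind():
        return ([[random.gauss(0, 1) for _ in range(4)] for _ in range(4)],
                [[1] * 4 for _ in range(4)])

    pop = [random_ind() for _ in range(40)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]
    print(fitness(pop[0]))   # approaches 0 as the net learns XOR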
Axel Großmann, School of Computer Science, The University of Birmingham
I am going to give a summary of the current state of the robot-learning project I am working on. Our Pioneer robot is supposed to learn object classification and successful action sequences in a specific environment. The robot has to pick up red and blue soda cans from the ground and collect them in a box. In particular, I am going to speak about
Dr Colin Reeves, School of Mathematical and Information Sciences, Coventry University
The genetic algorithm (GA), as the very name suggests, has most often been viewed from a biological perspective. The metaphors of natural selection, cross-breeding and mutation have been helpful in providing a framework in which to explain how and why GAs work. However, most practical applications of GAs are in the context of optimization, where alternative approaches may (and indeed in many cases do) prove more effective. In attempting to understand how GAs function as optimizers, several alternative viewpoints have been suggested. This tutorial will focus on one of these in some detail: an exploration of the links to neighbourhood search methods, and the consequent explanations in terms of the GA landscape.
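A small sketch may help fix the neighbourhood-search view (my illustration, not taken from the tutorial): mutation explores the Hamming neighbourhood of a single point, while crossover of two parents searches the hypercube spanned by the positions where they disagree, so both operators define neighbourhoods on the same landscape.

    import itertools, random

    def hamming_neighbours(x):
        # All points one bit-flip away: the neighbourhood of local search.
        return [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(len(x))]

    def crossover_neighbourhood(p1, p2):
        # Every offspring reachable by some crossover: common bits are
        # fixed, disagreeing positions range over all combinations.
        free = [i for i in range(len(p1)) if p1[i] != p2[i]]
        for bits in itertools.product([0, 1], repeat=len(free)):
            child = list(p1)
            for i, b in zip(free, bits):
                child[i] = b
            yield child

    p1 = [1, 0, 1, 1, 0]
    p2 = [1, 1, 1, 0, 0]
    print(len(hamming_neighbours(p1)))                      # 5 points
    print(sum(1 for _ in crossover_neighbourhood(p1, p2)))  # 2**2 = 4 points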
Dr. Adrian Thompson, Centre for Computational Neuroscience & Robotics, School of Cognitive & Computing Sciences, University of Sussex.
Evolutionary algorithms are computer programs that use the Darwinian tactic of selection acting on heritable variation. If the selection procedure (analogous to the `fitness' of individuals in natural populations) is related to an engineering specification, such algorithms can be used to `evolve' an engineering system (in my case, an electronic circuit) towards satisfying that specification. Usually, the system being automatically designed in this way is simulated in software to make it easy for its structure or parameters to be varied.
I'll show how an evolutionary algorithm can be allowed to manipulate a real physical medium: a reconfigurable VLSI silicon chip. By carefully considering the fundamental differences between the evolutionary process and the top-down methods used by human designers, I'll argue for a radical re-thinking of what kinds of electronic systems are possible. For those schooled in conventional design my conclusion is hard to swallow, so to aid the digestion I'll show an example of the ideas in action. This evolved silicon system is obviously beyond the scope of conventional design techniques. By exploiting the subtleties of semiconductor physics in a richly recurrent real-time analogue dynamical system, the circuit is amazingly efficient. In allowing evolution a free hand to work with a physical medium, problems arise which have strong biological analogies: I'll describe how I'm using ideas from biology to tackle them.
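In outline, such intrinsic hardware evolution is an ordinary generational loop; the crucial difference is where fitness comes from. In the sketch below (my outline, not Thompson's code) the evaluation function is a software stub, and the genome length is only an assumed order of magnitude; in the real experiments the genotype is a configuration bit-string downloaded to the chip and fitness is a physical measurement of the chip's behaviour.

    import random

    GENOME_BITS = 1800   # assumed order of magnitude of a configuration

    def measure_on_hardware(config_bits):
        # Placeholder: download config_bits to the chip, apply test
        # inputs, score the real analogue output against the spec.
        # Stubbed here so the sketch runs without hardware.
        return sum(config_bits) / len(config_bits)

    def evolve(pop_size=50, generations=100, mut_rate=0.002):
        pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=measure_on_hardware, reverse=True)
            elite = scored[: pop_size // 5]
            pop = elite + [
                [b ^ (random.random() < mut_rate)
                 for b in random.choice(elite)]
                for _ in range(pop_size - len(elite))
            ]
        return max(pop, key=measure_on_hardware)

    best = evolve()
    print(measure_on_hardware(best))

Because the medium is physical, nothing constrains evolution to use the chip in the ways a human designer would, which is exactly where the biological analogies enter.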
Amr Radi, School of Computer Science, The University of Birmingham
The backpropagation learning rule is a widespread computational method for training multilayer networks. Unfortunately, backpropagation suffers from several problems. In this talk, a new technique based upon Genetic Programming (GP) is proposed to overcome some of these problems. We have used GP to discover new supervised learning algorithms. A set of such learning algorithms has been compared with the Standard BackPropagation (SBP) learning algorithm on different problems and has been shown to provide better performance. This study indicates that there exist many supervised learning algorithms better than, but similar to, SBP and that GP can be used to discover them.
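A hedged sketch of the underlying idea (not Radi's actual system): a learning rule is just a function of local quantities such as input activation, error signal, and current weight; the standard delta rule is one point in that space, and GP searches over such expressions. The weight-decay variant below is purely illustrative, as is the single-unit test problem.

    import random

    def sbp_rule(x, err, w, lr=0.1):
        return lr * err * x                 # the standard delta rule

    def candidate_rule(x, err, w, lr=0.1):
        # The kind of variant GP might propose: a delta rule with a
        # small weight-decay term (purely illustrative).
        return lr * err * x - 0.001 * w

    def train_perceptron(rule, data, epochs=50):
        w = [0.0, 0.0, 0.0]                 # two inputs + bias
        for _ in range(epochs):
            for x, target in data:
                xs = x + [1.0]
                out = 1.0 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else -1.0
                err = target - out
                w = [wi + rule(xi, err, wi) for wi, xi in zip(w, xs)]
        return w

    AND = [([0.0, 0.0], -1), ([0.0, 1.0], -1),
           ([1.0, 0.0], -1), ([1.0, 1.0], 1)]
    for rule in (sbp_rule, candidate_rule):
        print(rule.__name__, train_perceptron(rule, AND))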
Jeremy Wyatt, School of Computer Science, The University of Birmingham
One big difference between supervised learning and learning in an autonomous agent is that an agent can influence the sequence of experiences it has. Exploration problems are concerned with how to select an optimal sequence. The principal difficulty is that exploration problems are well known to be intractable. I will present a combination of new techniques which approximate optimal solutions while working within reasonable time and space constraints.
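For a flavour of such approximations (a generic illustration only, not one of the talk's techniques), interval-estimation action selection replaces intractable optimal exploration with cheap local statistics: pick the action with the highest upper confidence bound on its estimated payoff. The three-armed bandit below is an assumed toy setting.

    import math, random

    ARMS = [0.3, 0.5, 0.7]            # hidden payoff probabilities (assumed)
    counts = [0] * len(ARMS)
    totals = [0.0] * len(ARMS)

    def choose(t):
        for a in range(len(ARMS)):    # try every arm once first
            if counts[a] == 0:
                return a
        # Upper confidence bound: estimated mean + exploration bonus.
        return max(range(len(ARMS)),
                   key=lambda a: totals[a] / counts[a]
                                 + math.sqrt(2 * math.log(t + 1) / counts[a]))

    for t in range(2000):
        a = choose(t)
        counts[a] += 1
        totals[a] += 1.0 if random.random() < ARMS[a] else 0.0

    print(counts)                     # pulls concentrate on the best arm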
Prof Aaron Sloman, School of Computer Science, The University of Birmingham
This is a "first draft" attempt to characterise a new unifying generalisation of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. This topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory (yet to be developed!). Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them (Milner, 1996), I think there are also some important emerging high-level cross-disciplinary ideas about processes and architectures that can be unified and formalised, though this discussion reports only preliminary work on this task.
The key idea is that there are two linked spaces "inhabited" by behaving systems. One is the fairly familiar space of possible designs ("design space", where the notion of "design" does not presuppose any designer: a design is simply an abstract specification which can be instantiated in working systems that fit the design). The other is the space of sets of requirements, which I call "niche space", since I've come to view the biologist's notion of a niche as more or less the same thing as the engineer's notion of a set of requirements and constraints. (Both are abstract spaces. E.g. two insects or two plants in the same physical location can be in totally different niches, so niches are not determined by physical location.) Just as designs don't presuppose a designer, so in principle requirements (niches) don't presuppose a requirer. However there are different ways actual requirements can be generated: e.g. engineering goals vs biological needs. (Note that requirements are not the sort of thing sometimes described as a functional specification from which a program could be compiled.)
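One toy way to make the two spaces concrete (entirely my own gloss on the abstract, not Sloman's formalisation): a design provides capabilities, a niche demands them, and instantiation is the relation between the two, so two organisms at the same physical location can occupy different niches.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Design:
        name: str
        properties: frozenset     # capabilities the design provides

    @dataclass(frozen=True)
    class Niche:
        name: str
        requirements: frozenset   # capabilities the niche demands

    def satisfies(design: Design, niche: Niche) -> bool:
        # A design fits a niche when it meets every requirement.
        return niche.requirements <= design.properties

    ant = Design("ant", frozenset({"walks", "carries", "cooperates"}))
    vine = Design("vine", frozenset({"climbs", "photosynthesises"}))
    forager = Niche("forager", frozenset({"walks", "carries"}))
    # Same physical location, different niches:
    print(satisfies(ant, forager), satisfies(vine, forager))  # True False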
A generalisation of the study of dynamical systems (phase spaces) is the study of trajectories in design space and niche space, the subtle and complex causal interactions between them, and the trade-offs between different regions of design space (since the same class of behaviours can generally be produced by infinitely many different designs).
Conjecture: there is a way of formally describing the structures of these spaces and their dynamic interactions which will revolutionise AI, theoretical biology, and perhaps computer science. For instance, the notion of a "user-friendly" system could be replaced by a precise description of a region in niche space, which will be related to two regions in design space: one for a class of users and one for a system that will be found friendly by such a user.
One of the important issues in understanding such dynamics is "circular causation": high level events and processes can have real causal powers, and can influence low level implementation mechanisms, even though the latter are in a sense causally complete. Causation in both directions is something we need to understand.
Ref:
Robin Milner, "Semantic ideas in computing", in Computing Tomorrow: Future Research Directions in Computer Science, ed. Ian Wand and Robin Milner, Cambridge University Press, 1996, pp. 246-283.
Una-May O'Reilly, AI Lab, MIT
As programmers, we all make changes to our programs that result in unintended, inappropriate outcomes. That's because (among other reasons!) a program is sensitive to the order in which its statements appear with respect to one another. Crossover in genetic programming blindly changes the ordering of a program's primitive elements. So, how can it work? I shall set up a simple, parameterized GP problem that isolates statement dependency, viewed in terms of external dependencies being resolved as subsolutions combine. Running simple GP on this problem under different parameter regimes elucidates the evolutionary search process and the reasons for its success.
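The abstract does not give the problem, but one can guess at its flavour (the construction below is mine, not O'Reilly's): programs are orderings of statements, each statement may depend on an earlier one, fitness counts statements whose dependencies precede them, and crossover blindly reorders, which is exactly the behaviour being stressed.

    import random

    N = 10                                      # problem-size parameter
    DEPENDS = {i: i - 1 for i in range(1, N)}   # statement i needs i-1 first

    def fitness(program):
        # Count statements whose dependency has already appeared.
        seen = set()
        score = 0
        for stmt in program:
            if stmt not in DEPENDS or DEPENDS[stmt] in seen:
                score += 1
            seen.add(stmt)
        return score

    def crossover(p1, p2):
        # Blind one-point crossover on orderings, repaired to stay a
        # permutation: the tail keeps p2's relative order.
        cut = random.randint(1, N - 1)
        head = p1[:cut]
        return head + [s for s in p2 if s not in head]

    pop = [random.sample(range(N), N) for _ in range(30)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:10] + [crossover(*random.sample(pop[:10], 2))
                          for _ in range(20)]
    print(pop[0], fitness(pop[0]))   # fitness N means a fully resolved order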