EEBIC group


1998 Meetings


  • 26 January, 1998: AI Debate

    Panelists: Aaron Sloman, Manfred Kerber, Riccardo Poli, John Barnden, Russell Beale, Brian Logan (perhaps others) - School of Computer Science, The University of Birmingham

    This is an informal debate on some important AI topics, mostly for the benefit of our UG, MSc and PhD students working in the area, but open to everybody. The audience will drive the discussion by asking questions or making comments on relevant AI topics, to which two or more panelists (with very different views!) will respond. Interesting topics include:

    1. Logic is useful, but what can't it do?
    2. Are knowledge representations necessary for intelligence?
    3. Symbolic systems vs. neural nets.
    4. The scaling up problem: how to solve or avoid it.
    5. Is Alife part of AI or AI part of Alife?
    6. Should toy worlds and toy problems be avoided, or are they the Drosophila of AI?
    7. Will machines ever have feelings and thoughts?
    8. Is philosophy of any use for AI, or vice versa?
    9. Is AI relevant to psychology?
    10. Deterministic vs. non-deterministic search (i.e. AI search vs. evolutionary computation, simulated annealing and the like).

  • 2 February, 1998: 1000 Needles in a Billion Antstack

    W. B. Langdon, School of Computer Science, The University of Birmingham

    The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Analysis of shorter solutions shows they have many of the characteristics often ascribed to manually coded programs.

    Enumeration of a small fraction of the total search space and random sampling characterise it as jagged with multiple local and global optima. This suggests it is difficult for hill-climbing algorithms. Analysis of the program search space in terms of fixed-length schemata suggests it is highly deceptive and that, for the simplest solutions, large building blocks must be assembled before they have above-average fitness.

    Previously reported genetic programming, simulated annealing and hill climbing performance is shown not to be much better than random search on the Ant problem.
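    For readers unfamiliar with the problem, the sketch below shows the standard artificial-ant formulation used as the example search space (a minimal Python rendering of the usual Koza setup; the primitive names and representation are illustrative, not taken from the talk):

    ```python
    # Minimal sketch of the artificial-ant problem (standard formulation).
    # A program is a nested tuple over the usual primitives:
    #   'move', 'left', 'right', ('if-food-ahead', then, else), ('progn', a, b, ...)

    HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N as (row, col) steps

    def run_ant(program, trail, size=32, max_steps=600):
        """Run `program` repeatedly until the step budget is spent.
        Returns the amount of food eaten (the fitness)."""
        state = {'pos': (0, 0), 'dir': 0, 'steps': 0,
                 'food': set(trail), 'eaten': 0}

        def ahead():
            (r, c), (dr, dc) = state['pos'], HEADINGS[state['dir']]
            return ((r + dr) % size, (c + dc) % size)  # toroidal grid

        def step(node):
            if state['steps'] >= max_steps:
                return
            if node == 'move':                      # only actions cost time
                state['steps'] += 1
                state['pos'] = ahead()
                if state['pos'] in state['food']:
                    state['food'].remove(state['pos'])
                    state['eaten'] += 1
            elif node == 'left':
                state['steps'] += 1
                state['dir'] = (state['dir'] - 1) % 4
            elif node == 'right':
                state['steps'] += 1
                state['dir'] = (state['dir'] + 1) % 4
            elif node[0] == 'if-food-ahead':        # conditionals are free
                step(node[1] if ahead() in state['food'] else node[2])
            else:                                   # ('progn', a, b, ...)
                for child in node[1:]:
                    step(child)

        while state['steps'] < max_steps and state['food']:
            step(program)
        return state['eaten']

    # An example program shape (not a trail solution); the real Santa Fe
    # trail has 89 food pellets on a 32x32 toroidal grid, omitted here.
    example = ('if-food-ahead', 'move', ('progn', 'left', 'move'))
    ```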

  • 9 February, 1998: A Conceptual Foundation for Autonomous Learning in Unforeseen Situations

    Catriona Kennedy, Faculty of Computer Science, Dresden University of Technology

    It is desirable that agent architectures should include the capability to learn autonomously in completely unforeseen situations. These are situations which were not taken into account in a predefined model (i.e. they are "impossible") but they are detectable as unusual sensor measurements. These occurrences should trigger the learning of new concepts (discovery). For such a learning capability in the context of user-defined goals, it is assumed that a layered hybrid architecture provides the best framework. A conceptual foundation is presented here for the problem of anomaly detection and then the subsequent learning of a new class of behaviours (which was previously undefined). Steps towards a solution are possible if a form of symbol grounding is used (approximately as described by Harnad) along with the paradigm of "anticipation" (as introduced by Rosen). Particular emphasis is given to the requirement for continual distinction between "simulation" and "reality". Genetic programming is considered as a tool (both on the symbolic and the subsymbolic levels).

  • 16 February, 1998: Global Information Access, Espionage, and Other Optimisation Problems on which Evolutionary Computing is Significantly Better than Anything Else

    Dr. David Corne, Department of Computer Science, The University of Reading

    The talk will start with a quick discussion of EctelNet, the European Working Group for Evolutionary Computation in Telecommunications. I'll go on to extol the virtues of Evolutionary Algorithms, with a view to establishing the following conclusion: Evolutionary Techniques, when properly applied, cannot (yes, cannot) fail to improve on rival optimisation methods. Examples along the way will include a few of the many important areas in which Evolutionary Computation methods have proven to be the best way found so far to solve the problem. The `catch', which is of course that the improvement will occasionally come at the cost of undue extra time, will be given the attention it deserves.

  • 23 February 1998: Applications of Genetic Algorithms in Structural Chemistry.

    Prof. Kenneth Harris, School of Chemistry, The University of Birmingham

    The determination of crystal structures from single-crystal diffraction data can now, in general, be carried out routinely and straightforwardly. However, many solids are microcrystalline, and are therefore not amenable to investigation using conventional single-crystal diffraction methods. For such materials, the ability to determine the crystal structure directly from powder diffraction data is clearly essential. Thus, the development of procedures for crystal structure determination from powder diffraction data is an area of considerable importance, with enormous potential impact across several fundamental and applied scientific disciplines.

    Unfortunately, there are several intrinsic difficulties associated with solving crystal structures directly from powder diffraction data. In a powder diffraction pattern, three-dimensional diffraction data are essentially "compressed" into one dimension. As a consequence, there is usually substantial overlap of peaks in the powder diffraction pattern, leading to ambiguities in determining the intensities of the individual diffraction maxima. This is a major problem with regard to the application of traditional methods for structure solution, as these methods consider directly the intensities of individual diffraction maxima extracted from the powder diffraction pattern. The problems are particularly severe for low-symmetry structures for which there is generally extensive peak overlap.

    In view of these problems, there is clearly substantial demand for the development of new and improved methods for solving crystal structures from powder diffraction data, particularly in the case of low-symmetry molecular crystals. Recently, we have been exploring new approaches for crystal structure solution from powder diffraction data. Instead of extracting the intensities of individual diffraction maxima from the experimental powder diffraction pattern, our approaches adopt an alternative philosophy of postulating structural models in direct space, independently of the experimental diffraction data, and then assessing the suitability of these structural models on the basis of their agreement with the experimental diffraction data (using the profile R-factor as the goodness-of-fit criterion). Importantly, this philosophy implicitly avoids the problem of extracting the intensities of individual diffraction maxima directly from the experimental powder diffraction pattern. The approaches that we have used for generating trial structure solutions in direct space have been based on Monte Carlo sampling techniques and Genetic Algorithms.
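    For reference, the weighted profile R-factor conventionally used as the goodness-of-fit criterion in such whole-profile comparisons has the standard form (the general definition, not something specific to this work):

    ```latex
    R_{wp} = \left\{ \frac{\sum_i w_i \left( y_i^{\mathrm{obs}} - y_i^{\mathrm{calc}} \right)^2}
                          {\sum_i w_i \left( y_i^{\mathrm{obs}} \right)^2} \right\}^{1/2}
    ```

    Here y_i^obs and y_i^calc are the observed and calculated intensities at the i-th point of the digitised powder pattern and w_i is a weight (typically 1/y_i^obs); because the sum runs over the whole profile, the intensities of individual diffraction maxima never need to be extracted.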

    The lecture will highlight the scope and potential of approaches based on Genetic Algorithms for crystal structure solution from powder diffraction data, illustrating this with examples from a range of disciplines within solid state chemistry.

  • 9 March 1998: Simple Models of Evolutionary and Co-evolutionary Learning

    Jonathan Shapiro, Department of Computer Science, Manchester University

    Often it is not possible to determine the performance of a particular solution to a particular problem precisely. Rather, one can only test the solution on cases, and usually testing can only be done on a small fraction of the cases to which the solution will be applied. Problems of this type are found in machine learning applications, in genetic programming applications and other adaptive computing systems such as evolving cellular automata, in applications which must work in an unpredictable environment, and in other situations.

    In this talk I will discuss very simple models of population-based, evolutionary algorithms in situations in which the fitness function is defined in terms of cases. This feature increases the size of finite-population effects. This can be compensated for by an increase in the number of test cases. I will also discuss what happens when the data is co-evolved, as suggested by Hillis, in very simple models.
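    The situation the models address can be sketched in a few lines (illustrative Python; the `passes` method and names are my own, purely for concreteness): fitness is only estimated from a random sample of test cases, so selection acts on a noisy estimate whose variance shrinks as the sample grows.

    ```python
    import random

    def true_fitness(solution, all_cases):
        # The quantity we would like to optimise but cannot afford to compute.
        return sum(solution.passes(c) for c in all_cases) / len(all_cases)

    def sampled_fitness(solution, all_cases, n_cases):
        # What the evolutionary algorithm actually sees: an estimate from a
        # small random sample of cases. Its variance falls roughly as
        # 1/n_cases, which is why more test cases can compensate for the
        # finite-population effects discussed in the talk.
        sample = random.sample(all_cases, n_cases)
        return sum(solution.passes(c) for c in sample) / n_cases
    ```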

  • 16 March 1998: Recursion, Lambda Abstractions and Genetic Programming.

    Tina Yu, Department of Computer Science, University College London

    Module creation and reuse are essential for Genetic Programming (GP) to be effective with larger and more complex problems. This research designs a particular kind of program structure to support module creation and reuse: modules are represented as lambda abstractions, and their reuse is achieved through implicit recursion. A type system is used to preserve this structure. The structure of lambda abstraction and implicit recursion also provides structure abstraction in the program. Since the GP paradigm evolves program structure and contents simultaneously, structure abstraction can reduce the search effort for good program structure. Most evolutionary effort is then focused on the search for correct program contents rather than the structure. Experiments on the Even-N-Parity problem show that, with the structure of lambda abstractions and implicit recursion, GP is able to find very efficiently a general solution which works for any value of N.
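    The flavour of such a general solution can be seen in an even-parity program written with a lambda abstraction and a fold, the fold supplying the implicit recursion (a Python sketch of the idea, not the representation used in the talk):

    ```python
    from functools import reduce

    # Even-N-parity for any N: the lambda abstraction is the reusable module
    # and reduce() supplies the implicit recursion over the input bits.
    # Starting from True, each True bit toggles the accumulator (XOR), so the
    # result is True exactly when the number of True inputs is even.
    even_parity = lambda bits: reduce(lambda acc, b: acc != b, bits, True)

    assert even_parity([False, False]) is True       # 0 ones -> even
    assert even_parity([True, False, True]) is True  # 2 ones -> even
    assert even_parity([True, True, True]) is False  # 3 ones -> odd
    ```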

  • 27 April 1998: On the Search Properties of Different Crossover Operators in Genetic Programming

    Riccardo Poli, School of Computer Science, The University of Birmingham

    In this talk I will present work in which we have studied and compared the search properties of different crossover operators in genetic programming (GP), using probabilistic models and experiments to assess the amount of genetic material exchanged between the parents to generate the offspring. These operators are: standard crossover, one-point crossover, an operator which we presented last year and which allowed us to extend the GA schema theorem to GP, and a new operator, uniform crossover. Our analysis suggests that standard crossover is a local and biased search operator, not ideal for exploring the search space of programs effectively. One-point crossover is better in some cases as it is able to perform a global search at the beginning of a run, but it suffers from the same problems as standard crossover later on. Uniform crossover largely overcomes these limitations as it is global and less biased.
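    To illustrate how one-point crossover differs from standard subtree crossover, here is a sketch over nested-tuple programs (my own Python simplification of the operator, not code from the talk): the crossover point is chosen in the common region, i.e. among the positions where the two parents have the same shape, and the same point is used in both parents.

    ```python
    import random

    # Programs are nested tuples: (op, child1, child2, ...) or terminal strings.

    def common_region(t1, t2, path=()):
        """Paths (index sequences) where the two trees have matching arity.
        Descends only while the shapes agree, as one-point crossover requires."""
        paths = [path]
        if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
            for i in range(1, len(t1)):
                paths += common_region(t1[i], t2[i], path + (i,))
        return paths

    def fetch(tree, path):
        for i in path:
            tree = tree[i]
        return tree

    def replace(tree, path, subtree):
        if not path:
            return subtree
        i = path[0]
        return tree[:i] + (replace(tree[i], path[1:], subtree),) + tree[i + 1:]

    def one_point_crossover(p1, p2):
        """Pick ONE point in the common region and swap the subtrees rooted
        there in both parents. Standard crossover instead picks an
        independent point in each parent, which is what makes it local and
        biased in the sense discussed above."""
        point = random.choice(common_region(p1, p2))
        return (replace(p1, point, fetch(p2, point)),
                replace(p2, point, fetch(p1, point)))
    ```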

  • 18 May 1998: The Use of Genetic Algorithms for Optimizing Geometries of Atomic Clusters

    Dr. Roy L. Johnston, School of Chemistry, The University of Birmingham

    Atomic clusters are aggregates of anywhere between 2 and (of the order of) 10^6 atoms. The atoms may all be of the same type (elemental clusters) or of different types (e.g. alloy clusters, ionic clusters). Clusters can be formed by nearly all the elements in the Periodic Table, including the noble gases (e.g. argon), semi-conducting elements (e.g. carbon and silicon) and metals (e.g. sodium and iron). The bonding in these different types of clusters is very different, ranging from weak dispersion forces (noble gas clusters) to covalent (carbon and silicon clusters) and delocalized/metallic (metal clusters). Clusters are of interest from a theoretical viewpoint as well as for their possible application in the growing field of nano-electronics. For an N-atom cluster there are a very large number of possible structures (isomers), many of which correspond to local minima on the Potential Energy Hypersurface. The problem we face is trying to find the lowest energy isomer (Global Minimum). In this seminar, I will discuss the application of Genetic Algorithms to achieve this global minimization, giving examples ranging from argon clusters (bound by a simple Lennard-Jones potential) to metal clusters bound by Morse and many-body potentials and carbon clusters, including the fullerenes.
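    For the simplest case mentioned above, argon clusters, the pairwise Lennard-Jones potential and the total cluster energy minimised by the Genetic Algorithm have the standard forms (general definitions, not specific to this talk):

    ```latex
    V_{\mathrm{LJ}}(r) = 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} \right],
    \qquad
    E(\mathbf{r}_1,\ldots,\mathbf{r}_N) = \sum_{i<j} V_{\mathrm{LJ}}\!\left( \lVert \mathbf{r}_i - \mathbf{r}_j \rVert \right)
    ```

    The GA thus searches the 3N-dimensional space of atomic coordinates for the global minimum of E, a surface whose number of local minima is believed to grow exponentially with N.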

  • 22 June 1998: Tutorial about "Learning to Learn" Approaches

    Axel Großmann, School of Computer Science, The University of Birmingham

    "Learning to Learn" is a promising research direction within machine learning that has recently gained considerable attention. As with traditional inductive machine learning methods, algorithms that learn to learn induce general functions from examples. Learning to learn methods include an extra feature which is that their learning bias is chosen based on learning experiences in other tasks. Humans often generalise correctly after a small number of training examples by transferring knowledge acquired in other tasks. Systems that learn to learn mimic this ability.

    I will try to summarise recent developments in this relatively new field and explain two methods in more detail:

    • Multi-task learning by Rich Caruana (CMU)
    • Explanation-based neural network learning by Sebastian Thrun (CMU)

  • 6 July 1998: Informal seminar/tutorial on recent extensions to my Pop-11 GP system

    Riccardo Poli, School of Computer Science, The University of Birmingham

    In this informal seminar/tutorial I will describe the new features I have recently introduced in my genetic programming system in Pop-11. These include:

    1. Strongly typed GP
    2. A way of coordinating runs on different machines (a sort of island model, but slightly different).

    The seminar/tutorial has the following aims:

    • to let the (two or three...) users of my GP code know about the new features, in case they want to use feature (1) for their runs.
    • to get advice on what I should do/undo for feature (1) and, particularly, (2).
    • to give/get advice on how to set up communication channels among Pop-11 processes running on different machines, and also on how to set up your own HTTP servers/clients in Pop-11 (this is something I have included in (2); a sketch of the general idea appears after this list).

    If you don't know my GP system you might find the seminar too technical, particularly as I will probably NOT be able to show examples on a computer (LR7 is unavailable), unless we move into a lab after a while.
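    On that last point, the coordination scheme in feature (2) can be sketched as follows, in Python rather than Pop-11 and purely to illustrate the general idea (the endpoint, encoding and pool policy are my own choices, not necessarily those of the actual system): a tiny HTTP server holds a pool of migrants, and each GP run deposits a good individual and collects a random one every few generations.

    ```python
    import json, random, urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    POOL = []  # serialized individuals from all participating runs

    class MigrantPool(BaseHTTPRequestHandler):
        def do_POST(self):                       # a run deposits a migrant
            length = int(self.headers['Content-Length'])
            POOL.append(json.loads(self.rfile.read(length)))
            self.send_response(200); self.end_headers()

        def do_GET(self):                        # a run collects a migrant
            body = json.dumps(random.choice(POOL) if POOL else None).encode()
            self.send_response(200); self.end_headers()
            self.wfile.write(body)

    def migrate(url, individual):
        """Called by each GP run every few generations."""
        data = json.dumps(individual).encode()
        urllib.request.urlopen(urllib.request.Request(url, data=data))  # deposit
        with urllib.request.urlopen(url) as reply:                      # collect
            return json.loads(reply.read())

    # On one machine: HTTPServer(('', 8000), MigrantPool).serve_forever()
    ```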

  • 17 Sep 1998: A Population of Heads Are Better Than One

    Xin Yao, University of New South Wales

    This talk first introduces an evolutionary system for designing artificial neural networks (ANNs) automatically. ANN architectures (connectivities) and weights are evolved simultaneously. Behavioural rather than genetic evolution is emphasised. It is then argued that learning is different from optimisation in practice. Population information can be exploited effectively to improve the generalisation of ANNs. Experimental results show that a combined population of ANNs performs better than the single best ANN. In order to further improve a combined population (i.e., an ensemble) of ANNs, a new learning algorithm is proposed to design individual ANNs cooperatively and simultaneously.
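    The core observation, that the information in the final population can be exploited rather than discarded, can be sketched in a few lines (illustrative Python/NumPy, names mine; the actual combination schemes in the talk may differ):

    ```python
    import numpy as np

    def ensemble_predict(population_outputs):
        """Combine the outputs of every ANN in the final population by simple
        averaging, instead of keeping only the single best network. For
        classification, the averaged output is then thresholded or argmaxed."""
        return np.mean(population_outputs, axis=0)

    # outputs[i] holds network i's predictions on three test inputs
    outputs = np.array([[0.9, 0.2, 0.6],    # network 1
                        [0.7, 0.4, 0.4],    # network 2
                        [0.8, 0.1, 0.7]])   # network 3
    print(ensemble_predict(outputs))        # [0.8, 0.233..., 0.566...]
    ```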

  • 5 Oct 1998: Linear Genetic Programming

    Wolfgang Banzhaf, Dortmund

    Genetic Programming was originally proposed with a tree-based program representation. In this talk I discuss our approach to GP using elements from imperative languages. Specifically, I will talk about the machine-language approach to GP. As it turns out, linear GP has advantages with respect to the problem of code growth and bloat that has been uncovered over the years. The talk concludes with a discussion of some recent applications of GP in our lab.
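    In contrast to trees, a linear GP individual is simply a sequence of register instructions, close to machine code. Here is an illustrative Python interpreter for such programs (my own sketch of the representation, not the Dortmund system):

    ```python
    # A linear GP individual: a list of three-address register instructions,
    # e.g. ('add', 0, 1, 2) meaning r0 = r1 + r2. Crossover can then exchange
    # instruction segments, and 'code growth' is simply program length.
    import operator

    OPS = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}

    def run(program, inputs, n_registers=4):
        regs = (list(inputs) + [0.0] * n_registers)[:n_registers]
        for op, dst, a, b in program:
            regs[dst] = OPS[op](regs[a], regs[b])
        return regs[0]                     # output register by convention

    # r0 = (x + y) * x, computed from inputs x = r0, y = r1
    print(run([('add', 2, 0, 1), ('mul', 0, 2, 0)], [3.0, 4.0]))  # 21.0
    ```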

  • 2 Nov 1998: Brad Johanson, Stanford University

    GP-Music: An Interactive Genetic Programming System for Music Generation with Automated Fitness Raters

    Both automatic and computer-assisted music composition have been topics of research for many years. Typical approaches have been to analyze how humans compose music and to create computer programs which mimic these techniques. Randomization parameters have been used to try to make the pieces appear more creative. In this talk I will present a system for computer-assisted music composition called the GP-Music System. It uses genetic programming to evolve short melodic note sequences. Using a simple function set, random musical sequences are created and then evolved over a series of generations. The user of the program serves as the fitness function, determining which sequences are most likely to propagate from one generation to the next. The user-interactive system was used to evolve several short sequences, each derived from slightly different function sets and genetic programming parameters.

    The system also has a method for automatic rating of melodies. This is accomplished by automated fitness raters, or auto-raters, which are neural networks with shared weights. The auto-raters are trained using back-propagation on a training set composed of sequences and their human-assigned ratings, gathered during human-interactive runs of the GP-Music System. Two types of auto-rater were created: one which assigns a ranking, and another which indicates which of two sequences is better. The 'ranking' auto-rater was able to get within 7% of the human rating, while the 'comparative' auto-rater at best achieved a 60% accuracy in predicting which of two melodies the user preferred. Using the 'ranking' auto-rater we were able to make runs of up to 50 generations with 500 individuals per generation. The best-of-run pieces generated were pleasant but were not, in general, as nice as those generated in user-interactive runs.
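    The comparative auto-rater amounts to applying one and the same scoring network to both melodies and comparing the scores (a forward-pass sketch in Python/NumPy; the feature encoding, layer sizes and training procedure are my guesses, not details from the talk):

    ```python
    import numpy as np

    # One scoring network; 'shared weights' means the SAME (W1, w2) score
    # both melodies, so the comparison depends only on the melodies
    # themselves, not on which input slot each one occupies.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 16))   # hidden layer (sizes illustrative)
    w2 = rng.normal(size=8)

    def score(melody_features):
        return w2 @ np.tanh(W1 @ melody_features)

    def prefer(a, b):
        """True if melody a is rated above melody b, using shared weights.
        Training by back-propagation on human ratings is omitted here."""
        return score(a) > score(b)

    a, b = rng.normal(size=16), rng.normal(size=16)  # stand-in feature vectors
    print(prefer(a, b))
    ```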

  • 9 Nov 1998: Ian Millington, School of Computer Science, The University of Birmingham

    Unnatural Selection: Fitness, Fertility and Fecundity in Evolutionary Algorithms

    Evolutionary algorithms (EAs) are just one class of population-based search techniques. Their only link with biological evolution is by metaphor, and since the most well-known form of biological evolution is Darwin's 'natural selection', EAs are often seen as being examples of 'natural selection' (e.g. Koza's "Programming Computers by Means of Natural Selection").

    This talk challenges the use of natural selection as a metaphor for EAs. Biological evolution is broader than pure natural selection, and I will argue that by not understanding the distinction we may miss important properties of EAs. Genetic algorithms have more in common with artificial selection in biology, and this may be a more fruitful metaphor to pursue.

    This distinction is not a trivial change of terminology, however. I will show that biological studies separate the two 'modes' of evolution for a good reason. By doing the same we can see that the 'fitness' function at the heart of EAs may be unpacked into three distinct quantities: aptness, fertility and fecundity. These may help us understand more about the dynamics of this search, and of other population-based search methods.
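    One way to make the three-way unpacking concrete is a toy generation loop in which the three quantities drive three separate stochastic steps (entirely my own illustration of the distinction, not the speaker's formulation):

    ```python
    import random

    def crossover(a, b):
        return a  # placeholder variation operator for this sketch

    def next_generation(pop, aptness, fertility, fecundity):
        """aptness(x):   probability that x survives to breeding ('viability')
           fertility(x): probability that x obtains a mate when it tries
           fecundity(x): number of offspring per successful mating (an int here)
        A conventional EA folds all three into a single 'fitness' number;
        keeping them separate exposes three distinct selection pressures."""
        survivors = [x for x in pop if random.random() < aptness(x)]
        children = []
        for x in survivors:
            if random.random() < fertility(x):
                mate = random.choice(survivors)
                children += [crossover(x, mate) for _ in range(fecundity(x))]
        return children
    ```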

  • 16 Nov 1998: Sub-machine-code Genetic Programming: A quantum leap in GP performance

    Riccardo Poli, School of Computer Science, The University of Birmingham

    CPUs are often seen as sequential; however, they have a high degree of internal parallelism, typically operating on 32 or 64 bits simultaneously. In this talk I will present the idea of exploiting this internal parallelism to extend the scope of genetic programming (GP) and improve its efficiency. We call the resulting form of GP sub-machine-code GP. The differences between sub-machine-code GP and the usual form of GP are purely semantic and largely language independent, i.e. any GP system can potentially be used to do sub-machine-code GP. In the talk I will present this form of GP and some of its applications. The speed-up obtained with this technique on Boolean classification problems is nearly two orders of magnitude. To illustrate this, in the talk I will run a simple GP system in C implementing this technique, and show how it solves the Even-10 parity problem in seconds. This makes it the fastest GP system in the world.
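    The trick is easy to state in code: pack one fitness case into each bit of a machine word, so that a single bitwise operation evaluates 32 or 64 cases at once. A Python sketch of the idea (the talk's system is in C; names here are mine):

    ```python
    # Sub-machine-code GP on a Boolean problem: each bit position of an
    # integer holds a different fitness case, so ONE evaluation of the
    # program processes a whole word's worth of cases in parallel.
    WORD = 32

    def eval_cases(program_fn, packed_inputs):
        """program_fn takes packed integer inputs and uses only bitwise ops."""
        return program_fn(*packed_inputs) & ((1 << WORD) - 1)

    # Example: AND of two inputs over 32 cases at once.
    x1 = 0b1100  # cases 2 and 3 have x1 = 1 (the other 28 cases are 0 here)
    x2 = 0b1010  # cases 1 and 3 have x2 = 1
    result = eval_cases(lambda a, b: a & b, (x1, x2))
    print(bin(result))  # 0b1000 -> only case 3 outputs 1
    ```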

  • 7 Dec 1998: Paul Marrow, BT Laboratories

    Evolvability and evolutionary measures

    Understanding the characteristics of evolutionary systems that result in their evolution is central to the design of new and more efficient evolutionary algorithms. Such characteristics have often been grouped under the heading of evolvability. Evolvability has been studied both in the context of biology and of evolutionary computation. I review ideas of evolvability from the perspective of a biological view of evolution, but with application to evolutionary computation and Artificial Life. Studying natural and artificial evolution can tell us about evolvability, and also shows that we need to measure evolutionary phenomena in order to compare different evolutionary systems. I discuss possible measures of important evolutionary phenomena, and what these tell us about evolvability. I then go on to present results from the application of several evolutionary measures to evolutionary algorithms used for information retrieval.

  • 14 Dec 1998: David E. Goldberg (deg@uiuc.edu)
    Department of General Engineering
    University of Illinois at Urbana-Champaign

    A Threefold Decomposition of GA Problem Difficulty and Its Core

    The design of competent genetic algorithms -- GAs that solve hard problems quickly, reliably, and accurately -- requires a working knowledge of those dimensions of problem difficulty that can thwart the operation of well-designed GAs. Unfortunately, most discussions of GA problem difficulty center around a single dimension such as isolatedness, noise, or deception without going on to integrate the dimensions into a more complete theory. This talk outlines a more complete theory of GA problem difficulty by first generalizing the notion of a building block (BB) to include explicit reference to sequence, superiority, and context. Thereafter, the talk decomposes the problem difficulty question into three parts: intra-BB difficulty, inter-BB difficulty, and extra-BB difficulty. A difficulty core including deception, noise, and scaling difficulty is then outlined, and an argument is made that shows how the three types of inter-BB epistasis map back to the core. This argument suggests that any GA that can solve problems of bounded deception, scale, and noise can solve problems that can be adequately modeled with the notion of a sequentially superior building block of a solution. The argument is then extended by relaxing to non-inferior BBs, and this leads to explicit consideration of the problem of degeneracy and a new archetypal problem, the tobacco road function. These kinds of problems offer new, but not insurmountable, challenges for competent GA design, and the talk concludes by outlining approaches that might be adopted to overcome these difficulties.
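    A standard way to make the intra-BB vocabulary concrete is the concatenated deceptive trap function (my illustration of the building-block terminology, not a function from the talk; in particular it is not the tobacco road function):

    ```python
    def trap(block, k=4):
        """Deceptive trap on one k-bit building block: fitness rises as ones
        are REMOVED (pointing away from the optimum), except at all-ones,
        which is the block's global optimum. Low-order schemata are
        therefore misleading: intra-BB difficulty (deception)."""
        u = sum(block)
        return k if u == k else k - 1 - u

    def concatenated_traps(bits, k=4):
        """Sum of independent traps. Inter-BB difficulty enters when blocks
        are badly scaled or interact; here they simply add."""
        return sum(trap(bits[i:i + k], k) for i in range(0, len(bits), k))

    print(concatenated_traps([1, 1, 1, 1, 0, 0, 0, 0]))  # 4 + 3 = 7
    ```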

