EEBIC group


2000 Meetings


  • 21 February 2000: Sameer Singh, Department of Computer Science, University of Exeter

    Digital Mammography

    This seminar provides an overview of the digital mammography research currently being undertaken by the Pattern Analysis and Neural Networks group at Exeter University. It discusses the problem of breast cancer screening, highlights research areas where computer science can make a significant contribution, and surveys the pattern recognition tools that can be applied to digital mammography, together with the main problems and challenges in the area. A novel set of measures for quantifying the level of mammogram enhancement is presented. The results on the detection of masses in breasts are based on texture analysis using co-occurrence matrices of mammograms and the application of MLP and RBF neural nets to the MIAS international database. The analysis is discussed using ROC curves.
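As a sketch of the kind of texture feature involved (an illustrative example, not the Exeter group's actual pipeline), a grey-level co-occurrence matrix and one Haralick-style statistic can be computed as follows:

```python
import numpy as np

def cooccurrence(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise counts to joint probabilities

def contrast(p):
    """Haralick contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return (p * (i - j) ** 2).sum()

# toy 4x4 "image" quantised to 4 grey levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = cooccurrence(img, levels=4)
print(contrast(p))
```

In a real system such statistics would be computed over several offsets and angles and fed to the MLP or RBF classifier as a feature vector.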

  • 6 March 2000: Professor Edward Tsang, Department of Computer Science, University of Essex

    A Family of Novel Stochastic Algorithms for Constraint Programming

    Professor Tsang and his colleagues have been working on novel combinations of constraint satisfaction, metaheuristics and connectionist methods for solving complex OR problems. In this talk, he will discuss the solution of constraint satisfaction and optimisation problems using Guided Local Search -- a class of computation models first developed with hardware implementation in mind, then extended to meta-heuristic methods based on penalty methods in OR. He will also describe extensions that combine Guided Local Search with ideas taken from the area of genetic algorithms. Finally, he will outline ways in which these techniques have been used to solve non-trivial real-world problems, including their incorporation in ILOG Dispatcher, a commercial package for vehicle routing.
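Guided Local Search can be sketched in a few lines: run local search on an objective augmented with feature penalties, and whenever the search settles, penalise the violated feature of highest utility. The toy SAT instance, the clause encoding and the parameter `lam` below are illustrative assumptions, not from the talk or from ILOG Dispatcher:

```python
# toy SAT instance: each clause is a list of (variable, wanted_value) literals
clauses = [[(0, 1), (1, 1)],
           [(0, 0), (2, 1)],
           [(1, 0), (2, 0)],
           [(1, 1), (2, 1)]]
n_vars = 3

def violated(x):
    """Indices of clauses not satisfied by assignment x."""
    return [i for i, cl in enumerate(clauses)
            if not any(x[v] == w for v, w in cl)]

def gls(iters=20, lam=1):
    pen = [0] * len(clauses)          # one penalty per feature (clause)
    x = [0] * n_vars
    best = len(clauses)
    # augmented objective: true cost plus lam * penalties of violated features
    g = lambda y: len(violated(y)) + lam * sum(pen[i] for i in violated(y))
    for _ in range(iters):
        improved = True
        while improved:               # best-improvement local search on g
            improved = False
            for v in range(n_vars):
                y = x[:]; y[v] ^= 1
                if g(y) < g(x):
                    x, improved = y, True
        best = min(best, len(violated(x)))
        viol = violated(x)
        if not viol:
            break
        # penalise the violated feature with highest utility 1/(1 + penalty)
        i = max(viol, key=lambda i: 1.0 / (1 + pen[i]))
        pen[i] += 1
    return best

print(gls())
```

The penalties reshape the landscape seen by the local searcher, so a search stuck at a local optimum of the true cost is pushed out without any randomised restart.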

  • 20 March 2000: Vesselin K. Vassilev, School of Computing, Napier University, Edinburgh

    Fitness Landscapes and Search in the Evolutionary Design of Digital Circuits

    The theory of fitness landscapes has been developed to provide a suitable mathematical framework for studying the evolvability of a variety of complex systems.

    In evolutionary computation the notion of evolvability refers to the efficiency of evolutionary search. The structure of a fitness landscape affects the ability of evolutionary algorithms to search. Three characteristics specify the structure of landscapes. These are the landscape smoothness, ruggedness and neutrality. The interplay of these characteristics plays a vital role in evolutionary search. This has motivated the appearance of a variety of techniques for studying the structure of fitness landscapes. An important feature of these techniques is that they characterise the landscapes by their smoothness and ruggedness, ignoring the existence of neutrality. Perhaps the reason for this is that the role of neutrality in evolutionary search is still poorly understood.

    In this talk some recent results on the spectral properties of the algebraic structures of fitness landscapes are summarised to provide a basis for studying the landscape structure.

    This approach is further employed to introduce an information analysis that characterises the structure of fitness landscapes in terms of their smoothness, ruggedness and neutrality.

    The findings are finally applied in a study of the fitness landscapes generated by evolving digital circuits using Cartesian Genetic Programming. The landscapes of this engineering problem are quite different from many recently studied landscapes that tend to be defined over simplified combinatorial and optimisation problems. The difference originates from the genotype representation that is a configuration defined over two completely different alphabets. This makes the study of the corresponding landscapes much more involved. It is shown that circuit evolution landscapes are products of subspaces with different characteristics. They are landscapes with neutrality and sharply differentiated plateaus.

    The landscape neutrality is beneficial for circuit evolution. This is further used in the design of a neutral network that connects the conventional design with other, more efficient designs. Thus a method is described by which the process of evolutionary design is guaranteed to produce a functionally correct solution.
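As a rough sketch of one measure used in this style of information analysis (the exact definitions in the talk may differ), the fitness values along a random walk can be coded as increase/flat/decrease symbols, and the entropy of the changing symbol pairs then summarises ruggedness relative to neutrality:

```python
import math
import random

def information_content(fitnesses, eps=0.0):
    """Entropy of fitness-change symbols along a walk; a sketch in the
    spirit of the information analysis of landscapes (details assumed)."""
    def sym(a, b):
        d = b - a
        if d < -eps:
            return -1
        if d > eps:
            return 1
        return 0                       # "flat" steps capture neutrality
    s = [sym(a, b) for a, b in zip(fitnesses, fitnesses[1:])]
    # keep consecutive symbol pairs pq with p != q (the "rugged" transitions)
    pairs = [(p, q) for p, q in zip(s, s[1:]) if p != q]
    if not pairs:
        return 0.0
    counts = {}
    for pq in pairs:
        counts[pq] = counts.get(pq, 0) + 1
    n = len(s) - 1                     # total number of consecutive pairs
    # entropy in base 6: there are six possible unequal symbol pairs
    return -sum((c / n) * math.log(c / n, 6) for c in counts.values())

random.seed(0)
walk = [random.random() for _ in range(1000)]  # fitnesses along a random walk
print(information_content(walk))
```

Raising `eps` treats small fitness differences as neutral, so sweeping it shows how the apparent ruggedness of a landscape depends on the sensitivity of the measurement.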

  • 27th March 2000: Julian Miller, School of Computer Science, The University of Birmingham

    Discovering New Principles and Cartesian Genetic Programming

    I will discuss a new way to evolve programs called Cartesian Genetic Programming (CGP). The technique encodes a graph as a string of integers (the genotype). Not all graph nodes have to be connected between the program inputs and outputs. This can greatly increase the number of genotypes that have the same fitness (neutrality). Empirical evidence will be presented that shows that this increased neutrality can be very beneficial to the search.
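A minimal sketch of the encoding (hypothetical code, not Miller's implementation, which additionally constrains node connectivity by rows, columns and levels-back) might look like this:

```python
# A minimal Cartesian-style genotype: each node is (input_a, input_b, function);
# indices refer to program inputs or earlier nodes, and a final gene picks the
# output. Nodes not on the input-output path are neutral genetic material.
FUNCS = [lambda a, b: a & b,   # 0: AND
         lambda a, b: a | b,   # 1: OR
         lambda a, b: a ^ b]   # 2: XOR

def evaluate(genotype, output_gene, inputs):
    values = list(inputs)      # node indices 0..n_inputs-1 are the inputs
    for a, b, f in genotype:
        values.append(FUNCS[f](values[a], values[b]))
    return values[output_gene]

# node 2 computes in0 XOR in1; node 3 (in0 AND in1) is never referenced by the
# output gene, so mutating it leaves fitness unchanged -- neutrality in action
genotype = [(0, 1, 2), (0, 1, 0)]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, evaluate(genotype, 2, (a, b)))
```

Because the output gene ignores node 3, many distinct genotypes map to the same circuit, which is exactly the redundancy the talk argues is beneficial to search.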

    I will also discuss in detail how CGP is applied to a number of problems, particularly the evolution of extremely efficient digital circuits that carry out multiplication. The circuits are very unconventional in structure; however, by studying a series of circuits of increasing size (2x3, 3x3, and 4x3 multipliers) it may be possible to discern the principle of their operation. There is now strong evidence that there is indeed such a principle, as the "optimum" circuits are all about 23.5% better than the best conventional alternative.

  • 27 April 2000: Nabil M Hewahi, Department of Computer Science, Islamic University of Gaza, Palestine

    Adaptive Hierarchical Censored Production Rule-Based System: A Genetic Algorithm Approach

    An adaptive system called GBHCPR (Genetic Based Hierarchical Censored Production Rule), based on the Hierarchical Censored Production Rule (HCPR) system, is presented that relies on developing ties between Genetic Based Machine Learning (GBML) and symbolic machine learning. Several genetic operators are suggested, including two advanced operators, namely Fusion and Fission. An appropriate credit apportionment scheme is developed that supports both forward and backward chaining in the reasoning process. A scheme for credit revision during the operation of the Fusion and Fission operators is also presented. A prototype implementation is included and experimental results are presented to demonstrate the performance of the proposed system.

  • 10 August 2000: Paul Darwen, Department of Computer Science, University of Queensland, Australia

    How to abuse the No Free Lunch theorem to "prove" anything.

    The No Free Lunch theorem says that the best search/optimization/learning algorithm for a problem depends on that problem. For an evolutionary computation approach, this means picking all of (1) representation, (2) operators, and (3) parameters to suit the problem.

    Many previous papers have tried to find the best choice of one of these (for a particular problem) by fixing the other two, and varying only the one being tested. They then conclude something about the suitability of that third property for that particular problem. This is analogous to clinical trials in medicine.

    Unfortunately, this approach is not suitable for evolutionary computation. The reason is that fixing the other two of the three attributes may be done with ad hoc, arbitrary choices. So what's really being tested is the ability of the third attribute to fix the deficiencies of the other two.

    This talk gives a concrete example, for co-evolutionary learning of the game of Backgammon. By fixing a representation and parameters, and varying the operators, it can be shown that for a sufficiently large population, crossover improves learning. But for a sufficiently small population, this is not the case. Does that allow us to write two conflicting papers about how crossover is good or bad for the problem of Backgammon? Of course not. This merely demonstrates that many previous studies are flawed, by their poor choice of the two attributes that are held constant.

    A related issue is that previous studies in evolutionary computation have (rightly) emphasized the importance of maintaining diversity. But in learning (as opposed to optimization) genotypic diversity does not necessarily translate to phenotypic (behavioural) diversity, which is what really counts. This talk also illustrates how genotypic diversity can be quite deceiving, even with a quite reasonable genotype/phenotype mapping. Diversity from the initial randomized population, and from the effects of mutation step size, can have surprisingly perverse effects on phenotypic/behavioural diversity.

  • 25th September 2000: Adam Prugel-Bennett, Image, Speech and Intelligent Systems research group, Department of Electronics and Computer Science, University of Southampton

    Modelling Evolving Populations and Crossover Induced Linkage

    There are an ever growing number of algorithms for solving technical problems which are based on an evolutionary paradigm. These include genetic algorithms (GA), evolutionary strategies (ES) and genetic programming (GP). Yet, these schemes pose a severe challenge to our understanding: the search space is high dimensional and typically epistatic; the evolution involves a large population that interact through selection and crossover; and the genetic operations are controlled by a large number of parameters. How then are we to make sense of evolutionary algorithms sufficiently to make rational design decisions?

    In this talk I will discuss one approach to this task---often referred to as the statistical mechanics formalism. In this, we seek a description of the dynamics of an evolutionary algorithm (typically a simple GA) in terms of a small number of statistical variables. The approach is typically approximate in that the variables are insufficient to obtain all the information relevant to evolution. However, we can usually infer the missing information well enough to obtain a good quantitative prediction of the dynamics of an actual GA. Although the approach is restricted to particular problems, it provides many intuitions that are not available from empirical studies.

    After describing the approach and giving some examples of problems that my collaborators and I have studied, I will present some new work on modelling the linkage caused by multi-point crossover and how this affects the evolution.

  • 2nd October 2000: Sara De Maeght, Max Planck Institute for Psychological Research, Munich, Germany

    On how involuntary movements reveal our thoughts.

    Ideomotor phenomena are involuntary body movements induced by the contents of perception, i.e., observing the activity of another person or physical object. For example, when bowling people often move their bodies as if they could steer the thrown bowling ball in their desired direction. Although they know they have no influence on the scene, people still make movements as if they have. These movements can be explained by two induction principles: 1) perceptual induction, `we do what we see' and 2) intentional induction, `we do what we'd like to see'. In order to test these induction principles we conducted several billiards-like experiments in which the goal was to achieve as many hits as possible. Intentional conditions varied in two ways, i.e., with respect to the subjects' role (active or passive observer) and task (ball- or target-related). In all conditions the observed scenes were identical, hence perceptual induction should be the same in all cases. In the talk, I will not only discuss the phenomenon of ideomotor behavior in further detail, but also the theoretical implications it raises, i.e., how representations can influence both our behavior and our perception.

  • 9th October 2000: Ian Parmee, Director Plymouth Engineering Design Centre, School of Computing, University of Plymouth

    The Integration of Evolutionary Search, Exploration and Optimisation Strategies with Complex Design and Decision-making Processes

    The seminar will commence with a brief introduction to the problems facing the designer during conceptual, embodiment and detailed design. Such problems relate in the main to multi-variate decision-making environments characterised by high dimensionality, heavy constraints and multiple criteria, and aggravated by varying degrees of uncertainty and ill-definition relating to the problem at hand. An overview of the various evolutionary and adaptive search strategies that have been developed at the Plymouth Engineering Design Centre to address these generic problems follows with particular emphasis upon:

    • Problem decomposition
    • Whole system design
    • Constraint satisfaction and optimisation
    • Multi-criteria satisfaction
    • Systems identification

    Experiences relating to this work and significant industrial collaboration have resulted in a concentration of research effort upon the development of interactive evolutionary design systems for, primarily, conceptual design, where the designer plays a major role within an iterative evolutionary search and exploration environment. Here, evolutionary techniques are utilised to gather optimal design information relating to the problem at hand, which is passed to the engineer in an appropriate form. The objective is that off-line processing of this information and associated discussion will result in a re-definition of the initial design space through changes to variable ranges, constraint penalties, objective preferences etc. Thus the design space evolves as solutions and associated design information are identified.

    The second part of the talk will concentrate upon the initial development of this interactive system with demonstrations of experimental software.

  • 16th October 2000: Robert Smith, Intelligent Computer Systems Centre, The University of The West of England

    Genetics-Based Machine Learning: Current Applications, Future Directions

    This talk presents a progression of genetics-based machine learning (GBML) systems and ideas. It begins with a project in which a GBML system acquired rules for novel fighter combat maneuvers through simulation. In this project, a genetic machine learning system was implemented to generate high angle-of-attack air combat tactics for the NASA X-31 research aircraft. This system, which was based on a learning classifier system (LCS) approach, employed a digital simulation model of 1-versus-1 air combat and a genetic algorithm to develop effective tactics for the X-31. The resulting maneuvers allowed the X-31 to successfully exploit its post-stall capabilities against a conventional fighter opponent, demonstrating the ability of the learning system to discover novel tactics in a dynamic air combat environment. The talk will include previously un-presented results in which two planes co-evolve via genetics-based machine learning.

    This project demonstrates how a GBML system can acquire cooperative structures that jointly implement novel approaches to unforeseen problems via experience. The talk will then extend the notions in this GBML system to a new, general framework that is currently being designed. This system will extend the adaptive abilities of GBML to broader, agent-based systems. Final comments in the talk will discuss motivations for GBML as an adaptive paradigm in this new, broader setting.

    Dr. Robert E. Smith is Director of The Intelligent Computing Systems Centre of the University of The West of England. He was previously an Associate Professor of Aerospace Engineering and Mechanics at the University of Alabama. He received his B.S. in Mechanical Engineering in 1985, his M.S. in Engineering Mechanics in 1988, and his Ph.D. in Engineering Mechanics in 1991. Dr. Smith is an active researcher in the application of adaptive and intelligent systems to engineering problems. His chief interests are in genetic algorithms for machine learning and optimization. Dr. Smith is the author of 15 journal articles and more than 20 conference articles on these subjects. Dr. Smith serves as Associate Editor of the international journal, Evolutionary Computation.

  • 18th October 2000, 3pm (please note the unusual day and time): Prof Ah Chung Tsoi, University of Wollongong, Australia

    Adaptive Processing of Data Structures

    In recent years, an alternative method has been proposed for learning on data which can be expressed in terms of a graph structure, e.g., a tree or a list. In this talk we will present a number of results, using a multilayer perceptron in each node of the graph, and we will show how the weights of the node model in the graph can be learned from a set of input-output training examples. The work will be illustrated using a number of examples.
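A minimal sketch of the idea, under the assumption of a single shared node model applied recursively to a labelled binary tree (illustrative code, not Prof Tsoi's actual formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LABEL = 4, 2                     # representation and node-label sizes
W = rng.normal(0.0, 0.5, (DIM, LABEL + 2 * DIM))  # shared node weights
b = np.zeros(DIM)

def encode(tree):
    """Recursively fold a binary tree into a fixed-size vector by applying
    the same one-layer node model at every node (weights shared)."""
    label, left, right = tree
    l = encode(left) if left else np.zeros(DIM)
    r = encode(right) if right else np.zeros(DIM)
    x = np.concatenate([np.asarray(label, float), l, r])
    return np.tanh(W @ x + b)

# a small labelled binary tree: (label, left_subtree, right_subtree)
tree = ([1, 0],
        ([0, 1], None, None),
        ([1, 1], ([0, 0], None, None), None))
print(encode(tree))   # fixed-size vector regardless of tree shape
```

Because the same weights are applied at every node, gradients with respect to `W` can be accumulated over all nodes of every training structure, which is how the node model is trained from input-output examples.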

    Prof Tsoi is the Director of Information Services at the University of Wollongong in Australia. He was the Dean of the Faculty of Informatics in the same university and a professor at the University of Queensland before that. His major research interests include neural networks, machine learning and data mining. He has served on various panels in the Australian Research Council and published numerous refereed papers.

  • 30th October 2000: Jonathan Rowe, School of Computer Science, The University of Birmingham

    Genetic algorithms as Markov processes: an introduction.

    Genetic algorithms are stochastic processes that move from one state (or population) to another. The fact that each generation depends only on the previous one means that the genetic algorithm is a Markov process. Applying some simple ideas from the theory of Markov processes enables us to address a number of general questions about genetic algorithms: Under what conditions will they converge to the optimum? How long might this take? What is the effect of adapting operators according to some annealing schedule? This is the most general level of genetic algorithm theory, and it forms the foundation for further theoretical developments.
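As a hypothetical miniature of this Markov view (my own example, not one from the talk), consider a (1+1) evolutionary algorithm on 2-bit OneMax with per-bit mutation rate 0.1, accepting only strict improvements. Grouping populations by fitness gives a three-state chain, and the expected time to reach the optimum follows from standard absorbing-chain algebra:

```python
import numpy as np

# (1+1) EA on 2-bit OneMax, per-bit mutation rate p = 0.1, strict elitism.
# States are fitness levels 0, 1, 2; the optimum (fitness 2) is absorbing.
# Row "from 0": stay w.p. 0.81, one-bit gain w.p. 2*0.1*0.9, two-bit gain 0.01.
P = np.array([
    [0.81, 0.18, 0.01],
    [0.00, 0.91, 0.09],   # from 1: improve only by flipping the zero bit alone
    [0.00, 0.00, 1.00],
])
assert np.allclose(P.sum(axis=1), 1.0)

# expected hitting times of the optimum: solve (I - Q) t = 1 over the
# transient states, where Q is P restricted to the transient states
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)   # expected generations to the optimum from fitness 0 and fitness 1
```

The same linear-algebra machinery answers the questions in the abstract for any finite chain; the difficulty in practice is only that the state space of a real GA (all possible populations) is enormous.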

    It should be noted that this talk is pedagogical in nature, and will not contain any new research results!

  • 6th November 2000: Nic McPhee, Division of Science and Mathematics, University of Minnesota, Morris (USA), currently on sabbatical here in the School

    GP Schema theory as a tool for understanding code growth in Genetic Programming

    The seminar will be a review of recent work I've done with Riccardo Poli, some fairly polished and some "in progress". In the past two years Riccardo has extended Stephens and Waelbroeck's work on Genetic Algorithms (GA) schema theory to a schema theory for Genetic Programming (GP). In this talk I'll discuss applications of this theory to the long-standing problem of understanding GP code growth, or "bloat".

    We have applied the theory to the case of a flat fitness landscape, with some surprising results: We get the opposite (in some sense) of bloat. We then extended that work by developing a simple notion of fitness where the theory predicts bloat, which is then observed in real GP runs.

    The theory has also suggested some surprising hypotheses, including the idea that a small number of "glitches" in an otherwise flat fitness landscape may be sufficient to drive growth in the average program length. This may be useful as a new tool for controlling program size in GP runs.

  • 20th November 2000: Jon Shapiro, Department of Computer Science, University of Manchester

    The Perception of Time Intervals and Reinforcement Learning

    An important component of animal behaviour is the ability to wait an appropriate time for a reward. For example, an animal should forage for food for a time related to the expected time to find food. In order to do this, the animal requires an internal clock to measure the time interval and a reinforcement learning system to learn the appropriate waiting time.

    Experiments on animals and humans in which a time interval must be estimated show a very striking property of the internal clock. In a wide range of experiments, it is found that the experimental curves scale with the length of the time interval. For example, the standard deviation of the estimate of the time interval grows linearly with the length of the time interval. This is known as scalar timing. This suggests that the perception of time intervals has a characteristic form in a wide range of animals.

    It has been difficult to produce a model of scalar timing which explains the scalar property rather than assuming it a priori. We have produced a simple connectionist model which exhibits the scalar property. The model consists of a collection of probabilistic spiking neurons. Time is encoded as the amount of activation in the network. The neurons are probabilistic; the expected number of spikes emitted by a neuron is proportional to the number of spikes it receives. When the loss of neural activity due to the probabilistic nature of the neurons balances the increase of neural activity due to synaptic fan-out, the scalar property approximately holds. When this is coupled to a simple reinforcement learning mechanism, the system simulates the results of experiments.

    A problem with the model described above is that it requires the balancing of the spike loss and generation mechanisms. I will present a game-theoretic argument which suggests that such a balance could arise from the optimization of performance in animal contests.

  • 4th December 2000: Colin Reeves, School of Mathematical and Information Sciences, Coventry University

    Estimating the number of local optima in a fitness landscape

    Neighbourhood search (NS) is a popular method for finding good solutions to combinatorial optimization problems (COPs). Individual neighbourhoods induce different fitness landscapes, replete with "hills", "valleys", "ridges" etc., but simple NS only climbs one hill - a local optimum. While at least one hill will be globally optimal, experience shows that most of the time we don't find it, and the fitness of the one we do climb may be quite poor.

    It would be helpful if we could have some idea of how many local optima there really are in a particular fitness landscape. In this talk I shall demonstrate how this problem can be formulated as an analogy of a well-known problem in ecology - estimating the numbers of an animal species in a well-defined area. Some theoretical results will be explained, and experimental evidence used to assess the plausibility of this approach.
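The ecological analogy can be sketched directly: treat each restart of a hillclimber as a "capture" of a local optimum, and feed the capture counts to a species-richness estimator. The random 8-bit landscape and the use of Chao's estimator below are illustrative assumptions, not necessarily the estimator discussed in the talk:

```python
import random

random.seed(3)
N = 8
fitness = {}                 # lazily sampled random landscape over 8-bit strings

def f(x):
    if x not in fitness:
        fitness[x] = random.random()
    return fitness[x]

def hillclimb(x):
    """Best-improvement neighbourhood search to a local optimum."""
    while True:
        nbrs = [x ^ (1 << i) for i in range(N)]
        best = max(nbrs, key=f)
        if f(best) <= f(x):
            return x
        x = best

counts = {}
for _ in range(300):                       # repeated restarts = "captures"
    opt = hillclimb(random.randrange(2 ** N))
    counts[opt] = counts.get(opt, 0) + 1

s_obs = len(counts)                        # distinct optima actually seen
f1 = sum(1 for c in counts.values() if c == 1)   # optima captured once
f2 = sum(1 for c in counts.values() if c == 2)   # optima captured twice
# Chao's lower-bound estimator from species-richness ecology
n_hat = s_obs + (f1 * f1) / (2 * f2) if f2 else s_obs
print(s_obs, round(n_hat, 1))
```

The gap between `s_obs` and `n_hat` indicates how many optima the restarts have probably missed; a complication in the landscape setting is that basins of attraction, unlike animals, can have wildly unequal "capture" probabilities.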

  • 12th December 2000: Nicholas J. Macias, Cell Matrix Corporation, Email: cmatrix@cellmatrix.com

    Benefits and Applications of Self-Configurable Hardware: The Cell Matrix Architecture

    This talk will provide an introduction to the cell matrix architecture. I will give a high-level description of its basic characteristics, followed by a brief overview of its cell-level behavior. I'll then discuss the benefits of self-configurable hardware, and explain how self-configurability can be used to support evolvable hardware, fault tolerance and massively parallel configuration. I'll then present some applications for the cell matrix architecture, including massively-parallel computation. I'll conclude with a summary of this research's current status and a discussion of future work.


  • Last modified: Wed Aug 23 11:05:17 BST 2000