EEBIC group


2001 Meetings


  • 19th February 2001: Dr David Corfield, King's College London

    Building Mathematicians

    This talk follows a meandering path through the world of automated mathematical reasoning. We pass by way of the automation of analogical reasoning, the proof of the Robbins problem and diagrammatic reasoning, with the aim of gaining a clearer view of what it is to do mathematics and how AI may contribute in the future.

  • 26th March 2001: Peter Martin, Marconi Communications and School of Computer Science, University of Birmingham

    A Hardware Implementation of Genetic Programming

    Genetic Programming (GP) is a technique for evolving computer programs using the principles of natural selection. To date, GP systems have been realised almost exclusively on general-purpose computers. This work describes a hardware implementation of a Genetic Programming system using readily available Field Programmable Gate Arrays, realised with a high-level hardware compilation system. A significant improvement in performance has been achieved compared to a traditional software implementation. (A minimal software sketch of the GP loop appears after this list of meetings.)

  • 23rd April 2001: Sunny Bains, Open University

    Physical Computation and Big AI

    Traditional approaches to computation have proved extremely successful for some kinds of task: particularly those that humans do badly. However, it has been less easy to extend conventional computer technology to do the things we do well. One reason for this is that the engineering requirements for intelligence are poorly specified. Conventional approaches involving modules of algorithms that perform particular functions or tasks are not suited to such an ill-defined problem. This presentation will discuss a class of "physical computers": devices that compute in a way that does not involve the conversion of physical signals into data (symbols) and back again. We will consider the case for these machines being more biologically valid and more flexible than their conventional counterparts, and their potential computational power (in the context of arguments about Turing and super-Turing computation by Penrose and Siegelmann).

  • 30th April 2001: Dr Martyn Amos, School of Biological Sciences and Department of Computer Science, University of Liverpool

    Cellular Computing

    The recent completion of the first draft of the human genome has led to an explosion of interest in genetics and molecular biology. The view of the genome as a network of interacting computational components is well-established, but researchers are now trying to reverse the analogy, by using living organisms to construct logic circuits. In this talk we describe current work on engineering logic into bacteria.

  • 21st May 2001: Professor Derek Partridge, Department of Computer Science, University of Exeter

    Inductive programming: not a panacea but a valuable plaster

    I will describe, with examples, various techniques for extracting information from operational data using inductive technologies such as neural networks. I will then show how such information can be used to enhance classical operational software, both through specification improvement and through direct implementation improvement. This is not a return to a previous era of inductively generating software systems, a goal that was over-ambitious and ultimately proved unworkable. It is an attempt to augment classical software engineering by circumventing the specification bottleneck.

  • 15th June 2001: Professor Lei Xu, Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong

    Temporal BYY Learning for State Space Approach, Hidden Markov Model and Blind Source Separation

    Temporal BYY (TBYY) learning has been presented for modelling signals in a general state space approach. It provides not only a unified point of view on the Kalman filter, hidden Markov models (HMMs), independent component analysis (ICA) and blind source separation (BSS), with extensions, but also further advances on these studies, including a higher-order HMM, an independent HMM for binary BSS, temporal ICA (TICA) and temporal factor analysis for real BSS with and without noise. Adaptive algorithms are developed for implementation, and criteria are provided for selecting an appropriate number of states or sources. Moreover, theorems are given on the conditions for source separation by linear and nonlinear TICA. In particular, it is shown that not only non-Gaussian but also Gaussian sources can be separated by TICA by exploiting temporal dependence. Experiments are also presented. (Details in IEEE Transactions on Signal Processing, Vol. 48, No. 7, July 2000.)

    Lei Xu is a professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong (CUHK). He is also a full professor at Peking University and an adjunct professor at three other universities in China and the UK. After receiving his PhD from Tsinghua University in 1986, he joined Peking University, where he became one of ten university-wide exceptionally promoted young associate professors in 1988 and was further exceptionally promoted to full professor in 1992. During 1989-93 he worked at several universities in Finland, Canada and the USA, including Harvard and MIT. He joined CUHK in 1993 as a senior lecturer and took up his current professorship in 1996. He has published over 200 academic papers, a number of them well-cited contributions. He has given keynote, plenary, invited and tutorial talks at major international neural network (NN) conferences such as WCNN, IEEE-ICNN, IJCNN and ICONIP. He is on the Board of Governors of the International Neural Network Society (INNS), a past president of APNNA, and an associate editor of six international NN journals, including Neural Networks and IEEE Transactions on Neural Networks. He was the ICONIP'96 program committee chair and general chair of IDEAL'98 and IDEAL'00, and in recent years has served on the program committees of major international NN conferences, including IJCNN (1997, 1999, 2000, 2001), WCNN (1995, 1996) and IEEE-ICNN (1996). He has received several prestigious Chinese national academic awards (including the National Natural Science Award) and international awards (including a 1995 INNS Leadership Award). Prof. Xu is an IEEE Fellow.
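
    As promised above, here is a minimal software sketch of the GP loop that the 26th March talk maps onto hardware: a mutation-only tree GP for a toy symbolic regression task. The function set, target function and all parameters are illustrative choices for this sketch, not details from the talk; production GP systems normally also use crossover, omitted here for brevity.

    ```python
    import operator
    import random

    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
    TERMINALS = ["x", 1.0]

    def random_tree(depth=3, rng=random):
        """Grow a random expression tree over the function/terminal sets."""
        if depth == 0 or rng.random() < 0.3:
            return rng.choice(TERMINALS)
        return (rng.choice(list(OPS)),
                random_tree(depth - 1, rng), random_tree(depth - 1, rng))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if not isinstance(tree, tuple):
            return tree                      # a numeric constant
        op, a, b = tree
        return OPS[op](evaluate(a, x), evaluate(b, x))

    def fitness(tree):
        """Error against the target x^2 + x + 1 on sample points
        (lower is better); the target is an illustrative choice."""
        return sum(abs(evaluate(tree, x) - (x * x + x + 1))
                   for x in range(-5, 6))

    def mutate(tree, rng=random):
        """Subtree mutation: occasionally replace a subtree with a
        fresh random one."""
        if rng.random() < 0.1:
            return random_tree(2, rng)
        if isinstance(tree, tuple):
            op, a, b = tree
            return (op, mutate(a, rng), mutate(b, rng))
        return tree

    def gp(pop_size=200, generations=30, seed=1):
        rng = random.Random(seed)
        pop = [random_tree(rng=rng) for _ in range(pop_size)]
        for _ in range(generations):
            # Tournament selection (size 3) followed by mutation.
            pop = [mutate(min(rng.sample(pop, 3), key=fitness), rng)
                   for _ in range(pop_size)]
        return min(pop, key=fitness)

    best = gp()
    print(best, fitness(best))
    ```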


  • AINC Seminars


    2001


    15th October 2001

    On the performance of some evolutionary and local search heuristics

    Anton Eremeev
    Omsk Branch of Sobolev Institute of Mathematics
    Omsk, Russia

    The performance of several mutation-based evolutionary algorithms and of the local search heuristic will be considered from the point of view of discrete optimisation. In particular, this presentation will discuss upper and lower bounds on the probability of generating solutions of a certain quality in the (1+1)-ES and (1,lambda)-ES evolution strategies and in a simplified version of the genetic algorithm with tournament selection. Some recent results comparing the (1+1)-ES to other evolutionary algorithms and to local search will be discussed. In conclusion, ways of statistically estimating landscape properties will be considered.
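
    As background, here is a minimal sketch of the (1+1)-ES on bitstrings (in this discrete setting it is often called the (1+1)-EA): each step flips every bit independently with probability 1/n and keeps the offspring if it is at least as fit. The OneMax test function is an illustrative choice, not necessarily one analysed in the talk.

    ```python
    import random

    def one_max(x):
        """Illustrative fitness: the number of ones in the bitstring."""
        return sum(x)

    def one_plus_one_es(n, fitness, max_iters=10000, seed=0):
        """(1+1)-ES: one parent, one offspring per step; standard bit
        mutation with rate 1/n; the offspring replaces the parent if
        it is at least as fit."""
        rng = random.Random(seed)
        parent = [rng.randint(0, 1) for _ in range(n)]
        f_parent = fitness(parent)
        for _ in range(max_iters):
            child = [1 - b if rng.random() < 1.0 / n else b for b in parent]
            f_child = fitness(child)
            if f_child >= f_parent:
                parent, f_parent = child, f_child
        return parent, f_parent

    best, score = one_plus_one_es(50, one_max)
    print(score)  # typically 50, i.e. the optimum, well within 10000 steps
    ```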


    22nd October 2001

    Structured Prioritised Sweeping

    Richard Dearden
    NASA Ames Research Center, USA

    Many problems in planning and reinforcement learning involve states and actions described in terms of some set of features or state variables, rather than an explicit enumeration of all possible states. In decision-theoretic planning, the structured policy iteration algorithm [Boutilier et al. 1995] has been used to construct optimal policies for these problems by exploiting any structure present in the representation to keep the policy compact. This allows problems to be solved more efficiently, and allows solutions to problems that cannot be solved by other methods due to space constraints. In this talk, I will review the structured policy iteration algorithm and show how it can be applied locally to produce a structured version of the prioritised sweeping algorithm. In structured prioritised sweeping, we maintain a structured estimate of the value function and update the values of aggregate states when learning occurs. This increases the scope of local value function changes and allows more efficient learning, because learned information is propagated between states more quickly.
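
    For context, here is a sketch of the classical tabular prioritised sweeping algorithm (Moore and Atkeson, 1993), whose structured counterpart the talk develops; the value function here is an explicit table over enumerated states, which is precisely what the structured version avoids. The deterministic-model assumption is mine, to keep the sketch short.

    ```python
    import heapq
    from collections import defaultdict

    def prioritised_sweeping(model, gamma=0.9, theta=1e-3, n_updates=1000):
        """Tabular prioritised sweeping: back up the state-action pair
        whose value estimate is most in error, then queue its predecessors.
        `model` maps (state, action) -> (reward, next_state); transitions
        are assumed deterministic purely for brevity."""
        Q = defaultdict(float)
        actions = defaultdict(list)   # state -> actions available there
        preds = defaultdict(set)      # state -> {(s, a) pairs that reach it}
        for (s, a), (r, s2) in model.items():
            actions[s].append(a)
            preds[s2].add((s, a))

        def value(s):                 # V(s) = max_a Q(s, a)
            return max((Q[(s, a)] for a in actions[s]), default=0.0)

        # Seed the priority queue with every known transition.
        queue = [(-abs(r + gamma * value(s2) - Q[(s, a)]), (s, a))
                 for (s, a), (r, s2) in model.items()]
        heapq.heapify(queue)

        for _ in range(n_updates):
            if not queue:
                break
            _, (s, a) = heapq.heappop(queue)
            r, s2 = model[(s, a)]
            Q[(s, a)] = r + gamma * value(s2)        # full backup
            # Re-queue predecessors whose backups may now have changed.
            for ps, pa in preds[s]:
                pr, _ = model[(ps, pa)]
                priority = abs(pr + gamma * value(s) - Q[(ps, pa)])
                if priority > theta:
                    heapq.heappush(queue, (-priority, (ps, pa)))
        return Q

    # Toy chain: states 0-3, action "r" moves right, reward on reaching 3.
    chain = {(s, "r"): (1.0 if s + 1 == 3 else 0.0, s + 1) for s in range(3)}
    print(prioritised_sweeping(chain))  # Q-values 0.81, 0.9, 1.0
    ```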


    29th October 2001

    Artificial Genomes

    Torsten Reil
    Oxford University

    Artificial Genomes are models of biological DNA and its template-matching properties. As has been shown previously, they can be used to study the dynamics of gene regulatory networks in a similar manner to Random Boolean Networks, but with the advantage of a biologically plausible structural underpinning. Among other things, this opens up the possibility of experiments on structural genome evolution. For instance, questions as to the importance of regulatory vs. coding sequences, crossover events, or genome length can be readily addressed.
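
    For comparison, a minimal sketch of the Random Boolean Network model mentioned above: n genes, each reading k randomly chosen inputs through a random Boolean function, with the whole network updated synchronously (all parameters here are illustrative).

    ```python
    import random

    def random_boolean_network(n, k, steps=20, seed=0):
        """Kauffman-style Random Boolean Network: returns the trajectory
        of network states as tuples of 0/1 gene activities."""
        rng = random.Random(seed)
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        # One random truth table (2^k entries) per gene.
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        state = [rng.randint(0, 1) for _ in range(n)]
        trajectory = [tuple(state)]
        for _ in range(steps):
            state = [tables[g][int("".join(str(state[i]) for i in inputs[g]), 2)]
                     for g in range(n)]
            trajectory.append(tuple(state))
        return trajectory

    # The trajectory settles onto an attractor, the behaviour of interest
    # in gene regulatory network dynamics.
    for s in random_boolean_network(n=8, k=2):
        print("".join(map(str, s)))
    ```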


    5th November 2001

    The Walsh Transform and the Theory of Genetic Algorithms

    Alden Wright
    University of Montana, USA

    The Walsh/Fourier transform and the Walsh basis are powerful tools that can be used with dynamical system models of the genetic algorithm. The Walsh transform represents a population in a basis that is closely related to the schema averages of the population. In this basis, the representation of mixing (crossover combined with mutation) takes essentially the same form as the representation of crossover alone. This enables a model of the gene pool genetic algorithm that incorporates both mutation and crossover, whereas previous models included only crossover.
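
    For readers who have not met it, here is a sketch of the (unnormalised) Walsh transform via the standard fast Walsh-Hadamard butterfly, applied to OneMax; being a linear function, OneMax has nonzero Walsh coefficients only at orders 0 and 1. Normalisation conventions vary across the GA-theory literature.

    ```python
    def walsh_transform(f):
        """Fast Walsh-Hadamard transform of a function on n-bit strings,
        given as a list of length 2^n indexed by the integer value of the
        string. Coefficient j is sum_x f[x] * (-1)^popcount(j & x)."""
        w = list(f)
        h = 1
        while h < len(w):
            for i in range(0, len(w), 2 * h):
                for j in range(i, i + h):
                    a, b = w[j], w[j + h]
                    w[j], w[j + h] = a + b, a - b
            h *= 2
        return w

    # OneMax on 3 bits: f(x) = number of ones in x.
    f = [bin(x).count("1") for x in range(8)]
    print(walsh_transform(f))  # [12, -4, -4, 0, -4, 0, 0, 0]
    ```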


    12th November 2001

    Binary representations, Gray codes and Local Search

    Jon Rowe
    School of Computer Science
    University of Birmingham

    When trying to optimise a real-valued function of a real variable, one approach is to discretise the search space, encode the resulting set of points using a binary representation, and then apply a local search algorithm (e.g. steepest descent). There are many ways of encoding a set of points in binary. The conditions under which different encodings lead to identical behaviour of a local search algorithm are investigated and some implications drawn. Gray code representations have some nice properties in such applications - for example, a steepest descent algorithm using the reflected Gray code will optimise a unimodal function in a linear number of steps. Rotating a Gray code produces a different Gray code. This technique can be used to help a local search algorithm escape from a local optimum by a change of representation, and the probability that a random shift will allow such an escape can be estimated. Finally, the class of "long-path" problems is known to be unimodal in Hamming space, but requires an exponential amount of time to be optimised by local search. We ask what class of real functions these problems correspond to under a reflected Gray encoding. The answer is that they are fractals with an exponential number of local optima.
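
    A short sketch of the two ingredients discussed above: the binary-reflected Gray code and steepest-descent local search over one-bit-flip neighbourhoods. The unimodal test function is an illustrative choice, not one from the talk.

    ```python
    def gray_encode(i):
        """Integer -> binary-reflected Gray code."""
        return i ^ (i >> 1)

    def gray_decode(g):
        """Binary-reflected Gray code -> integer."""
        i = 0
        while g:
            i ^= g
            g >>= 1
        return i

    def steepest_descent(fitness, n_bits, start):
        """Minimise over the points 0 .. 2^n - 1 encoded in Gray code:
        repeatedly move to the best one-bit-flip neighbour until none
        improves on the current point."""
        g = gray_encode(start)
        while True:
            best = min((g ^ (1 << b) for b in range(n_bits)),
                       key=lambda h: fitness(gray_decode(h)))
            if fitness(gray_decode(best)) >= fitness(gray_decode(g)):
                return gray_decode(g)
            g = best

    # A unimodal function on 0..255: under the reflected Gray code,
    # steepest descent reaches the optimum without getting stuck.
    print(steepest_descent(lambda x: abs(x - 100), n_bits=8, start=0))  # 100
    ```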


    26th November 2001

    The Presence of Old Alus in GC-Rich Regions of the Human Genome - A Genetic Algorithm Perspective

    Stevan Anastasoff
    School of Computer Science
    University of Birmingham

    Even before the completion of the first working draft of the human genome, it was known that most of our DNA was so-called 'junk', serving no apparent biological function. However, this junk, rather than being the random garbage that the name might suggest, actually contains a considerable amount of structure. One example of this structure can be found with the Alu repeat, a DNA sequence able to have copies of itself inserted back into its host genome, acting as a sort of molecular level parasite. Analyses of the distribution of Alus throughout the human genome have revealed a number of interesting and counter-intuitive results. The work to be presented here has involved using a genetic algorithm based simulation of transposable elements (of which the Alu is just one example) to model these patterns. Parameter sweeps and analysis of this simulation suggest revisions to the ways in which the evolutionary dynamics of Alus are conceptualised, and highlight inadequacies in the current theories that attempt to describe the particular observed genomic distribution of Alus. The development and use of the simulation also raises methodological questions as to the proper application of such modelling enterprises within biology.


    3rd December 2001

    An Ant System Algorithm for the Vehicle Routing Problem with Backhauls

    Anne Wade
    School of Mathematics and Statistics
    University of Birmingham

    An ant system algorithm designed for the solution of the Vehicle Routing Problem with Backhauls (VRPB) is put forward. The mixed VRPB, where linehaul and backhaul customers can be visited in any order along routes, is considered together with a 'restricted' VRPB. The restricted VRPB allows the user to determine the mix of linehaul and backhaul customers on a route by using the percentage of linehaul load delivered along the route to restrict the position where the first backhaul may be served. Such flexibility improves on the results of the classical VRPB, where backhauls are included on a route only once all linehauls have been served, while reducing the risk of implementing an impractical solution produced by the fully mixed VRPB. Some ideas with respect to the size of the candidate list, the initial placement of ants and the updating of the trail will be proposed to enhance the basic ant system. Results based on test problems will be presented.
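
    For readers unfamiliar with ant systems, the sketch below shows the core probabilistic transition rule on which such algorithms are built: the next customer is chosen with probability proportional to pheromone strength (raised to alpha) times heuristic visibility (raised to beta). The VRPB-specific rules described above (linehaul/backhaul ordering, vehicle capacity) would enter through the construction of the candidate list; the parameter values here are conventional illustrative defaults, not those used in the talk.

    ```python
    import random

    def choose_next(current, candidates, tau, eta, alpha=1.0, beta=2.0,
                    rng=random):
        """Ant-system transition rule: roulette-wheel selection over the
        candidate list, weighted by tau^alpha * eta^beta. tau[i][j] is the
        pheromone on edge (i, j); eta[i][j] is the visibility, typically
        1/distance."""
        weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta)
                   for j in candidates]
        r = rng.random() * sum(weights)
        for j, w in zip(candidates, weights):
            r -= w
            if r <= 0:
                return j
        return candidates[-1]

    # Three candidate customers from the depot, uniform pheromone,
    # distance-based visibility (illustrative numbers).
    tau = {0: {1: 1.0, 2: 1.0, 3: 1.0}}
    eta = {0: {1: 1 / 5.0, 2: 1 / 2.0, 3: 1 / 9.0}}
    print(choose_next(0, [1, 2, 3], tau, eta))  # usually 2, the nearest
    ```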


    10th December 2001

    An Introduction to Multi-Objective Evolutionary Algorithms

    Kalyanmoy Deb
    Indian Institute of Technology
    Kanpur, India

    Most real-world optimisation problems are better posed as multi-objective optimisation problems involving conflicting objectives but, due to the lack of suitable solution techniques, they have been solved as single-objective optimisation problems. Optimising a product for minimum cost as well as for maximum reliability, for example, is often desired but difficult to achieve with classical approaches. In reality, such conflicting objectives give rise to a set of optimal solutions, known as Pareto-optimal solutions. In tackling such problems, a less subjective and more useful technique is first to find a set of trade-off solutions and then to choose one from the set. In this talk, a number of state-of-the-art multi-objective evolutionary methods, capable of finding multiple trade-off solutions, will be presented. A few case studies from different areas of science and engineering will be shown. Finally, some salient research issues will be discussed.
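
    As a concrete anchor for the terminology above, here is a minimal sketch of Pareto dominance and non-dominated filtering, the basic building block of the evolutionary methods surveyed in the talk; the cost/unreliability numbers are invented for illustration.

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimisation):
        a is no worse in every objective, strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated(points):
        """The Pareto front of a list of objective vectors."""
        return [p for p in points
                if not any(dominates(q, p) for q in points)]

    # (cost, unreliability), both to be minimised.
    designs = [(1.0, 0.9), (2.0, 0.5), (3.0, 0.2), (2.5, 0.6), (1.5, 0.9)]
    print(non_dominated(designs))  # [(1.0, 0.9), (2.0, 0.5), (3.0, 0.2)]
    ```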


    This page is maintained by John Bullinaria. Last updated on 29 January 2002.