The First IEEE Symposium
on
Combinations of Evolutionary Computation and Neural Networks

Co-sponsored by
IEEE Neural Network Council
The Center for Excellence in Evolutionary Computation

May 11-13, 2000
The Gunter Hotel, San Antonio, TX, USA


Tentative Program for ECNN2000 (subject to change)


May 11, 2000, Thursday

8:30 am REGISTRATION OPENS

9:00 am - 10:00 am OPENING REMARKS and TUTORIAL on ECNN by David Fogel

10:00 am - 10:15 am BREAK

10:15 am – 11:00 am KEYNOTE

"Computation: Evolutionary, Neural, Molecular"

Michael Conrad

Abstract

Biologically motivated computing paradigms involving neural and evolutionary ideas were prominent early in the history of computing, but were pushed into the desert for forty years by handcrafted (creationist) approaches that could deliver immediate products. A confluence of factors emanating from computer science, biology, and technology has brought self-organizing approaches back to the fore. Neural networks in particular provide high-evolvability platforms for variation-selection search strategies. The neuron doctrine and the fundamental nature of computing come into question. Is a neuron an atom of the brain (artificial or natural), or is it itself a complex information processing system whose interior molecular dynamics can be elicited and exploited through the evolution process? We here argue the latter point of view, illustrating how high-evolvability dynamics can be achieved with artificial neuromolecular computer designs and how such designs might in due course be implemented using molecular computing devices. A table-top enzyme-driven prototype developed by our group is presented; it can be thought of as a sort of artificial neuron in which the context sensitivity of enzyme recognition is used to transform injected signal patterns into output activity.

 

11:00 am – 11:25 am

"Exploring Different Coding Schemes for the Evolution of an Artificial Insect Eye"
Ralf Salomon and Lukas Lichtensteiger

Abstract

The existing literature proposes various (neuronal) architectures for object avoidance, which is one of the most fundamental tasks of autonomous, mobile robots. Due to certain hardware limitations, existing research resorts to prespecified sensor systems that remain fixed during all experiments, and modifications are done only in the controllers' software components. Only recent research (Lichtensteiger and Eggenberger, 1999) has tried to do the opposite, i.e., prespecifying a simple neural network and evolving the sensor distribution directly in hardware. Even though first experiments have been successful in evolving some solutions by means of evolutionary algorithms, they have also indicated that systematic comparisons between different evolutionary algorithms and coding schemes are required in order to optimize the evolutionary process. Since these comparisons cannot be done on the robot due to the experimentation time required, this paper reports the results of a set of extensive simulations.

 

11:25 am – 11:50 am

"Evolving Neural Trees for Time Series Prediction Using Bayesian Evolutionary Algorithms"

Byoung-Tak Zhang and Dong-Yeon Cho

Abstract:
Bayesian evolutionary algorithms (BEAs) are a probabilistic model for evolutionary computation. Instead of simply generating new populations as in conventional evolutionary algorithms, BEAs attempt to explicitly estimate the posterior distribution of the individuals from their prior probability and likelihood, and then sample offspring from that distribution. In this paper we apply Bayesian evolutionary algorithms to evolving neural trees, i.e., tree-structured neural networks. Explicit formulae for specifying the distributions on the model space are provided in the context of neural trees. The effectiveness and robustness of the method are demonstrated on a time series prediction problem. We also study the effect of the population size and the amount of information exchanged by subtree crossover and subtree mutation. Experimental results show that small-step mutation-oriented variations are most effective when the population size is small, while large-step recombinative variations are more effective for large population sizes.
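
As an illustration of the general BEA idea described in the abstract (not the authors' exact formulation), the following Python sketch scores candidate neural trees by an assumed prior times likelihood and samples the next population in proportion to that unnormalised posterior; prior(), likelihood(), and mutate() are hypothetical placeholders.

import random

def bea_step(population, prior, likelihood, mutate, rng=random.Random(0)):
    # Unnormalised posterior score for each individual.
    scores = [prior(ind) * likelihood(ind) for ind in population]
    total = sum(scores)
    probs = [s / total for s in scores]
    # Sample offspring from the (approximate) posterior, then vary them.
    offspring = rng.choices(population, weights=probs, k=len(population))
    return [mutate(ind) for ind in offspring]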

11:50 am – 1:30 pm LUNCH

1:30 pm – 1:55 pm

"On the Use of Biologically-Inspired Adaptive Mutations to Evolve Artificial Neural Network Structures"

D.A. Miller, Garry Greenwood, and C. Ide

Abstract

Evolutionary algorithms have been used to successfully evolve artificial neural network structures. Normally the evolutionary algorithm has several different mutation operators available to randomly change the number and location of neurons or connections. The scope of any mutation is typically limited by a user-selected parameter. Nature, however, controls the total number of neurons and synaptic connections in more predictable ways, which suggests the methods typically used by evolutionary algorithms may be inefficient. This paper describes a simple evolutionary algorithm that adaptively mutates the network structure, where the adaptation emulates neuron and synaptic growth in the rhesus monkey. Our preliminary results indicate it is possible to evolve relatively sparsely connected networks that exhibit quite reasonable performance.

1:55 pm – 2:20 pm

"Population Optimization Algorithm Based on ICA"

Qingfu Zhang, Nigel M. Allinson, and Hujun Yin

Abstract
In this paper, we propose a new population optimization algorithm called the Univariate Marginal Distribution Algorithm with Independent Component Analysis (UMDA/ICA). Our main idea is to incorporate ICA into the UMDA algorithm in order to tackle the interrelations among variables. We demonstrate that UMDA/ICA performs better than UMDA for a test function with highly correlated variables.
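
To make the idea concrete, here is a heavily simplified Python sketch of how ICA could be combined with a continuous UMDA-style model: selected parents are mapped into an independent-component space, univariate marginals are estimated there, and new samples are mapped back. It uses scikit-learn's FastICA purely for illustration and is not the authors' implementation.

import numpy as np
from sklearn.decomposition import FastICA

def umda_ica_sample(selected, n_offspring, seed=0):
    """selected: (n_selected, n_vars) array of good parents."""
    rng = np.random.default_rng(seed)
    ica = FastICA(random_state=seed)
    s = ica.fit_transform(selected)          # map to (nearly) independent components
    mu, sigma = s.mean(axis=0), s.std(axis=0) + 1e-12
    # Univariate marginal model in the independent space.
    new_s = rng.normal(mu, sigma, size=(n_offspring, s.shape[1]))
    return ica.inverse_transform(new_s)      # map offspring back to original variables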

2:20 pm – 2:45 pm

"Dynamic Genetic Operators for Evolving Neural Networks Using Attribute Grammars"

Talib S. Hussain and Roger A. Browse

2:45 pm – 3:15 pm BREAK

3:15 pm – 3:40 pm

"Non-Standard Norms in Genetically Trained Neural Networks"

Angel Kuri Morales

Abstract

We discuss alternative norms to train Neural Networks (NNs). We focus on the so-called Multilayer Perceptrons (MLPs). To achieve this we rely on a Genetic Algorithm called an Eclectic GA (EGA). By using the EGA we avoid the drawbacks of the standard training algorithm for this sort of NN: the backpropagation algorithm. We define four measures of distance: a) the mean exponential error (MEE), b) the mean absolute error (MAE), c) the maximum square error (MSE), and d) the maximum (supremum) absolute error (SAE). We analyze the behavior of an MLP NN on two kinds of problems: classification and forecasting. We discuss the results of applying an EGA to train the NNs and show that alternative norms yield better results than the traditional RMS norm. We also discuss the resilience of genetically trained NNs to a change of the transfer function in the output layer.
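
For reference, the four error measures listed in the abstract could be written as fitness functions roughly as follows, assuming errors holds the per-pattern output errors; the exact definitions used by the author (the mean exponential error in particular) may differ.

import numpy as np

def mee(errors):     # mean exponential error (assumed form)
    return float(np.mean(np.exp(np.abs(errors))))

def mae(errors):     # mean absolute error
    return float(np.mean(np.abs(errors)))

def max_se(errors):  # maximum square error
    return float(np.max(errors ** 2))

def sae(errors):     # maximum (supremum) absolute error
    return float(np.max(np.abs(errors)))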

3:40 pm – 4:05 pm

"A New Metric for Evaluating Genetic Optimization of Neural Networks"

Jaime Davila

Abstract
In recent years researchers have used genetic algorithm techniques to evolve neural network topologies. Although these researchers have had the same end result in mind (namely, the evolution of topologies that are better able to solve a particular problem), the approaches they used varied greatly. Random selection of a genome coding scheme can easily result in sub-optimal genetic performance, since the efficiency of different evolutionary operations depends on how they affect the schemata being processed in the population. In addition, the computational complexity involved in creating and evaluating neural networks usually does not allow for repetition of genetic experiments under different genome codings.
In this paper I present an evaluation method that uses schema theory to aid the design of genetic codings for NN topology optimization. Furthermore, this methodology can help determine optimal balances between different evolutionary operators depending on the characteristics of the coding scheme. The methodology is tested on two GA-NN hybrid systems: one for natural language processing, and another for robot navigation.

4:05 pm – 4:30 pm

"Evolution and Design of Introspective Neural Networks"
Thomas Philip Runarsson and Magnus Thor Jonsson

Abstract:
The paper describes the application of neural networks as learning rules for neural networks, i.e., introspective neural networks. The learning rules are evolved using an evolution strategy. The survival of a learning rule is based on its performance in training neural networks on a set of tasks. Training algorithms are evolved for single-layer artificial neural networks. Experimental results show that a neural network is very capable of generating an efficient training algorithm for neural networks.
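
One way to picture an introspective learning rule is as a small network that maps local training information to a weight update. The sketch below is a plausible reading for illustration only, not the authors' architecture; the rule network's parameters (params) are what an evolution strategy would evolve.

import numpy as np

def rule_network(params, x, y_target, y_out, w):
    """Evolved learning rule: maps local quantities to a weight update."""
    features = np.array([x, y_target, y_out, w, 1.0])
    hidden = np.tanh(params[:5] @ features)
    return params[5] * hidden              # delta for this single weight

def train_single_layer(params, X, Y, epochs=10):
    """Train a single-layer network with the evolved rule (fitness = final error)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_vec, y in zip(X, Y):
            y_out = np.tanh(w @ x_vec)
            for i, xi in enumerate(x_vec):
                w[i] += rule_network(params, xi, y, y_out, w[i])
    return w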

May 12, 2000, Friday

9:00 am – 9:25 am

"Neuro-Evolution and Natural Deduction"
Nirav S. Desai and Risto Miikkulainen

Abstract

Natural deduction is essentially a sequential decision task, similar to many game-playing tasks. Such a task is perfectly suited to benefit from the techniques of neuro-evolution. Symbiotic, Adaptive Neuro-Evolution (SANE; Moriarty and Miikkulainen 1996a) has proven extremely successful at evolving networks for sequential decision tasks. This paper will show that SANE can be used to evolve a natural deduction system on a neural network. Particularly, it will show that (1) incremental evolution through progressively more challenging problems results in more effective networks than does direct evolution, and (2) an effective network can be evolved faster if the network is allowed to "brainstorm" or suggest any move regardless of its applicability, even though the highest-ranked valid move is always applied.
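
The incremental-evolution idea, which recurs in several papers at this symposium, can be summarised by the following Python sketch: evolve on progressively harder task versions, seeding each stage with the previous champions. evolve() and make_task() are hypothetical placeholders; this is not the SANE implementation itself.

def incremental_evolution(evolve, make_task, difficulty_levels, pop_size=100):
    """Evolve through progressively more challenging problems.

    evolve(task, initial_population, pop_size) -> final population (hypothetical)
    make_task(level) -> a task of the given difficulty (hypothetical)
    """
    population = None  # let evolve() create a random population at the first level
    for level in difficulty_levels:
        task = make_task(level)
        population = evolve(task, population, pop_size)
    return population  # networks shaped by the full curriculum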

 

9:25 am – 9:50 am

"Convergence Analysis of a Segmentation Algorithm for the Evolutionary Training of Neural Networks"

Harald Huening

Abstract
In the evolutionary training of neural networks, different sets of parameters (weights, topology) are often encoded in different segments of a single string to be optimised. This paper analyses the behaviour of an evolutionary algorithm that uses these segment boundaries as fixed crossover points, and an algorithm is developed for further changes of the segmentations to achieve global convergence.
In contrast to standard genetic algorithms, which progress from generation to generation, we adopt the viewpoint of the reactor algorithm (Dittrich & Banzhaf, 1998), which permits an analysis similar to Eigen's (1971) molecular evolution model. From this viewpoint, we consider employing crossover at every time-step, and present fixed-point analysis and phase portraits of the competitive dynamics.
A problem is that higher-order interactions like crossover can get stuck in non-optimal solutions due to properties of the population dynamics. The segmentation algorithm presented here solves this problem by creating different population systems for those cases of competition that cannot be resolved correctly by the population dynamics. The different population systems have different segmentation boundaries, which are generated by combining well-converged components into new segments. This gives first-order replicators that can dynamically compete with new solutions. Furthermore, the population systems communicate information about which solution strings have already been found, so new ones can be favoured. Thus the segmentation algorithm performs global optimisation, and the use of the segmentations themselves as building blocks is discussed.

9:50 am – 10:15 am BREAK

10:15 am – 10:40 am

"Neural Network Structures and Isomorphisms: Random Walk Characteristics of the Search Space"

Peter Stagge and Christian Igel

Abstract

In this article we deal with a quite general topic in evolutionary structure optimization, namely redundancy in the encoding due to isomorphic structures. This problem is well known in topology optimization of neural networks (NNs).

In the context of structure optimization of NNs we observe similar phenomena of rare and frequent structures as are known from molecular biology. The degree to which isomorphic structures, i.e., classes of equivalent NN topologies, enlarge the search space depends on the restrictions on the allowed structures and on the representation of the search space. For restricted network topologies, like NNs with a maximum number of layers, some properties can be analyzed analytically; for more general structures we estimate the characteristics of the search space using data stemming from random walks.

For restricted NN topologies, the search process is affected by isomorphic structures. However, in the absence of restrictions, the search space becomes so large that the bias induced by isomorphisms can be neglected.

10:40 am – 11:05 am

"Case Studies in Applying Fitness Distributions in Evolutionary Algorithms. II. Comparing the Improvements from Crossover and Gaussian Mutation on Simple Neural Networks"

Ankit Jain and David B. Fogel

Abstract

Previous efforts in applying fitness distributions of Gaussian mutation for optimizing simple neural networks on the XOR problem are extended by conducting a similar analysis for three types of crossover operators. One-point, two-point, and uniform crossover are applied to the best-evolved neural networks at each generation in an evolutionary trial. The maximum expected improvement under Gaussian mutation with a single fixed standard deviation is then compared to that which can be obtained using crossover. The results indicate that the benefits of each type of crossover vary as a function of the generation number. Furthermore, these fitness profiles are notably similar (i.e., there is little functional difference between the various crossover operators). This does not support a "building block hypothesis" for explaining the gains that can be made via recombination. The results indicate cases where mutation alone can outperform recombination and vice versa.
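
The kind of fitness-distribution measurement used in this line of work can be approximated empirically: apply an operator many times to the current best network and record the resulting improvement. A rough Python sketch, with fitness and the operators as placeholders rather than the paper's exact protocol:

import statistics

def expected_improvement(best, operator, fitness, trials=1000):
    """Estimate the expected fitness gain of applying `operator` to `best`."""
    base = fitness(best)
    gains = [max(0.0, fitness(operator(best)) - base) for _ in range(trials)]
    return statistics.mean(gains)

# e.g. ei_mut = expected_improvement(best_net, gaussian_mutation, fitness)
# e.g. ei_xo  = expected_improvement(best_net, lambda n: crossover(n, mate), fitness)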

 

11:05 am – 11:30 am

"Optimization for Problem Classes û Neural Networks that Learn to Learn"

Michael Huesken, Jens E. Gayko, and Bernhard Sendhoff

Abstract:

The main focus of the optimization of artificial neural networks has been the design of a problem-dependent network structure in order to reduce the model complexity and to minimize the model error. Driven by a concrete application, we identify in this paper another desirable property of neural networks, which has to be the result of evolutionary optimization of the network structure -- the ability of the network to efficiently solve related problems, denoted as a class of problems. In a more theoretical framework the aim is to develop neural networks for adaptability -- networks that learn (during evolution) to learn (during operation). Since the process of evolutionary optimization is time consuming, it is also desirable from the perspective of efficiency to design structures which are applicable to many related problems. In this paper, two different approaches to solve this problem are studied, called the ensemble method and the generation method. We empirically show that an averaged Lamarckian inheritance seems to be the most efficient way to optimize networks for problem classes, both for artificial regression problems and for real-world system state diagnosis problems.

11:30 am – 11:55 am

"Human MagneoCardioGram (MCG) Modeling Using Evolutionary Artificial Neural Networks"
E.F. Georgopoulos, A.V. Adamopoulos, and S.D. Likothanassis

Abstract
In the present work, MagnetoCardioGram (MCG) recordings of normal subjects were analyzed using a hybrid training algorithm. This algorithm combines genetic algorithms and a training method based on the localized Extended Kalman Filter (EKF) in order to evolve the structure of and train Multi-Layer Perceptron (MLP) networks. Our goal is to examine the predictability of the MCG signal over a short prediction horizon.

 

11:55 am – 1:30 pm LUNCH

1:30 pm – 1:55 pm

"Combining Incrementally Evolved Neural Networks Based on Cellular Automata for Complex Adaptive Behaviors"

Geum-Beom Song and Sung-Bae Cho

Abstract

There has been extensive work to construct an optimal controller for a mobile robot by evolutionary approaches such as genetic algorithms, genetic programming, and so on. However, evolutionary approaches have difficulty obtaining controllers for complex and general behaviors. In order to overcome this shortcoming, we propose an incremental evolution method for neural networks based on cellular automata (CA) and a method of combining several evolved modules by a rule-based approach. The incremental evolution method evolves the neural network by starting with a simple environment that requires only simple behavior and gradually making it more complex and general. The multi-module integration method produces complex and general behaviors by combining several modules that have been evolved or programmed for simple behaviors. Experimental results show the potential of the incremental evolution and multi-module integration methods as techniques for making the evolved neural network perform complex and general behaviors.

1:55 pm – 2:20 pm

"Extracting Comprehensible Rules from Neural Networks via Genetic Algorithms"

Raul T. Santos, Julio C. Nievola, Alex A. Freitas

Abstract
A common problem in KDD (Knowledge Discovery in Databases) is the presence of noise in the data being mined. Neural networks are robust and have a good tolerance to noise, which makes them suitable for mining very noisy data. However, they have the well-known disadvantage of not discovering any high-level rule that can be used as a support for human decision making. In this work we present a method for extracting accurate, comprehensible rules from neural networks. The proposed method uses a genetic algorithm to find a good neural network topology. This topology is then passed to a rule extraction algorithm, and the quality of the extracted rules is then fed back to the genetic algorithm. The proposed system is evaluated on two public-domain data sets and the results show that the approach is a valid one.
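
The feedback loop sketched in the abstract could be organised roughly as follows in Python; train(), extract_rules(), rule_quality(), and the GA routines are placeholders, so this is an outline of the loop rather than the authors' system.

def evolve_topologies_for_rules(random_topology, train, extract_rules,
                                rule_quality, select, vary,
                                pop_size=30, generations=50):
    population = [random_topology() for _ in range(pop_size)]
    for _ in range(generations):
        fitnesses = []
        for topology in population:
            net = train(topology)                  # train a network with this topology
            rules = extract_rules(net)             # extract symbolic rules from it
            fitnesses.append(rule_quality(rules))  # rule quality feeds back as GA fitness
        parents = select(population, fitnesses)
        population = vary(parents, pop_size)
    return population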

 

2:20 pm – 2:45 pm

"An Adaptive Scheme for Real Function Optimization Acting as a Selection Operator"

Arnaud Berny

Abstract

We propose an adaptive scheme for real function optimization whose dynamics is driven by selection. The method is parametric and relies explicitly on the Gaussian density, seen as an infinite search population. We define two gradient flows acting on the density parameters, in the spirit of neural network learning rules, which maximize either the function expectation relative to the density or its logarithm. The first one leads to reinforcement learning and the second one leads to selection learning. Both can be understood as the effect of three operators acting on the density: translation, scaling, and rotation. We then propose to approximate those systems by discrete-time dynamical systems by means of three different methods: Monte Carlo integration, selection among a finite population, and reinforcement learning. This work synthesizes previously independent approaches and aims to show that evolutionary strategies and reinforcement learning are strongly related.
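
As a concrete but simplified picture of a Gaussian density acting as an infinite search population, the sketch below updates the mean and scale of a Gaussian from a selected fraction of samples. It is a generic selection-driven scheme, not the author's gradient-flow derivation, and it omits the rotation operator.

import numpy as np

def gaussian_selection_step(f, mean, sigma, n_samples=200, frac=0.25, seed=0):
    """One update of a Gaussian search density driven by selection (maximising f)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, sigma, size=(n_samples, mean.shape[0]))
    values = np.array([f(xi) for xi in x])
    elite = x[np.argsort(values)[::-1][: int(frac * n_samples)]]
    # Translation and scaling of the density toward the selected samples.
    return elite.mean(axis=0), elite.std(axis=0) + 1e-12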

2:45 pm – 3:15 pm BREAK

3:15 pm – 3:40 pm

"Cooperative Co-evolutionary Algorithm û How to Evaluate a Module?"

Q.F. Zhao, O. Hammami, K. Kuroda, and K. Saito

Abstract

When we talk about co-evolution, we often consider it as competitive co-evolution (CompCE). Examples include co-evolution of training data and neural networks, co-evolution of game players, and so on. Recently, several researchers have studied another kind of co-evolution --- cooperative co-evolution (CoopCE). While CompCE tries to obtain more competitive individuals through evolution, the goal of CoopCE is to find individuals from which better systems can be constructed. The basic idea of CoopCE is divide-and-conquer: divide a large system into many modules, evolve the modules separately, and then combine them together again to form the whole system. Depending on how the division and combination are done, different cooperative co-evolutionary algorithms (CoopCEAs) have been proposed in the literature. Results obtained so far strongly support the usefulness of CoopCEAs. To study CoopCEAs systematically, we proposed a society model, which is a common framework for many existing CoopCEAs. From this society model we can easily see that there are still many open problems in using CoopCEAs. In this paper, we focus the discussion on the evaluation of the modules. To be concrete, we apply the model to evolutionary learning of RBF neural networks, and show the effectiveness of different evaluation methods through experiments.
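
A common way to score a module in cooperative co-evolution, sketched here in generic Python rather than in the paper's society-model form, is to assemble it with representatives from the other module populations and credit it with the fitness of the assembled system; this is just one of the evaluation methods the paper compares.

def evaluate_module(module, module_index, representatives, assemble, system_fitness):
    """Fitness of one module = fitness of the system built around it.

    representatives: one module chosen from every other population (hypothetical choice).
    assemble/system_fitness: problem-specific, e.g. build and test an RBF network.
    """
    parts = list(representatives)
    parts[module_index] = module          # plug the candidate into the team
    system = assemble(parts)
    return system_fitness(system)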

3:40 pm – 4:05 pm

"Inductive Genetic Programming of Polynomial Learning Networks"

Nikolay I. Nikolaev and Hitoshi Iba

Abstract

Learning networks have been empirically proven suitable for function approximation and regression. Our concern is finding well-performing polynomial learning networks by inductive Genetic Programming (iGP). The proposed iGP system evolves tree-structured networks of simple transfer polynomials in the hidden units. It discovers the relevant network topology for the task, and rapidly computes the network weights by a least-squares method. We implement evolutionary search guidance by an especially developed fitness function for controlling overfitting on the examples. This study reports that iGP with the novel fitness function has been successfully applied to benchmark time-series prediction and data mining tasks.
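
The two computational ingredients mentioned in the abstract, least-squares fitting of the weights and a fitness that penalises over-complex networks, can be illustrated in a few lines of Python. This is a generic sketch; the actual iGP fitness function is the paper's contribution and is not reproduced here.

import numpy as np

def fit_output_weights(hidden_outputs, targets):
    """Least-squares solution for the output weights of a polynomial unit."""
    w, *_ = np.linalg.lstsq(hidden_outputs, targets, rcond=None)
    return w

def penalised_fitness(predictions, targets, n_coefficients, alpha=0.01):
    """Error plus a complexity term to discourage overfitting (alpha is assumed)."""
    mse = float(np.mean((predictions - targets) ** 2))
    return mse + alpha * n_coefficients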

4:05 pm – 5:00 pm OPEN DISCUSSION

May 13, 2000, Saturday

9:00 am – 9:25 am

"Hierarchical Genetic Algorithm Based Multi-Layer Feedforward Neural Network Design"
Gary G. Yen and Haiming Lu

9:25 am – 9:50 am

"Coevolutionary Design of a Control System for Uncertain Nonlinear Plants"

Mihaela R. Cistelecan

Abstract

This paper proposes and analyses an alternative for autonomously developing control systems for nonlinear, uncertain plants. The proposed alternative uses a neural-like controller with a special architecture and also a specific developing algorithm. The developing algorithm requires intensive computation, but at the same time it has a high degree of autonomy. The developing algorithm estimates the parameters and the structure of the controller during an off-line stage. The off-line training uses a virtual plant model of the real plant. The controller obtained at the end of the off-line training can then be used on-line without any extra training. The developing algorithm is implemented as a multi-agent system in which the agents cooperate with one another. The location of the agents in the space where they move is established by the nodes of a uniform grid. For each agent the developing algorithm implements a local evolution and a temporal evolution. Through a coevolutionary algorithm, the local evolution estimates the best "segment" of the control function for the worst virtual plant model, for the partition where the agent works. Since in their movement the agents are guided by the trajectories that they induce, the developing algorithm implements a temporal evolution that estimates the best induced trajectory of the control system. We present two different implementations of the developing algorithm. The first implementation, based on global cooperation between agents, is performed through exchanging controller structures between agents. The second implementation, based on local cooperation between agents, is performed through exchanging individual wavelets between agents. Both implementations were designed for sequential computing architectures.

9:50 am – 10:15 am BREAK

10:15 am – 10:40 am

"Artificial Neural Development for Pulsed Neural Network Design û A Simulation Experiment on AnimatsÆ Cognitve Map Genesis"

Masayasu Atsumi

Abstract

By mimicking the neurogenesis process of living things, the design of artificial neural networks can be reduced to the design of artificial genomes with lowered dimensions, from which networks are generated automatically through genome decoding and neurergic regulation. We propose an artificial neural development method that generates a three-dimensional, multi-regional pulsed neural network arranged in three layers: the nerve area layer, the nerve sub-area layer, and the cell layer. The three-layered structure is introduced to model neural structure at a level above that of the individual neuron but below that of an overall neural system, for example modules such as brain regions and their substrata.
In our method, named NEUROGEN'2000, the neural development process consists of a first phase of genome-controlled spatiotemporal generation of a neural network structure and a second phase of spiking-activity-dependent regulation of that structure. In the first phase, by decoding a genome which represents the spatiotemporal structure of neurogenesis, 1) a nerve sub-area is generated in each nerve area and neurons are produced in it, 2) axonal outgrowth target sub-areas are recognized according to the attraction and repulsion rule, and 3) synapse formation is controlled under the topology-preserving projection rule between origin cells and target cells. In the second phase, for connection regulation among nerve sub-areas, 4) programmed cell death occurs to adjust the number of origin cells and target cells under the control of spiking activity and a neurotrophic factor, and then 5) synaptic efficacy is regulated according to the spike-based Hebbian rule, and weakened synapses are eliminated as a result of competition among presynapses over the target. For the design of genomes, a steady-state genetic algorithm is introduced and applied to initial genomes that are partially designed manually.
To evaluate our artificial neural development method, simulation experiments are conducted to generate a pulsed neural network for an animal-like robot (animat) which moves in an environment. It is known in rats that an assembly of place cells that fire at the same time expresses a place in the environment. It has also been shown that the place recognition circuit is a multi-regional circuit which contains the place cell area, the head-direction cell area, the outer-world sensory area, and so forth. For these reasons, we evolve and develop an animat's cognitive map as a multi-regional place recognition circuit that focuses on the place cell area. The Bayesian place reconstruction mapping from a spiking pattern of place cells to a place in the environment is used for the evaluation of the animat's place recognition, and the accuracy of this reconstruction is used as the fitness for evolution.
Through these experiments, we could generate a multi-regional place recognition circuit that showed place recognition features similar to those of rat place cells. It was found that the performance of place recognition depended on the composition of the cognitive map, and that well-performing cognitive maps could be designed by our artificial neural development method in combination with evolutionary computation. These results confirmed that NEUROGEN'2000 is useful for designing biologically realistic multi-regional pulsed neural networks for animats.

10:40 am – 11:05 am

"Comparison of Blind and Directed Evolutionary Selection of RBF Neural Networks"

Peter Andras and Andras Andras

11:05 am – 11:30 am

"Using a Clustering Genetic Algorithm for Rule Extraction from Artificial Neural Networks"

Eduardo R. Hruschka and Nelson F.F. Ebecken

Abstract

The main challenge to the use of supervised neural networks in data mining applications is to extract explicit knowledge from these models. For this purpose, a study on knowledge acquisition from supervised neural networks employed for classification problems is presented. The methodology is based on the clustering of the hidden units' activation values. A clustering genetic algorithm for rule extraction from neural networks is developed. A simple encoding scheme that yields constant-length chromosomes is used, thus allowing the application of the standard genetic operators. A consistent algorithm to avoid some of the drawbacks of this kind of representation is also developed. In addition, a very simple heuristic is applied to generate the initial population. The individual fitness is determined based on the Euclidean distances among the objects, as well as on the number of objects belonging to each cluster. The developed algorithm is experimentally evaluated on two data mining benchmarks: the Iris Plants database and the Pima Indians Diabetes database. The results are compared with those obtained by the Modified RX Algorithm, which is also an algorithm for rule extraction from neural networks.
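
As a rough illustration of the fitness criterion described above (Euclidean distances among objects plus cluster sizes), a constant-length chromosome can be decoded as one cluster label per object and scored, for example, as below; the exact weighting used in the paper is not reproduced.

import numpy as np

def clustering_fitness(chromosome, activations, min_size=2):
    """chromosome: cluster label for each hidden-activation vector (constant length).
    Lower is better: compact clusters, no undersized clusters."""
    labels = np.array(chromosome)
    score = 0.0
    for label in set(chromosome):
        members = activations[labels == label]
        if len(members) < min_size:
            score += 1e6                      # penalise near-empty clusters
            continue
        centroid = members.mean(axis=0)
        score += float(np.linalg.norm(members - centroid, axis=1).sum())
    return score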

 

11:30 am – 11:55 am

"The Multi-Tiered Tournament Selection for Evolutionary Neural Network Synthesis"

Devert Wicker, Mateen M. Rizki, and Louis A. Tamburino

Abstract

This paper introduces Multi-Tiered Tournament Selection. Traditional tournament selection algorithms are appropriate for single objective optimization problems but are too limited for the multi-objective task of evolving complete recognition systems. Recognition systems need to be accurate as well as small to improve generalization performance. Multi-tiered Tournament Selection is shown to improve search for smaller neural network recognition systems.

11:55 am – 1:30 pm LUNCH

1:30 pm – 1:55 pm

"Dynamic Modelling and Time-Series Prediction by Incremental Growth of Lateral Delay Neural Networks"

Lipton Chan and Yun Li

Abstract

The difficult problems of predicting chaotic time series and modelling chaotic systems are approached using an innovative neural network design. By combining evolutionary techniques with others, good results can be obtained swiftly via incremental network growing. The network architecture and training algorithm make the creation of dynamic models efficient and hassle-free. The networks' results accurately reflect the outputs of the chaotic systems being modelled and preserve the complex attractor structures of these systems.

1:55 pm – 2:20 pm

"Specifying Intrinsically Adaptive Architectures"

S.M. Lucas

Abstract

This paper describes a method for specifying (and evolving) intrinsically adaptive neural architectures.
These architectures have back-propagation-style gradient descent behavior built into them at a cellular level. The significance of this is that we can now use back-propagation to train evolved feed-forward networks of any structure (provided that the individual nodes are differentiable). Networks evolved in this way can potentially adapt to their environment in situ. This is in contrast to more conventional techniques such as using a genetic algorithm or simulated annealing to train the network. The method can be seamlessly integrated with any method for evolving neural network architectures. The performance of the method is investigated on the simple synthetic benchmarks of the parity and intertwined-spirals problems, and also on a 2D 'golf' simulation.
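
The overall scheme, evolving the structure while gradient descent handles the weights of each candidate, can be outlined as below. random_architecture(), vary(), backprop_train(), and evaluate() are placeholders, and the cellular-level construction described in the paper is not shown.

def evolve_then_backprop(random_architecture, vary, backprop_train, evaluate,
                         pop_size=20, generations=40):
    """Evolve feed-forward structures; each candidate is trained by gradient descent."""
    population = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        trained = [backprop_train(arch) for arch in population]    # in-situ adaptation
        scores = [evaluate(net) for net in trained]
        ranked = [a for _, a in sorted(zip(scores, population),
                                       key=lambda p: p[0], reverse=True)]
        elite = ranked[: pop_size // 2]
        population = elite + [vary(a) for a in elite]              # keep and vary the best
    return population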

2:20 pm – 2:45 pm

"Simple Evolving Connectionist Systems and Experiments on Isolated Phoneme Recognition"

Michael Watts and Nik Kasabov

Abstract

Evolving connectionist systems (ECoS) are systems that evolve their structure through on-line, adaptive learning from incoming data. This paradigm complements the paradigm of evolutionary computation, which is based on population-based search and on the optimisation of individual systems through generations of populations. The paper presents the theory and the architecture of a simple evolving system called SECoS that evolves through one-pass learning from incoming data. A case study of multi-modular SECoS systems evolved from a database of New Zealand English phonemes is used as an illustration of the method.

 

2:45 pm – 3:10 pm

"Evolution of Recurrent Cascade Correlation Networks with a Distributed Collaborative Species"

N.A. Ghada and A.S. Mohamed

Abstract

Evolutionary Artificial Neural Networks (EANN) is a young research area. The vast research and experimental work on using EANN to evolve neural networks has achieved many successes, yet it has also revealed some limitations. Aiming at boosting EANN speed and improving its performance, a new Cooperative Co-evolution approach is introduced. This new approach introduces modularity into the evolutionary algorithm by utilizing a divide-and-conquer technique. Instead of one evolutionary algorithm that attempts to solve the whole problem, species, representing simpler subtasks, are evolved as separate instances of an evolutionary algorithm. Collaborations among species are formed to represent the complete solution.

The primary goal of this research is to investigate the performance of a distributed version of the collaborative species approach when discrete time steps are introduced into the problem, by applying the approach to the evolution of recurrent cascade correlation networks. A research tool is designed and implemented to simulate the evolution and the running of the recurrent neural network. Results are presented in which the Distributed Cooperative Coevolutionary Genetic Algorithm (DCCGA) produced higher quality solutions in fewer evolutionary cycles when compared to the standard Genetic Algorithm (GA). The performance of the two algorithms is analyzed and compared in two tasks: learning to recognize characters of the Morse code and learning a finite state grammar from examples.

 

3:10 pm – 3:45 pm BREAK

3:45 pm – 5:00 pm Panel and Discussion

END OF SYMPOSIUM