Abstract for AISB 2000: How to Design a Functioning Mind
AUTHOR: Andres Perez-Uribe, Computer Science Institute
University of Fribourg, Switzerland
TITLE: Of implementing neural epigenesis, reinforcement learning,
and mental rehearsal in a mobile autonomous robot
Based on the hypothesis that the physical matter underlying the
mind is not at all special, and that what is special is how it is
organized [Edelman92], one comes to the idea of building or
simulating systems with functional capacities similar to those
observed in nervous systems and brains to try to understand the mind.
From a biological point of view, it has been determined that the
genome contains the formation rules that specify the outline of
the nervous system. Nevertheless, there is growing evidence that
nervous systems undergo environmentally-guided neural circuit
building (neural epigenesis) [SipperetAl97], which increases their
learning flexibility and eliminates the heavy burden that
nativism places on genetic mechanisms [QuartzSejnowski97]. The
seminal work of the Nobel laureates D.H. Hubel and T.N. Wiesel on
the brain's mechanism of vision [HubelWiesel79] describes a prime
example of the role of experience in the formation of the visual system.
The nervous system of living organisms thus represents a mixture
of the innate and the acquired: "... the model of the world
emerging during ontogeny is governed by innate predispositions of
the brain to categorize and integrate the sensory world in
certain ways. [However], the particular computational world model
derived by a given individual is a function of the sensory
exposure he is subjected to..." [LLinasPare91].
Categorization, i.e., the process by which distinct entities are
treated as equivalent, is considered one of the most fundamental
cognitive activities because categorization allows us to
understand and make predictions about objects and events in our
world. In humans, for instance, categorization is essential for
handling the constantly changing activation of around 10^8
photo-receptors in each eye [Dietterich99]. Computational models
of adaptive categorization have been developed and tested with
success, and have been used to explain some sensory and cognitive
processes in the brain such as perception, recognition,
attention, working memory, etc. [Grossberg98a]. However, other
types of learning, such as reinforcement learning, seem to govern
spatial and motor skill acquisition [Barto97].
While in the former case only resonant states can drive new
learning (i.e., when the current inputs sufficiently match the
system's expectations) [Grossberg98a], in the latter ``learning
is driven by changes in the expectations about future salient
events such as rewards and punishments'' [Schultz97].
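The resonance-gated mode of learning can be sketched as a simple adaptive clustering loop: a category prototype is updated only when the current input matches it closely enough (a "resonant" state), and a new category unit is recruited otherwise. This is a minimal illustration only; the `radius` and `lr` parameters and the Euclidean match criterion are assumptions, not the actual FAST formulation:

```python
import numpy as np

def resonance_cluster(inputs, radius=0.4, lr=0.2):
    """Sketch of resonance-gated adaptive clustering: a prototype is
    refined only when the input falls close enough to it (resonance);
    on a mismatch, a new category unit is recruited on the fly."""
    prototypes = []       # one prototype vector per category
    assignments = []      # category index chosen for each input
    for x in inputs:
        x = np.asarray(x, dtype=float)
        if prototypes:
            d = [np.linalg.norm(x - p) for p in prototypes]
            k = int(np.argmin(d))
            if d[k] <= radius:                    # resonant match
                prototypes[k] += lr * (x - prototypes[k])
                assignments.append(k)
                continue
        prototypes.append(x.copy())               # grow a new category
        assignments.append(len(prototypes) - 1)
    return prototypes, assignments
```

Because new units are only created on mismatch, the number of categories adapts to the structure of the sensory stream instead of being fixed in advance.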
We have developed an artificial neural network architecture based
on the above premises: environmentally-guided neural circuit
building for unsupervised adaptive clustering and trial-and-error
learning of behaviors. First, a learning algorithm called FAST
for Flexible Adaptable-Size Topology [PerezSanchez96] was
developed to handle the problem of dynamically categorizing the
inputs from the robot's three 8-bit infra-red "eyes" (which
correspond to 24 binary receptors); no external supervisor provides
the desired
outputs. Second, a trial-and-error learning process coupled with
punishment and reward signals [SuttonBarto98] was used to allow the
robot to generate behavioral responses as a function of
its sensations. Third, a model of the environment is dynamically
created to improve the interaction with the actual environment
[Sutton90]. The system alternately operates on the environment
and on the learned model of the environment by a process of mental rehearsal.
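The coupling of trial-and-error learning with rehearsal on a learned model follows the Dyna scheme of [Sutton90]. A minimal tabular sketch is given below; the environment interface `env_step` and all parameter values are illustrative assumptions, not the robot implementation:

```python
import random

def dyna_q(env_step, n_states, n_actions, episodes=60, rehearsals=10,
           alpha=0.1, gamma=0.95, eps=0.1):
    """Dyna-style learning: update Q-values from real experience,
    record that experience in a model, then 'mentally rehearse' by
    replaying model samples between real steps.
    env_step(s, a) -> (next_state, reward, done) is assumed."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                  # (s, a) -> (s', r)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda i: Q[s][i]))
            s2, r, done = env_step(s, a)        # act on the real world
            # TD update driven by the reward-prediction error
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            model[(s, a)] = (s2, r)             # learn the world model
            for _ in range(rehearsals):         # mental rehearsal
                (ms, ma), (ms2, mr) = random.choice(list(model.items()))
                Q[ms][ma] += alpha * (mr + gamma * max(Q[ms2]) - Q[ms][ma])
            s = s2
    return Q
```

Each real step thus funds several simulated steps, which is why planning on the learned model accelerates learning in the actual environment.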
Finally, we have combined the capabilities of the incremental
learning FAST neural architecture with reinforcement learning
techniques and planning to learn an obstacle avoidance task with
a Khepera [MondadaEtAl93] autonomous mobile robot
[PerezSanchez99,Perez99]. This architecture and its
implementation using programmable hardware devices [Sanchez96]
may be viewed as a first step towards the development of more
complex neurocontrollers implementing many diverse cooperating
brain-like structures. Indeed, the implementation of the learning
paradigms presented above should enable us to conceive of a new kind
of machine, in which learning by example and interaction effectively
replace programming (without needing to emulate such principles on a
programmable computing machine). In machines of this kind, the
learning algorithm would emerge from the dynamics of the
interconnections between processing elements, which may be the key
to realizing a system endowed with "semantics" (i.e., a system
capable of associating a meaning to the symbols it uses for
computing) [Searle80,Searle90], and not merely with "syntax", as is
the case with our current computing machines.
[Edelman92] G. Edelman.
"Bright Air, Brilliant Fire", Basic Books,
New York, 1992
[SipperetAl97] M. Sipper, E. Sanchez, D. Mange, M. Tomassini,
A. Perez-Uribe and A. Stauffer.
"A Phylogenetic, Ontogenetic, and Epigenetic View of Bio-Inspired
Hardware Systems", IEEE Transactions on Evolutionary Computation,
Vol 1, Number 1, April 1997, pp. 83-97.
[QuartzSejnowski97] R. Quartz and T.J. Sejnowski.
"The neural basis of cognitive development: A constructivism
manifesto.", Behavioral and Brain Sciences, 20(4):537+, December 1997.
[HubelWiesel79] D.H. Hubel and T.N. Wiesel.
"Brain Mechanisms of Vision, Scientific American, 241(1), 1979.
[LLinasPare91] R.R. LLinas and D. Pare.
"Of Dreaming and Wakefulness", Neuroscience 44(3), 521-535, 1991.
[Dietterich99] T. Dietterich.
"Machine learning", The MIT Encyclopedia of the Cognitive Sciences, pages
497--498. The MIT Press, 1999.
[Grossberg98a] S. Grossberg.
"The Link between Brain Learning, Attention, and Conciousness.
Technical Report CAS/CNS-TR-97-018, Department of Cognitive and
Neural Systems, Boston University, June 1998 (also in Conciousness
and Cognition, 8, 1, 1999).
[Barto97] A.G. Barto.
"Reinforcement learning in motor control". In M.A. Arbib, editor,
Handbook of Brain Theory and Neural Networks, pages 809--812.
MIT Press, 1995.
[Schultz97] W. Schultz, P. Dayan, and P. Read Montague.
"A Neural Substrate of Prediction and Reward", Science, 275:1593--1599,
14 March 1997.
[SuttonBarto98] R.S. Sutton and A.G. Barto.
"Reinforcement Learning: An Introduction", The MIT Press, 1998.
[Sutton90] R.S. Sutton.
"Integrated Architectures for Learning, Planning, and Reacting Based
on Approximating Dynamic Programming". In Proceedings of the Seventh
International Conference on Machine Learning, pages 216--224.
Morgan Kaufmann, 1990.
[PerezSanchez96] A. Perez-Uribe and E. Sanchez.
"The FAST Architecture: A Neural Network with Flexible Adaptable-Size
Topology". In Proceedings of the Fourth International Conference on
Microelectronics for Neural Networks and Fuzzy Systems Microneuro'96, pages
337--340, Lausanne, Switzerland, February 1996. IEEE Press.
[PerezSanchez99] A. Perez-Uribe and E. Sanchez.
"A Digital Artificial Brain Architecture for Mobile Autonomous
Robots". In M. Sugisaka and H. Tanaka, editors, Proceedings of the Fourth
International Symposium on Artificial Life and Robotics AROB'99, pages
240--243, Oita, Japan, 1999.
[Perez99] A. Perez-Uribe.
"Structure-Adaptable Digital Neural Networks", PhD Thesis 2052,
Swiss Federal Institute of Technology-Lausanne, Switzerland, 1999.
[MondadaEtAl93] F. Mondada, E. Franzi, and P. Ienne.
"Mobile Robot Miniaturization: A Tool for Investigation in Control
Algorithms". In Proceedings of the Third International Symposium on
Experimental Robotics, Kyoto, Japan, 1993.
[Sanchez96] E. Sanchez.
"Field Programmable Gate Array (FPGA) circuits", In Towards Evolvable
Hardware, E. Sanchez and M. Tomassini (Eds.), Springer-Verlag 1996, pages 1-18.
[Searle80] J. Searle.
"Minds, Brains, and Programs", Behavioral and Brain Sciences, 3:417-424,
[Searle90] J. Searle.
"Is the Brain's Mind a Computer Program?", Scientific American, 262:26-31,
Andres Perez-Uribe received a diploma from the Universidad del Valle,
Cali, Colombia, in 1993. From 1994 to September 1999, he was with the
Logic Systems Laboratory at the Swiss Federal Institute of
Technology-Lausanne, working on the digital implementation of neural
networks with adaptable topologies, in collaboration with the Centre
Suisse d'Electronique et de Microtechnique SA (CSEM). He is currently
a postdoctoral fellow at the Parallelism and Artificial Intelligence
Group of the Computer Science Institute of the University of Fribourg
in Switzerland. His main subjects of interest are: Collective
Intelligence, Multi-Agent Learning Systems, Autonomous Robots,
Reinforcement Learning, Artificial Neural Networks, and Evolutionary
Computation. Selected publications:
- A. Perez-Uribe, "Structure-Adaptable Digital neural networks",
PhD thesis 2052, Swiss Federal Institute of Technology-Lausanne,
- M. Sipper, E. Sanchez, D. Mange, M. Tomassini, A. Perez-Uribe and
A. Stauffer, "A Phylogenetic, Ontogenetic, and Epigenetic View of
Bio-Inspired Hardware Systems", IEEE Transactions on Evolutionary
Computation, Vol 1, Number 1, April 1997, pp. 83-97.
- ``Evolvable Systems: From Biology to Hardware''. Lecture Notes in
Computer Science 1478, D. Mange, M. Sipper and A. Perez-Uribe (Eds.),
Springer-Verlag, 1998.
- A. Perez-Uribe, ``Artificial Neural Networks: Algorithms and Hardware
Implementation'' in Bio-Inspired Computing Machines: Toward Novel
Computational Architectures, D. Mange and M. Tomassini, Eds., Presses
Polytechniques et Universitaires Romandes, Lausanne, Switzerland,
1998, pp. 289-316.
- A. Perez-Uribe, E. Sanchez, ``Structure Adaptation in Artificial
Neural Networks through Adaptive Clustering and through Growth in
State Space'', Proceedings of the International Work-Conference on
Artificial and Natural Neural Networks IWANN'99, Lecture Notes in
Computer Science 1606, Alicante, June 2-4, 1999, vol. 1, pp. 556-565.
- A. Perez-Uribe, E. Sanchez, ``A Digital Brain Architecture for
Mobile Autonomous Robots'', Proceedings of the Fourth International
Symposium on Artificial Life and Robotics AROB'99, Oita, Japan,
January 1999, pp. 240-243.