Embodied Evolution of Learning Ability

Created by W.Langdon from gp-bibliography.bib Revision:1.3872

@PhdThesis{Elfwing:thesis,
  author =       "Stefan Elfwing",
  title =        "Embodied Evolution of Learning Ability",
  school =       "KTH School of Computer Science and Communication",
  year =         "2007",
  type =         "Doctoral Thesis",
  address =      "SE-100 44 Stockholm, Sweden",
  month =        nov,
  keywords =     "genetic algorithms, genetic programming, Embodied
                  Evolution, Evolutionary Robotics, Reinforcement
                  Learning, Shaping Rewards, Meta-parameters,
                  Hierarchical Reinforcement Learning, Learning and
                  Evolution, Meta-learning, Baldwin Effect, Lamarckian
                  Evolution",
  URL =          "http://www.irp.oist.jp/nc/elfwing/Elfwing_thesis_final_electronic.pdf",
  size =         "162 pages",
  isbn13 =       "978-91-7178-787-3",
  abstract =     "Embodied evolution is a methodology for evolutionary
                 robotics that mimics the distributed, asynchronous, and
                 autonomous properties of biological evolution. The
                 evaluation, selection, and reproduction are carried out
                  through cooperation and competition among the robots,
                  without any need for human intervention. An embodied
                  evolution framework is therefore well suited to studying
                  the adaptive learning mechanisms of artificial agents that
                 share the same fundamental constraints as biological
                 agents: self-preservation and self-reproduction.

                 The main goal of the research in this thesis has been
                 to develop a framework for performing embodied
                 evolution with a limited number of robots, by using
                 time-sharing of subpopulations of virtual agents inside
                 each robot. The framework integrates reproduction as a
                  directed autonomous behaviour, and allows basic
                  behaviours for survival to be learned by reinforcement
                  learning. The purpose of the evolution is to evolve the
                  learning ability of the agents by optimising
                  meta-properties of reinforcement learning, such as the
                  selection of basic behaviours, meta-parameters that
                  modulate the efficiency of the learning, and additional,
                  richer reward signals, in the form of shaping rewards,
                  that guide the learning. The realisation of the
                 embodied evolution framework has been a cumulative
                 research process in three steps: 1) investigation of
                 the learning of a cooperative mating behaviour for
                 directed autonomous reproduction; 2) development of an
                 embodied evolution framework, in which the selection of
                 pre-learned basic behaviours and the optimisation of
                 battery recharging are evolved; and 3) development of
                 an embodied evolution framework that includes
                  meta-learning of basic reinforcement learning behaviours
                 for survival, and in which the individuals are
                 evaluated by an implicit and biologically inspired
                 fitness function that promotes reproductive ability.
                 The proposed embodied evolution methods have been
                 validated in a simulation environment of the Cyber
                 Rodent robot, a robotic platform developed for embodied
                 evolution purposes. The evolutionarily obtained
                 solutions have also been transferred to the real
                 robotic platform.

                 The evolutionary approach to meta-learning has also
                  been applied to the automatic design of task hierarchies
                  in hierarchical reinforcement learning, and to the
                  co-evolution of meta-parameters and potential-based
                  shaping rewards to accelerate reinforcement learning,
                  both in finding initial solutions and in converging to
                  robust policies.",
  notes =        "TRITA-CSC-A 2007:16 ISSN-1653-5723
                 ISRN-KTH/CSC/A--07/16--SE

                  Academic dissertation which, with the permission of
                  Kungliga Tekniska högskolan, is presented for public
                  examination for the degree of Doctor of Technology on
                  Monday 12 November 2007 at 10:00 in lecture hall F3,
                  Lindstedtsvägen 26, Kungliga Tekniska högskolan,
                  Stockholm. Stefan Elfwing, 2007. Printed by
                  Universitetsservice US AB",
}
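
The abstract's second paragraph describes evolving meta-parameters that modulate the efficiency of reinforcement learning. Purely as an illustration of that idea, and not as the Cyber Rodent framework or any algorithm from the thesis, the following self-contained Python sketch evolves the learning rate alpha, the discount factor gamma, and the softmax temperature tau of a tabular Sarsa learner on a toy chain-walk task, using lifetime reward as the fitness of each meta-parameter genome; the task, the parameter ranges, and the genetic-algorithm settings are hypothetical choices made only for the example.

  import math
  import random

  # Illustrative sketch: evolving RL meta-parameters (alpha, gamma, tau).
  # The chain-walk task and all settings below are hypothetical examples.

  N_STATES, GOAL = 10, 9            # states 0..9; reward on reaching state 9
  ACTIONS = (-1, +1)                # step left or right along the chain

  def run_episode(q, alpha, gamma, tau, max_steps=50):
      """Run one Sarsa episode with softmax exploration; return its total reward."""
      def softmax_action(s):
          prefs = [math.exp(q[s][a] / tau) for a in range(len(ACTIONS))]
          z, threshold, acc = sum(prefs), random.random(), 0.0
          for a, p in enumerate(prefs):
              acc += p / z
              if threshold <= acc:
                  return a
          return len(ACTIONS) - 1
      s, total = 0, 0.0
      a = softmax_action(s)
      for _ in range(max_steps):
          s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
          r = 1.0 if s2 == GOAL else -0.01
          total += r
          a2 = softmax_action(s2)
          q[s][a] += alpha * (r + gamma * q[s2][a2] - q[s][a])  # Sarsa update
          s, a = s2, a2
          if s == GOAL:
              break
      return total

  def fitness(genome, episodes=30):
      """Lifetime reward of a fresh learner whose meta-parameters come from the genome."""
      alpha, gamma, tau = genome
      q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]
      return sum(run_episode(q, alpha, gamma, tau) for _ in range(episodes))

  def evolve(pop_size=20, generations=15):
      """Simple truncation-selection GA over (alpha, gamma, tau) genomes."""
      def clamp(x, lo, hi):
          return min(max(x, lo), hi)
      def mutate(genome):
          a, g, t = (x + random.gauss(0.0, 0.05) for x in genome)
          return (clamp(a, 0.01, 1.0), clamp(g, 0.5, 0.999), clamp(t, 0.05, 2.0))
      pop = [(random.uniform(0.01, 1.0),    # alpha: learning rate
              random.uniform(0.5, 0.999),   # gamma: discount factor
              random.uniform(0.05, 2.0))    # tau:   softmax exploration temperature
             for _ in range(pop_size)]
      for _ in range(generations):
          parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
          children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
          pop = parents + children
      return max(pop, key=fitness)

  if __name__ == "__main__":
      print("best (alpha, gamma, tau):", evolve())

Because fitness is measured over the learner's whole lifetime, genomes whose meta-parameters let the agent learn quickly accumulate more reward, which is the sense in which evolution here selects for learning ability rather than for a fixed policy.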
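
The final paragraph mentions co-evolving potential-based shaping rewards. For reference, the standard potential-based formulation due to Ng, Harada and Russell (1999), to which the term refers, adds to the environment reward r a term derived from a state potential Phi; in LaTeX:

  \[
    F(s, s') = \gamma \, \Phi(s') - \Phi(s),
    \qquad
    \tilde{r}(s, a, s') = r(s, a, s') + F(s, s')
  \]

Because F is a telescoping difference of potentials, adding it to the reward leaves the optimal policies of the underlying Markov decision process unchanged, so a well-chosen (here, evolved) potential can accelerate learning without altering what is ultimately learned; the particular potential functions used in the thesis are not reproduced here.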

Genetic Programming entries for Stefan Elfwing
