Evolution-based discovery of hierarchical behaviors

@InProceedings{rosca:1996:edhb,
  author =       "J. Rosca and D. H. Ballard",
  title =        "Evolution-based discovery of hierarchical behaviors",
  booktitle =    "Proceedings of the Thirteenth National Conference on
                 Artificial Intelligence (AAAI-96)",
  year =         "1996",
  publisher =    "AAAI / The MIT Press",
  keywords =     "genetic algorithms, genetic programming",
  URL =          "ftp://ftp.cs.rochester.edu/pub/u/rosca/gp/96.aaai.ps.gz",
  URL =          "https://www.aaai.org/Papers/AAAI/1996/aaai96-132.php",
  size =         "7 pages",
  abstract =     "The complexity of policy learning in a reinforcement
                  learning task grows primarily with the number of
                  observations. Unfortunately, the number of
                  observations may be unacceptably high even for simple
                  problems. In order to cope with this scale-up problem
                  we adopt procedural representations of policies.
                  Procedural representations have two advantages.
                  First, they are implicit, allowing for good inductive
                  generalization over a very large set of input states.
                  Second, they facilitate modularization. In this paper
                  we compare several randomized algorithms for learning
                  modular procedural representations. The main
                  algorithm, called Adaptive Representation through
                  Learning (ARL), is a genetic programming extension
                  that relies on the discovery of subroutines. ARL is
                  suitable for learning hierarchies of subroutines and
                  for constructing policies for complex tasks. When the
                  learning problem cannot be solved because the
                  specification is too loose and the domain is not well
                  understood, ARL will discover regularities in the
                  problem environment in the form of subroutines, which
                  often make the problem easier to solve. ARL was
                  successfully tested on a typical reinforcement
                  learning problem of controlling an agent in a dynamic
                  and non-deterministic environment, where the
                  discovered subroutines correspond to agent
                  behaviors.",
  notes =        "

                 Pac-Man ARL {"}differential fitness and block
                 activation heuristics{"} {"}two distinct tiers{"}
                 typed, random search, simulated annealing (PSA),
                 hand-coding, See also \cite{rosca:1996:video}",
}
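
The two heuristics named in the notes can be illustrated concretely: differential fitness (a large fitness jump from parent to child flags the child as carrying a useful building block) and block activation (the most frequently executed subtrees are the salient candidates). The following Python is a minimal sketch under stated assumptions: a tuple-based tree representation, mutation-only search, a made-up fitness task, and arbitrary thresholds. It is not the authors' ARL implementation, and all identifiers are hypothetical.

# Illustrative sketch of the ARL idea: evolve expression trees and,
# when a mutation yields a large fitness jump over its parent
# (differential fitness), promote the child's most frequently
# executed block (block activation) to a named subroutine that
# enlarges the primitive set for later programs.
import random

TERMINALS = ['x', 'y']
FUNCTIONS = {'add': 2, 'mul': 2}   # primitive set; grows over time
SUBROUTINES = {}                   # discovered name -> body (subtree)

def random_tree(depth=3):
    """Grow a random expression tree over the current primitive set."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    name = random.choice(list(FUNCTIONS))
    return (name,) + tuple(random_tree(depth - 1) for _ in range(FUNCTIONS[name]))

def evaluate(tree, env, counts):
    """Interpret a tree, recording how often each block is executed."""
    if isinstance(tree, str):
        return env[tree]
    counts[tree] = counts.get(tree, 0) + 1
    name = tree[0]
    vals = [evaluate(arg, env, counts) for arg in tree[1:]]
    if name in SUBROUTINES:        # call a discovered subroutine
        return evaluate(SUBROUTINES[name], dict(zip(TERMINALS, vals)), counts)
    return vals[0] + vals[1] if name == 'add' else vals[0] * vals[1]

def fitness(tree, counts):
    """Toy target (an assumption for this sketch): approximate x*y + x."""
    err = sum(abs(evaluate(tree, {'x': x, 'y': y}, counts) - (x * y + x))
              for x in range(1, 4) for y in range(1, 4))
    return -err

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(2)
    args = list(tree[1:])
    i = random.randrange(len(args))
    args[i] = mutate(args[i])
    return (tree[0],) + tuple(args)

def blocks(tree):
    """Yield every non-terminal subtree (candidate block)."""
    if isinstance(tree, tuple):
        yield tree
        for arg in tree[1:]:
            yield from blocks(arg)

pop = [random_tree() for _ in range(50)]
for gen in range(30):
    scored = []
    for parent in pop:
        child = mutate(parent)
        pc, cc = {}, {}
        fp, fc = fitness(parent, pc), fitness(child, cc)
        # Differential fitness heuristic: a big parent-to-child
        # improvement suggests the child holds a useful block.
        if fc - fp > 5 and len(SUBROUTINES) < 3:
            cand = list(blocks(child))
            if cand:
                # Block activation heuristic: keep the block that was
                # executed most often during fitness evaluation.
                body = max(cand, key=lambda t: cc.get(t, 0))
                name = 'sub%d' % len(SUBROUTINES)
                SUBROUTINES[name] = body
                FUNCTIONS[name] = 2    # arity fixed at 2 in this sketch
        scored.append((fc, child) if fc > fp else (fp, parent))
    scored.sort(key=lambda s: s[0], reverse=True)
    pop = [t for _, t in scored[:25]] * 2   # truncation selection

print('discovered subroutines:', list(SUBROUTINES))

Because discovered subroutines enter FUNCTIONS, later random trees and mutations can call them, and a subroutine body may itself call earlier subroutines, which is a simple stand-in for the hierarchies of subroutines the abstract describes.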

Genetic Programming entries for Justinian Rosca, Dana H Ballard
