Finding Methods for Evolving Competent Agents in Multiple Domains

Created by W.Langdon from gp-bibliography.bib Revision:1.3949

@PhdThesis{BenbassatDissertation,
  author =       "Amit Benbassat",
  title =        "Finding Methods for Evolving Competent Agents in
                 Multiple Domains",
  school =       "Ben-Gurion University of the Negev",
  year =         "2014",
  address =      "Israel",
  month =        sep,
  keywords =     "genetic algorithms, genetic programming, MCTS",
  URL =          "https://dl.dropboxusercontent.com/u/36726425/ThesisFinalSubmissionWithTitle.pdf",
  size =         "124 pages",
  abstract =     "We present the application of genetic programming (GP)
                 to search in zero-sum, deterministic, full-knowledge
                 board games. We use multiple board games and multiple
                 search algorithms as test cases in order to exhibit the
                 flexibility of our system. We conduct experiments
                 evolving players for variants of Checkers, Reversi,
                 Dodgem, Nine Men's Morris, and Hex, in conjunction
                 with the Alpha-Beta and Monte Carlo Tree Search
                 (MCTS) algorithms.

                 Throughout our research we rely on modern neo-Darwinian
                 theory, specifically the gene-centred view of evolution,
                 to guide the design of our setup. Our evolutionary
                 system implements strongly typed GP trees, explicitly
                 defined introns, various mutation operators, a novel
                 selective crossover operator, and multi-tree
                 individuals.

                 Explicitly defined introns in the genome allow for
                 information selected out of the population to be kept
                 as a reserve for possible future use. Selective genetic
                 operators allow us to apply additional selection
                 pressure during the procreation stage. Multi-tree
                 individuals allow us to evolve software components that
                 can be integrated into existing search algorithms,
                 where they improve the level of play over hand-crafted
                 baseline players.

                 Our results demonstrate a clear improvement in play
                 level for every game, showing that GP is applicable
                 to evolving search in board games. Results show
                 differing levels of scalability, with the best
                 scalability obtained when using the MCTS algorithm.
                 We also present our EvoMCTS system, designed as a
                 scalable, easy-to-use, quick-learning tool for
                 improving the level of play in games without the need
                 for any expert domain knowledge.

                 Pursuing the goal of general game playing (GGP), we
                 present a system that can serve as a stepping stone on
                 the way to general game learning (GGL), in which a
                 system can learn a game upon receiving its rule set,
                 and the human developer can improve the resulting
                 players by supplying the learning system with relevant
                 information about the game.",
  notes =        "Supervisor: Moshe Sipper",
}
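The abstract describes evolving evaluation components (multi-tree GP individuals) that plug into fixed search algorithms such as Alpha-Beta. The sketch below is an illustration of that general idea only, not code from the thesis: the Node/Individual representation, the per-phase reading of "multi-tree individuals", and the game-state interface (phase, features, legal_moves, apply, is_terminal) are all hypothetical assumptions made for the example.

```python
# Illustrative sketch only; all names and interfaces are assumptions,
# not the thesis's actual system.

# A GP tree node: either a terminal (a weighted board feature) or a
# function node (here, the sum of its subtrees).
class Node:
    def __init__(self, op=None, children=(), feature=None, weight=1.0):
        self.op = op                # None marks a terminal
        self.children = children
        self.feature = feature      # index into the feature vector
        self.weight = weight

    def eval(self, features):
        if self.op is None:
            return self.weight * features[self.feature]
        return sum(c.eval(features) for c in self.children)

# One possible multi-tree individual: a separate evaluation tree per
# game phase (e.g. opening / midgame / endgame).
class Individual:
    def __init__(self, trees):
        self.trees = trees

    def evaluate(self, state):
        return self.trees[state.phase()].eval(state.features())

# Plain Alpha-Beta search that defers to the evolved evaluator at the
# leaves; the search algorithm stays fixed, only the evaluation
# component is evolved.
def alpha_beta(state, depth, alpha, beta, maximizing, evaluator):
    if depth == 0 or state.is_terminal():
        return evaluator.evaluate(state)
    if maximizing:
        value = float('-inf')
        for move in state.legal_moves():
            value = max(value, alpha_beta(state.apply(move), depth - 1,
                                          alpha, beta, False, evaluator))
            alpha = max(alpha, value)
            if alpha >= beta:
                break
        return value
    value = float('inf')
    for move in state.legal_moves():
        value = min(value, alpha_beta(state.apply(move), depth - 1,
                                      alpha, beta, True, evaluator))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

if __name__ == "__main__":
    # Tiny stand-in game state so the sketch runs end to end.
    class DummyState:
        def __init__(self, depth_left=2):
            self.depth_left = depth_left
        def phase(self):
            return 0
        def features(self):
            return [1.0, -0.5, 2.0]
        def is_terminal(self):
            return self.depth_left == 0
        def legal_moves(self):
            return [0, 1]
        def apply(self, move):
            return DummyState(self.depth_left - 1)

    evaluator = Individual([Node(feature=0, weight=0.7),
                            Node(feature=1), Node(feature=2)])
    print(alpha_beta(DummyState(), 2, float('-inf'), float('inf'),
                     True, evaluator))
```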

Genetic Programming entries for Amit Benbassat
