Wave: Incremental Erosion of Residual Error

@InProceedings{Medernach:2015:GECCOcomp,
  author =       "David Medernach and Jeannie Fitzgerald and 
                 R. Muhammad Atif Azad and Conor Ryan",
  title =        "Wave: Incremental Erosion of Residual Error",
  booktitle =    "GECCO 2015 Semantic Methods in Genetic Programming
                 (SMGP'15) Workshop",
  year =         "2015",
  editor =       "Colin Johnson and Krzysztof Krawiec and 
                 Alberto Moraglio and Michael O'Neill",
  isbn13 =       "978-1-4503-3488-4",
  keywords =     "genetic algorithms, genetic programming, Semantic
                  Methods in Genetic Programming (SMGP'15) Workshop",
  pages =        "1285--1292",
  month =        "11-15 " # jul,
  organisation = "SIGEVO",
  address =      "Madrid, Spain",
  URL =          "http://doi.acm.org/10.1145/2739482.2768503",
  DOI =          "10.1145/2739482.2768503",
  publisher =    "ACM",
  publisher_address = "New York, NY, USA",
  abstract =     "Typically, Genetic Programming (GP) attempts to solve
                 a problem by evolving solutions over a large, and
                 usually pre-determined number of generations. However,
                 overwhelming evidence shows that not only does the rate
                 of performance improvement drop considerably after a
                 few early generations, but that further improvement
                 also comes at a considerable cost (bloat). Furthermore,
                  each simulation (a GP run) is typically independent
                 yet homogeneous: it does not re-use solutions from a
                 previous run and retains the same experimental
                 settings.

                 Some recent research on symbolic regression divides
                 work across GP runs where the subsequent runs optimise
                 the residuals from a previous run and thus produce a
                 cumulative solution; however, all such subsequent runs
                 (or iterations) still remain homogeneous thus using a
                 pre-set, large number of generations (50 or more). This
                  work introduces Wave, a divide-and-conquer approach to
                 GP whereby a sequence of short but sharp, and dependent
                 yet potentially heterogeneous GP runs provides a
                 collective solution; the sequence is akin to a wave
                 such that each member of the sequence (that is, a short
                 GP run) is a period of the wave. Heterogeneity across
                 periods results from varying settings of system
                 parameters, such as population size or number of
                 generations, and also by alternating use of the popular
                 GP technique known as linear scaling.

                  The results show that Wave trains faster and better
                  than both standard GP and multiple linear regression;
                  that it can prolong discovery through constant
                  restarts (which, as a side effect, also reduces
                  bloat); that it can innovatively leverage a learning
                  aid, namely linear scaling, at various stages instead
                  of using it constantly regardless of whether it
                  helps; and that it performs reasonably even with a
                  tiny population size (25), which bodes well for
                  real-time or data-intensive training.",
  notes =        "Also known as \cite{2768503}. Distributed at
                  GECCO-2015.",
}
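
As a rough illustration of the approach the abstract describes, below is a
minimal Python sketch of Wave-style incremental erosion of residual error.
It is an assumption-laden reading of the abstract, not the authors' system:
run_period, linear_scale, and the period schedule are hypothetical names and
values, a tiny random search over polynomials stands in for each short GP
run, and linear scaling is applied to a finished period's output rather than
inside fitness evaluation as a GP system would.

    import numpy as np

    rng = np.random.default_rng(0)

    def run_period(x, residual, generations):
        # Stand-in for one short GP run: random search over small
        # polynomials. The real Wave evolves GP trees; this keeps the
        # sketch self-contained and runnable.
        best_c = np.zeros(3)                 # start from "add nothing",
        best_err = np.mean(residual ** 2)    # so a period never hurts
        for _ in range(generations * 20):
            c = rng.normal(size=3)           # candidate a + b*x + c*x^2
            err = np.mean((residual - (c[0] + c[1]*x + c[2]*x**2)) ** 2)
            if err < best_err:
                best_c, best_err = c, err
        c = best_c
        return lambda x: c[0] + c[1]*x + c[2]*x**2

    def linear_scale(pred, target):
        # Ordinary least-squares fit of target ~ a + b*pred
        # (Keijzer-style linear scaling, applied here post hoc).
        p0 = pred - pred.mean()
        b = np.mean(p0 * (target - target.mean())) / (np.mean(p0**2) + 1e-12)
        a = target.mean() - b * pred.mean()
        return a, b

    # Toy regression problem.
    x = np.linspace(-1.0, 1.0, 200)
    y = np.sin(3.0 * x) + 0.5 * x

    # Heterogeneous schedule: periods differ in length and in whether
    # linear scaling is used, mirroring the abstract; the numbers are
    # invented for the example.
    schedule = [(5, True), (3, False), (8, True), (4, True)]

    cumulative = np.zeros_like(y)
    residual = y.copy()
    for generations, use_scaling in schedule:
        model = run_period(x, residual, generations)
        pred = model(x)
        if use_scaling:
            a, b = linear_scale(pred, residual)
            pred = a + b * pred
        cumulative += pred            # collective solution: sum of periods
        residual = y - cumulative     # the next period erodes what remains
        print("residual MSE after period: %.4f" % np.mean(residual ** 2))

The essential mechanism is in the last two lines of the loop: each period's
(optionally scaled) output is added to the cumulative model and the next
period is trained on what remains, so the collective solution is the sum
over periods while the per-period settings supply the heterogeneity.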
