Benchmarks that matter for genetic programming

@InProceedings{Woodward:2014:GECCOcomp,
  author =       "John Woodward and Simon Martin and Jerry Swan",
  title =        "Benchmarks that matter for genetic programming",
  booktitle =    "GECCO 2014 4th workshop on evolutionary computation
                 for the automated design of algorithms",
  year =         "2014",
  editor =       "John Woodward and Jerry Swan and Earl Barr",
  isbn13 =       "978-1-4503-2881-4",
  keywords =     "genetic algorithms, genetic programming",
  pages =        "1397--1404",
  month =        "12-16 " # jul,
  organisation = "SIGEVO",
  address =      "Vancouver, BC, Canada",
  URL =          "http://doi.acm.org/10.1145/2598394.2609875",
  DOI =          "10.1145/2598394.2609875",
  publisher =    "ACM",
  publisher_address = "New York, NY, USA",
  abstract =     "There have been several papers published relating to
                 the practice of benchmarking in machine learning and
                 Genetic Programming (GP) in particular. In addition, GP
                 has been accused of targeting over-simplified 'toy'
                  problems that do not reflect the complexity of the
                  real-world applications for which GP is ultimately
                  intended. There are also theoretical results that
                  relate the performance of an algorithm to a probability
                 distribution over problem instances, and so the current
                 debate concerning benchmarks spans from the theoretical
                 to the empirical.

                 The aim of this article is to consolidate an emerging
                 theme arising from these papers and suggest that
                 benchmarks should not be arbitrarily selected but
                 should instead be drawn from an underlying probability
                  distribution that reflects the problem instances to
                  which the algorithm is likely to be applied in the
                  real world. These probability distributions are
                 effectively dictated by the application domains
                 themselves (essentially data-driven) and should thus
                 re-engage the owners of the originating data.

                  A consequence of properly founded benchmarking is the
                  suggestion of meta-learning as a methodology for
                  automatically, rather than manually, designing
                  algorithms. A secondary motive is to reduce the number
                  of research papers that propose new algorithms but do
                  not state in advance what their purpose is (i.e. in
                  what context they should be applied). To put the
                  current practice of GP benchmarking in a particularly
                  harsh light, one might ask
                 what the performance of an algorithm on Koza's
                 lawnmower problem (a favourite toy-problem of the GP
                 community) has to say about its performance on a very
                 real-world cancer data set: the two are completely
                 unrelated.",
  notes =        "Also known as \cite{2609875}. Distributed at
                  GECCO-2014.",
}
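
The abstract's central prescription is that benchmark instances should be
drawn from a probability distribution reflecting the instances an algorithm
is likely to face in practice. The Python sketch below illustrates that idea
under assumed, hypothetical domain parameters; it is not taken from the paper,
and the instance descriptors are placeholders that a practitioner would
replace with quantities fitted to data supplied by the owners of the
originating application domain.

    # Minimal sketch (assumption, not from the paper): draw benchmark
    # instances from a data-driven distribution over problem instances
    # instead of hand-picking a fixed set of toy problems.
    import random

    def sample_instance(rng):
        """Sample one hypothetical symbolic-regression benchmark instance.

        The parameter ranges below are illustrative stand-ins; a real
        benchmark suite would fit them to the target application domain.
        """
        return {
            "n_variables": rng.choice([1, 2, 5, 10]),  # input dimensionality
            "n_samples": rng.randint(50, 5000),        # data set size
            "noise_level": rng.uniform(0.0, 0.2),      # measurement noise
        }

    def sample_benchmark_suite(n_instances, seed=0):
        """Draw a suite i.i.d. from the assumed instance distribution."""
        rng = random.Random(seed)
        return [sample_instance(rng) for _ in range(n_instances)]

    if __name__ == "__main__":
        for instance in sample_benchmark_suite(5):
            print(instance)

Averaging an algorithm's performance over many instances drawn this way
estimates its expected performance under the assumed distribution, which is
the quantity the abstract argues benchmarks should approximate, rather than
performance on an unrelated toy problem such as the lawnmower problem.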
