Apprentissage statistique et programmation génétique: la croissance du code est-elle inévitable? (Statistical learning and genetic programming: is code growth inevitable?)

Created by W.Langdon from gp-bibliography.bib Revision:1.3973

@InProceedings{DBLP:conf/cfap/GellyTBS05,
  author =       "Sylvain Gelly and Olivier Teytaud and 
                 Nicolas Bredeche and Marc Schoenauer",
  title =        "Apprentissage statistique et programmation
                 g{\'e}n{\'e}tique: la croissance du code est-elle
                 in{\'e}vitable?",
  booktitle =    "Actes de CAP 05, Conf{\'e}rence francophone sur
                 l'apprentissage automatique",
  year =         "2005",
  editor =       "Fran\c{c}ois Denis",
  pages =        "163--178",
  address =      "Nice, France",
  month =        "31 " # may # "-3 " # jun,
  publisher =    "PUG",
  note =         "A Statistical Learning Theory Approach of Bloat",
  bibsource =    "DBLP, http://dblp.uni-trier.de",
  keywords =     "genetic algorithms, genetic programming, VC, Bloat",
  URL =          "http://www.lri.fr/~gelly/paper/bloatCap2005.pdf",
  size =         "16 pages",
  abstract =     "Code bloat, the excessive increase of code size, is an
                 important issue in Genetic Programming (GP). This paper
                 proposes a theoretical analysis of code bloat in the
                 framework of symbolic regression in GP, from the
                 viewpoint of Statistical Learning Theory, a well
                 grounded mathematical toolbox for Machine Learning. Two
                 kinds of bloat must be distinguished in that context,
                 depending on whether the target function lies in the
                 search space or not. Important mathematical results are
                 then proved using classical results from Statistical
                 Learning. Namely, the Vapnik-Chervonenkis dimension of
                 programs is computed, and further results from
                 Statistical Learning make it possible to prove that a
                 parsimonious fitness ensures Universal Consistency (the
                 solution minimising the empirical error converges to
                 the best possible error when the number of samples goes
                 to infinity). However, it is proved that the standard
                 method of choosing a maximal program size depending on
                 the number of samples might still result in programs
                 whose size grows without bound as their accuracy
                 improves; a more involved modification of the fitness
                 is proposed that theoretically avoids unnecessary bloat
                 while nevertheless preserving Universal Consistency.",
  notes =        "CAP 2005 http://www.lif.univ-mrs.fr/~fdenis/cap05/ In
                 English.

                 An improved version of \cite{gelly:2005:longBloat}

                 Part of DBLP:conf/cfap/2005

                 See also oai:CiteSeerX.psu:10.1.1.696.8127
                 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.696.8127
                 https://hal.inria.fr/inria-00000546/document/",
}
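
The abstract above rests on a standard Statistical Learning argument. The following is a minimal sketch in LaTeX of the kind of reasoning involved, not the paper's exact statement: the loss ℓ, the size-indexed classes F_k, their VC dimensions V_k, the constant c and the penalty pen_n are illustrative notation assumed here for exposition; the precise penalty and constants are those given by Gelly et al. in the paper itself.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch only: generic penalised empirical risk minimisation over programs.
Given $n$ i.i.d.\ samples $(X_i,Y_i)$, the empirical and generalisation
errors of a program $f$ are
\[
  \widehat{L}_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(X_i),Y_i\bigr),
  \qquad
  L(f) = \mathbb{E}\,\ell\bigl(f(X),Y\bigr).
\]
If programs of size at most $k$ form a class $\mathcal{F}_k$ of finite
Vapnik--Chervonenkis dimension $V_k$, a classical uniform deviation bound
states that, with probability at least $1-\delta$,
\[
  \sup_{f\in\mathcal{F}_k}\bigl|L(f)-\widehat{L}_n(f)\bigr|
  \le c\,\sqrt{\frac{V_k\log n + \log(1/\delta)}{n}} .
\]
A ``parsimonious'' fitness minimises a size-penalised empirical error,
\[
  \widehat{f}_n \in \arg\min_{f}\;
  \widehat{L}_n(f) + \mathrm{pen}_n\bigl(\mathrm{size}(f)\bigr),
\]
and, for penalties decaying suitably with $n$, Universal Consistency means
that the error of the selected program converges to the best achievable
error for every distribution of $(X,Y)$:
\[
  L(\widehat{f}_n)\;\xrightarrow[n\to\infty]{}\;\inf_{f} L(f).
\]
\end{document}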

Genetic Programming entries for Sylvain Gelly, Olivier Teytaud, Nicolas Bredeche, Marc Schoenauer
