Ensemble Bayesian Model Averaging in Genetic Programming

Created by W.Langdon from gp-bibliography.bib Revision:1.3872

@InProceedings{Agapitos:2014:CEC,
  title =        "Ensemble {Bayesian} Model Averaging in Genetic
                 Programming",
  author =       "Alexandros Agapitos and Michael O'Neill and 
                 Anthony Brabazon",
  pages =        "2451--2458",
  booktitle =    "Proceedings of the 2014 IEEE Congress on Evolutionary
                 Computation",
  year =         "2014",
  month =        "6-11 " # jul,
  editor =       "Carlos A. {Coello Coello}",
  address =      "Beijing, China",
  ISBN =         "0-7803-8515-2",
  keywords =     "genetic algorithms, Genetic programming, Data mining,
                 Classification, clustering and data analysis",
  DOI =          "10.1109/CEC.2014.6900567",
  abstract =     "This paper considers the general problem of function
                 estimation via Genetic Programming (GP). Data analysts
                 typically select a model from a population of models,
                 and then proceed as if the selected model had generated
                 the data. This approach ignores the uncertainty in
                 model selection, leading to over-confident inferences
                 and lack of generalisation.

                 We adopt a coherent method for accounting for this
                  uncertainty through a weighted averaging of all models
                  competing in a GP population. It is a principled
                  statistical method for post-processing a population of
                  programs into an ensemble, and it is based on Bayesian
                  Model Averaging (BMA).

                 Under two different formulations of BMA, the predictive
                 probability density function (PDF) of a response
                 variable is a weighted average of PDFs centred around
                 the individual predictions of component models that
                 take the form of either standalone programs or
                 ensembles of programs. The weights are equal to the
                 posterior probabilities of the models generating the
                 predictions, and reflect the models' skill on the
                 training dataset.

                 The method was applied to a number of synthetic
                 symbolic regression problems, and results demonstrate
                 that it generalises better than standard methods for
                 model selection, as well as methods for ensemble
                 construction in GP.",
  notes =        "WCCI2014",
}
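The abstract describes the core BMA mechanics: the predictive PDF of the response is a weighted average of PDFs centred on each component model's prediction, with weights equal to the models' posterior probabilities estimated from training-set skill. A minimal sketch of that mixture, assuming Gaussian component PDFs with a fixed spread and equal model priors (the paper's exact formulation, and the helper names below, are hypothetical illustrations, not the authors' code):

```python
import math

def posterior_weights(train_log_liks):
    """Normalise training-set log-likelihoods into posterior model
    probabilities (a softmax over log-likelihoods, assuming equal
    priors over the candidate programs)."""
    m = max(train_log_liks)                    # subtract max for stability
    exps = [math.exp(l - m) for l in train_log_liks]
    z = sum(exps)
    return [e / z for e in exps]

def bma_predictive_pdf(y, predictions, weights, sigma=1.0):
    """Predictive density of response y: a weighted average of Gaussian
    PDFs, each centred on one component model's point prediction."""
    def gauss(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return sum(w * gauss(y, mu, sigma) for w, mu in zip(weights, predictions))

# Three candidate programs predict for the same test input; the program
# with the best training-set likelihood dominates the mixture.
w = posterior_weights([-10.0, -12.0, -30.0])
density = bma_predictive_pdf(1.0, [0.9, 1.4, 3.0], w, sigma=0.5)
```

The weighting makes over-confident selection of a single "best" program unnecessary: poorly fitting programs receive near-zero posterior weight automatically, while comparably skilled programs share the mixture.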
