Learning noise

Created by W.Langdon from gp-bibliography.bib Revision:1.4208

@InProceedings{Schmidt:2007:gecco,
  author =       "Michael D. Schmidt and Hod Lipson",
  title =        "Learning noise",
  booktitle =    "GECCO '07: Proceedings of the 9th annual conference on
                 Genetic and evolutionary computation",
  year =         "2007",
  editor =       "Dirk Thierens and Hans-Georg Beyer and 
                 Josh Bongard and Jurgen Branke and John Andrew Clark and 
                 Dave Cliff and Clare Bates Congdon and Kalyanmoy Deb and 
                 Benjamin Doerr and Tim Kovacs and Sanjeev Kumar and 
                 Julian F. Miller and Jason Moore and Frank Neumann and 
                 Martin Pelikan and Riccardo Poli and Kumara Sastry and 
                 Kenneth Owen Stanley and Thomas Stutzle and 
                 Richard A Watson and Ingo Wegener",
  volume =       "2",
  isbn13 =       "978-1-59593-697-4",
  pages =        "1680--1685",
  address =      "London",
  URL =          "http://www.cs.bham.ac.uk/~wbl/biblio/gecco2007/docs/p1680.pdf",
  DOI =          "doi:10.1145/1276958.1277289",
  publisher =    "ACM Press",
  publisher_address = "New York, NY, USA",
  month =        "7-11 " # jul,
  organisation = "ACM SIGEVO (formerly ISGEC)",
  keywords =     "genetic algorithms, genetic programming, dynamical
                 systems, stochastic elements, symbolic regression",
  abstract =     "In this paper we propose a genetic programming
                 approach to learning stochastic models with
                 unsymmetrical noise distributions. Most learning
                 algorithms try to learn from noisy data by modelling
                 the maximum likelihood output or least squared error,
                 assuming that noise effects average out. While this
                 process works well for data with symmetrical noise
                 distributions (such as Gaussian observation noise),
                 many real-life sources of noise are not symmetrically
                 distributed, thus this approach does not hold. We
                 suggest improved learning can be obtained by including
                 noise sources explicitly in the model as a stochastic
                 element. A stochastic element is a random sub-process
                 or latent variable of a hidden system that can
                 propagate nonlinear noise to the observable outputs.
                 Stochastic elements can skew and distort output
                 features making regression of analytical models
                 particularly difficult and error minimising approaches
                 inhibiting. We introduce a new method to infer the
                 analytical model of a system by decomposing non-uniform
                 noise observed at the outputs into uniform stochastic
                 elements appearing symbolically inside the system.
                 Results demonstrate the ability to regress exact
                 analytical models where stochastic elements are
                 embedded inside nonlinear and polynomial hidden
                 systems.",
  notes =        "GECCO-2007 A joint meeting of the sixteenth
                 international conference on genetic algorithms
                 (ICGA-2007) and the twelfth annual genetic programming
                 conference (GP-2007).

                 ACM Order Number 910071",
}
