Regularised Gradient Boosting for Financial Time-series Modelling

@Article{Agapitos:2018:CMS,
  author =       "Alexandros Agapitos and Anthony Brabazon and 
                 Michael O'Neill",
  title =        "Regularised Gradient Boosting for Financial
                 Time-series Modelling",
  journal =      "Computational Management Science",
  year =         "2017",
  volume =       "14",
  number =       "3",
  pages =        "367--391",
  month =        jul,
  keywords =     "genetic algorithms, genetic programming, Boosting
                 algorithms, Gradient boosting, Stagewise additive
                 modelling, Regularisation, Financial time-series
                 modelling, Financial forecasting, Feedforward neural
                 networks, ANN, Noisy data, Ensemble learning",
  DOI =          "10.1007/s10287-017-0280-y",
  abstract =     "Gradient Boosting (GB) learns an additive expansion of
                 simple basis-models. This is accomplished by
                 iteratively fitting an elementary model to the negative
                  gradient of a loss function with respect to the
                  expansion's value at each training data-point,
                  re-evaluated at every iteration. For a
                  squared-error loss function, the negative
                  gradient takes the form of the ordinary residual
                  at a given training data-point. Studies have
                  demonstrated that running GB for hundreds of
                  iterations can lead to overfitting, and several
                  authors have shown that when noise is added to
                  the training data, generalisation is impaired
                  even with relatively few basis-models.
                  Regularisation is realised by shrinking every
                  newly-added basis-model before it is added to the
                  expansion. This
                 paper demonstrates that GB with shrinkage-based
                 regularisation is still prone to overfitting in noisy
                 datasets. We use a transformation based on a sigmoidal
                 function for reducing the influence of extreme values
                 in the residuals of a GB iteration without removing
                 them from the training set. This extension is built on
                 top of shrinkage-based regularisation. Simulations
                 using synthetic, noisy data show that the proposed
                  method slows down overfitting and reduces the
                 generalisation error of regularised GB. The proposed
                 method is then applied to the inherently noisy domain
                 of financial time-series modelling. Results suggest
                 that for the majority of datasets the method
                 generalises better when compared against standard
                 regularised GB, as well as against a range of other
                 time-series modelling methods.",
}
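
The abstract describes three ingredients: stagewise fitting of basis-models to negative gradients (ordinary residuals under squared-error loss), shrinkage-based regularisation, and a sigmoidal transformation that damps extreme residuals. The Python sketch below illustrates how these pieces compose. It is not the authors' implementation: the paper's experiments use feedforward neural networks as basis-models (see the keywords), whereas shallow regression trees are used here for brevity, and the specific squashing function c * tanh(r / c) and its scale c are assumptions, since the abstract does not give the exact sigmoidal form.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_regularised_gb(X, y, n_iter=100, shrinkage=0.1, c=1.0, max_depth=2):
    # F_0: constant initial model (the training-set mean).
    intercept = y.mean()
    F = np.full(len(y), intercept)
    models = []
    for _ in range(n_iter):
        # Under squared-error loss the negative gradient at each
        # training data-point is the ordinary residual y - F(x).
        r = y - F
        # Sigmoidal transformation (assumed form, not the paper's exact
        # function): squash extreme residuals so they exert less
        # influence without being removed from the training set.
        r_damped = c * np.tanh(r / c)
        # Basis-model: a shallow tree here; the paper uses feedforward
        # neural networks.
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, r_damped)
        # Shrinkage-based regularisation: scale every newly-added
        # basis-model before adding it to the expansion.
        F += shrinkage * h.predict(X)
        models.append(h)
    return intercept, models

def predict(intercept, models, X, shrinkage=0.1):
    return intercept + shrinkage * sum(m.predict(X) for m in models)

if __name__ == "__main__":
    # Synthetic, noisy data in the spirit of the paper's simulations.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=500)
    intercept, models = fit_regularised_gb(X, y)
    mse = np.mean((y - predict(intercept, models, X)) ** 2)
    print(f"training MSE: {mse:.4f}")

Setting c very large recovers standard regularised GB, since c * tanh(r / c) approaches r; smaller values of c cap the contribution of outlying residuals, which is the mechanism the abstract credits with slowing down overfitting on noisy data.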
