Evolving a Locally Optimized Instance Based Learner

Created by W.Langdon from gp-bibliography.bib Revision:1.3963

@InProceedings{DBLP:conf/dmin/JohanssonKN08,
  author =       "Ulf Johansson and Rikard K{\"o}nig and Lars Niklasson",
  title =        "Evolving a Locally Optimized Instance Based Learner",
  booktitle =    "The 2008 International Conference on Data Mining",
  year =         "2008",
  pages =        "124--129",
  address =      "Las Vegas, USA",
  month =        jul # " 14-17",
  publisher =    "CSREA Press",
  keywords =     "genetic algorithms, genetic programming,
                 instance-based learner, kNN, classification",
  URL =          "http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1011.1",
  URL =          "http://bada.hb.se:80/bitstream/2320/4208/2/Johansson%2C%20K%C3%B6nig%2C%20Niklasson%20-%202008%20-%20Evolving%20a%20Locally%20Optimized%20Instance%20Based%20Learner.pdf",
  annote =       "The Pennsylvania State University CiteSeerX Archives",
  bibsource =    "OAI-PMH server at citeseerx.ist.psu.edu",
  language =     "en",
  oai =          "oai:CiteSeerX.psu:10.1.1.1011.1",
  abstract =     "Standard kNN suffers from two major deficiencies, both
                 related to the parameter k. First of all, it is
                 well-known that the parameter value k is not only
                 extremely important for the performance, but also very
                 hard to estimate beforehand. In addition, the fact that
                 k is a global constant, totally independent of the
                 particular region in which an instance to be classified
                 falls, makes standard kNN quite blunt. In this paper,
                 we introduce a novel instance-based learner,
                 specifically designed to avoid the two drawbacks
                 mentioned above. The suggested technique, named G-kNN,
                 optimises the number of neighbours to consider for each
                 specific test instance, based on its position in input
                 space; i.e., the algorithm uses several locally
                 optimised values of k instead of a single global one. More
                 specifically, G-kNN uses genetic programming to build
                 decision trees, partitioning the input space in
                 regions, where each leaf node (region) contains a kNN
                 classifier with a locally optimised k. In the
                 experimentation, using 27 datasets from the UCI
                 repository, the basic version of G-kNN is shown to
                 significantly outperform standard kNN, with respect to
                 accuracy. Although not evaluated in this study, it
                 should be noted that the flexibility of genetic
                 programming makes sophisticated extensions, like
                 weighted voting and axes scaling, fairly
                 straightforward.",
  notes =        "http://www.dmin-2008.com/programme.htm",
}
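The abstract's core idea (a decision tree partitions the input space and each leaf keeps its own locally optimised k for kNN voting) can be illustrated with a minimal sketch. This is not the authors' implementation: real G-kNN evolves the partitioning tree with genetic programming, whereas the sketch below uses a single hand-chosen axis-parallel split and picks each region's k by leave-one-out accuracy. All names (`GkNNSketch`, `best_k`, `knn_predict`) are hypothetical.

```python
# Hypothetical sketch of the G-kNN idea from the abstract: partition the
# input space into regions, then give each region its own locally
# optimised k for a plain kNN majority vote. A hand-built single split
# stands in for the GP-evolved decision tree of the actual paper.
import math
from collections import Counter

def knn_predict(train, x, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

def best_k(train, candidates=(1, 3, 5)):
    """Pick k by leave-one-out accuracy on a region's training points."""
    def loo_acc(k):
        hits = 0
        for i, (x, y) in enumerate(train):
            rest = train[:i] + train[i + 1:]
            hits += knn_predict(rest, x, min(k, len(rest))) == y
        return hits / len(train)
    return max(candidates, key=loo_acc)

class GkNNSketch:
    """Two-region toy version: one split on one feature, k tuned per region.

    Assumes both regions receive at least one training point; the real
    algorithm would handle empty or tiny regions during GP evolution.
    """
    def __init__(self, feature, threshold):
        self.feature, self.threshold = feature, threshold

    def fit(self, train):
        self.left = [p for p in train if p[0][self.feature] < self.threshold]
        self.right = [p for p in train if p[0][self.feature] >= self.threshold]
        self.k_left = best_k(self.left)
        self.k_right = best_k(self.right)
        return self

    def predict(self, x):
        if x[self.feature] < self.threshold:
            return knn_predict(self.left, x, self.k_left)
        return knn_predict(self.right, x, self.k_right)
```

The weighted voting and axis scaling extensions the abstract mentions would slot in naturally here, e.g. by distance-weighting the `Counter` votes or rescaling features before `math.dist`.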
