Towards adaptive and autonomous humanoid robots: From Vision to Actions


@PhdThesis{Leitner:thesis,
  author =       "Juergen Leitner",
  title =        "Towards adaptive and autonomous humanoid robots: From
                 Vision to Actions",
  school =       "Faculty of Informatics of the Universit\`a
                  della Svizzera italiana",
  year =         "2014",
  address =      "Switzerland",
  keywords =     "genetic algorithms, genetic programming, cartesian
                 genetic programming",
  bibsource =    "OAI-PMH server at doc.rero.ch",
  language =     "english",
  oai =          "oai:doc.rero.ch:20151106093403-UA",
  URL =          "http://doc.rero.ch/record/257528",
  URL =          "http://doc.rero.ch/record/257528/files/2014INFO020.pdf",
  size =         "225 pages",
  abstract =     "Although robotics research has seen advances over the
                  last decades, robots are still not in widespread use
                  outside industrial applications. Yet a range of
                  proposed scenarios have robots working together with,
                  helping, and coexisting with humans in daily life. All
                  of these scenarios require robots to deal with a more
                  unstructured, changing environment. I herein present a
                  system that aims to overcome the limitations of highly
                  complex robotic systems in terms of autonomy and
                  adaptation. The main focus of this research is to
                  investigate the use of visual feedback for improving
                  the reaching and grasping capabilities of complex
                  robots. To facilitate this, an integrated combination
                  of computer vision and machine learning techniques is
                  employed.

                  From a robot vision point of view, combining domain
                  knowledge from both image processing and machine
                  learning can expand the capabilities of robots. I
                  present a novel framework called Cartesian Genetic
                  Programming for Image Processing (CGP-IP). CGP-IP can
                  be trained to detect objects in incoming camera
                  streams and was successfully demonstrated on many
                  different problem domains. The approach is fast,
                  scalable, and robust, and it requires only small
                  training sets (it was tested with 5 to 10 images per
                  experiment). Additionally, it generates human-readable
                  programs that can be further customized and tuned.
                  While CGP-IP is a supervised learning technique, I
                  show an integration on the iCub that allows for the
                  autonomous learning of object detection and
                  identification. Finally, this dissertation includes
                  two proofs of concept that integrate the motion and
                  action sides. First, reactive reaching and grasping is
                  shown. It allows the robot to avoid obstacles detected
                  in the visual stream while reaching for the intended
                  target object. Furthermore, the integration enables us
                  to use the robot in non-static environments, i.e. the
                  reaching is adapted on the fly based on the visual
                  feedback received, e.g. when an obstacle moves into
                  the trajectory. The second integration highlights the
                  capabilities of these frameworks by improving visual
                  detection through object manipulation actions.",
  notes =        "Supervisors: Juergen Schmidhuber and Alexander Forster",
}
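
The abstract's central technical contribution is Cartesian Genetic
Programming for Image Processing (CGP-IP). To make the encoding
concrete, below is a minimal Python sketch of plain Cartesian Genetic
Programming: a fixed-length integer genome describing a feed-forward
graph of nodes, evolved with a (1+4) evolution strategy. This is an
illustration only, not the thesis's implementation: CGP-IP's node
functions are image operations (e.g. thresholds, morphology, filters)
applied to camera streams, whereas this sketch substitutes scalar
arithmetic stand-ins, and all names and parameter values (FUNCS,
N_NODES, the mutation rate, the toy target) are assumptions made for
the example.

import random

# Hypothetical primitive set. CGP-IP draws its node functions from a
# large library of image operations; plain arithmetic stand-ins keep
# this sketch self-contained and runnable.
FUNCS = [lambda a, b: a + b,
         lambda a, b: a - b,
         lambda a, b: a * b,
         lambda a, b: max(a, b)]

N_INPUTS = 1    # CGP-IP would take image channels as program inputs
N_NODES = 12    # fixed-length genome: a single row of 12 nodes

def clamp(v, lo=-1e6, hi=1e6):
    # Keep node outputs bounded so runaway multiplication chains
    # cannot blow up during evolution.
    return max(lo, min(hi, v))

def random_node(i):
    # Each node is [function gene, source a, source b]; sources may be
    # any program input or any earlier node (a feed-forward graph).
    return [random.randrange(len(FUNCS)),
            random.randrange(N_INPUTS + i),
            random.randrange(N_INPUTS + i)]

def random_genome():
    nodes = [random_node(i) for i in range(N_NODES)]
    out = random.randrange(N_INPUTS + N_NODES)   # output gene
    return nodes, out

def run(genome, x):
    # Decode left to right; nodes not on the path to the output gene
    # are computed here for simplicity but have no effect (inactive).
    nodes, out = genome
    values = [x]
    for f, a, b in nodes:
        values.append(clamp(FUNCS[f](values[a], values[b])))
    return values[out]

def mutate(genome, rate=0.15):
    nodes, out = genome
    new_nodes = []
    for i, node in enumerate(nodes):
        node = node[:]
        if random.random() < rate:
            node[0] = random.randrange(len(FUNCS))
        if random.random() < rate:
            node[1] = random.randrange(N_INPUTS + i)
        if random.random() < rate:
            node[2] = random.randrange(N_INPUTS + i)
        new_nodes.append(node)
    if random.random() < rate:
        out = random.randrange(N_INPUTS + N_NODES)
    return new_nodes, out

# (1+4) evolution strategy, the selection scheme conventionally used
# with CGP. Toy supervised target: fit f(x) = x*x + x from examples,
# standing in for CGP-IP's labelled-training-images setting.
cases = [(x, x * x + x) for x in range(-5, 6)]

def fitness(g):
    return sum(abs(run(g, x) - y) for x, y in cases)

parent = random_genome()
for gen in range(2000):
    children = [mutate(parent) for _ in range(4)]
    # Listing children first means offspring win fitness ties, which
    # lets evolution drift across neutral mutations.
    parent = min(children + [parent], key=fitness)
    if fitness(parent) == 0:
        break
print("best error:", fitness(parent))

Because only the nodes reachable from the output gene affect
behaviour, the decoded program stays small, which is consistent with
the abstract's claim that CGP-IP produces human-readable programs
amenable to manual customisation and tuning.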
