These videos demonstrate work described in our 2019 JAIR article on a refinement-based architecture for representing and reasoning with commonsense knowledge on robots. (project funded by the US ONR)
Real-Time Gaze Estimation
This video demonstrates work described in our 2018 ECCV paper on ground-truth gaze estimation in natural settings; the work also produced a challenging new dataset for gaze estimation.
Variable Impedance Manipulation
This video demonstrates work described in our 2019 Humanoids paper on using incrementally learned feed-forward models and a hybrid force-motion controller for variable impedance (compliant) control of manipulation in continuous-contact tasks. (collaboration with the Honda Research Institute EU)
Motion Retargeting by Disentangling Pose and Movement
This video demonstrates work described in our 2019 BMVC paper, which introduced a deep learning framework for unsupervised motion retargeting that separately learns frame-by-frame poses and overall movement.
Dora the Explorer
This video describes our 2016 AIJ paper on robot task planning and explanation in open and uncertain worlds. (work from the CogX project)
One Shot Learning of Dexterous Grasps for Novel Objects
This video describes our 2016 IJRR paper. The method learns dexterous grasps from a single example per grasp type and generalises to novel objects. (work from the PaCMan project)