|[jacobssonetal07binding] Henrik Jacobsson, Nick Hawes, Geert-Jan Kruijff and Jeremy Wyatt. Crossmodal Content Binding in Information-Processing Architectures. In Luis Seabra Lopes, Tony Belpaeme and Stephen J. Cowley (editors), Symposium on Language and Robots 2007 (LangRo 2007), pages 43--52. December 2007. Deprecated! Please see the HRI paper of the same name instead.|
Operating in a physical context, an intelligent robot faces two fundamental problems. First, it needs to combine information from its different sensors to form a representation of the environment that is more complete than any single sensor alone could provide. Second, it needs to combine high-level representations (such as those for planning and dialogue) with its sensory information, to ensure that the interpretations of these symbolic representations are grounded in the situated context. Previous approaches to this problem have used techniques such as (low-level) information fusion, ontological reasoning, and (high-level) concept learning. This paper presents a framework in which these, and other, approaches can be combined to form a shared representation of the current state of the robot in relation to its environment and other agents. Preliminary results from an implemented system are presented to illustrate how the framework supports behaviours commonly required of an intelligent robot.