CogX: Cognitive Systems that Self-Understand and Self-Extend


An overview of the aims of the CogX Project

The challenge we wish to meet is to understand the principles according to which cognitive systems should be built if they are to handle situations unforeseen by their designers, cope with other forms of novelty, and operate in open-ended, challenging environments marked by uncertainty and change. Our aim is to meet this challenge by creating a theory - grounded and evaluated in robots - of how a cognitive system can model its own knowledge, use that model to cope with uncertainty and novelty during task execution, extend its own abilities and knowledge, and extend its own understanding of those abilities.

Imagine a cognitive system that models not only the environment, but also its own understanding of the environment and how this understanding changes under action. It identifies gaps in its own understanding and then plans how to fill those gaps so as to deal with novelty and uncertainty during task execution, to gather the information necessary to complete its tasks, and to extend its abilities and knowledge so that it can perform future tasks more efficiently.

One way to characterise such a system’s behaviour is to say that it is a system that has self-understanding. We use the terms ‘understanding’ and ‘self-understanding’ here in a limited sense. By ‘understanding’ we mean the system’s collection of models of the environment, together with the way those models are used by the system to achieve its tasks. These models can take an enormous variety of forms. In this project we will mainly study: models of action effects; maps; observation models; and type hierarchies (or networks) that organise information about objects, their properties and relations, and the actions that can be performed upon them.

By ‘self-understanding’ we therefore refer both to models of these models of the environment and to the ability to learn and reason about them. In other words, to have self-understanding the system must have beliefs about beliefs and use these to model and reason about how its actions will change its beliefs. Such a system should then be capable of identifying gaps in its models of the environment, planning how to fill them, and then filling them; in other words, it should be capable of self-extension. Note that in using the term self-extension we are not referring to systems that merely learn, but to systems that can represent what they don’t know, reason about what they can learn and how to act so as to learn it, execute those actions, and then learn from the resulting experience. Our planned work is therefore based on the idea that self-understanding is necessary for self-extension (as we have defined the terms). We can summarise the aim of the project as being to develop:

a unified theory of self-understanding and self-extension with a convincing instantiation and implementation of this theory in a robot.
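To make the notion of beliefs about beliefs more concrete, the following is a minimal, purely illustrative sketch in Python. It is not a CogX component: the names (Belief, ObjectModel, knowledge_gaps), the single confidence number, and the threshold are simplifying assumptions. First-order beliefs about an object’s properties carry a crude estimate of their own reliability, and a second-order routine inspects those beliefs to report which required properties are unknown or too uncertain to act on; these are the knowledge gaps the system would then plan to fill.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Belief:
        """A first-order belief: the system's estimate of some feature of the environment."""
        value: Optional[object]   # e.g. "red", a pose, a grasp point; None if unknown
        confidence: float         # crude stand-in for an observation/accuracy model, in [0, 1]

    @dataclass
    class ObjectModel:
        """First-order model of one object: the properties the system believes it has."""
        properties: dict = field(default_factory=dict)

    def knowledge_gaps(model: ObjectModel, required: list, threshold: float = 0.8):
        """Second-order reasoning: inspect the beliefs themselves and report which
        required properties are missing or too uncertain to act on."""
        gaps = []
        for prop in required:
            belief = model.properties.get(prop)
            if belief is None or belief.value is None:
                gaps.append((prop, "unknown"))
            elif belief.confidence < threshold:
                gaps.append((prop, "uncertain"))
        return gaps

    # Example: a mug whose colour is known but whose graspability and location are not.
    mug = ObjectModel(properties={
        "colour": Belief(value="red", confidence=0.95),
        "graspable": Belief(value=None, confidence=0.0),
    })
    print(knowledge_gaps(mug, required=["colour", "graspable", "location"]))
    # -> [('graspable', 'unknown'), ('location', 'unknown')]

In a real system the single confidence number would be replaced by representations appropriate to each kind of information, which is precisely where the challenge of a unifying framework, discussed below, arises.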

The technical challenges that arise from this aim are significant. Different types of information (e.g. information stemming from different sensory modalities) require different kinds of representations, and so representations of the accuracy or completeness of those representations (i.e. beliefs about beliefs) will also vary with the type of information being modelled. This makes devising a unifying framework for representing beliefs about beliefs challenging. In addition, methods for efficiently reasoning about beliefs, and for planning in the kinds of belief spaces we will consider, will require significant progress beyond the state of the art.
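As an equally simplified illustration of what planning in a belief space might involve, the sketch below (again hypothetical, continuing the example above) scores a small set of assumed information-gathering actions by expected gain per unit cost and greedily picks one action per gap. A real belief-space planner would instead search over sequences of actions and the belief states they are predicted to produce.

    # Hypothetical epistemic actions: each predicts which belief it would improve,
    # by how much, and at what execution cost. The numbers are assumptions.
    ACTIONS = {
        "look_from_left": {"improves": "location",  "expected_gain": 0.6, "cost": 1.0},
        "try_grasp":      {"improves": "graspable", "expected_gain": 0.8, "cost": 3.0},
        "ask_human":      {"improves": "graspable", "expected_gain": 0.9, "cost": 5.0},
    }

    def plan_to_fill(gaps):
        """Greedy one-step selection in belief space: for each gap, pick the action
        with the best expected information gain per unit cost."""
        plan = []
        for prop, _reason in gaps:
            candidates = [(name, a) for name, a in ACTIONS.items() if a["improves"] == prop]
            if candidates:
                best = max(candidates, key=lambda na: na[1]["expected_gain"] / na[1]["cost"])
                plan.append(best[0])
        return plan

    print(plan_to_fill([("graspable", "unknown"), ("location", "unknown")]))
    # -> ['try_grasp', 'look_from_left']

The point of the sketch is only to show that the objects being reasoned over are the system’s own beliefs about its beliefs, rather than the environment directly.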

Links