From intelligent organisms to intelligent social systems: how the evolution of meta-management supports social/cultural advances.
Aaron Sloman
School of Computer Science
The University of Birmingham

Invited talk for the AISB2000 Convention

NOTE:

Slides for more recent talks related to this one can be found at http://www.cs.bham.ac.uk/~axs/misc/talks

Draft Abstract:


It is now fairly common in AI to think of humans and other animals, and also many intelligent robots and software agents, as having an information-processing architecture composed of different layers that operate in parallel and that, in the case of mammals, evolved at different stages.

The idea is also quite old in neuroscience. E.g. Albus [1] presents a notion of a layered brain with a reptilian lowest level and at least two more recently evolved (mammalian) levels above that. AI researchers have been exploring a number of variants of this idea, of varying sophistication and plausibility, with varying kinds of control relations between the layers.

In our own work (see [2]) we have assumed a coarse threefold sub-division into concurrently active reactive, deliberative and meta-management (reflective) layers, each operating partly independently of the others, each with specialised sensory inputs from layered perceptual mechanisms, and each with access to a hierarchical motor control system. Different classes of mental processes, including motivations, moods, emotions and types of awareness, depend on the different layers.
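The following Python sketch is no more than a toy illustration of this kind of arrangement, not a description of any implemented system: the class and method names (ReactiveLayer, DeliberativeLayer, MetaManagementLayer, run_cycle, etc.) are invented here, and real versions of the layers would be far richer and genuinely concurrent rather than interleaved.

    # Toy sketch of a three-layer architecture: all names are invented for
    # illustration and the layers are interleaved rather than truly concurrent.

    class ReactiveLayer:
        """Fast, automatic responses driven directly by low-level percepts."""
        def step(self, percepts):
            if "obstacle" in percepts:
                return "swerve"                      # reflex-like response
            return None

    class DeliberativeLayer:
        """Constructs and executes multi-step plans ('what-if' reasoning)."""
        def __init__(self):
            self.current_plan = []
        def step(self, percepts, goals):
            if goals and not self.current_plan:
                self.current_plan = self.make_plan(goals[0])
            return self.current_plan.pop(0) if self.current_plan else None
        def make_plan(self, goal):
            return ["approach", "act-on-" + str(goal)]   # toy 'planner'

    class MetaManagementLayer:
        """Monitors, categorises, evaluates and partly controls the other
        layers; its input is the state of those layers, not external percepts."""
        def step(self, deliberative):
            if len(deliberative.current_plan) > 20:      # runaway deliberation
                deliberative.current_plan = []           # abandon the plan
                return "redirect-attention"
            return None

    class Agent:
        """All three layers run in every cycle, each with its own inputs."""
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagementLayer()
        def run_cycle(self, percepts, goals):
            actions = [self.reactive.step(percepts),
                       self.deliberative.step(percepts, goals),
                       self.meta.step(self.deliberative)]
            return [a for a in actions if a is not None]

    agent = Agent()
    print(agent.run_cycle(percepts={"obstacle"}, goals=["food"]))
    # e.g. ['swerve', 'approach']

The only point of the sketch is that the meta-management layer takes the state of the other layers as its input and can alter how they run, rather than acting directly on the external world.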

The meta-management layer, which evolved latest and is rarest among animals, is assumed to be able to monitor, categorise, evaluate, and to some extent control the other layers, e.g. redirecting attention or altering the mode of deliberation, though it may sometimes be disrupted by other mechanisms, e.g. in emotional states where attention is repeatedly drawn to an object or topic of concern, even against one's will.

The common reference to "executive function" by psychologists and brain scientists seems to conflate aspects of the deliberative layer and aspects of the meta-management layer. That they are different is shown by the existence of AI systems with sophisticated planning, problem-solving and plan-execution capabilities but without meta-management (reflective) capabilities. A symptom would be a planner that doesn't notice an obvious type of redundancy in the plan it produces (a toy illustration is sketched below). One consequence of having the third layer is the ability to attend to and reflect on one's own mental states, which could cause intelligent robots to discover qualia, and to wonder whether humans have them.
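To make the planner example concrete, here is a toy illustration; the plan representation and the particular redundancy rule are invented for this sketch. A purely deliberative planner can happily emit a plan containing a pointless 'put down X, then pick X up again' pair of steps; a simple reflective check applied to the planner's own output is enough to notice and remove that kind of redundancy.

    # Toy reflective check applied to a planner's own output. The plan format
    # (action, object) and the cancelling-pair rule are invented for this example.

    def remove_redundant_pairs(plan):
        """Delete adjacent pairs of steps that cancel each other out,
        e.g. ('putdown', 'A') immediately followed by ('pickup', 'A')."""
        cancelling = {("putdown", "pickup"), ("stack", "unstack")}
        result = []
        for action, obj in plan:
            if result:
                prev_action, prev_obj = result[-1]
                if (prev_action, action) in cancelling and prev_obj == obj:
                    result.pop()            # the pair achieves nothing: drop both
                    continue
            result.append((action, obj))
        return result

    plan = [("pickup", "A"), ("putdown", "A"), ("pickup", "A"), ("stack", "A")]
    print(remove_redundant_pairs(plan))     # [('pickup', 'A'), ('stack', 'A')]

A planner without any such self-monitoring simply never examines its own products in this way, however sophisticated its search for plans may be.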

There is some evidence that in humans the third layer is not a fixed system: not only does it develop from very limited capabilities in infancy, but even in a normal adult it is as if there are different personalities "in charge" at different times and in different contexts (e.g. at home with the family, driving a car, in the office, at the pub with mates).

Taking most of that for granted, this talk will speculate about the influence of a society or culture on the contents and capabilities of the third layer in humans. The existence of such a layer does not presuppose the existence of an external human language (e.g. chimpanzees may have some reflective capabilities), though it does presuppose the availability of some internal formalism, as do the reactive and deliberative layers. When an external language develops, one of its functions may be to provide the categories and values used by individuals in judging their own mental processes (e.g. as selfish, sinful or clever). This would be a powerful form of social control, far more powerful than mechanisms for behavioural imitation, for instance. It might have evolved precisely because it allows what has been learnt by a culture to be transmitted to later generations far more rapidly than if a genome had to be modified. However, even without this social role the third layer would be useful to individuals, and that might have been a requirement for its original emergence in evolution.

All this may provide some food for thought for AI researchers working on multi-agent systems, as well as for philosophers, brain scientists, social scientists and biologists studying evolution.

This talk is related to the themes of the symposium on "How to design a functioning mind".

Last updated 18 Feb 2001