Aaron Sloman
For the BBC/OU discussion on Artificial Intelligence
Scheduled for Thursday March 14th,
30 minutes after midnight GMT
(i.e. 00:30 am GMT on Friday March 15th).

Part of the BBC/Open University series
The Next Big Thing


Artificial Intelligence is a highly interdisciplinary field, which overlaps with philosophy, psychology, neuroscience, biology, linguistics, computer science and software engineering.

I am primarily a philosopher working on old philosophical problems about the nature of mind and the relations between mind and body, but I like to work in the School of Computer Science at The University of Birmingham because I have discovered that the best way to make progress in philosophy of mind is to investigate ways of designing and building working models of minds, or fragments of minds.

That's because minds are not static or passive entities simply floating around the universe. Instead they do things: they take in information (through perception), remember things, learn things, have desires, preferences and emotions, take decisions and make things happen, including things in the physical world.

So our theories of mind should explain how all that is possible. Artificial Intelligence is a discipline that has been attempting to do that since computers became available in the 1950s (though many of the ideas go back much further to automatic machines such as calculators, game playing machines, weaving machines, card-sorting machines and machines for controlling other machines).

My own work is based on the observation that human minds are not the only sort of mind. There are many kinds of animals with minds, and potentially many kinds of robots with minds --- although we are nowhere near producing robots that approach human sophistication except in very narrow fields, such as playing chess, checking mathematical proofs, and controlling fairly simple machines.

In order to understand what minds are, we therefore need to explore the space of possible minds instead of focusing on just one kind of mind. There are many ways to do this, including doing research on animal capabilities (e.g. see the Penguin book by Marc Hauser, Wild Minds), studying infants and children, and looking at effects of brain damage --- instead of always focusing only on normal adult humans, as philosophers tend to do.

In particular we need to understand the information-processing architectures used by different sorts of minds, including investigating the different implications of different architectures. Some architectures will use particular kinds of representations and mechanisms, for instance logical representations and logical reasoning mechanisms. Others will use quite different mechanisms, such as neural nets loosely modeled on brain mechanisms. The most sophisticated architectures seem to require a variety of different sorts of mechanisms working in parallel on different tasks, e.g. managing your posture, recognizing objects in your environment, making plans, learning, and so on. Some varieties of emotions seem to be related to alarm mechanisms that detect a need for rapid reorganisation of processing in order to avoid imminent harm or in order to grab some opportunity.
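The idea of several mechanisms working in parallel, with an alarm mechanism able to interrupt slower deliberative processing, can be sketched in code. This is purely my own illustration, not a design from the article: the component names, the numeric threat scale, and the threshold value are all invented for the example, and genuine parallelism is approximated by simple interleaving.

```python
# Illustrative sketch of an architecture with several sub-processes
# plus a fast alarm mechanism that can override normal processing.
# All names and numbers here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Percept:
    label: str
    threat: float = 0.0  # 0.0 = harmless ... 1.0 = imminent harm


class Component:
    """A slower, deliberative sub-process (posture, recognition, planning...)."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def step(self, percept):
        self.log.append(f"{self.name}: processing {percept.label}")


class AlarmMechanism:
    """Fast, pattern-driven monitor that triggers global reorganisation."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold

    def check(self, percept):
        return percept.threat >= self.threshold


class Agent:
    def __init__(self):
        # Several mechanisms notionally running in parallel on different tasks.
        self.components = [Component("posture"),
                           Component("recognition"),
                           Component("planning")]
        self.alarm = AlarmMechanism()
        self.mode = "normal"
        self.history = []

    def perceive(self, percept):
        if self.alarm.check(percept):
            # Alarm overrides deliberation: rapid redirection of processing.
            self.mode = "emergency"
            self.history.append(f"ALARM: evade {percept.label}")
        else:
            self.mode = "normal"
            for c in self.components:  # interleaved "parallel" work
                c.step(percept)
            self.history.append(f"normal: handled {percept.label}")


agent = Agent()
agent.perceive(Percept("tree", threat=0.1))
agent.perceive(Percept("falling rock", threat=0.95))
print(agent.history)
# -> ['normal: handled tree', 'ALARM: evade falling rock']
```

The key design point mirrored here is that the alarm check is cheap and runs before the expensive components, so a detected threat short-circuits deliberation entirely rather than waiting for it to finish.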

So we can expect AI architectures of the future to be hybrid systems combining a wide variety of different sorts of capabilities, using a wide variety of forms of information processing.

In this framework we can take a host of familiar concepts, such as ``consciousness'', ``decision'', ``emotion'', ``intelligence'', ``learning'', ``mind'', and see how different architectures support different versions of those concepts. Thus the architecture of an insect may support very simple forms of consciousness, emotion and learning, whereas the architecture of a chimpanzee allows far more sophisticated versions.

This exploration enables us to add precision and clarity to many old, confused, yet widely used concepts. This may or may not lead to the development of very intelligent human-like machines, but whether it does or not it can help us to gain a much deeper understanding of ourselves, and provide practical benefits through more successful methods of teaching and better kinds of therapy, for example.


Further developments of these ideas are presented in a slide presentation originally prepared for a course of four lectures given at the Interdisciplinary College 2002 on Autonomy and Emotion, March 1-8, 2002, Gunne am Mohnesee, Germany. The slides are now available in PostScript and PDF here:

Beware: those are DRAFT notes, likely to be updated from time to time as the theories develop.
The slide presentations listed above are provided in both PostScript and PDF formats. Viewers for both are freely available on the internet. See the information in this file

The diagrams in the slides were all produced using the excellent (small, fast, versatile, portable, reliable, and free) tgif package, available for Linux and Unix systems from here:



Further information can be found here


Maintained by: Aaron Sloman
Last updated: 10 Mar 2002