Part of the BBC/Open University series
The Next Big Thing
I am primarily a philosopher working on old philosophical problems about the nature of mind and the relations between mind and body. However, I like to work in the School of Computer Science at The University of Birmingham, because I have discovered that the best way to make progress in the philosophy of mind is to investigate ways of designing and building working models of minds, or fragments of minds.
That's because minds are not static or passive entities simply floating around the universe. Instead they do things: they take in information (through perception), remember things, learn things, have desires, preferences and emotions, take decisions and make things happen, including things in the physical world.
So our theories of mind should explain how all that is possible. Artificial Intelligence is a discipline that has been attempting to do that since computers became available in the 1950s (though many of the ideas go back much further to automatic machines such as calculators, game playing machines, weaving machines, card-sorting machines and machines for controlling other machines).
My own work is based on the observation that human minds are not the only sort of mind. There are many kinds of animals with minds, and potentially many kinds of robots with minds --- although we are nowhere near producing robots that approach human sophistication except in very narrow fields, such as playing chess, checking mathematical proofs, and controlling fairly simple machines.
In order to understand what minds are, we therefore need to explore the space of possible minds instead of focusing on just one kind of mind. There are many ways to do this, including doing research on animal capabilities (e.g. see Marc Hauser's Penguin book Wild Minds), studying infants and children, and looking at the effects of brain damage --- instead of always focusing only on normal adult humans, as philosophers tend to do.
In particular we need to understand the information-processing architectures used by different sorts of minds, including investigating the different implications of different architectures. Some architectures will use particular kinds of representations and mechanisms, for instance logical representations and logical reasoning mechanisms. Others will use quite different mechanisms, such as neural nets loosely modeled on brain mechanisms. The most sophisticated architectures seem to require a variety of different sorts of mechanisms working in parallel on different tasks, e.g. managing your posture, recognizing objects in your environment, making plans, learning, and so on. Some varieties of emotions seem to be related to alarm mechanisms that detect a need for rapid reorganisation of processing in order to avoid imminent harm or in order to grab some opportunity.
So we can expect AI architectures of the future to be hybrid systems combining a wide variety of different sorts of capabilities, using a wide variety of forms of information processing.
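The idea of several mechanisms working in parallel on different tasks, with an alarm mechanism able to force a rapid reorganisation of processing, can be illustrated with a small sketch. This is purely hypothetical code of my own devising (the module names, the `HybridAgent` class, and the `looming_object` percept are all invented for illustration), not an implementation of any particular architecture discussed above:

```python
# A minimal, hypothetical sketch of a hybrid architecture: several
# sub-mechanisms take turns processing percepts (simulating concurrent
# subtasks), while a fast, coarse alarm mechanism can pre-empt them all
# to deal with imminent harm or opportunity.

class Module:
    """A slow, specialised sub-mechanism (e.g. posture, recognition, planning)."""
    def __init__(self, name):
        self.name = name
        self.history = []

    def step(self, percepts):
        # Record what this module processed on this cycle.
        self.history.append((self.name, sorted(percepts)))


class AlarmMechanism:
    """Fast, pattern-based check that can trigger global reorganisation."""
    def triggered(self, percepts):
        return "looming_object" in percepts


class HybridAgent:
    def __init__(self):
        self.modules = [Module("posture"), Module("recognition"), Module("planning")]
        self.alarm = AlarmMechanism()
        self.mode = "normal"

    def tick(self, percepts):
        # The alarm is checked first on every cycle, before slower processing.
        if self.alarm.triggered(percepts):
            self.mode = "emergency"      # rapid reorganisation of processing
            return "evasive_action"
        self.mode = "normal"
        for m in self.modules:           # interleaved "parallel" subtasks
            m.step(percepts)
        return "routine"


agent = HybridAgent()
print(agent.tick({"floor_texture"}))     # routine processing continues
print(agent.tick({"looming_object"}))    # alarm pre-empts the modules
```

The design choice worth noting is that the alarm check is cheap and runs before the slower deliberative modules on every cycle, which is one way of modelling how some emotions could interrupt ongoing processing.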
In this framework we can take a host of familiar concepts, such as ``consciousness'', ``decision'', ``emotion'', ``intelligence'', ``learning'', ``mind'', and see how different architectures support different versions of those concepts. Thus the architecture of an insect may support very simple forms of consciousness, emotion and learning, whereas the architecture of a chimpanzee allows far more sophisticated versions.
This exploration enables us to add precision and clarity to many old, confused, yet widely used concepts. This may or may not lead to the development of very intelligent human-like machines, but whether it does or not, it can help us to gain a much deeper understanding of ourselves, and provide practical benefits, for example through more successful methods of teaching and better kinds of therapy.
http://www.cs.bham.ac.uk/research/cogaff/ibm02/slides.bbc02.ps
Beware: those are DRAFT notes, likely to be updated from time to time as the theories develop.
The slide presentations listed above are provided in both PostScript and PDF formats. Browsers for both are freely available on the internet. See the information in this file: http://www.cs.bham.ac.uk/~axs/browsers.html
The diagrams in the slides were all produced using the excellent (small, fast, versatile, portable, reliable, and free) tgif package, available for Linux and Unix systems from here: http://bourbon.cs.umd.edu:8001/tgif/
An ``elementary'' overview of some of the main themes in Artificial Intelligence research and development, including pointers to additional sources of information. (Originally written for school careers advisers.)
A large and very useful collection of online information about AI, including news items, films, publications, software, and much more.
An incomplete but useful glossary of AI terms that may be helpful for newcomers to the field.
The Birmingham Cognition and Affect project paper repository. Papers here develop in more detail some of the themes mentioned above.
A collection of AI software, documentation, libraries and tools, all available free of charge, all with ``open sources'' for the software. These can be used for teaching, research or development in AI. (The core package, originally developed at the University of Sussex, used to be an expensive commercial product, but has been available free of charge since 1999.)
An online introduction to a powerful AI language, suitable for experienced programmers who are familiar with non-AI languages, such as C, C++, Java, Pascal, Perl, etc. and are wondering whether to use an AI language.
A collection of online slide presentations on a range of topics related to AI, philosophy, cognitive science, psychology, and biology.
A free online copy of the book The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind, first published in 1978. The online version has some new notes added.
Marvin Minsky is one of the pioneers of Artificial Intelligence. His web site includes a draft book entitled ``The Emotion Machine'', readable online, in addition to many of his other papers on AI.
This is the web site of John McCarthy, another of the pioneers, who has made many of his papers available online.
The home page for The Society for the Study of Artificial Intelligence and Simulation of Behaviour. This is the oldest AI society (founded around 1969) and still has annual conferences in the UK, a newsletter, journal, email announcement service, etc. Membership fees, especially for students, are very low. See the website for details.