(Don't have time to respond to social media "friend" requests)
Now including the Meta-Morphogenesis Project
There are now far more papers in the Cogaff directory than were originally envisaged when this scheme started. When I find time I shall try to organise grouping by topic, though that will not be easy because of the complex overlaps between topics.
Note: It seems that many of the items on this web site are also accessible via
US OSTI eprints web site:
artificial intelligence toolkit
Kantian Humean causation
what is AI?
"what is information?"
education programming AI
matter energy information
"design space" "niche space"
evolution altricial precocial
emotions "cluster concepts"
functions of vision
There are also links in the table below to projects that overlap with CogAff, including, since 2004, collaborative projects in Cognitive Robotics (CoSy 2004-2008, CogX 2008-2012). Now too many to list here!
Apology: Despite warnings from academic staff, the central university authorities decided in 2010 to reorganise campus web pages yet again, without taking action to ensure that references to old links are trapped and redirected.
As a result there are probably several broken links on this web site -- and on many other sites on this campus. Identifying and fixing them all will require massive effort for which resources are not available.
(Gratefully acknowledging many collaborators, especially
Margaret Boden, Luc Beaudoin, Ian Wright, Riccardo Poli,
Brian Logan, Steve Allen, Catriona Kennedy, Nick Hawes,
Jeremy Wyatt, Jeremy Baxter, Matthias Scheutz, Dean Petters,
Jackie Chappell, Marek Kopicki, Dave Gurnell, Manuela Viezzer,
Verónica Esther Arriola Ríos, Michael Zillich, ....)
Many researchers propose a theory of THE right architecture for a system with some kind of intelligence (e.g. human intelligence).
Although this may be an appropriate way to address a specific technical problem, it is seriously misguided, if done as a contribution to our scientific or philosophical understanding, unless the specific architecture is related to a theory about THE SPACE of POSSIBLE architectures for various kinds of intelligent system.
Such a theory would need to include a survey of the possible types of components, the different ways they can be combined, the different functions that might be present, the different types of information that might be acquired and used, the different ways such information could be represented and processed, the different ways the architecture could come into existence (e.g. built fully formed, or self-assembling), and how various changes in the design affect changes in functionality.
Such a theory also needs to be related to a study of possible sets of requirements for architectures (and for their components). If we don't consider architectures in relation to what they are used for or needed for (in particular types of context) then we have no way of explaining why they should have the features they have or what the trade-offs between alternative design options are.
These investigations should not be restricted to physical architectures. Since the mid-twentieth century human engineers have increasingly made use of virtual machine architectures, in which multiple virtual machine components interact with one another and with physical components. It seems that biological evolution "discovered" the need for virtual machinery, especially self-modifying and self-monitoring virtual machinery, long before human engineers did. (This and other "discoveries" by natural selection, and its products, are investigated in the Meta-Morphogenesis project.)
Topics investigated include:
Ignoring the variety in these spaces, and instead proposing and studying just ONE architecture (e.g. for an emotional machine) is like doing physics by finding out how things work around the leaning tower of Pisa, and ignoring all other physical environments; or like trying to do biology by studying just one species; or like trying to study chemistry by proposing one complex molecule for investigation.
That's why, unlike other research groups, most of which propose an architecture, argue for its engineering advantages or its evidential support, then build a tool to build models using that architecture, we have tried, instead, to build tools to explore alternative architectures so that we can search the space of designs, including trying to find out which types evolved and why, instead of simply promoting one design. Our SimAgent toolkit (sometimes called "sim_agent") was designed to support exploration of that space, unlike toolkits that are committed to a particular type of architecture. Some videos of toy demos mostly produced in the 1990s can be found here.
When the work began in 1991 it was a continuation of work begun in the 1960s in the School of Social Sciences at The University of Sussex, and later continued in the School of Cognitive and Computing Sciences (COGS). (That, in turn, was a continuation of my 1962 Oxford DPhil Thesis attempting to defend Kant's philosophy of mathematics.)
Some of the earliest work was reported in this 1978 book (now out of print, but freely available online):
The Computer Revolution in Philosophy: Philosophy, science and models of mind
In addition, an "Afterthoughts" document, begun in August 2015 and continuing to grow, is freely available here (also in PDF).
Chapter 7 on "Intuition and analogical reasoning", including reasoning with diagrams, and Chapter 8, "On Learning about Numbers", were especially closely related to the 1962 DPhil work on the nature of mathematical knowledge.
The first PhD thesis completed in the project was by Luc Beaudoin (funded by major scholarships from Quebec's FCAR, The Association of Commonwealth Universities (UK), and the Natural Sciences and Engineering Research Council (NSERC) of Canada). The thesis is online here, along with others. Among other things, it offered a new, unusually detailed analysis of aspects of motives that can change over time, and introduced the important distinction between deliberative mechanisms (which can represent, explore, hypothesise, plan and select possible situations, processes and future actions) and meta-management mechanisms, which can monitor, and to some extent control, internal processes (including deliberative processes). Some of the ideas are explained in more detail in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/fully-deliberative.html
Similar work elsewhere uses labels such as "reflective", "metacognitive", "executive functions", and "self-regulation", though often with different features emphasised. There is still no generally agreed ontology for describing architectures and their functions, unfortunately -- leading to much reinvention of wheels, often poorly designed wheels. (BICA, below, is an attempt to remedy this.)
Later extensions arose from funding by DERA which enabled Brian Logan to work here for several years, followed by a project funded by The Leverhulme Trust on Evolvable virtual information processing architectures for human-like minds, originally set up with Brian Logan, which then paid for Matthias Scheutz to work here for 13 months (2000-2001), followed by Ron Chrisley (2001-2003).
In 2004, Jackie Chappell arrived in the School of Biosciences (having previously worked in Oxford), and we began work on extending biologists' ideas about "Altricial" and "Precocial" species to robots and investigating nature-nurture tradeoffs in animals.
Our theoretical research on animal cognition then expanded, e.g. to include work on varieties of causation (Humean and Kantian) in animals and machines. From 2008 this was further expanded to include studies of cognition in orangutans, in collaboration with Susannah Thorpe and their PhD students, also in the School of Biosciences.
CogAff is really a loose, informal, collection of sub-projects, most of them unfunded at any time, including research on architectures, forms of representation and mechanisms occurring in humans, other animals, and human-like machines.
Analysing such architectures, and the mental states and processes they can support, allows us to investigate, for instance, whether consciousness or the ability to have emotional states is an accident of animal evolution or a direct evolutionary consequence of biological requirements or a side-effect of things meeting other requirements and constraints.
One of the outcomes of this research was development of the CogAff schema introduced above and (explained briefly in this poster). The schema (especially when elaborated beyond those simple diagrammatic specifications) is a high level abstraction that can be instantiated in many different special case architectures. This provides a way of characterising a wide range of types of possible architecture in natural and artificial systems (in contrast with most researchers on cognitive architectures who promote a particular architecture).
A special case (or subclass) of CogAff is the H-CogAff (Human-Cogaff) architecture, described below, which is still currently too difficult to implement, though various subsets have been implemented by researchers here and elsewhere. Some "toy" versions are used in demonstration videos of student programs.
Requirements for architectural theories: The CogAff (generative) Schema
1.a perception/sensory processes, constantly acquiring information from the environment and perhaps some internal states
2.a motor/action/effector processes constantly changing the environment and perhaps some internal states
3.a central, more slowly changing, processes
1.b Evolutionarily very old reactive processes, constantly driven by what is sensed internally and externally
2.b Newer deliberative processes able to represent what does not exist but might, e.g. future actions, unseen situations, past causes.
3.b Specialised meta-management/reflective processes capable of describing information-processing states and processes in oneself and also in others.
Debates about which came first, self understanding or other understanding
are futile: they almost certainly grew together in fits and starts. Some
further details concerning these distinctions are available here:
-- The first three divisions above (1.a--3.a) correspond to the vertical divisions in the schema.
-- The second three divisions above (1.b--3.b) correspond to the horizontal divisions in the schema: evolutionarily oldest functions in the bottom layer.
This is an over-simplification (a) because each layer should be more finely divided into components performing different functions, (b) because the columns and layers should overlap more (as in the diagram below), and (c) because there are mechanisms that straddle, or link components in, the various boxes in the diagram, including the "alarm" mechanisms that can play a role in emotions and other affective states. Slightly revised versions of the diagram are presented below.
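The grid structure described above can be illustrated with a small sketch. This is a hypothetical illustration in Python, not the SimAgent toolkit (which is written in Pop-11); all names in it are invented for the example. It treats the schema as a generative device: a 3x3 grid of component slots, where any concrete architecture instantiates some subset of the boxes and wires them together with information-flow links.

```python
# Illustrative sketch only (not the SimAgent toolkit): the CogAff schema
# as a 3x3 grid of slots that a concrete architecture may instantiate.

COLUMNS = ["perception", "central", "action"]
LAYERS = ["reactive", "deliberative", "meta-management"]  # oldest first

class Architecture:
    """One instance of the schema: a subset of the 9 boxes, plus links."""
    def __init__(self, name):
        self.name = name
        self.boxes = set()   # occupied (column, layer) slots
        self.links = set()   # directed information-flow links between slots

    def add_box(self, column, layer):
        assert column in COLUMNS and layer in LAYERS
        self.boxes.add((column, layer))

    def connect(self, src, dst):
        # Only instantiated boxes can be connected.
        assert src in self.boxes and dst in self.boxes
        self.links.add((src, dst))

# A purely reactive, insect-like special case uses only the bottom layer:
insect = Architecture("insect-like")
for col in COLUMNS:
    insect.add_box(col, "reactive")
insect.connect(("perception", "reactive"), ("central", "reactive"))
insect.connect(("central", "reactive"), ("action", "reactive"))
```

An H-CogAff-like instance would instead occupy all nine boxes, with many more links, including links that skip layers. The point of the sketch is only that the schema constrains *where* components can go, not *which* architecture must be built.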
NOTE - A student video on the CogAff schema
Jonathan Metallo and Daniel Lohmer gave a short and entertaining tutorial video presentation on some of the architectural ideas summarised below. The video is available here:
Jonny M and Dani L talk about AI architecture and Sloman. This appears to be an assignment for a course on "Perspectives on Artificial Intelligence, Robotics, and Humanity", in The Department of Computer Science and Engineering at the University of Notre Dame.
A more accurate but more obscure version of the schema
(inserted 21 Mar 2013)
The previous diagram does not make it clear that perceptual and action/motor mechanisms overlap. E.g., as J.J. Gibson pointed out in The Senses Considered as Perceptual Systems (1966), mechanisms of vision depend on the use of saccades, head movements, and whole body movements, and haptic sensing depends on controlled movements of hands, tongue, lips, etc.
The following diagram is an attempt to remedy this deficiency in the previous diagram (and other CogAff diagrams).
Note: the above diagram does not show the "alarm" processing routes and mechanisms
(With thanks to Dean Petters, who produced a first draft of the above diagram.)
Some of the missing structural and functional relations in the above diagram are included in the next diagram, which shows the "alarm" processing routes and mechanisms described in other CogAff papers (allowing asynchronous interruption or modulation of ongoing processes, e.g. to meet sudden threats, opportunities, etc.)
(Also with help from Dean Petters.)
Compare the BICA (Biologically Inspired Cognitive Architecture) web site:
There are additional complexities not shown in the above diagrams, including the architectural decomposition at each layer, the complex sub-architectures straddling layers, e.g. for several different kinds of long term memory, for vision, for behaviour initiation and motor control, for language use, for learning, for many kinds of motivation, for personality formation, for social and sexual interaction, and many more.
NB A Schema for architectures is not an architecture.
It is more like a grammar. Instances of the schema are like sentences in the grammar. However the CogAff schema is a grammar whose 'sentences' are not strings but quite complex networks of concurrently active mechanisms with different functions, as discussed in this paper on Virtual Machine Functionalism (VMF).
I have begun to discuss ways in which these ideas could shed light on autism and other
developmental abnormalities, in
A special subset of the CogAff schema: Architectures with Alarms
Alarm mechanisms, states and processes (added 6 Nov 2013)
Many organisms seem to have, and many robots and other intelligent machines will need, an "alarm" mechanism, which receives input from many of the internal and external sensors and is capable of recognising patterns that require very rapid global reorganisation of ongoing processes, for example switching into states like fleeing, attacking, freezing, or attending closely to what may or may not be a sign of serious danger or some opportunity.
This kind of mechanism seems to be very old in animal evolution and can be observed in a woodlouse, for example, when it reacts to being touched by rolling itself up in a ball, or a fly which reacts to the rapid approach of a fly-swat by stopping whatever it is doing (e.g. feeding) and switching to an escape action.
The kinds of effects of alarm mechanisms will obviously depend on various factors such as
It can be difficult to get 'professional' emotion researchers to take such theories seriously, since they are educated to use observation, measurements, questionnaires, statistical packages, but not to think about how to design, test and debug complex working information-processing systems, which is what humans and other animals are. See Margaret A. Boden, Mind As Machine: A history of Cognitive Science (Vols 1--2) Oxford University Press, 2006,
A symptom of that educational inadequacy is attempting to find a few "dimensions" along which features of emotions can be classified, and then taking the resulting 2 or 3 (or sometimes higher) dimensional grid as providing a principled classification of emotions.
But complex information processing systems can get into states that vary in far more complex ways than such simple-minded tables or graphs can accommodate: as any system designer knows.
A very crude depiction of an insect-like information processing architecture with alarms could be something like this:
An insect-like special case of the CogAff schema is purely reactive -- none of the deliberative or meta-management functions are provided, though reactive mechanisms may be layered, as indicated crudely in the diagram.
[Modified: 7 Dec 2012] A purely reactive system that always merely reacts to particular stimuli could be modified to include "proto-deliberative" mechanisms (unfortunately labelled "deliberative" by Michael Arbib at a conference in 2002). In a proto-deliberative system, reactive mechanisms can simultaneously trigger two incompatible response-tendencies. Since in general a blend of two incompatible responses is worse than either response, it can be useful to have mechanisms for choosing one of them, e.g. using a comparison of strength, or some other mechanism such as always letting escape reactions win over feeding reactions, or using a self-organising neural net capable of achieving a variety of potentially-stable states, then adopting one.
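The selection step just described can be made concrete with a small sketch. This is purely illustrative (the response names, priorities and strengths are invented): two reactive mechanisms simultaneously produce incompatible response tendencies, and rather than blending them, the agent picks one, here by the fixed-priority rule mentioned above (escape reactions always win over feeding reactions), with activation strength breaking ties:

```python
# Illustrative sketch of proto-deliberation: when reactive mechanisms
# trigger incompatible response tendencies, select one rather than blend.
# Names, priorities and strengths are invented for this example.

# Fixed priority ordering: escape always beats feeding; ties within a
# priority level fall back to comparison of activation strength.
PRIORITY = {"escape": 2, "feed": 1, "rest": 0}

def select_response(tendencies):
    """tendencies: list of (response_name, activation_strength) pairs
    produced concurrently by reactive mechanisms."""
    return max(tendencies, key=lambda t: (PRIORITY[t[0]], t[1]))[0]

# Feeding is strongly activated, but even a weak escape tendency wins:
print(select_response([("feed", 0.9), ("escape", 0.2)]))  # escape
```

Note that nothing here represents non-existent possibilities: the mechanism merely arbitrates between responses already triggered, which is why it is only *proto*-deliberative.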
For more on different intermediate cases see this discussion of "Fully Deliberative" systems.
In such a (relatively) simple architecture, alarm mechanisms can trigger simple emotions (e.g. in the woodlouse that rapidly curls up in a ball if touched while walking).
Another special subset of the CogAff schema: Omega Architectures
Architectures of this general type where the flow of information and control can be thought of as roughly like the Greek capital letter Omega Ω (not necessarily presented in this sort of diagram) are often re-invented.
The assumption is that perception consists of detection of low level physical signals that are processed at increasing levels of abstraction until the processing generates new goals or preferences, at which point some selection mechanism (e.g. contention scheduling) chooses the best motive or action, and then the signals propagate downwards to the motor subsystems, which then produce behaviour.
This "peephole" (or "peep-hole") view of perception and action contrasts with the "multi-window" view of both perception and action as involving concurrent processing at different levels of abstraction, partly under the control of the environment and partly under the control of various layers of central information processing, operating in parallel.
So some of the more abstract perceptual or motor processes can be thought of as cognitive, insofar as they make use of forms of representation and ontologies shared with more central processing mechanisms, and also as peripheral (e.g. perception or action processes), because the information structures used are maintained in registration with perceptual input signals (or the optic array, in the case of visual input) or with motor signal arrays, and because the processing at those more abstract levels is closely, bi-directionally coupled with the lower level perceptual or motor signals.
These extra layers of perceptual or motor processing are fairly obviously needed for language production or perception because it is now well understood that linguistic expressions have structures at different levels of abstraction that all need specialised processing. Our claim is that that is a special case of a far more general phenomenon (as illustrated in the POPEYE program described in Chapter 9 of The Computer Revolution in Philosophy, 1978).
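The contrast between the two views of perception can be sketched as follows. This is a hypothetical toy (the stage functions are invented stand-ins): in the "peephole" view only the most abstract result of a bottom-up pipeline is exposed to central processing, whereas in the "multi-window" view all levels of description remain concurrently available, so different central layers can consume different levels of abstraction:

```python
# Illustrative contrast between "peephole" and "multi-window" perception.
# The processing stages below are toy stand-ins, invented for this sketch.

def detect_edges(signal):        return [s for s in signal if s > 0.5]
def group_into_objects(edges):   return [edges]
def interpret_scene(objects):    return f"{len(objects)} object(s)"

def peephole_perceive(signal):
    # Bottom-up pipeline: only the top-level interpretation is exposed.
    edges = detect_edges(signal)
    objects = group_into_objects(edges)
    return interpret_scene(objects)

def multi_window_perceive(signal):
    edges = detect_edges(signal)
    objects = group_into_objects(edges)
    scene = interpret_scene(objects)
    # All levels stay available, in registration with the input, so
    # different central layers can each consume the level they need:
    return {"edges": edges, "objects": objects, "scene": scene}

print(multi_window_perceive([0.2, 0.7, 0.9])["scene"])  # 1 object(s)
```

A real multi-window system would also run the levels concurrently and let central layers influence lower-level processing top-down; the dictionary of levels here only illustrates what is exposed, not how it is computed.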
A poster summarising some of the main theoretical ideas is here (PDF).
Some dimensions in which architectures can vary were presented at the Designing a Mind Symposium in 2000 in "Models of models of mind." However, that paper is inadequate in several ways, e.g. because it does not clearly distinguish the CogAff schema from the H-CogAff special case, presented briefly below.
It has other flaws that need to be remedied, in part by extending the analysis of ways in which architectures can differ, in part inspired by the diversity produced by biological evolution, and in part by inspiring deeper analyses of that diversity, as proposed at the AIIB symposium in 2010.
This Schema, as explained above, classifies requirements for the major components of an architecture into nine broad categories on a 3x3 grid, which can be connected together in different ways (depending on how various kinds of information -- factual information, queries, control information, etc. -- flow between subsystems).
This is just a first crude sub-division, requiring more detailed analysis and further decomposition of cases (as illustrated here). However it does cover many different types of architecture, natural and artificial, depicted rather abstractly above.
Architectures vary according to what mechanisms they have in the boxes, and how they are connected. Also more complex architectures may have important subdivisions and possibly may require functions that don't fit neatly into any of the boxes. (For example, it is arguable that the mechanisms concerned with production and understanding of language, and use of language for thinking and reasoning, are scattered over many different subsystems.)
The generic CogAff schema includes an important sub-class of architectures that include mechanisms capable of producing what might be called "emotional" or "alarm" reactions, as shown in the "insect-like" special case, above.
A much more complex special case is the H-CogAff architecture, which we suggest provides a very high level "bird's-eye view" of the architecture of a typical (adult) human mind, depicted crudely here (as a first approximation):
It includes concurrently active sub-architectures that evolved at different times in our evolutionary history, in addition to sub-architectures that grow themselves during individual development (as discussed in this paper by Chappell and Sloman.)
A paper summarising the ideas behind the CogAff schema and the H-CogAff architecture is this 2003 progress report on the Cogaff project.
A paper published in 1996 (published with commentaries) explained how emotional phenomena like long-lasting grief could be accommodated within this framework:
I.P. Wright, A. Sloman, L.P. Beaudoin,
Towards a Design-Based Analysis of Emotional Episodes, Philosophy Psychiatry and Psychology, 3, 2, pp. 101--126, 1996,
Further details are provided in other papers, including for example this polemical piece:
Some Requirements for Human-like Robots:
Why the recent over-emphasis on embodiment has held up progress (2008).
Now published in Creating Brain-like Intelligence,
Eds. B. Sendhoff, E. Koerner, O. Sporns, H. Ritter and K. Doya,
Springer-Verlag, Berlin, 2009.
An incomplete survey of types of architecture that include various types of "deliberative layer" can be found in "Requirements for a Fully Deliberative Architecture".
Some designs described as "deliberative" by other authors include only what we call "proto-deliberative" mechanisms.
Most of the hypothesised architectures are still too difficult to implement though some of the simpler ones have been implemented using the SimAgent toolkit, and demonstrated here.
More complex examples were developed within the EU-funded CoSy robot project (2004-2008), and are being extended in its sequel the CogX robot project (2008-2012).
Tutorial presentations of how ideas like "qualia" and some of the vexing problems of consciousness ("the explanatory gap") can be understood in this framework are presented here.
In 1998 Gerd Ruebenstrunk presented some of our ideas for German readers in his diploma thesis in psychology on "Emotional Computers" (Bielefeld University, 1998). His 2004 presentation on emotions, at a workshop on "Affective Systems" (in English) is here.
Some of the ideas presented here, including what has been referred to as the use of multi-window perception and action seem to be closely related to some of the architectural ideas in this book (though we have some serious disagreements about the notion of 'self' and about consciousness):
Arnold Trehub, The Cognitive Brain, MIT Press, Cambridge, MA, 1991, http://www.people.umass.edu/trehub/
To be added.
- Aaron Sloman, The mind as a control system, in Philosophy and the Cognitive Sciences, Eds. C. Hookway and D. Peterson, CUP 1993, pp. 69--110,
- A Multi-picture Challenge for Theories of Vision
NEWS: AUDIO BROADCAST ONLINE:
Audio discussion broadcast on Deutschlandradio on 'Emotional Computers' online
(mostly in German), chaired by Maximilian Schönherr.
The audio link is on the right, under 'AUDIO ON DEMAND'. Click on 'Emotionale Agenten'.
Audio interview on grand challenge (December 2004)
Here are some links (BICA and related sites):
Some documents are in html, latex or plain ascii text. Most of the postscript files are duplicated in PDF format.
PDF versions of files available only in postscript can be provided on request. Email A.Sloman@cs.bham.ac.uk requesting conversion of a paper you cannot read.
Browsers for these formats are freely available.
NOTE (16 Jun 1998): Files which were previously in form xxx.Z are now in the form xxx.gz
The toolkit is mostly implemented in Pop-11, which is part of Poplog, which used to be an expensive commercial product but is now available free of charge with full system sources.
Information about the symposium, including abstracts and full papers, can be found here: http://www.cs.bham.ac.uk/research/projects/cogaff/dam00
A book of papers related to the workshop, edited by Darryl Davis, was published in 2004: Visions of Mind: Architectures for Cognition and Affect.
(I won't publish with IGI because I object to their copyright requirements.)
Please read that file information BEFORE writing to individuals asking for advice or information.
Please note: I do not deal with student admissions.
Being retired, I am also no longer able to supervise PhD students.
See also the School of Computer Science
This file is maintained by