School of Computer Science, The University of Birmingham


Maintained by
Aaron Sloman
(Don't have time to respond to social media "friend" requests)

Now including the Meta-Morphogenesis Project
This File is available as

MAJOR UPDATE: 3 Aug 2016
This web site has been split between:
--- Contents lists (separate page)
--- CogAff Project Overview (this page).

This file updated:
6 Sep 2014; ... 14 Dec 2015; ... 3 Aug 2016



Note: It seems that many of the items on this web site are also accessible via the
US OSTI eprints web site:

A sample of related materials on this web site

The rest of this web page gives a high-level overview of the CogAff project and related projects. It includes a roughly chronologically organised collection of papers since the 1960s, grouped by year of addition to this web site.

There are also links in the table below to projects that overlap with CogAff, including, since 2004, collaborative projects in Cognitive Robotics (CoSy 2004-2008, CogX 2008-2012). Now too many to list here!


Apology: Despite warnings from academic staff, the central university authorities decided in 2010 to reorganise campus web pages yet again, without taking action to ensure that references to old links are trapped and redirected.

As a result there are probably several broken links on this web site -- and on many other sites on this campus. Identifying and fixing them all will require massive effort for which resources are not available.

Origins and Overview
of The Cognition and Affect (CogAff) Project

(Gratefully acknowledging many collaborators, especially
Margaret Boden, Luc Beaudoin, Ian Wright, Riccardo Poli,
Brian Logan, Steve Allen, Catriona Kennedy, Nick Hawes,
Jeremy Wyatt, Jeremy Baxter, Matthias Scheutz, Dean Petters,
Jackie Chappell, Marek Kopicki, Dave Gurnell, Manuela Viezzer,
Verónica Esther Arriola Ríos, Michael Zillich, ....)

Key Ideas

Many researchers propose a theory of THE right architecture for a system with some kind of intelligence (e.g. human intelligence).

Although this may be an appropriate way to address a specific technical problem, it is seriously misguided, if done as a contribution to our scientific or philosophical understanding, unless the specific architecture is related to a theory about THE SPACE of POSSIBLE architectures for various kinds of intelligent system.

Such a theory would need to include a survey of the possible types of components, the different ways they can be combined, the different functions that might be present, the different types of information that might be acquired and used, the different ways such information could be represented and processed, the different ways the architecture could come into existence (e.g. built fully formed, or self-assembling), and how various changes in the design affect changes in functionality.

Such a theory also needs to be related to a study of possible sets of requirements for architectures (and for their components). If we don't consider architectures in relation to what they are used for or needed for (in particular types of context) then we have no way of explaining why they should have the features they have or what the trade-offs between alternative design options are.

These investigations should not be restricted to physical architectures. Since the mid-twentieth century human engineers have increasingly used virtual machine architectures, in which multiple virtual machine components interact with one another and with physical components. It seems that biological evolution "discovered" the need for virtual machinery, especially self-modifying and self-monitoring virtual machinery, long before human engineers did. This and other "discoveries" by natural selection, and its products, are investigated in the Meta-Morphogenesis project.

Topics investigated include:

Ignoring the variety in these spaces, and instead proposing and studying just ONE architecture (e.g. for an emotional machine) is like doing physics by finding out how things work around the leaning tower of Pisa, and ignoring all other physical environments; or like trying to do biology by studying just one species; or like trying to study chemistry by proposing one complex molecule for investigation.

That's why, unlike other research groups, most of which propose an architecture, argue for its engineering advantages or its evidential support, then build a tool to build models using that architecture, we have tried, instead, to build tools to explore alternative architectures so that we can search the space of designs, including trying to find out which types evolved and why, instead of simply promoting one design. Our SimAgent toolkit (sometimes called "sim_agent") was designed to support exploration of that space, unlike toolkits that are committed to a particular type of architecture. Some videos of toy demos mostly produced in the 1990s can be found here.

Start of the CogAff (Cognition and Affect) Project
Birmingham, 1991.

The project was begun in 1991 by Aaron Sloman and Glyn Humphreys (then head of Psychology in Birmingham). Humphreys later moved to Oxford. (He died suddenly in 2016 and is remembered here.)

When the work began in 1991 it was a continuation of work begun in the 1960s in the School of Social Sciences at The University of Sussex, and later continued in the School of Cognitive and Computing Sciences (COGS). (That, in turn, was a continuation of my 1962 Oxford DPhil Thesis attempting to defend Kant's philosophy of mathematics.)

Some of the earliest work was reported in this 1978 book (now out of print, but available online):
The Computer Revolution in Philosophy: Philosophy, science and models of mind

The book was originally published in 1978. Thanks to the efforts of Manuela Viezzer and Sammy Snow it was scanned in 2001 and the chapters were made available in html format, with some notes added (since 2002). Later PDF versions of the chapters were derived from the html and copies of the whole book in PDF format made freely available here, and various other places, including the ASSC repository -- since anyone could freely copy it. In 2015, a new revised edition was produced, with all the chapters in a single internally cross-referenced HTML file, from which a PDF version was derived:
The latest version is also freely available, with a "Creative Commons" licence.

In addition, an "Afterthoughts" document was begun in August 2015, and will continue to grow, freely available here:
(also in PDF).

Chapter 7 on "Intuition and analogical reasoning", including reasoning with diagrams, and Chapter 8 "On Learning about Numbers" were especially closely related to the 1962 DPhil work on the nature of mathematical knowledge.

After AS moved to Birmingham, the work was partly funded by a grant to Sloman and Humphreys, from the UK Joint Council Initiative (JCI), which paid for a workstation and a studentship. An additional studentship was funded by the Renaissance Trust (Gerry Martin).

The first PhD thesis completed in the project was by Luc Beaudoin (funded by major scholarships from Quebec's FCAR, The Association of Commonwealth Universities (UK), and the Natural Sciences and Engineering Research Council (NSERC) of Canada). The thesis is online here, along with others. Among other things, it offered a new, unusually detailed analysis of aspects of motives that can change over time, and introduced the important distinction between deliberative mechanisms (which can represent, explore, hypothesise, plan and select possible situations, processes and future actions) and meta-management mechanisms, which can monitor, and to some extent control, internal processes (including deliberative processes). Some of the ideas are explained in more detail in

Later PhD students who built on and extended the ideas are listed here (with online theses). In particular, a paper summarising some of the key ideas in the context of long-term grief, including phenomena that refute many theories of emotion/affect, was published (by invitation) in the journal Philosophy, Psychiatry, and Psychology in 1996.

Similar work elsewhere on architectures for intelligent agents uses labels such as "reflective", "metacognitive", "executive functions", and "self-regulation", though often with different features emphasised. There is still no generally agreed ontology for describing architectures and their functions, unfortunately -- leading to much reinvention of wheels, often poorly designed wheels. (The BICA society (mentioned below) is an attempt to remedy this.)

Later extensions arose from funding by DERA which enabled Brian Logan to work here for several years, followed by a project funded by The Leverhulme Trust on Evolvable virtual information processing architectures for human-like minds, originally set up with Brian Logan, which then paid for Matthias Scheutz to work here for 13 months (2000-2001), followed by Ron Chrisley (2001-2003).

A progress report on the CogAff project was written in 2003 (separate document).

From 2004 related work was funded by the EU, in two projects on cognitive robotics CoSy and CogX.

For a while some of the work was done as part of the Intelligent Robotics research laboratory (led by Jeremy Wyatt) at Birmingham. However, the externally funded projects, partly under pressure from collaborators, had to focus on more specific targets and techniques, so the more theoretical work reported here, and in the Meta-Morphogenesis project spawned around 2012, continued mostly independently, though with potential implications for future ambitious robotic projects.

Links with Biology
In 2004, Jackie Chappell arrived in the School of Biosciences (having previously worked in Oxford, in the Behavioural Ecology department led by Prof Alex Kacelnik). Our ways of thinking about intelligence in animals had significant overlap so we worked together on extending biologists' ideas about "Altricial" and "Precocial" species to robots and investigating nature-nurture tradeoffs in animals.

Our theoretical research on animal cognition then expanded, e.g. to include work on varieties of understanding of causation (Humean and Kantian) in animals and machines. From 2008 this was further expanded to include studies of cognition in orangutans, in collaboration with Susannah Thorpe and their PhD students, also in the School of Biosciences.

CogAff is really a loose, informal, collection of sub-projects, most of them unfunded at any time, including research on architectures, forms of representation and mechanisms occurring in humans, other animals, and human-like machines.

Some additional topics covered can be found in this document compiled in 2009 and this list of online discussion papers (frequently extended).

Analysing such architectures, and the mental states and processes they can support, allows us to investigate, for instance, whether consciousness or the ability to have emotional states is an accident of animal evolution or a direct evolutionary consequence of biological requirements or a side-effect of things meeting other requirements and constraints.

One of the outcomes of this research was the development of the CogAff schema introduced above (and explained briefly in this poster). The schema (especially when elaborated beyond those simple diagrammatic specifications) is a high-level abstraction that can be instantiated in many different special-case architectures. This provides a way of characterising a wide range of types of possible architecture in natural and artificial systems (in contrast with most researchers on cognitive architectures, who promote a particular architecture).

A special case (or subclass) of CogAff is the H-CogAff (Human-CogAff) architecture, described below, which is currently too difficult to implement, though various subsets have been implemented by researchers here and elsewhere. Some "toy" versions are used in demonstration videos of student programs running.

The collaboration with Jackie Chappell, and our presentation at IJCAI 2005, led to an invited paper for a new interdisciplinary journal in 2007 "Natural and artificial meta-configured altricial information-processing systems" International Journal of Unconventional Computing available here. Some of our key ideas, extending Waddington's ideas about the "Epigenetic Landscape", are summarised (perhaps too obscurely) in this diagram (based on a simpler version in the paper):


Requirements for architectural theories: The CogAff (generative Schema)

By superimposing the above two classifications, we get the following suggestive, but in some ways misleading, 3x3 grid of possible types of architectural components; misleading because not all the required mechanisms will fit into just one of the boxes.

The Cogaff Schema:


The CogAff schema shown above summarises this space of possible types of architectural components.

-- The first three divisions above (1.a--3.a) correspond to the vertical divisions in the schema.

-- The second three divisions above (1.b--3.b) correspond to the horizontal divisions in the schema: evolutionarily oldest functions in the bottom layer.

This is an over-simplification (a) because each layer should be more finely divided into components performing different functions, (b) because the columns and layers should overlap more (as in the diagram below), and (c) because there are mechanisms that straddle, or link components in, the various boxes in the diagram, including the "alarm" mechanisms that can play a role in emotions and other affective states. A revised (but still inadequate) version of the diagram is presented below.
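To make the schema concrete, here is a minimal sketch in Python of the 3x3 grid and of an architecture as an instance of it. All names (column and layer labels, the Architecture class) are our illustrative choices, not definitions from the CogAff papers:

```python
from itertools import product

# Three columns (information flow) crossed with three layers
# (evolutionarily oldest at the bottom) give the nine schema boxes.
COLUMNS = ("perception", "central", "action")
LAYERS = ("reactive", "deliberative", "meta-management")

GRID = list(product(LAYERS, COLUMNS))  # nine (layer, column) boxes

class Architecture:
    """An instance of the schema: a chosen subset of boxes plus links.

    This is a hypothetical representation; real instances would also
    need mechanisms straddling boxes, "alarm" routes, etc.
    """
    def __init__(self, boxes, links=()):
        assert set(boxes) <= set(GRID)
        self.boxes = set(boxes)
        self.links = set(links)  # directed (box, box) information routes

# A purely reactive, insect-like instance uses only the bottom layer.
insect = Architecture(
    boxes=[(layer, col) for layer, col in GRID if layer == "reactive"],
)
print(len(GRID), len(insect.boxes))  # → 9 3
```

The point of the sketch is that the schema itself is just the grid plus the space of possible connections; any particular design commits to a subset.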

NOTE - A student video on the CogAff schema

Jonathan Metallo and Daniel Lohmer gave a short and entertaining tutorial video presentation on some of the architectural ideas summarised below. The video is available here:
Jonny M and Dani L talk about AI architecture and Sloman. This appears to be an assignment for a course on "Perspectives on Artificial Intelligence, Robotics, and Humanity", in The Department of Computer Science and Engineering at the University of Notre Dame.

A more accurate but more obscure version of the schema
(inserted 21 Mar 2013)

The previous diagram does not make it clear that perceptual and action/motor mechanisms overlap. E.g. (as J.J. Gibson pointed out in The Senses Considered as Perceptual Systems (1966)), mechanisms of vision depend on the use of saccades, head movements, and whole-body movements, and haptic sensing depends on controlled movements of hands, tongue, lips, etc.

The following diagram is an attempt to remedy this deficiency in the previous diagram (and other CogAff diagrams).

New grid

Fig CogArch

Note: the above diagram does not show the "alarm" processing routes and mechanisms described below.
(With thanks to Dean Petters, who produced a first draft of the above diagram.)

Some of the missing structural and functional relations in the above diagram are included in the next diagram, which shows the "alarm" processing routes and mechanisms described in other CogAff papers (allowing asynchronous interruption or modulation of ongoing processes, e.g. to meet sudden threats, opportunities, etc.)

New grid

Fig NewCogArch

(Also with help from Dean Petters.)
Compare the BICA (Biologically Inspired Cognitive Architecture) web site:

There are additional complexities not shown in the above diagrams, including the architectural decomposition at each layer, the complex sub-architectures straddling layers, e.g. for several different kinds of long term memory, for vision, for behaviour initiation and motor control, for language use, for learning, for many kinds of motivation, for personality formation, for social and sexual interaction, and many more.

NB A Schema for architectures is not an architecture.

It is more like a grammar. Instances of the schema are like sentences in the grammar. However, the CogAff schema is a grammar whose 'sentences' are not strings but quite complex networks of concurrently active mechanisms with different functions, as discussed in this paper on Virtual Machine Functionalism (VMF).

I have begun to discuss ways in which these ideas could shed light on autism and other developmental abnormalities, in

A special subset of the CogAff schema: Architectures with Alarms

Fig Alarms CogAff + Alarms

Alarm mechanisms, states and processes (added 6 Nov 2013)
Many organisms seem to have, and many robots and other intelligent machines will need, an "alarm" mechanism, which receives input from many of the internal and external sensors and is capable of recognising patterns that require very rapid global reorganisation of ongoing processes, for example switching into states like fleeing, attacking, freezing, or attending closely to what may or may not be a sign of serious danger or some opportunity.

This kind of mechanism seems to be very old in animal evolution and can be observed in a woodlouse, for example, which reacts to being touched by rolling itself up into a ball, or in a fly, which reacts to the rapid approach of a fly-swat by stopping whatever it is doing (e.g. feeding) and switching to an escape action.
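The core idea can be sketched in a few lines of Python: a fast pattern matcher over sensor inputs that can preempt whatever the rest of the architecture is doing. The function and pattern names here are illustrative assumptions, not part of any published CogAff specification:

```python
# Hypothetical sketch of an "alarm" mechanism: scan sensor state for
# patterns requiring rapid global reorganisation of ongoing processes.
def alarm_check(sensors, patterns):
    """Return the global override triggered by the first matching pattern."""
    for pattern, override in patterns:
        if pattern(sensors):
            return override     # e.g. "curl_up", "flee", "freeze"
    return None                 # no alarm: normal processing continues

# Woodlouse-style example: being touched triggers curling up,
# regardless of whatever activity is currently in progress.
patterns = [(lambda s: s.get("touched", False), "curl_up")]

print(alarm_check({"touched": True, "walking": True}, patterns))  # → curl_up
print(alarm_check({"walking": True}, patterns))                   # → None
```

The essential design feature is that the alarm route bypasses slower processing: it reads from many sensors but produces only a small repertoire of global overrides.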

The kinds of effects of alarm mechanisms will obviously depend on various factors such as

Special cases of the CogAff schema

A very crude depiction of an insect-like information processing architecture with alarms could be something like this:

Fig Insect Insect Alarms?

An insect-like special case of the CogAff schema is purely reactive -- none of the deliberative or meta-management functions are provided, though reactive mechanisms may be layered, as indicated crudely in the diagram.

[Modified: 7 Dec 2012] A purely reactive system that always merely reacts to particular stimuli could be modified to include "proto-deliberative" mechanisms (unfortunately labelled "deliberative" by Michael Arbib at a conference in 2002). In a proto-deliberative system, reactive mechanisms can simultaneously trigger two incompatible response-tendencies. Since in general a blend of two incompatible responses is worse than either response, it can be useful to have mechanisms for choosing one of them, e.g. using a comparison of strength, or some other mechanism such as always letting escape reactions win over feeding reactions, or using a self-organising neural net capable of achieving a variety of potentially-stable states, then adopting one.
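A minimal sketch of such a winner-take-all selection mechanism, assuming a fixed priority ordering (escape over feeding) with activation strength as tie-breaker. The response names and priority values are invented for illustration:

```python
# Hypothetical sketch of proto-deliberation: when reactive mechanisms
# trigger two incompatible response tendencies, pick one rather than
# blending, since a blend is generally worse than either response.
PRIORITY = {"escape": 2, "feed": 1}   # e.g. escape always beats feeding

def choose_response(tendencies):
    """tendencies: dict mapping response name -> activation strength."""
    if not tendencies:
        return None
    # Rank first by fixed priority, then by activation strength.
    return max(tendencies, key=lambda r: (PRIORITY.get(r, 0), tendencies[r]))

# Both triggered at once: escape wins despite weaker activation.
print(choose_response({"feed": 0.9, "escape": 0.4}))  # → escape
```

A strength comparison or a self-organising network settling into a stable state would be alternative selection mechanisms with the same overall role.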

For more on different intermediate cases, see this discussion of "Fully Deliberative" systems.

In such a (relatively) simple architecture, alarm mechanisms can trigger simple emotions (e.g. in the woodlouse that rapidly curls up in a ball if touched while walking).

Another special subset of the CogAff schema: Omega Architectures

Fig Omega Omega Architectures

Architectures of this general type, where the flow of information and control can be thought of as roughly like the Greek capital letter Omega (Ω) (not necessarily presented in this sort of diagram), are often re-invented.

The assumption is that perception consists of detection of low-level physical signals that are processed at increasing levels of abstraction until the processing generates new goals or preferences, at which point some selection mechanism (e.g. contention scheduling) chooses the best motive or action, and then signals propagate downwards to the motor subsystems, which produce behaviour.
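The Omega-shaped flow just described can be sketched as a single pipeline up, one selection step at the top, and a single pipeline down. All stage names here are illustrative assumptions, chosen only to make the shape of the flow explicit:

```python
# Hypothetical sketch of an Omega-style architecture: one ascending
# pipeline of abstraction, one selection step, one descending pipeline.
def omega_cycle(raw_signals, up_stages, select, down_stages):
    x = raw_signals
    for stage in up_stages:        # ascending abstraction (left stroke of Ω)
        x = stage(x)
    goal = select(x)               # e.g. contention scheduling at the top
    for stage in down_stages:      # descending to motor output (right stroke)
        goal = stage(goal)
    return goal

# Toy instantiation: strong signal → "flee", routed down to a motor command.
result = omega_cycle(
    raw_signals=[0.1, 0.9],
    up_stages=[
        lambda s: {"edge": max(s)},                      # feature extraction
        lambda f: ["flee", "feed"][f["edge"] < 0.5],     # goal generation
    ],
    select=lambda goal: goal,      # trivial selection with one candidate
    down_stages=[lambda g: f"motor:{g}"],
)
print(result)  # → motor:flee
```

The contrast with the "multi-window" view below is that here every interaction between levels is forced through the single up-select-down route, with no concurrent processing at different levels.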

This "peephole" (or "peep-hole") view of perception and action contrasts with the "multi-window" view of both perception and action as involving concurrent processing at different levels of abstraction, partly under the control of the environment and partly under the control of various layers of central information processing, operating in parallel.

So some of the more abstract perceptual or motor processing can be thought of as cognitive, insofar as it makes use of forms of representation and ontologies shared with more central processing mechanisms, and also as peripheral (e.g. as part of perception or action processes), because the information structures used are maintained in registration with perceptual input signals (or the optic array, in the case of visual input) or with motor signal arrays, and because processing at those more abstract levels is closely bi-directionally coupled with the lower-level perceptual or motor signals.

These extra layers of perceptual or motor processing are fairly obviously needed for language production or perception, because it is now well understood that linguistic expressions have structures at different levels of abstraction that all need specialised processing. Our claim is that this is a special case of a far more general phenomenon (as illustrated in the POPEYE program described in Chapter 9 of The Computer Revolution in Philosophy, 1978).

A much more complex special case
(or subset of special cases)
of the CogAff schema:
H-CogAff (Human-inspired CogAff).

A poster summarising some of the main theoretical ideas is here (PDF).

Some dimensions in which architectures can vary were presented at the Designing a Mind Symposium in 2000, in "Models of models of mind." However, that paper is inadequate in several ways, e.g. because it does not clearly distinguish the CogAff schema from the H-CogAff special case, presented briefly below.

It has other flaws that need to be remedied, in part by extending the analysis of ways in which architectures can differ, in part inspired by the diversity produced by biological evolution, and in part by inspiring deeper analyses of that diversity as proposed at the AIIB symposium in 2010.

The CogAff Architecture Schema
and the H-CogAff special case

The name "CogAff" is used both for the project and as a label for a generic schema proposed several years ago for a wide variety of architectures, natural and artificial. (We don't claim it is general enough to cover all cases: some of the distinctions are not fine-grained enough. But it illustrates a style of research on architectures that is unfortunately rare.)

This Schema, as explained above, classifies requirements for the major components of an architecture into nine broad categories on a 3x3 grid, which can be connected together in different ways (depending on how various kinds of information - factual information, queries, control information, etc. - flow between subsystems).

This is just a first crude sub-division, requiring more detailed analysis and further decomposition of cases (as illustrated here). However it does cover many different types of architecture, natural and artificial, depicted rather abstractly above.

Architectures vary according to what mechanisms they have in the boxes, and how they are connected. Also more complex architectures may have important subdivisions and possibly may require functions that don't fit neatly into any of the boxes. (For example, it is arguable that the mechanisms concerned with production and understanding of language, and use of language for thinking and reasoning, are scattered over many different subsystems.)

The generic CogAff schema includes an important sub-class of architectures that include mechanisms capable of producing what might be called "emotional" or "alarm" reactions, as shown in the "insect-like" special case, above.

A much more complex special case is the H-CogAff architecture, which we suggest provides a very high-level "bird's-eye view" of the architecture of a typical (adult) human mind, depicted crudely here (as a first approximation):

Fig H-CogAff

It includes concurrently active sub-architectures that evolved at different times in our evolutionary history, in addition to sub-architectures that grow themselves during individual development (as discussed in this paper by Chappell and Sloman.)

A paper summarising the ideas behind the CogAff schema and the H-CogAff architecture is this 2003 progress report on the Cogaff project.

A paper published in 1996 (with commentaries) explained how emotional phenomena like long-lasting grief could be accommodated within this framework:

I.P. Wright, A. Sloman, L.P. Beaudoin,
Towards a Design-Based Analysis of Emotional Episodes, Philosophy, Psychiatry, and Psychology, 3(2), pp. 101-126, 1996.

Further details are provided in other papers, including for example this polemical piece:

Some Requirements for Human-like Robots:
Why the recent over-emphasis on embodiment has held up progress (2008).
Now published in Creating Brain-like Intelligence,
Eds. B. Sendhoff, E. Koerner, O. Sporns and H. Ritter and K. Doya,
Springer-Verlag, Berlin, 2009.

An incomplete survey of types of architecture that include various types of "deliberative layer" can be found in "Requirements for a Fully Deliberative Architecture"

Some designs described as "deliberative" by other authors include only what we call "proto-deliberative" mechanisms.

Most of the hypothesised architectures are still too difficult to implement though some of the simpler ones have been implemented using the SimAgent toolkit, and demonstrated here.

More complex examples were developed within the EU-funded CoSy robot project (2004-2008), and are being extended in its sequel, the CogX robot project (2008-2012).

Tutorial presentations of how ideas like "qualia" and some of the vexing problems of consciousness ("the explanatory gap") can be understood in this framework are presented here.

In 1998 Gerd Ruebenstrunk presented some of our ideas for German readers in his diploma thesis in psychology on "Emotional Computers" (Bielefeld University, 1998). See especially sections 9 and 10 of his thesis. His 2004 presentation on emotions, at a workshop on "Affective Systems" (in English), is here.

Some of the ideas presented here, including what has been referred to as the use of multi-window perception and action seem to be closely related to some of the architectural ideas in this book (though we have some serious disagreements about the notion of 'self' and about consciousness):

   Arnold Trehub,
   The Cognitive Brain, MIT Press, Cambridge, MA, 1991,


A Dynamical Systems view of H-CogAff

To be added.

See also:


The CogAff project is inherently interdisciplinary

This work has (surprisingly?) many links with other disciplines, including several branches of philosophy, for example:

We have links with several other groups of researchers at Birmingham

An interdisciplinary Centre for Research in Computational Neuroscience and Cognitive Robotics,
led by the Schools of Psychology and Computer Science, was approved by the University in 2009.

Managed by Aaron Sloman.

Associated with the CoSy Robotics Project since 2004


Cognition and Affect Project Papers, Presentations, Theses, Software

Audio discussion broadcast on Deutschlandradio on 'Emotional Computers' online
(mostly in German), chaired by Maximilian Schönherr.
The audio link is on the right, under 'AUDIO ON DEMAND'. Click on 'Emotionale Agenten'.

Audio interview on grand challenge (December 2004)


In 2002, the UK Computing Research Committee (UKCRC) initiated a discussion of research grand challenges. One of these is Grand Challenge 5: 'Architecture of Brain and Mind'. For more information see



Related developments elsewhere: Biologically Inspired Cognitive Architectures (BICA)

The organisers of the BICA (Biologically Inspired Cognitive Architectures)
workshops/conferences have begun to address this problem in a promising way.

Here are some links (BICA and related sites):

Other links

(Use google and "CogAff" to search for more.)

For more details see

Related links

The SimAgent AI toolkit

Our toolkit is available within the Birmingham Free Poplog web directory with full system sources. For information about the toolkit, see

The toolkit is mostly implemented in Pop-11, which is part of Poplog, which used to be an expensive commercial product, but is also now available free of charge with full system sources, at

Symposium: How to Design a Functional Mind (at AISB 2000)
(The DAM -- 'Designing a Mind' -- symposium)

A symposium on "How to Design a Functional Mind" was held at
the AISB'00 Convention at the University of Birmingham, 17-20 April 2000.

Information about the symposium, including abstracts and full papers can
be found here

A book of papers related to the workshop, edited by Darryl Davis, was published in 2004:
Visions of Mind: Architectures for Cognition and Affect.
IGI Publishing
(I won't publish with IGI because I object to their copyright requirements.)

A Tribute to Max Clowes,

one of the pioneers of AI in the UK, who died in 1981.
His ideas played an important role in the early development of this work.


For information on how to apply to be a PhD or MSc student
(in Computer Science, Software Engineering, AI, or Cognitive Science)
in this School, see the School's study opportunities web page.

Please read that information BEFORE writing to individuals asking for advice or information.
Please note: I do not deal with student admissions.
Being retired, I am also no longer able to supervise PhD students.

See also the School of Computer Science Web page.
The Birmingham Centre for Computational Neuroscience and Cognitive Robotics.
