Now including the Meta-Morphogenesis Project
Apology
Despite warnings from academic staff the central university authorities decided in 2010 to
reorganise campus web pages yet again, without taking action to ensure that references to
old links are trapped and redirected.
As a result there are probably several broken links on this web site -- and on many other
sites on this campus. Identifying and fixing them all will require massive effort for
which resources are not available.
Many researchers propose a theory of THE right architecture for a system with some kind of
intelligence (e.g. human intelligence).
Although this may be an appropriate way to address a specific technical problem, it is
seriously misguided, if done as a contribution to our scientific or philosophical
understanding, unless the specific architecture is related to a theory about THE SPACE of
POSSIBLE architectures for various kinds of intelligent system.
Such a theory would need to include a survey of the possible types of components, the
different ways they can be combined, the different functions that might be present, the
different types of information that might be acquired and used, the different ways such
information could be represented and processed, the different ways the architecture could
come into existence (e.g. built fully formed, or self-assembling), and how various changes
in the design affect changes in functionality.
Such a theory also needs to be related to a study of possible sets of requirements for
architectures (and for their components). If we don't consider architectures in relation
to what they are used for or needed for (in particular types of context) then we have no
way of explaining why they should have the features they have or what the trade-offs
between alternative design options are.
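The relation between a space of designs and a space of requirements ("niches") can be
illustrated with a small sketch. This is not part of the CogAff project's own software:
the component names, niche names, and requirement pairings below are invented purely for
illustration, assuming a design is just a subset of available component types.

```python
from itertools import combinations

# Hypothetical component repertoire (invented for illustration).
COMPONENTS = {"reactive", "deliberative", "meta-management", "alarms"}

# Each niche names the capabilities it demands; these pairings are
# illustrative assumptions, not claims from the CogAff project.
NICHES = {
    "insect-like foraging": {"reactive", "alarms"},
    "novel-tool planning": {"reactive", "deliberative"},
    "self-monitoring agent": {"reactive", "deliberative", "meta-management"},
}

def viable_designs(niche_requirements):
    """Enumerate every subset of COMPONENTS that meets the niche's needs."""
    comps = sorted(COMPONENTS)
    designs = []
    for r in range(len(comps) + 1):
        for subset in combinations(comps, r):
            if niche_requirements <= set(subset):
                designs.append(set(subset))
    return designs

for niche, reqs in NICHES.items():
    designs = viable_designs(reqs)
    cheapest = min(designs, key=len)
    print(f"{niche}: {len(designs)} viable designs; minimal = {sorted(cheapest)}")
```

Even this toy version makes the trade-off point: several designs can satisfy one niche,
and comparing them (here, only by component count) is meaningless without the niche.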
NB These investigations should not be restricted to physical architectures.
Since the mid-twentieth century human engineers have increasingly made use of virtual
machine architectures, in which multiple virtual machine components interact with one another and
with physical components. It seems that biological evolution "discovered" the need for
virtual machinery, especially self-modifying and self-monitoring virtual machinery, long
before human engineers did.
Proposing and studying just ONE architecture is like doing physics by finding out how
things work around the leaning tower of Pisa, and ignoring all other physical
environments; or like trying to do biology by studying just one species; or like trying to
study chemistry by proposing one molecule for investigation.
That's why, unlike other research groups, most of which propose an architecture, argue for
its engineering advantages or its evidential support, then build a tool to build models
using that architecture, we have tried to build tools to explore alternative architectures
so that we can search the space of designs, including trying to find out which types
evolved and why, instead of simply promoting one design. Our
SimAgent toolkit
(sometimes called "sim_agent") was designed to support exploration of that space, unlike
toolkits that are committed to a particular type of architecture.
Recent developments elsewhere: Biologically Inspired Cognitive Architectures (BICA)
The organisers of the BICA workshops/conferences have begun to address this problem in a
promising way. Here are some links:
Other links
- BICA 2008 Web site
- BICA 2009 Web site
with many useful links.
- CogArch Repository
Toward a Comparative Repository of Cognitive Architectures, Models, Tasks and Data
- Alexei V. Samsonovich's page.
(Including BICA links.)
- Ron Sun's Architectures Page
- umich.edu Cognitive Architectures page
By Bill Lemon, David Pynadath, Glenn Taylor and Bob Wray.
- Report on AIIB symposium, Spring 2010
- Requirements for a Fully Deliberative Architecture (Or component of an architecture)
Discussion note on some possible architectural sub-divisions.
- A First Draft Analysis of Some Meta-Requirements for Cognitive Systems in Robots
- Architecture-Based Motivation vs Reward-Based Motivation
- The Design-Based Approach to the Study of Mind (in humans, other animals, and machines)
Including the Study of Behaviour Involving Mental Processes.
The project was begun by Aaron Sloman and Glyn Humphreys (Psychology) in 1991. It was a
continuation of work begun in the 1960s at The University of Sussex, and continued in the
School of Cognitive and Computing Sciences (COGS). (That, in turn, was a continuation of
my 1962 Oxford DPhil Thesis attempting to defend Kant's philosophy of mathematics.)
Some of the earliest work was reported in this book (now out of print, but available
online): The Computer Revolution in Philosophy: Philosophy, science and models of mind
(1978 -- with notes added since 2002). Available as PDF and HTML. Also at the ASSC
repository. Chapter 7 on "Intuition and analogical reasoning", including reasoning with
diagrams, and Chapter 8 "On Learning about Numbers" were especially closely related to
the DPhil work on the nature of mathematical knowledge.
After AS moved to Birmingham, the work was partly funded by a grant to Sloman and
Humphreys from the UK Joint Council Initiative (JCI), which paid for equipment and a
studentship.
An additional studentship was funded by the Renaissance Trust (Gerry Martin).
The first PhD thesis completed in the project was by Luc Beaudoin (funded by major
scholarships from: Quebec's FCAR, The Association of Commonwealth Universities (UK), and
the Natural Sciences and Engineering Research Council (NSERC) of Canada). It is listed
here, along with others. Among other things, it offered a new, unusually detailed analysis
of aspects of motives that can change over time, and introduced the important distinction
between deliberative mechanisms (which can represent, explore, hypothesise, plan and
select possible situations, processes and future actions) and meta-management mechanisms
which can monitor, and to some extent control, internal processes (including deliberative
processes). The ideas are explained in more detail here.
Similar work elsewhere uses labels such as "reflective", "metacognitive", "executive
functions", and "self-regulation", though often with different emphases.
Later extensions arose from funding by DERA which enabled Brian Logan to work here for
several years, followed by a project funded by The Leverhulme Trust on
Evolvable virtual information processing architectures for human-like minds,
originally set up with Brian Logan, which then paid for Matthias Scheutz to work here for
13 months (2000-2001), followed by Ron Chrisley (2001-2003). A progress report on the
CogAff project was written in 2003 (separate document).
From 2004 related work was funded by the EU, in two projects on cognitive robotics:
CoSy and CogX. Much of this work is now done as part of the Intelligent Robotics research
laboratory (led by Jeremy Wyatt) at Birmingham.
In 2004 Jackie Chappell arrived in the School of Biosciences (having previously worked in
Oxford), and we began work on extending biologists' ideas about "Altricial" and
"Precocial" species to robots and investigating nature-nurture tradeoffs in animals. Our
theoretical research on animal cognition then expanded, e.g. to include work on varieties
of causation (Humean and Kantian) in animals and machines. From 2008 this was further
expanded to include studies of cognition in orangutans, in collaboration with Susannah
Thorpe and their PhD students, also in the School of Biosciences.
CogAff is really a loose, informal collection of sub-projects, most of them unfunded at
any time, including research on architectures, forms of representation and mechanisms
occurring in humans, other animals, and human-like machines.
Some additional topics covered can be found in this document compiled in 2009 and this
list of online discussion papers (frequently extended).
Analysing such architectures, and the mental states and processes they can support, allows
us to investigate, for instance, whether consciousness or the ability to have emotional
states is an accident of animal evolution or a direct evolutionary consequence of
biological requirements or a side-effect of things meeting other requirements and
constraints.
One of the outcomes of this research was the development of the CogAff schema introduced
above (and explained briefly in this poster). It provides a way of characterising a wide
range of types of possible architecture in natural and artificial systems (in contrast
with most researchers on cognitive architectures, who promote a particular architecture).
A special case (or subclass) of CogAff is the H-CogAff (Human-CogAff) architecture, described
below, which is still currently too difficult to implement, though various subsets have been
implemented by researchers here and elsewhere.
Requirements for architectural theories: The CogAff (generative) Schema
An architecture's components can be divided between:
1.a Sensory/perceptual processes, constantly changing to represent the
environment (including internal states);
2.a Motor/action/effector processes, constantly changing the environment
and perhaps some internal states;
3.a Central, more slowly changing, processes;
or between:
1.b Evolutionarily very old reactive processes, constantly driven by what
is sensed internally and externally;
2.b Newer deliberative processes, able to represent what does not exist but
might, e.g. future actions, unseen situations, past causes;
3.b Specialised meta-management/reflective processes, capable of describing
information-processing states and processes in oneself (and therefore also
in others).
By superimposing those two classifications we get the following suggestive 3x3 grid.
The CogAff schema shown above summarises this space of possible types of architectural
components.
-- The first three divisions above (1.a--3.a) correspond to the vertical divisions in the
schema.
-- The second three divisions above (1.b--3.b) correspond to the horizontal divisions in
the schema: evolutionarily oldest functions in the bottom layer.
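The superimposition of the two classifications can be made concrete with a small sketch.
The slot names follow the 3x3 grid above, but everything else (the representation of an
architecture as a set of occupied slots, the validation function) is my own illustrative
simplification, not CogAff project code.

```python
# A minimal sketch of the CogAff schema as a 3x3 grid of possible
# component types (columns = 1.a--3.a, layers = 1.b--3.b, oldest first).
COLUMNS = ("perception", "central", "action")
LAYERS = ("reactive", "deliberative", "meta-management")

# The schema itself is just the grid of (layer, column) slots.
SCHEMA = [(layer, column) for layer in LAYERS for column in COLUMNS]

def architecture(slots):
    """An instance of the schema: the subset of slots actually occupied.
    Rejects slots that lie outside the grid."""
    slots = set(slots)
    if not slots <= set(SCHEMA):
        raise ValueError("unknown slot")
    return slots

# A purely reactive, insect-like instance occupies only the bottom layer:
insect = architecture({("reactive", c) for c in COLUMNS})

# An H-CogAff-like instance would occupy all nine slots (and would need far
# more structure than this grid can express).
print(sorted(insect))
```

As the text stresses, the schema is a grammar rather than an architecture: real
instances are networks of concurrently active mechanisms, not mere slot subsets.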
Jonathan Metallo and Daniel Lohmer gave a short and entertaining tutorial video
presentation on some of the architectural ideas summarised below. The video
is available here
http://www.youtube.com/watch?v=Twzw9iFOspI
Jonny M and Dani L talk about AI architecture and Sloman
This appears to be an assignment for a course on "Perspectives on Artificial Intelligence,
Robotics, and Humanity", in The Department of Computer Science and Engineering at the
University of Notre Dame.
A more accurate but more obscure version of the schema (inserted 21 Mar 2013)
The previous diagram does not make it clear that perceptual and action/motor mechanisms
overlap. E.g. (as J.J. Gibson pointed out in The Senses Considered as Perceptual Systems
(1966)) mechanisms of vision depend on the use of saccades, head movements, and whole body
movements, and haptic sensing depends on controlled movements of hands, tongue, lips, etc.
The following diagram is an attempt to remedy this deficiency in the previous diagram (and
other CogAff diagrams).
Fig CogArch
(With thanks to Dean Petters.)
NB A Schema for architectures is not an architecture.
It is more like a grammar. Instances of the schema are like sentences in the grammar.
However the CogAff schema is a grammar whose 'sentences' are not strings but quite
complex networks of concurrently active mechanisms with different functions, as discussed
in this paper on
Virtual Machine Functionalism
(VMF).
I have begun to discuss ways in which these ideas could shed light on autism and other
developmental abnormalities, in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/autism.html
A special subset of the CogAff schema: Architectures with Alarms
Fig Alarms
Many organisms seem to have, and many robots and other intelligent machines will need, an
"alarm" mechanism, which receives input from many of the internal and external sensors and
is capable of recognising patterns that require very rapid global reorganisation of
ongoing processes, for example switching into states like fleeing, attacking, freezing, or
attending closely to what may or may not be a sign of serious danger or some opportunity.
This kind of mechanism seems to be very old in animal evolution and can be observed in
a woodlouse, for example, when it reacts to being touched by rolling itself up in a ball,
or a fly which reacts to the rapid approach of a fly-swat by stopping whatever it is doing
(e.g. feeding) and switching to an escape action.
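The alarm idea just described can be sketched in code. The patterns, mode names, and the
agent class below are invented for illustration, assuming only the two properties the
text emphasises: the alarm watches all incoming percepts, and when it fires it rapidly
reorganises the whole system, bypassing slower routine processing.

```python
# Toy "alarm" mechanism: a fast pattern recogniser that can override
# slower processing with a global mode switch. Patterns and mode names
# are hypothetical.
ALARM_PATTERNS = {
    "looming-shadow": "flee",
    "touched": "curl-up",        # the woodlouse reaction in the text
    "sudden-noise": "freeze",
}

class Agent:
    def __init__(self):
        self.mode = "normal"     # current global mode of the whole system

    def sense(self, percepts):
        # The alarm check runs first and can redirect everything else.
        for p in percepts:
            if p in ALARM_PATTERNS:
                self.mode = ALARM_PATTERNS[p]
                return           # global reorganisation: skip routine work
        self.routine_step(percepts)

    def routine_step(self, percepts):
        # Slow path: ongoing behaviour (feeding, walking, ...) continues.
        self.mode = "normal"

agent = Agent()
agent.sense(["food-odour"])
print(agent.mode)                # -> normal
agent.sense(["food-odour", "touched"])
print(agent.mode)                # -> curl-up (alarm wins over feeding)
```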
So a very crude depiction of an insect-like information processing architecture with
alarms could be something like this:
Fig Insect
An insect-like special case of the CogAff schema is purely reactive -- none of the
deliberative or meta-management functions are provided, though reactive
mechanisms may be layered, as indicated crudely in the diagram.
[Modified: 7 Dec 2012]
A purely reactive system that always merely reacts to particular stimuli could be
modified to include "proto-deliberative" mechanisms (unfortunately labelled
"deliberative" by Michael Arbib at a conference in 2002). In a proto-deliberative
system, reactive mechanisms can simultaneously trigger two incompatible
response-tendencies. Since in general a blend of two incompatible responses is worse
than either response, it can be useful to have mechanisms for choosing one of them,
e.g. using a comparison of strength, or some other mechanism such as always letting
escape reactions win over feeding reactions. For more on different intermediate
cases see this discussion of "Fully Deliberative" systems.
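The proto-deliberative selection mechanism described above can be sketched as a few lines
of code. The behaviour names, strength values, and the specific escape-beats-feeding rule
are illustrative assumptions; the point is only that one of two incompatible reactive
tendencies is chosen whole, rather than blended.

```python
# Sketch of proto-deliberative conflict resolution: reactive mechanisms
# fire incompatible response tendencies at once; a simple rule picks one.
def select_response(tendencies):
    """tendencies: list of (behaviour, strength) pairs triggered reactively."""
    if not tendencies:
        return None
    # Hard-wired priority: escape always beats feeding, whatever the strengths.
    behaviours = [b for b, _ in tendencies]
    if "escape" in behaviours:
        return "escape"
    # Otherwise a winner-take-all comparison of strengths.
    return max(tendencies, key=lambda t: t[1])[0]

print(select_response([("feed", 0.9), ("groom", 0.4)]))   # -> feed
print(select_response([("feed", 0.9), ("escape", 0.2)]))  # -> escape
```

Note that nothing here represents non-existent futures: the "choice" is between already
triggered tendencies, which is why this falls short of full deliberation.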
In such a (relatively) simple architecture, alarm mechanisms can trigger simple emotions
(e.g. in the woodlouse that rapidly curls up in a ball if touched while walking).
Another special subset of the CogAff schema: Omega Architectures
Fig Omega
Architectures of this general type where the flow of information and control can be
thought of as roughly like the Greek capital letter Omega Ω (not necessarily presented
in this sort of diagram) are often re-invented.
The assumption is that perception consists of detection of low level physical signals that
are processed at increasing levels of abstraction until the processing generates new goals
or preferences, at which point some selection mechanism (e.g. contention scheduling)
chooses the best motive or action, and then the signals propagate downwards to the motor
subsystems, which then produce behaviour.
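The Omega-shaped flow just described can be caricatured as a strict pipeline. The stage
functions, the string-based "representations", and the candidate motives below are all
invented placeholders; the sketch only exhibits the shape being criticised: one upward
arm, a single selection bottleneck at the top, and one downward arm.

```python
# Toy "Omega" information flow: up through abstraction, one choice point,
# then down to motor output.
def perceive(signal):
    # Upward arm: low-level signal -> features -> recognised situation.
    features = f"features({signal})"
    return f"situation({features})"

def select_motive(situation, candidate_motives):
    # The single bottleneck at the top of the Omega (e.g. contention
    # scheduling); here just a strength comparison.
    return max(candidate_motives, key=lambda m: m[1])[0]

def act(motive):
    # Downward arm: motive -> plan -> motor commands.
    plan = f"plan({motive})"
    return f"motor({plan})"

situation = perceive("retinal-array")
motive = select_motive(situation, [("approach", 0.3), ("inspect", 0.7)])
print(act(motive))  # -> motor(plan(inspect))
```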
This "peephole" (or "peep-hole") view of perception and action contrasts with the
"multi-window" view of both perception and action as involving concurrent processing at
different levels of abstraction, partly under the control of the environment and partly
under the control of various layers of central information processing, operating in
parallel. So some of the more abstract perceptual or motor processing can be thought of as
both cognitive insofar as they make use of forms of representation and ontologies shared
with more central processing mechanisms, and also as peripheral (e.g. being perception or
action mechanisms) because the information structures used are maintained in registration
with perceptual input signals (or the optic array in the case of visual input) or in
registration with motor signal arrays, and also because the processing at those more
abstract levels is bi-directionally closely coupled with the lower level perceptual or
motor signals. These extra layers of perceptual or motor processing are fairly obviously
needed for language production or perception because it is now well understood that
linguistic expressions have structures at different levels of abstraction that all need
specialised processing. Our claim is that that is a special case of a far more general
phenomenon (as illustrated in the POPEYE program described in Chapter 9 of The Computer
Revolution in Philosophy, 1978).
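The contrast between "peephole" and "multi-window" perception can be sketched as follows.
The level names reuse the CogAff layers; the string "descriptions" are invented
placeholders. The essential difference shown is that the multi-window version delivers
concurrent descriptions at several levels of abstraction, each available to the
corresponding central layer, rather than a single low-level entry point.

```python
# "Peephole" perception: only low-level signals reach a single entry point.
def peephole_percept(optic_array):
    return {"reactive": f"edges-and-flow({optic_array})"}

# "Multi-window" perception: concurrent layered descriptions of one input,
# each kept in registration with the input (registration is not modelled here).
def multi_window_percept(optic_array):
    return {
        "reactive": f"edges-and-flow({optic_array})",          # low level, fast
        "deliberative": f"objects-affordances({optic_array})", # more abstract
        "meta-management": f"scene-category({optic_array})",   # most abstract
    }

percept = multi_window_percept("optic-array-t0")
# Each central layer reads the level of description it needs, in parallel:
for layer, description in percept.items():
    print(f"{layer} layer receives: {description}")
```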
Below we introduce a much more complex special case (or subset of special cases) of the
CogAff schema: H-CogAff (Human-inspired CogAff).
A poster summarising some of the main theoretical ideas is
here (PDF 3-pages).
A flash version has mysteriously appeared on Docstoc
here.
(Can anyone tell me how that happened?)
Some dimensions in which architectures can vary were presented at the Designing a Mind
Symposium in 2000 in "Models of models of mind". However, that paper is inadequate in
several ways, e.g. because it does not clearly distinguish the CogAff schema from the
H-CogAff
special case, presented briefly below.
It has other flaws that need to be remedied, in part by extending the analysis of ways in
which architectures can differ, in part inspired by the diversity produced by biological
evolution, and in part by inspiring deeper analyses of that diversity as proposed at the
AIIB symposium in 2010.
A much more
complex special case is the H-CogAff architecture, which we suggest
provides a very high level "bird's-eye view" of the architecture of a typical
(adult) human
mind, depicted crudely here (as a first approximation):
Fig H-Cogaff
It includes concurrently active sub-architectures that evolved at
different times
in our evolutionary history, in addition to
sub-architectures that grow themselves
during
individual development (as discussed in
this paper
by Chappell and
Sloman.)
A paper summarising the ideas behind the CogAff schema and the
H-CogAff architecture
is this
2003
progress report on the Cogaff project.
A paper published in 1996 (published with commentaries) explained how emotional phenomena
like long-lasting grief could be accommodated within this framework
I.P. Wright, A. Sloman, L.P. Beaudoin,
Towards a Design-Based Analysis of Emotional Episodes,
Philosophy Psychiatry and Psychology, 3, 2, pp. 101--126, 1996,
http://www.cs.bham.ac.uk/research/projects/cogaff/96-99.html#2
Further details are provided in other papers, including for example this polemical piece:
Some Requirements for Human-like Robots:
Why the recent over-emphasis on embodiment has held up progress (2008).
Now published in
Creating Brain-like Intelligence,
Eds. B. Sendhoff, E. Koerner, O. Sporns and H. Ritter and K. Doya,
Springer-Verlag, 2009 Berlin,
http://rapidshare.com/files/209786694/Creating_Brain-Like_Intelligence.zip
An incomplete survey of types of architecture that include a
"deliberative layer" can be
found in
Requirements for a Fully Deliberative Architecture.
Some systems described as "deliberative" include only what we call "proto-deliberative"
mechanisms.
Most of the hypothesised architectures are still too difficult to implement
though some of the simpler ones have been implemented using
the SimAgent toolkit,
and demonstrated
here.
More complex examples were developed within the EU-funded
CoSy robot
project (2004-2008),
and are being extended in its sequel
the CogX robot project
(2008-2012).
Tutorial presentations of how ideas like "qualia" and some of the vexing problems of
consciousness ("the explanatory gap") can be understood in this framework are presented
here.
In 1998 Gerd Ruebenstrunk presented some of our ideas for German readers in his diploma
thesis in psychology on "Emotional Computers"
(Bielefeld University, 1998). His 2004 presentation
on emotions, at a workshop on "Affective Systems" (in English) is
here.
Some of the ideas presented here, including what has been referred to as the use of
multi-window perception and action seem to be closely related to some of the architectural
ideas in this book (though we have some serious disagreements about the notion of 'self'
and about consciousness):
Arnold Trehub, The Cognitive Brain, MIT Press, Cambridge, MA, 1991, http://www.people.umass.edu/trehub/
To be added. See also:
- Aaron Sloman, The mind as a control system, in Philosophy and the Cognitive Sciences,
Eds. C. Hookway and D. Peterson,
CUP 1993, pp. 69--110, http://tinyurl.com/BhamCog/81-95.html#18
- A Multi-picture Challenge for Theories of Vision
http://tinyurl.com/BhamCog/misc/multipic-challenge.pdf
NEWS: AUDIO BROADCAST ONLINE:
Audio discussion
broadcast on Deutschlandradio
on 'Emotional Computers' online
(mostly in German), chaired by
Maximilian Schönherr.
The audio link is on the right, under 'AUDIO ON DEMAND'. Click on 'Emotionale Agenten'.
Audio interview on grand challenge (December 2004)
OUR SOFTWARE TOOLS ARE AVAILABLE FREE OF CHARGE/OPEN SOURCE
at
http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
Including the SimAgent toolkit.
Search here for topics such as:
tertiary emotions
meta-management
diagrams
vision architecture
artificial intelligence toolkit
information-processing
Kantian Humean causation
meta-requirements eucognition
free-will
what is AI?
"what is information?"
Grand challenge
research roadmap
education programming AI
qualia
Marvin Minsky
matter energy information
"possible minds"
"design space" "niche space"
evolution altricial precocial
biology
emotions "cluster concepts"
emotions intelligence
emotions architectures
virtual machines
Turing irrelevance
CoSy Playmate
CoSy robot
functions of vision
consciousness
creativity machines
John McCarthy
Some documents are in html, latex or plain ascii text. Most of the postscript files are
duplicated in PDF format.
PDF versions of files available only in postscript can be provided on request.
Email A.Sloman@cs.bham.ac.uk requesting conversion of a paper you cannot read.
Browsers for these formats are freely available.
NOTE (16 Jun 1998):
Files which were previously in form xxx.Z are now in the form xxx.gz
The toolkit is mostly implemented in Pop-11,
which is part of Poplog, which used to be
an expensive commercial product, but is also now available free of charge with full system
sources, at
http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
Information about the symposium, including abstracts and full papers can be found here http://www.cs.bham.ac.uk/research/projects/cogaff/dam00
A book of papers related to the workshop was edited by Darryl
Davis and published in 2004
Visions of Mind: Architectures for Cognition and
Affect.
IGI Publishing
Please read that information BEFORE writing to individuals asking
for advice or information.
Please note: I do not deal with student admissions.
This file is maintained by
Aaron Sloman,
Email A.Sloman@cs.bham.ac.uk