THE UNIVERSITY OF BIRMINGHAM
School of Computer Science
Cognitive Science Research Centre

THE BIRMINGHAM COGNITION AND AFFECT PROJECT
PROJECT WEB DIRECTORY
PAPERS ADDED BETWEEN 1981 AND 1995 (APPROXIMATELY)
(Some of them published in 1996 or later)
Plus a few earlier papers added to this list later.

PAPERS 1981 -- 1995 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE

This file is http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html
Maintained by Aaron Sloman -- who does not respond to Facebook requests.
It contains an index of files in the Cognition and Affect project FTP/Web directory, covering papers written before 1996.

Some of the papers by Aaron Sloman listed here were written while he was at the University of Sussex. He moved to the University of Birmingham in July 1991.

Last updated: 3 Jan 2010; 13 Nov 2010; 7 Jul 2012; .... 11 Apr 2014


Most of the papers listed here are in compressed or uncompressed postscript format. Some are LaTeX or plain ASCII text. Most are also available in PDF. For information on free browsers for these formats see http://www.cs.bham.ac.uk/~axs/browsers.html

PDF versions of postscript files can be provided on request. Please Email A.Sloman@cs.bham.ac.uk requesting conversion.

Papers are listed below roughly in reverse chronological order.



JUMP TO DETAILED LIST (AFTER CONTENTS)

CONTENTS LIST
PAPERS IN THE COGNITION AND AFFECT FTP DIRECTORY (1981-1995)
(And some earlier papers)
(latest first)

Title: What Are The Purposes Of Vision?
(link to another file)

Based on an invited presentation at the Fyssen Foundation Workshop on Vision,
Versailles, France, March 1986. Organiser: M. Imbert
(The proceedings were never published.)

Author: Aaron Sloman (Installed here: 4 Nov 2012)

Computational Epistemology (1982)
From a workshop on Genetic Epistemology and Artificial Intelligence
Geneva 1980

Author: Aaron Sloman (Installed here: 25 Jan 2014)

Developing concepts of consciousness
(Commentary on Velmans, BBS, 1991)

Author: Aaron Sloman (Installed here: 4 Jun 2013)

Title: A Suggestion About Popper's Three Worlds In the Light of Artificial Intelligence
(Previously: Artificial Intelligence and Popper's Three Worlds)
Author: Aaron Sloman (Installed here: 9 Oct 2012)

Title: A Personal View Of Artificial Intelligence
Preface to Computers and Thought 1989 (by Sharples et al).
Author: Aaron Sloman (Installed here: 4 Sep 2012)

Towards a Computational Theory of Mind (PDF)
Aaron Sloman
(First published 1984 -- installed here with a new end-note 8 Aug 2012. Details in another file.)

Title: Skills Learning and Parallelism
Cognitive Science Conference, 1981
Author: Aaron Sloman

Title: Simulating agents and their environments
Authors: Darryl Davis, Aaron Sloman and Riccardo Poli

Title: Towards a Grammar of Emotions
Author: Aaron Sloman

Title: Beginners Need Powerful Systems
Author: Aaron Sloman

Title: The Evolution of Poplog and Pop-11 at Sussex University
Author: Aaron Sloman

Title: The primacy of non-communicative language
Author: Aaron Sloman

Title: A Philosophical Encounter
Author: Aaron Sloman

Title: Exploring design space and niche space
Author: Aaron Sloman

Title: A Hybrid Trainable Rule-based System
Authors: Riccardo Poli and Mike Brayshaw

Title: Information about the SIM_AGENT toolkit
Authors: Aaron Sloman and Riccardo Poli

Title: Goal processing in autonomous agents
Author: Luc P. Beaudoin

Title: The use of ratings for the integration of planning and learning in a broad but shallow agent architecture.
Author: Christian Paterson

Title: Why robots will have emotions
Authors: Aaron Sloman and Monica Croucher

Title: An Emotional Agent -- The Detection and Control of Emergent States in an Autonomous Resource-Bounded Agent
Author: Ian Wright

Title: Computational Constraints on Associative Learning,
Author: Edmund Shing

Title: Musings on the roles of logical and non-logical representations in intelligence.
Author: Aaron Sloman

Title: Geneva Emotion Week 1995

Title: Towards a general theory of representations
Author: Aaron Sloman

Title: Computational Modelling Of Motive-Management Processes
Authors: Aaron Sloman, Luc Beaudoin and Ian Wright

Title: Applying Systemic Design to the study of `emotion'
Author: Tim Read

Title: Computational Constraints for Associative Learning
Author: Edmund Shing

Title: Explorations in Design Space
Author: Aaron Sloman

Title: Representations as control substates (DRAFT)
Author: Aaron Sloman

Title: Semantics in an intelligent control system
Author: Aaron Sloman

Title: A Summary of the Attention and Affect Project
Author: Ian Wright

Title: Varieties of Formalisms for Knowledge Representation
Author: Aaron Sloman

Title: Systemic Design: A Methodology For Investigating Emotional Phenomena
Author: Tim Read

Title: The Terminological Pitfalls of Studying Emotion
Authors: Tim Read and Aaron Sloman

Title: Cassandra: Planning with contingencies
Authors: Louise Pryor and Gregg Collins

Title: Reference features as guides to reasoning about opportunities
Authors: Louise Pryor and Gregg Collins

Title: The Mind as a Control System,
Author: Aaron Sloman

Title: Prospects for AI as the General Science of Intelligence
Author: Aaron Sloman

Title: A study of motive processing and attention,
Authors: Luc P. Beaudoin and Aaron Sloman

Title: What are the phenomena to be explained?
Author: Aaron Sloman

Title: Towards an information processing theory of emotions
Author: Aaron Sloman

Title: Silicon Souls, How to design a functioning mind
Author: Aaron Sloman

Title: The Emperor's Real Mind (Review of Penrose)
Author: Aaron Sloman

Title: Appendix to JCI proposal, The Attention and Affect Project
Authors: Aaron Sloman and Glyn Humphreys

Title: Prolegomena to a Theory of Communication and Affect
Author: Aaron Sloman

Title: Notes on consciousness
Author: Aaron Sloman

Title: How to dispose of the free will issue
Author: Aaron Sloman

Title: On designing a visual system: Towards a Gibsonian computational model of vision.
Author: Aaron Sloman

Title: Motives, Mechanisms and Emotions
Author: Aaron Sloman

Title: Reference without causal links,
Author: Aaron Sloman

Title: What enables a machine to understand?
Author: Aaron Sloman

Title: Why we need many knowledge representation formalisms,
Author: A.Sloman


DETAILS OF FILES AVAILABLE


BACK TO CONTENTS LIST

Filename: comp-epistemology-sloman.pdf
Title: Computational Epistemology

in Genetic epistemology and cognitive science Structures and cognitive processes:
Proceedings of the 2nd and 3rd Advanced Courses in Genetic Epistemology,
organised by the Fondation Archives Jean Piaget in 1980 and 1981. - Geneva: Fondation Archives Jean Piaget, 1982. - P. 49-93.
http://ael.archivespiaget.ch/dyn/portal/index.seam?page=alo&aloId=16338&fonds=&menu=&cid=28

Author: Aaron Sloman

Date: (Originally Published in 1982)

Abstract:

To be added.


Filename: sloman-on-velmans-bbs.pdf (PDF)
Title: Developing concepts of consciousness

Commentary on 'Is Human Information Processing Conscious?',
By Max Velmans
in Behavioural and Brain Sciences C.U.P., 1991
Author: Aaron Sloman
Date Installed: 4 Jun 2013

Where published:

Behavioral and Brain Sciences, Vol 14, Issue 04, Dec 1991, pp. 694--695,
http://dx.doi.org/10.1017/S0140525X00072071

Extract:

Velmans cites experiments undermining hypotheses about causal roles for consciousness in perception, learning, decision making, and so on. I'll leave it to experts to challenge the data, as I want to concentrate on removing the surprising sting in the tail of the argument.
.................
Conjecture: This (very difficult) design-based strategy for explaining phenomena that would support talk of consciousness will eventually explain it all. We shall have evidence of success if intelligent machines of the future reject our explanations of how they work, saying it leaves out something terribly important, something that can only be described from the first-machine point of view.


Filename: sloman-popper-3-worlds.pdf

Title: A Suggestion About Popper's Three Worlds In the Light of Artificial Intelligence
(Previously: Artificial Intelligence and Popper's Three Worlds)

Author: Aaron Sloman

Date: 1985
Date Installed: 9 Oct 2012

Where published:

In Problems, Conjectures, and Criticisms: New Essays in Popperian Philosophy,
Eds. Paul Levinson and Fred Eidlin, Special issue of ETC: A Review of General Semantics, (42:3) Fall 1985.
http://www.generalsemantics.org/store/etc-a-review-of-general-semantics/309-etc-a-review-of-general-semantics-42-3-fall-1985.html

Abstract:

Materialists claim that world2 is reducible to world1. Work in Artificial Intelligence suggests that world2 is reducible to world3, and that one of the main explanatory roles Popper attributes to world2, namely causal mediation between worlds 1 and 3, is a redundant role. The central claim can be summed up as: "Any intelligent ghost must contain a computational machine." Computation is a world3 process. Moreover, much of AI (like linguistics) is clearly both science and not empirically refutable, so Popper's demarcation criterion needs to be replaced by a criterion which requires scientific theories to have clear and definite consequences concerning what is possible, rather than about what will happen.

Having always admired Popper and been deeply influenced by some of his ideas (even though I do not agree with all of them) I feel privileged at being invited to contribute to a volume of commentaries on his work. My brief is to indicate the relevance of work in Artificial Intelligence (henceforth AI) to Popper's philosophy of mind. Materialist philosophers of mind tend to claim that world2 is reducible to world1. I shall try to show how AI suggests that world2 is reducible to world3, and that one of the main explanatory roles Popper attributes to world2, namely causal mediation between worlds 1 and 3, is a redundant role. The central claim of this paper can be summed up by the slogan: "Any intelligent ghost must contain a computational machine".


Filename: personal-ai-sloman-1988.html (HTML)
Filename: personal-ai-sloman-1988.pdf (PDF)

Title: A Personal View Of Artificial Intelligence
Author: Aaron Sloman

Date Installed: 4 Sep 2012 (First published 1989)

Where published:

Preface to Computers and Thought 1989
By Mike Sharples, David Hogg, Chris Hutchinson, Steve Torrance, and David Young
MIT Press, 20 Oct 1989 - 433 pages

This preface has also been available since about 1988 as a 'TEACH' file in the Poplog system: TEACH AITHEMES

Abstract:

(Extract from Introduction:)
There are many books, newspaper reports and conferences providing information and making claims about Artificial Intelligence and its lusty baby the field of Expert Systems. Reactions range from one lunatic view that all our intellectual capabilities will be exceeded by computers in a few years time to the slightly more defensible opposite extreme view that computers are merely lumps of machinery that simply do what they are programmed to do and therefore cannot conceivably emulate human thought, creativity or feeling. As an antidote for these extremes, I'll try to sketch a sane middle-of-the-road view.


Towards a Computational Theory of Mind (PDF)
Aaron Sloman

(First published 1984 -- installed here with a new end-note 8 Aug 2012. Details in another file.)


Filename: skills-cogsci-81.pdf (PDF)
Filename: skills-cogsci-81.txt (Plain Text)
Filename: skills-cogsci-81.ps (Postscript)
Title: Skills Learning and Parallelism

In Proceedings Cognitive Science Conference, Berkeley, 1981.
Author: Aaron Sloman
Date Installed: 15 Jan 2008 (Written April 1981)

Abstract:
From the text

The distinction between compiled and interpreted programs plays an important role in computer science and may be essential for understanding intelligent systems. For instance programs in a high-level language tend to have a much clearer structure than the machine code compiled equivalent, and are therefore more easily synthesised, debugged and modified. Interpreted languages make it unnecessary to have both representations. Further, if the interpreter is itself an interpreted program it can be modified during the course of execution, for instance to enhance the semantics of the language it is interpreting, and different interpreters may be used with the same program, for different purposes: e.g. an interpreter running the program in 'careful mode' would make use of comments ignored by an interpreter running the program at maximum speed (Sussman 1975). (The possibility of changing interpreters vitiates many of the arguments in Fodor (1975) which assume that all programs are compiled into a low level machine code, whose interpreter never changes).

People who learn about the compiled/interpreted distinction frequently re-invent the idea that the development of skills in human beings may be a process in which programs are first synthesised in an interpreted language, then later translated into a compiled form. The latter is thought to explain many features of skilled performance, for instance, the speed, the difficulty of monitoring individual steps, the difficulty of interrupting, starting or resuming execution at arbitrary desired locations, the difficulty of modifying a skill, the fact that performance is often unconscious after the skill has been developed, and so on. On this model, the old jokes about centipedes being unable to walk, or birds to fly, if they think about how they do it, might be related to the impossibility of using the original interpreter after a program has been compiled into a lower level language.

Despite the attractions of this theory I suspect that a different model is required in some cases.
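
To make the compiled/interpreted contrast concrete, here is a minimal Python sketch (not from the paper; all names are invented for illustration). The same 'skill' can be run step by step by an interpreter, optionally in 'careful mode' which executes extra checks, or fused into a single compiled closure that runs faster but can no longer be monitored, interrupted or modified step by step:

    # Illustrative sketch: a 'skill' as interpreted steps vs a compiled closure.

    def make_interpreter(careful=False):
        def run(program, state):
            for step in program:
                if careful and "check" in step:
                    # 'Careful mode' uses annotations ignored by fast execution.
                    assert step["check"](state), "precondition failed at " + step["name"]
                state = step["op"](state)      # interpret one step at a time
            return state
        return run

    def compile_program(program):
        # Fuse the steps into one closure: faster, but individual steps can no
        # longer be monitored, interrupted or modified -- as in skilled action.
        ops = [step["op"] for step in program]
        def compiled(state):
            for op in ops:
                state = op(state)
            return state
        return compiled

    # Example 'skill': add 3, then double.
    program = [
        {"name": "add3",   "op": lambda s: s + 3, "check": lambda s: isinstance(s, int)},
        {"name": "double", "op": lambda s: s * 2},
    ]
    assert make_interpreter(careful=True)(program, 2) == 10
    assert compile_program(program)(2) == 10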



Filename: davis-sloman-poli-aisbq-95.pdf
Also at http://www2.dcs.hull.ac.uk/NEAT/dnd/
http://www2.dcs.hull.ac.uk/NEAT/dnd/papers/aisbq.pdf

Title: Simulating agents and their environments,
In AISB Quarterly, Autumn 1995
Authors: Darryl Davis, Aaron Sloman and Riccardo Poli,
Date: Installed here 3 Mar 2004 (Originally Published in 1995)

Abstract:
This paper describes a toolkit that arose out of a project concerned with designing an architecture for an autonomous agent with human-like capabilities. Analysis of requirements showed a need to combine a wide variety of richly interacting mechanisms, including independent asynchronous sources of motivation and the ability to reflect on which motives to adopt, when to achieve them, how to achieve them, and so on. These internal `management' (and metamanagement) processes involve a certain amount of parallelism, but resource limits imply the need for explicit control of attention. Such control problems can lead to emotional and other characteristically human affective states. We needed a toolkit to facilitate exploration of alternative architectures in varied environments, including other agents. The paper outlines requirements and summarises the main design features of a toolkit written in Pop-11. Some preliminary work on developing a multi-agent scenario, using agents of differing sophistication is presented.

NOTE: See also the current description of the toolkit, here: http://www.cs.bham.ac.uk/research/poplog/packages/simagent.html
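
By way of illustration only, the scheduling idea described above can be sketched in a few lines of Python (the real toolkit is written in Pop-11; the class and method names below are invented, not the SIM_AGENT API). In each cycle every agent senses and decides, with actions merely queued, so agents appear to act in parallel within a cycle:

    # Minimal sketch of the two-phase agent scheduling idea (names invented).

    class Agent:
        def __init__(self, name):
            self.name = name
            self.percepts = []
            self.pending_actions = []

        def sense(self, world):
            self.percepts = [a.name for a in world.agents if a is not self]

        def decide(self):
            # Internal processing happens here; actions are only queued, so
            # all agents appear to act in parallel within one cycle.
            self.pending_actions.append(("greet", self.percepts))

    class World:
        def __init__(self, agents):
            self.agents = agents

        def run_cycle(self):
            for a in self.agents:          # phase 1: everyone senses and thinks
                a.sense(self)
                a.decide()
            for a in self.agents:          # phase 2: queued actions take effect
                for act in a.pending_actions:
                    print(a.name, act)
                a.pending_actions.clear()

    World([Agent("a1"), Agent("a2")]).run_cycle()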


Filename: Sloman.emot.gram.ps
Filename: Sloman.emot.gram.pdf

Title: Towards a Grammar of Emotions, in New Universities Quarterly, 36,3, pp 230-238, 1982.

Author: Aaron Sloman
Date: Installed here 6 Dec 1998 (Originally Published in 1982)

Abstract:
By analysing what we mean by 'A longs for B', and similar descriptions of emotional states we see that they involve rich cognitive structures and processes, i.e. computations. Anything which could long for its mother, would have to have some sort of representation of its mother, would have to believe that she is not in the vicinity, would have to be able to represent the possibility of being close to her, would have to desire that possibility, and would have to be to some extent pre-occupied or obsessed with that desire. The paper includes a fairly detailed discussion of what it means to say 'X is angry with Y', and relationships between anger, exasperation, annoyance, dismay, etc. Emotions are contrasted with attitudes and moods.
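
As a hypothetical illustration of the kind of structured state the abstract describes (field names are invented here, not taken from the paper), 'A longs for B' might be unpacked into explicit components:

    # Hypothetical sketch: 'A longs for B' as a structured cognitive state.

    from dataclasses import dataclass

    @dataclass
    class Longing:
        representation_of: str    # A must represent B
        believes_absent: bool     # A believes B is not in the vicinity
        desires_presence: bool    # A desires the possibility of being close to B
        preoccupation: float      # degree to which the desire dominates attention

    state = Longing("mother", believes_absent=True,
                    desires_presence=True, preoccupation=0.9)
    # On this analysis the state counts as longing only if all components hold:
    is_longing = (state.believes_absent and state.desires_presence
                  and state.preoccupation > 0.5)
    assert is_longing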


Filename: sloman.beginners.pdf (PDF)
Filename: sloman.beginners.html (HTML)
Title: Beginners need powerful systems

Originally in New Horizons in Educational Computing (Ed) M. Yazdani, Ellis Horwood, 1984. pp 220-235

Author: Aaron Sloman
Date: Originally published 1984. Added here 27 Nov 2001

Abstract:
The paper argues that instead of choosing very simple and restricted programming languages and environments for beginners, we can offer them many advantages if we use powerful, sophisticated languages, libraries, and development environments. Several reasons are given. The Pop-11 subset of the Poplog system is offered as an example.


Filename: sloman.pop11.pdf
Filename: Sloman.pop11.html (HTML, added 17 Jan 2009)
Filename: Sloman.pop11.txt (Plain text)
Filename: sloman.pop11.ps
Title: The Evolution of Poplog and Pop-11 at Sussex University

Originally in POP-11 Comes of Age: The Advancement of an AI Programming Language, (Ed) J. A.D.W. Anderson, Ellis Horwood, pp 30-54, 1989.
Author: Aaron Sloman
Date: Originally published 1989. Added here 1 Feb 2001

Abstract:
This paper gives an overview of the origins and development of the programming language Pop-11, one of the Pop family of languages including Pop1, Pop2, Pop10, Wpop, Alphapop. Pop-11 is the most sophisticated version, comparable in scope and power to Common Lisp, though different in many significant details, including its syntax. For more on Pop-11 and Poplog, the system of which it is the core language, see http://www.cs.bham.ac.uk/research/poplog/poplog.info.html

This paper first appeared in a collection published in 1989 to celebrate the 21st birthday of the Pop family of languages.


Filename: sloman.primacy.inner.language.pdf
Filename: sloman.primacy.inner.language.ps
Filename: sloman.primacy.inner.language.txt (Plain text)
Title: The primacy of non-communicative language

Author: Aaron Sloman

In The Analysis of Meaning, Proceedings 5,
(Invited talk for ASLIB Informatics Conference, Oxford, March 1979,)
ASLIB and British Computer Society, London, 1979.
Eds M. MacCafferty and K. Gray, pages 1--15.
Date: Originally published 1979. Added here 2 Dec 2000

Abstract:
How is it possible for symbols to be used to refer to or describe things? I shall approach this question indirectly by criticising a collection of widely held views of which the central one is that meaning is essentially concerned with communication. A consequence of this view is that anything which could be reasonably described as a language is essentially concerned with communication. I shall try to show that widely known facts, for instance facts about the behaviour of animals, and facts about human language learning and use, suggest that this belief, and closely related assumptions (see A1 to A3, in the paper) are false. Support for an alternative framework of assumptions is beginning to emerge from work in Artificial Intelligence, work concerned not only with language but also with perception, learning, problem-solving and other mental processes. The subject has not yet matured sufficiently for the new paradigm to be clearly articulated. The aim of this paper is to help to formulate a new framework of assumptions, synthesising ideas from Artificial Intelligence and Philosophy of Science and Mathematics.


Filename: Sloman.ijcai95.txt (Plain text)
Filename: Sloman.ijcai95.pdf
Filename: Sloman.ijcai95.ps
Author: Aaron Sloman
Title: A Philosophical Encounter

This is a four-page paper, introducing a panel at IJCAI95 in Montreal, August 1995:

[Photograph: the IJCAI95 panel]
`A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.'

    Many thanks to Takashi Gomi, at Applied AI Systems Inc, who took the picture.

John McCarthy also contributed a short paper on interactions between Philosophy and AI, available via his WEB page:
http://www-formal.stanford.edu/jmc/
Date: 24 April 95

Abstract:
This paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.' Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals.


Filename: Sloman.scai95.ps
Filename: Sloman.scai95.pdf
Author: Aaron Sloman
Title: Exploring design space and niche space

Invited talk for the 5th Scandinavian Conference on AI, Trondheim, May 1995. In Proceedings SCAI95, published by IOS Press, Amsterdam.
Date: 16 April 1995

Abstract:
Most people who give definitions of AI offer narrow views based either on their own work area or the pronouncement of an AI guru about the scope of AI. Looking at the range of research activities to be found in AI conferences, books, journals and laboratories suggests something very broad and deep, going beyond engineering objectives and the study or replication of human capabilities. This is exploration of the space of possible designs for behaving systems (design space) and the relationships between designs and various collections of requirements and constraints (niche space). This exploration is inherently multi-disciplinary, and includes not only exploration of various architectures, mechanisms, formalisms, inference systems, and the like (aspects of natural and artificial designs), but also the attempt to characterise various kinds of behavioural capabilities and the environments in which they are required, or possible. The implications of such a study are profound: e.g. for engineering, for biology, for psychology, for philosophy, and for our view of how we fit into the scheme of things.


Filename: Riccardo.Poli_Mike.Brayshaw.hybrid.system.ps
Filename: Riccardo.Poli_Mike.Brayshaw.hybrid.system.pdf
Title: A Hybrid Trainable Rule-based System
School of Computer Science, The University of Birmingham, Cognitive Science technical report CSRP-95-4
Date: 31 March 1995
Authors: Riccardo Poli and Mike Brayshaw
Abstract:
In this paper we introduce a new formalism for rule specification that extends the behaviour of a traditional rule based system and allows the natural development of hybrid trainable systems. The formalism in itself allows a simple and concise specification of rules and lends itself to the introduction of symbolic rule induction mechanisms (example-based knowledge acquisition) as well as artificial neural networks. In the paper we describe such a formalism and four increasingly powerful mechanisms for rule induction. The first one is based on a truth-table representation; the second is based on a form of example based learning; the third on feed-forward artificial neural nets; the fourth on genetic algorithms. Examples of systems based on these hybrid paradigms are presented and their advantages with respect to traditional approaches are discussed.
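
As a rough illustration of the simplest of the four mechanisms mentioned, the truth-table approach, a rule can be induced from examples simply by recording observed input-output cases (a hypothetical Python sketch, not the authors' formalism):

    # Hypothetical sketch: inducing a rule from examples via a truth table.

    def induce_rule(examples):
        # examples: mapping from a tuple of boolean inputs to a boolean output
        table = dict(examples)
        def rule(*inputs):
            return table.get(inputs, False)   # unseen cases default to False
        return rule

    # Train on observed cases of a two-input condition:
    examples = {(True, True): True, (True, False): False,
                (False, True): False, (False, False): False}
    fires = induce_rule(examples)
    assert fires(True, True) and not fires(False, True)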


Title: Information about the SIM_AGENT toolkit
Authors: Aaron Sloman and Riccardo Poli
Filename: sim_agent

A text file which is part of the online documentation for the SIM_AGENT toolkit. See also ftp://ftp.cs.bham.ac.uk/pub/dist/poplog/sim

Filename: sim_agent.pdf November 1994 Seminar Slides. (PDF)
Filename: sim_agent.ps.gz November 1994 Seminar Slides. (Gzipped Postscript)

Postscript/PDF version of some seminar slides presenting the package. Partly out of date.

Filename: simagent.html http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
Link to the main SIM_AGENT overview page. Includes a pointer to some movies demonstrating simple uses of the toolkit.

Authors: Aaron Sloman and Riccardo Poli
Date: November 1994 to March 1995


Abstract:
These files give partial descriptions of the sim_agent toolkit implemented in Poplog Pop-11 for exploring architectures for individual or interacting agents. See also the ATAL95 paper summarised above, Aaron.Sloman_Riccardo.Poli_sim_agent_toolkit.ps.gz


Filename: Luc.Beaudoin_thesis.pdf (PDF)
Filename: Luc.Beaudoin_thesis.ps (postscript.)
Filename: Luc.Beaudoin_thesis.ps.gz (Compressed postscript.)
Filename: Luc.Beaudoin_thesis.rtf.gz (Original rtf format, gzipped.)
Filename: Luc.Beaudoin_thesis.txt.gz (Plain text version gzipped)
Title: Goal processing in autonomous agents
Date: 31 Aug 1994 (Updated March 13th 1995) (PDF version added 18 May 2003.)
Author: Luc P. Beaudoin


Abstract:
A thesis submitted to the Faculty of Science of the University of Birmingham for the degree of PhD in Cognitive Science. (Supervisor: Aaron Sloman).
Synopsis
The objective of this thesis is to elucidate goal processing in autonomous agents from a design-stance. A. Sloman's theory of autonomous agents is taken as a starting point (Sloman, 1987; Sloman, 1992b). An autonomous agent is one that is capable of using its limited resources to generate and manage its own sources of motivation. A wide array of relevant psychological and AI theories are reviewed, including theories of motivation, emotion, attention, and planning. A technical yet rich concept of goals as control states is expounded. Processes operating on goals are presented, including vigilational processes and management processes. Reasons for limitations on management parallelism are discussed. A broad design of an autonomous agent that is based on M. Georgeff's (1986) Procedural Reasoning System is presented. The agent is meant to operate in a microworld scenario. The strengths and weaknesses of both the design and the theory behind it are discussed. The thesis concludes with suggestions for studying both emotion ("perturbance") and pathologies of attention as consequences of autonomous goal processing.
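
A hypothetical sketch of the 'goals as control states' idea, with attribute names loosely following the thesis vocabulary (insistence, surfacing, management) but otherwise invented:

    # Hypothetical sketch: a goal as a persistent control substate whose
    # attributes determine how it is processed.

    from dataclasses import dataclass

    @dataclass
    class Goal:
        content: str          # what is to be achieved
        insistence: float     # tendency to penetrate the attention filter
        importance: float     # how much it matters if adopted
        status: str = "unsurfaced"   # unsurfaced -> surfaced -> adopted/rejected

    def vigilance_step(goals, filter_threshold):
        # 'Vigilational' processing: only sufficiently insistent goals surface
        # and become candidates for (resource-limited) management.
        for g in goals:
            if g.status == "unsurfaced" and g.insistence > filter_threshold:
                g.status = "surfaced"
        return [g for g in goals if g.status == "surfaced"]

    goals = [Goal("recharge", 0.9, 0.7), Goal("tidy lab", 0.2, 0.4)]
    assert [g.content for g in vigilance_step(goals, 0.5)] == ["recharge"]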


Filename: Christian.paterson_mphil1.ps.gz part 1
Filename: Christian.paterson_mphil2.ps.gz part 2
Title: The use of ratings for the integration of planning and learning in a broad but shallow agent architecture.
MPhil Thesis (in two parts), University of Birmingham.
Author: Christian Paterson
Date: Feb 27 1995
Abstract:
The effective integration of both planning and learning must be viewed as a prerequisite to the creation of a truly intelligent autonomous agent, one of AI's Holy Grails. Many approaches, implemented within many systems, have been propounded, yet all have fallen short of the mark. The proposed AIMAE system, a broad but shallow agent architecture, provides just such an integration in a situated, goal-directed fashion. This is made possible via the use of behaviour-based ratings providing a multi-dimensional ordering on sub-plans, and hence acting as heuristic guides to plan construction. Furthermore, use is made of a control plan mechanism which, it is hoped, will allow the system to address multiple concurrent goals, and carry out a degree of opportunity taking.
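
The rating idea can be illustrated with a hypothetical sketch (dimension names and weights invented): each sub-plan carries ratings on several behavioural dimensions, and a weighted combination orders the alternatives during plan construction:

    # Hypothetical sketch: ratings as a multi-dimensional ordering on sub-plans.

    subplans = [
        {"name": "route_a", "speed": 0.9, "safety": 0.3, "energy": 0.5},
        {"name": "route_b", "speed": 0.6, "safety": 0.8, "energy": 0.7},
    ]

    def score(plan, weights):
        # Collapse the ratings into one heuristic value for ordering.
        return sum(plan[dim] * w for dim, w in weights.items())

    # A cautious agent weights safety highly; the ratings order the alternatives.
    weights = {"speed": 0.2, "safety": 0.6, "energy": 0.2}
    best = max(subplans, key=lambda p: score(p, weights))
    assert best["name"] == "route_b"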

Filename: Aaron.Sloman_why_robot_emotions.ps
Filename: Aaron.Sloman_why_robot_emotions.pdf
Title: Why robots will have emotions
Authors: Aaron Sloman and Monica Croucher

Date: August 1981 (Installed in this directory 10 Nov 1994)
Originally appeared in Proceedings IJCAI 1981, Vancouver
Also available from Sussex University as Cognitive Science Research paper No 176

Abstract:
Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind. Some constraints on the evolution of minds are discussed. Types of motives and the processes they generate are sketched.

(Note we now use slightly different terminology from that used in this paper. In particular, what the paper labelled as "intensity" we now call "insistence", i.e. the capacity to divert attention from other things.)

NB

This paper is often misquoted as arguing that robots (or at least intelligent robots) should have emotions. On the contrary, the paper argues that certain sorts of high level disturbances (i.e. emotional states) will be capable of arising out of interactions between mechanisms that exist for other reasons. Similarly 'thrashing' is capable of occurring in multi-processing operating systems that support swapping and paging, but that does not mean that operating systems should produce thrashing.

A more recent analysis of the confused but fashionable arguments (e.g. based on Damasio's writings) claiming that emotions are needed for intelligence can be found in this semi-popular presentation.

One of the arguments is analogous to arguing that a car requires a functioning horn for its starter motor to work, because damaging the battery can disable the horn and disable the starter motor.


Filename: Ian.Wright_emotional_agent.ps.gz
Filename: Ian.Wright_emotional_agent.ps
Filename: Ian.Wright_emotional_agent.pdf
Title: An Emotional Agent -- The Detection and Control of Emergent States in an Autonomous Resource-Bounded Agent
(PhD Thesis Proposal)
Date: October 31 1994
Author: Ian Wright
Abstract:
In dynamic and unpredictable domains, such as the real world, agents are continually faced with new requirements and constraints on the quality and types of solutions they produce. Any agent design will always be limited in some way. Such considerations highlight the need for self-referential mechanisms, i.e. agents with the ability to examine and reason about their internal processes in order to improve and control their own functioning.
This work aims to implement a prototype agent architecture that meets the requirements for self-referential systems, and is able to exhibit perturbant (`emotional') states, detect such states and attempt to do something about them. Results from this research will contribute to autonomous agent design, emotionality, internal perception and meta-level control; in particular, it is hoped that we will
i. provide a (partial) implementation of Sloman's theory of perturbances (Sloman, 81) within the NML1 design (Beaudoin, 94),
ii. investigate the requirements for the self-detection and control of processing states, and
iii. demonstrate the adaptiveness of, the need for, and consequences of, self-control mechanisms that meet the requirements for self-referential systems.


Filename: Ed.Shing_Computational.Constraints.ps.gz
Filename: Ed.Shing_Computational.Constraints.ps
Filename: Ed.Shing_Computational.Constraints.pdf
Title: Computational Constraints on Associative Learning,
in Proceedings of the XI National Brazilian Symposium on AI, Fortaleza, Brazil, published by the Banco Nordeste do Brazil.
Date: October 25 1994
Author: Edmund Shing
Abstract:
Due to the dynamic nature of the real world, learning in intelligent agents requires various processes of selection (`attention') of input features in order to enable computational tractability. This paper looks at associative learning and analyses the selection processes necessary for this to work effectively by avoiding the combinatorial explosion problem faced by an adaptive agent situated in a complex and dynamic world. Analysis suggests that adaptive agent architectures require selection processes in order to perform any "useful" learning. An agent design is constructed following a "broad and shallow" approach to meet both general (e.g. related to fundamental properties of the real world) and specific (e.g. related to the specific theory proposed) requirements, concentrating on learning and selection mechanisms in the implementation of reinforcement learning.
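
The combinatorial point is easy to make concrete (an illustrative calculation, not taken from the paper): a full associative table over all feature combinations grows exponentially, so a selection process that attends to only a few features is what keeps learning tractable:

    # Illustrative calculation: why associative learning needs selection.

    def associations(num_features, values_per_feature=2):
        # Size of a full associative table over all feature combinations:
        return values_per_feature ** num_features

    # With 30 binary features the table is astronomically large; a selection
    # ('attention') process attending to only 5 of them is trivial:
    assert associations(30) == 2 ** 30      # over a billion entries
    assert associations(5) == 32            # tractable after selection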

Filename: Aaron.Sloman_musings.ps
Filename: Aaron.Sloman_musings.pdf
Title: Musings on the roles of logical and non-logical representations in intelligence.

in: Janice Glasgow, Hari Narayanan and B. Chandrasekaran (eds), Diagrammatic Reasoning: Computational and Cognitive Perspectives, AAAI Press 1995
Author: Aaron Sloman
Date: 17 October 1994

Abstract:

This paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. It identifies some common confusions regarding the role of visual or diagrammatic reasoning, including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be *controlled* using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.

This is one of several sequels to the paper presented at IJCAI in 1971


Filename: emotions_workshop95
Title: Geneva Emotion Week 1995
Date: October 1994
Call for Applications
GENEVA EMOTION WEEK '95
April 8 to April 13, 1995
University of Geneva, Switzerland

The Emotion Research Group at the University of Geneva announces the third GENEVA EMOTION WEEK (GEW '95), consisting of a colloquium focusing on a major topic in the psychology of emotion, and of a series of workshops designed to introduce participants to advanced research methods in the field of emotion. In combination with WAUME95.


Filename: Aaron.Sloman_towards.th.rep.ps
Filename: Aaron.Sloman_towards.th.rep.pdf
Title: Towards a general theory of representations

Author: Aaron Sloman
In Donald Peterson (ed) Forms of representation, Intellect Books, 1996
Date: 31 July 1994

Abstract:

This position paper presents the beginnings of a general theory of representations starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Similarly concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent).

This is one of several sequels to the paper presented at IJCAI in 1971


Filename: Aaron.Sloman_isre.pdf
Filename: Aaron.Sloman_isre.ps.gz
Title: Computational Modelling Of Motive-Management Processes
"Poster" prepared for the Conference of the International Society for Research in Emotions, Cambridge July 1994 (Final version installed here July 30th 1994)
Authors: Aaron Sloman, Luc Beaudoin and Ian Wright
Revised version in Proceedings ISRE94, edited by Nico Frijda, ISRE Publications. Email: frijda@uvapsy.psy.uva.nl
Date: 29 July 1994 (PDF version added 25 Dec 2005)
Abstract:
This is a 5 page summary with three diagrams of the main objectives and some work in progress at the University of Birmingham Cognition and Affect project, involving Professor Glyn Humphreys (School of Psychology), and Luc Beaudoin, Chris Paterson, Tim Read, Edmund Shing, Ian Wright, Ahmed El-Shafei, and (from October 1994) Chris Complin (research students). The project is concerned with "global" design requirements for coping simultaneously with coexisting but possibly unrelated goals, desires, preferences, intentions, and other kinds of motivators, all at different stages of processing. Our work builds on and extends seminal ideas of H.A.Simon (1967). We are exploring "broad and shallow" architectures combining varied capabilities most of which are not implemented in great depth. The poster summarises some ideas about management and meta-management processes, attention filtering, and the relevance to emotional states involving "perturbances", where there is partial loss of control of attention.


Filename: Tim.Read_Applying_S.D.pdf (PDF)
Filename: Tim.Read_Applying_S.D.ps.gz
Title: Applying Systemic Design to the study of `emotion'
Presented at AICS94, Dublin Ireland
Author: Tim Read
Date: 20th July 1994
Abstract:
Emotion has proved a difficult concept for researchers to explain. This is principally due to both terminological and methodological problems. Systemic Design is a methodology which has been developed and used for studying emotion in an attempt to resolve these difficulties, providing a step toward a complete understanding of `emotional phenomena'. This paper discusses the application of this methodology to study the three mammalian behavioural control systems proposed by Gray (1990). The computer simulation presented here models a rat in the Kamin (1957) avoidance experiment for two reasons: firstly, to demonstrate how Gray's systems can form a large part of the explanation of what is happening in this experiment (which has proved difficult for researchers to do so far), and secondly, as avoidance behaviour and its associated architectural concomitance are related to many so called `emotional states'.


Filename: Ed.Shing_Constraining.Learning.ps.gz
Title: Computational Constraints for Associative Learning
Date: 15 May 1994
Author: Edmund Shing
Abstract:
Due to the dynamic nature of the real world, learning in intelligent agents requires various processes of selection ("attention to") of input features in order to facilitate computational tractability.
There are many different forms of learning observed in people and animals; this research looks at reinforcement learning and analyses the selection processes necessary for this to work effectively. Machine learning work has traditionally concentrated on small predictable domains (the "deep and narrow" approach to cognitive simulation) and so has avoided the combinatorial explosion problem faced by an adaptive agent situated in a complex and dynamic world.
A preliminary analysis of several forms of learning suggests that (a) adaptive agent architectures require selection processes in order to perform any "useful" learning; and (b) reinforcement learning coupled with certain simple selection, monitoring and evaluation mechanisms can achieve several seemingly more complex forms of learning.
An agent design is constructed following a "broad and shallow" approach to meet both general (e.g. related to fundamental properties of the real world) and specific (e.g. related to the specific theory proposed) requirements, concentrating on learning and selection mechanisms in the implementation of reinforcement learning. This agent architecture should exhibit both expected reinforcement learning behaviours and seemingly more complex learning behaviours. Implications of this work are discussed.


Filename: Aaron.Sloman_explorations.ps
Filename: Aaron.Sloman_explorations.pdf
Title: Explorations in Design Space

Author: Aaron Sloman
Date: 20 April 1994
in Proc ECAI94, 11th European Conference on Artificial Intelligence, edited by A.G.Cohn, John Wiley, pp 578-582, 1994

Abstract:
This paper sketches a vision of AI as a unifying discipline that explores designs for a variety of behaving systems, for both scientific and engineering purposes. This unpacks the idea that AI is the general study of intelligence, whether natural or artificial. Some aspects of the methodology of such a discipline are outlined, and a project attempting to fill gaps in current work introduced. This is one of a series of papers outlining the "design-based" approach to the study of mind, based on the notion that a mind is essentially a sophisticated self-monitoring, self-modifying control system. The "design-based" study of architectures for intelligent agents is important not only for engineering purposes but also for bringing together hitherto fragmentary studies of mind in various disciplines, for providing a basis for an adequate set of descriptive concepts, and for making it possible to understand what goes wrong in various human activities and how to remedy the situation. But there are many difficulties to be overcome.


Filename: Aaron.Sloman_representations.control.pdf
Filename: Aaron.Sloman_representations.control.ps
Filename: Aaron.Sloman_representations.control.ps.gz
Title: Representations as control substates (DRAFT)

Author: Aaron Sloman
Date: March 6th 1994

Abstract:
(This is a longer, earlier version of "Towards a general theory of representations", and includes some additional material.)
Since first presenting a paper criticising excessive reliance on logical representations in AI at the second IJCAI at Imperial College London in 1971, I have been trying to understand what representations are and why human beings seem to need so many different kinds, tailored to different purposes. This position paper presents the beginnings of a general answer starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Different kinds of syntax can support different kinds of semantics, and serve different kinds of purposes. Similarly concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent), and a first attempt is made to characterise dimensions in which forms of representations can differ, including the explicit/implicit dimension.

This is one of several sequels to the paper presented at IJCAI in 1971


Filename: Aaron.Sloman_semantics.ps
Filename: Aaron.Sloman_semantics.pdf
Title: Semantics in an intelligent control system

Paper for conference at Royal Society in April 1994 on Artificial Intelligence and the Mind: New Breakthroughs or Dead Ends?
in Philosophical Transactions of the Royal Society: Physical Sciences and Engineering Vol 349, 1689 pp 43-58 1994
Author: Aaron Sloman
Date: May 11 1994

Abstract:
Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an *architecture* for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level "virtual machine" architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of "semantic information".

This is one of several sequels to the paper on representations presented at IJCAI in 1971.


Filename: Ian.Wright_Project_Summary.pdf (PDF)
Filename: Ian.Wright_Project_Summary.ps.gz
Title: A Summary of the Attention and Affect Project

Date: March 2nd 1994
Author: Ian Wright
Abstract:
The Attention and Affect project is summarized. The original aims of the project are reviewed and the work to date described, followed by a critique of the project in terms of the original aims. Some ideas for future work are outlined.


Filename: Aaron.Sloman_variety.formalisms.ps
Filename: Aaron.Sloman_variety.formalisms.pdf
Title: Varieties of Formalisms for Knowledge Representation

Commentary on: "The Imagery Debate Revisited: A Computational perspective," by Janice I. Glasgow, in: Computational Intelligence. Special issue on Computational Imagery, Vol. 9, No. 4, November 1993
Author: Aaron Sloman
Date: Nov 1993

Abstract:
Whilst I agree largely with Janice Glasgow's position paper, there are a number of relevant subtle and important issues that she does not address, concerning the variety of forms and techniques of representation available to intelligent agents, and issues concerned with different levels of description of the same agent, where that agent includes different virtual machines at different levels of abstraction. I shall also suggest ways of improving on her array-based representation by using a general network representation, though I do not know whether efficient implementations are possible.

This is one of several sequels to the paper presented at IJCAI in 1971


Filename: Tim.Read_Systemic.Design.pdf (PDF)
Filename: Tim.Read_Systemic.Design.ps.gz
Title: Systemic Design: A Methodology For Investigating Emotional Phenomena

Presented at WAUME93
Author: Tim Read
Date: August 1993
Abstract:
In this paper I introduce Systemic Design as a methodology for studying complex phenomena like those commonly referred to as being emotional. This methodology is an extension of the design-based approach to include: organismic phylogenetic considerations, a holistic design strategy, and a consideration of resource limitations. It provides a powerful technique for generating theoretical models of the mechanisms underpinning emotional phenomena, the current terminology associated with which is often muddled and inconsistent. This approach enables concepts and mechanisms to be clearly specified and communicated to other researchers in related fields.


Filename: Tim.Read-et.al_TerminlogyPit.pdf
Filename: Tim.Read,et.al_Terminology.Pit.ps.gz
Title: The Terminological Pitfalls of Studying Emotion
Authors: Tim Read and Aaron Sloman

(This paper is written by the first author with ideas developed from conversations with the second).
Date: Aug 1993
Abstract:
The research community is full of papers with titles that include terms like `emotion', `motivation', `cognition', and `attention'. However when these terms are used they are either considered to be so obvious as not to warrant a definition, or are defined in overly simplistic and arbitrary ways. The reasons behind our usage of existing terminology are easy to see, but the problems inherent in it are not. The use of such terminology gives rise to a whole set of problems, chief among them confusion and pointless semantic disagreement.
These problems occur because the current terminology is too vague, and burdened with acquired meaning. We need to replace it with terminology that emerges from a putatively complete theory of the conceptual space of mechanisms and behaviours, spanning several functional levels (e.g.: neural, behavioural and computational). Research that attempts to use the current terminology to build larger and more complex theory just adds to the existing confusion.
In this paper I examine the reasons behind the use of current terminology, explore the problems inherent with it, and offer a way to resolve these problems. The days when one small research team could hope to produce a theory to explain the complete range of phenomena currently referred to as being `emotional' have passed. It is time for concerted and coordinated activity to understand the relation of mechanisms to behaviour. This will give rise to clear and unambiguous terminology that is defined at different functional levels. Until the current terminological problems are solved, our rate of progress will be slow.


Filename: Louise.Pryor,et.al_Cassandra.ps.Z
Title: Cassandra: Planning with contingencies
Authors: Louise Pryor and Gregg Collins

Date: Sept 1993
Abstract:
A fundamental assumption made by classical planners is that there is no uncertainty in the world: the planner has full knowledge of the initial conditions in which the plan will be executed, and all actions have fully predictable outcomes. These planners cannot therefore construct contingency plans, that is, plans that specify different actions to be performed in different circumstances. In this paper we discuss the issues that arise in the representation and construction of contingency plans and describe Cassandra, a complete and sound partial-order contingent planner that uses a single simple mechanism to represent unknown initial conditions and the uncertain effects of actions. Cassandra uses explicit decision steps that enable the agent executing the plan to decide which plan branch to follow. The decision steps in a plan result in subgoals to acquire knowledge, which are planned for in the same way as any other subgoals. Unlike previous systems, Cassandra thus distinguishes the process of gathering information from the process of making decisions, and can use information-gathering actions with a full range of preconditions. The simple representation of uncertainty and the explicit representation of decisions in Cassandra allow a coherent approach to the problems of contingent planning, and provide a solid base for extensions such as the use of different decision making procedures.
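
A hypothetical sketch of the central idea (not Cassandra's actual representation): a plan contains an explicit decision step whose knowledge precondition generates an information-gathering subgoal, and execution branches on the knowledge acquired:

    # Hypothetical sketch: a contingency plan with an explicit decision step.

    plan = {
        "steps": ["walk_to_door"],
        "decision": {
            "needs": "door_is_locked",            # knowledge precondition
            "acquire": "look_at_door",            # information-gathering action
            "branches": {True: ["unlock", "open"], False: ["open"]},
        },
    }

    def execute(plan, world):
        for s in plan["steps"]:
            print("do:", s)
        d = plan["decision"]
        print("do:", d["acquire"])                # gather the needed knowledge
        observed = world[d["needs"]]
        for s in d["branches"][observed]:         # decide which branch to follow
            print("do:", s)

    execute(plan, {"door_is_locked": True})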


Filename: Louise.Pryor,et.al_R.Features.ps.Z
Title: Reference features as guides to reasoning about opportunities
Authors: Louise Pryor and Gregg Collins

Date: Feb 1993
Abstract:
An intelligent agent acting in a complex and unpredictable world must be able both to plan ahead and to react quickly to changes in its surroundings. In particular, such an agent must be able to react quickly when faced with unexpected opportunities to fulfill its goals. We consider the issue of how an agent should respond to perceived opportunities, and we describe a method for determining quickly whether it is rational to seize an opportunity or whether a more detailed analysis is required. Our system uses a set of heuristics based on reference features to identify situations and objects that characteristically involve problematic patterns of interaction. We discuss the recognition of reference features, and their use in focusing the system's reasoning on potentially adverse interactions between its ongoing plans and the current opportunity.


New Searchable HTML version 11 Apr 2014
Location: Aaron.Sloman_Mind.as.controlsystem/ (HTML)
New PDF derived from new HTML:
Location: Aaron.Sloman_Mind.as.controlsystem.pdf (PDF in subdirectory)
Older version Postscript and PDF originally produced by FrameMaker:
Filename: Aaron.Sloman_Mind.as.controlsystem.ps
Filename: Aaron.Sloman_Mind.as.controlsystem.pdf

Title: The Mind as a Control System,
In

    Philosophy and the Cognitive Sciences,
    (eds) C. Hookway and D. Peterson,
    Cambridge University Press, pp 69--110 1993
Author: Aaron Sloman
Date: 1993 (installed Feb 15 1994)
Originally Presented at Royal Institute of Philosophy conference
on Philosophy and the Cognitive Sciences,
in Birmingham in 1992, with proceedings published later.

Abstract:
Many people who favour the design-based approach to the study of mind, including the author previously, have thought of the mind as a computational system, though they don't all agree regarding the forms of computation required for mentality. Because of ambiguities in the notion of 'computation' and also because it tends to be too closely linked to the concept of an algorithm, it is suggested in this paper that we should rather construe the mind (or an agent with a mind) as a control system involving many interacting control loops of various kinds, most of them implemented in high level virtual machines, and many of them hierarchically organised. (Some of the sub-processes are clearly computational in character, though not necessarily all.) A feature of the system is that the same sensors and motors are shared between many different functions, and sometimes they are shared concurrently, sometimes sequentially. A number of implications are drawn out, including the implication that there are many informational substates, some incorporating factual information, some control information, using diverse forms of representation. The notion of architecture, i.e. functional differentiation into interacting components, is explained, and the conjecture put forward that in order to account for the main characteristics of the human mind it is more important to get the architecture right than to get the mechanisms right (e.g. symbolic vs neural mechanisms). Architecture dominates mechanism.


Filename: Aaron.Sloman_prospects.ps
Filename: Aaron.Sloman_prospects.pdf
Title: Prospects for AI as the General Science of Intelligence

Author: Aaron Sloman
in Proceedings AISB93, published by IOS Press as a book: Prospects for Artificial Intelligence
Date: April 1993

Abstract:
Three approaches to the study of mind are distinguished: semantics-based, phenomena-based and design-based. Requirements for the design-based approach are outlined. It is argued that AI as the design-based approach to the study of mind has a long future, and pronouncements regarding its failure are premature, to say the least.


Filename: Luc.Beaudoin.and.Sloman_Motive_proc.ps
Filename: Luc.Beaudoin.and.Sloman_Motive_proc.pdf
Title: A study of motive processing and attention,

in A.Sloman, D.Hogg, G.Humphreys, D. Partridge, A. Ramsay (eds) Prospects for Artificial Intelligence, IOS Press, Amsterdam, pp 229-238, 1993.
Authors: Luc P. Beaudoin and Aaron Sloman
Date: April 1993

Abstract:
We outline a design-based theory of motive processing and attention, including multiple motivators operating asynchronously, with limited knowledge, processing abilities and time to respond. Attentional mechanisms address these limits using processes differing in complexity and resource requirements, in order to select which motivators to attend to, how to attend to them, how to achieve those adopted for action and when to do so. A prototype model is under development. Mechanisms include: motivator generators, attention filters, a dispatcher that allocates attention, and a manager. Mechanisms like these might explain the partial loss of control of attention characteristic of many emotional states.
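
The mechanisms named above can be caricatured in a short Python sketch (purely illustrative; thresholds and details invented): motivator generators produce candidates asynchronously, an attention filter passes only sufficiently insistent ones, and a dispatcher allocates the manager's attention:

    # Illustrative sketch of motivator generators, an attention filter and a
    # dispatcher (all details invented for illustration).

    import random

    def motivator_generator():
        # Motivators arise asynchronously with varying insistence.
        return {"goal": random.choice(["eat", "flee", "explore"]),
                "insistence": random.random()}

    def attention_filter(motivators, threshold):
        # Only motivators insistent enough penetrate the filter; raising the
        # threshold protects the resource-limited manager from interruption.
        return [m for m in motivators if m["insistence"] > threshold]

    def dispatcher(surfaced):
        # Allocate the manager's attention to the most insistent motivator.
        return max(surfaced, key=lambda m: m["insistence"]) if surfaced else None

    motivators = [motivator_generator() for _ in range(5)]
    attended = dispatcher(attention_filter(motivators, threshold=0.5))
    print("manager attends to:", attended)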


Filename: Aaron.Sloman_Phenomena.Explain.pdf (PDF)
Filename: Aaron.Sloman_Phenomena.Explain.ps.gz
Title: What are the phenomena to be explained?

Author: Aaron Sloman
Date: Dec 1992

Seminar notes for the Attention and Affect Project, summarising its long term objectives


Filename: Aaron.Sloman_IP.Emotion.Theory.pdf (PDF)
Filename: Aaron.Sloman_IP.Emotion.Theory.ps.gz
Title: Towards an information processing theory of emotions
Author: Aaron Sloman

Date: Dec 1992

Seminar notes for the Attention and Affect Project


Filename: Aaron.Sloman_Silicon.Souls.pdf (PDF)
Filename: Aaron.Sloman_Silicon.Souls.ps.gz
Title: Silicon Souls, How to design a functioning mind

Author: Aaron Sloman
Date: May 1992

Professorial Inaugural Lecture, Birmingham, May 1992 In the form of lecture slides for an excessively long lecture. Much of this is replicated in other papers published since.


Filename: Aaron.Sloman_Emperor.Real.Mind.ps
Filename: Aaron.Sloman_Emperor.Real.Mind.pdf
Filename: sloman_aij_penrose_review.pdf
Title: The Emperor's Real Mind
Author: Aaron Sloman
Lengthy review/discussion of R.Penrose (The Emperor's New Mind) in the journal Artificial Intelligence Vol 56 Nos 2-3 August 1992, pages 355-396

NOTE ADDED 21 Nov 2009:
A much shorter review by Aaron Sloman was published in The Bulletin of the London Mathematical Society 24 (1992) 87-96
Available here (PDF).

Abstract:
"The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion of an *algorithm* as central to AI, whereas it is argued here that for the purpose of explaining mental capabilities the *architecture* of an intelligent system is more important than the concept of an algorithm, using the premise that what makes something intelligent is not *what* it does but *how it does it.* What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Goedel's formula to be true depends on it. He also suggests that quantum phenomena underlie consciousness. This is rebutted by arguing that our existing concept of "consciousness" is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Goedel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed.

Filename: Aaron.Sloman.et.al_JCI.Grant.ps
Filename: Aaron.Sloman.et.al_JCI.Grant.pdf
Title: Appendix to JCI proposal, The Attention and Affect Project

Authors: Aaron Sloman and Glyn Humphreys
Appendix to research grant proposal for the Attention and Affect project. (Paid for computer and computer-officer support, and some workshops, for three years; funded by the UK Joint Research Council initiative in Cognitive Science and HCI, 1992-1995.)
NOTE: The page order of the file was originally reversed; this was fixed on 15 Feb 2002.
Date: January 1992


Filename: Aaron.Sloman_Prolegomena.ps
Filename: Aaron.Sloman_Prolegomena.pdf
Author: Aaron Sloman
Title: Prolegomena to a Theory of Communication and Affect

In Ortony, A., Slack, J., and Stock, O. (Eds.) Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues. Heidelberg, Germany: Springer, 1992, pp 229-260.

Abstract:
As a step towards comprehensive computer models of communication, and effective human-machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organised, dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High "insistence" of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.
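
A toy illustration of the "insistence" idea (invented here, not taken from the paper): a motive interrupts ongoing processing only if its insistence exceeds the current filter threshold, which the agent may raise while busy:

    # Illustrative sketch only; names and numbers are made up.
    class AttentionFilter:
        def __init__(self, threshold):
            self.threshold = threshold          # raised when the current task is demanding

        def admits(self, insistence):
            return insistence > self.threshold

    def run_agent(task_steps, arrivals, filt):
        """Interleave an ongoing task with asynchronously arriving motives.
        `arrivals` maps a step number to a (description, insistence) pair."""
        for step in range(task_steps):
            if step in arrivals:
                desc, insistence = arrivals[step]
                if filt.admits(insistence):
                    print("step %d: '%s' penetrates the filter -- attention diverted" % (step, desc))
                    continue
                print("step %d: '%s' filtered out" % (step, desc))
            print("step %d: ongoing task continues" % step)

    run_agent(4,
              {1: ("mild hunger", 0.3), 2: ("smell of smoke", 0.9)},
              AttentionFilter(threshold=0.6))

On this picture, the partial loss of control characteristic of emotional states corresponds to high-insistence motives that penetrate the filter even when the threshold is raised.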


Filename: Aaron.Sloman_consciousness.html (HTML -- added 27 Dec 2007)
Filename: Aaron.Sloman_consciousness.pdf (PDF)
Filename: Aaron.Sloman_consciousness.ps.gz
Title: Notes on consciousness
Author: Aaron Sloman


Abstract:
A discussion of why talk about consciousness is premature. Appeared in AISB Quarterly, No. 72, pp. 8-14, 1990.


Filename: Aaron.Sloman_freewill.ps
Filename: Aaron.Sloman_freewill.pdf
Title: How to dispose of the free will issue

Author: Aaron Sloman
Date: 1988 (or earlier)
NOTE (2 May 2014): A revised, slightly extended, and reformatted version of
the paper is now available (HTML and PDF) here:
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-freewill-1988.html
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-freewill-1988.pdf
HISTORY
Originally posted to comp.ai.philosophy circa 1988.
A similar version appeared in AISB Quarterly, Winter 1992/3, Issue 82, pp. 31-2.

An older, plain-text version is available online at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/freewill.disposed.of
(An improved, elaborated paraphrase can be found in Chapter 2 of Stan Franklin, Artificial Minds (MIT Press, 1995). A paperback version is available.)

Abstract:

Much philosophical discussion concerning freedom of the will is based on an assumption that there is a well-defined distinction between systems whose choices are free and those whose choices are not. This assumption is refuted by showing that when requirements for behaving systems are considered there are very many design options which correspond to a wide variety of distinctions more or less closely associated with our naive ideas of individual freedom. Thus, instead of one major distinction there are many different distinctions; different combinations of design choices will produce different sorts of agents, and the naive distinction is not capable of classifying them. In this framework, the pre-theoretical concept of freedom of the will needs to be abandoned and replaced with a host of different technical concepts corresponding to the capabilities enabled by different designs.

It is argued that biological evolution "discovered" many of the design options and produced more and more complex combinations of increasingly sophisticated designs giving animals more and more freedom (though all the interesting varieties depend on the operation of deterministic mechanisms).
See also section 10.13 of Chapter 10 of The Computer Revolution in Philosophy: Philosophy, science and models of mind (1978) .
Recently added (2006): Four Concepts of Freewill: Two of them incoherent
This argues that people who discuss problems of free will often talk past one another because they do not see clearly that there is no single, universally accepted notion of "free will". Rather, there are at least four, only two of which are of real value.


Filename: Aaron.Sloman_vision.design.pdf (PDF)
(Out-of-date Postscript version removed. Please use the PDF version instead.)
Filename: Aaron.Sloman_vision.design.html (HTML slightly messy)

Title: On designing a visual system: Towards a Gibsonian computational model of vision.

In Journal of Experimental and Theoretical AI, 1(4), pp. 289-337, 1989
Author: Aaron Sloman
Date: Original 1989, installed here April 18th 1994
Reformatted, with images included 22 Oct 2006
Footnote at the beginning extended 8 Aug 2012

Abstract:
This paper contrasts the standard (in AI) "modular" theory of the nature of vision with a more general theory of vision as involving multiple functions and multiple relationships with other sub-systems of an intelligent system. The modular theory (e.g. as expounded by Marr) treats vision as entirely, and permanently, concerned with the production of a limited range of descriptions of visible surfaces, for a central database; while the "labyrinthine" design allows any output that a visual system can be trained to associate reliably with features of an optic array and allows forms of learning that set up new communication channels. The labyrinthine theory turns out to have much in common with J.J.Gibson's theory of affordances, while not eschewing information processing as he did. It also seems to fit better than the modular theory with neurophysiological evidence of rich interconnectivity within and between sub-systems in the brain. Some of the trade-offs between different designs are discussed in order to provide a unifying framework for future empirical investigations and engineering design studies. However, the paper is more about requirements than detailed designs.

NOTE:
A precursor to this paper was published in 1982: Image interpretation: The way ahead?

Some of the author's later work on vision is also on this web site, including
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#gibson
    What's vision for, and how does it work?
    From Marr (and earlier) to Gibson and Beyond
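
To make the contrast in the abstract concrete, here is a schematic Python sketch (invented for this index, not from the paper): the modular design computes one fixed kind of output for a central database, whereas the labyrinthine design lets any subsystem register its own trainable channel from the optic array:

    # Illustrative sketch only; subsystem names and outputs are made up.
    def modular_vision(optic_array):
        """Fixed function: surface descriptions for one central database."""
        return {"surface_description": "surfaces(%s)" % optic_array}

    class LabyrinthineVision:
        """Subsystems register channels; each channel embodies a learnt
        association between optic-array features and outputs useful to it."""
        def __init__(self):
            self.channels = {}

        def add_channel(self, subsystem, association):
            self.channels[subsystem] = association

        def see(self, optic_array):
            return {name: assoc(optic_array)
                    for name, assoc in self.channels.items()}

    v = LabyrinthineVision()
    v.add_channel("posture_control", lambda oa: "optic_flow(%s)" % oa)
    v.add_channel("grasping", lambda oa: "graspable_affordances(%s)" % oa)
    v.add_channel("planning", lambda oa: "surfaces(%s)" % oa)
    print(modular_vision("scene-1"))
    print(v.see("scene-1"))

Adding a channel corresponds to a new form of learning, and the outputs need not be descriptions of surfaces at all, which is where the affinity with Gibson's affordances lies.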

Filename: Aaron.Sloman_Motives.Mechanisms.pdf (PDF added 3 Jan 2010)
Filename: Aaron.Sloman_Motives.Mechanisms.txt
Title: Motives Mechanisms and Emotions
Author: Aaron Sloman

In Cognition and Emotion, 1(3), pp. 217-234, 1987;
reprinted in M.A. Boden (ed.), The Philosophy of Artificial Intelligence, "Oxford Readings in Philosophy" Series, Oxford University Press, pp. 231-247, 1990.
(Also available as Cognitive Science Research Paper No 62, Sussex University.)


Filename: Sloman.ecai86.ps.gz
Filename: Sloman.ecai86.ps
Filename: Sloman.ecai86.pdf
Title: Reference without causal links,

in Proceedings 7th European Conference on Artificial Intelligence, Brighton, July 1986. Reprinted in
J.B.H. du Boulay, D. Hogg, L. Steels (eds), Advances in Artificial Intelligence - II, North Holland, pp. 369-381, 1987.
Date: 1986
Author: Aaron Sloman

Abstract:
This enlarges on earlier work attempting to show in a general way how it might be possible for a machine to use symbols with `non-derivative' semantics. It elaborates on the author's earlier suggestion that computers understand symbols referring to their own internal `virtual' worlds. A machine that grasps predicate calculus notation can use a set of axioms to give a partial, implicitly defined, semantics to non-logical symbols. Links to other symbols defined by direct causal connections within the machine reduce ambiguity. Axiom systems for which the machine's internal states do not form a model give a basis for reference to an external world without using external sensors and motors.
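
A toy rendering of the central idea (predicates and facts invented here): a set of axioms rules out some interpretations of a non-logical symbol, giving it a partial, implicitly defined semantics:

    # Illustrative sketch only: which interpretations of 'bigger' over three
    # objects satisfy one ground fact plus a transitivity axiom?
    from itertools import combinations

    objects = ["a", "b", "c"]
    ground_facts = [("a", "b")]                  # bigger(a, b)

    def satisfies(rel):
        """Check the ground facts and transitivity of the relation `rel`."""
        if not all(f in rel for f in ground_facts):
            return False
        return all((x, z) in rel
                   for (x, y) in rel for (w, z) in rel if y == w)

    pairs = [(x, y) for x in objects for y in objects if x != y]
    models = [set(rel) for n in range(len(pairs) + 1)
              for rel in combinations(pairs, n)
              if satisfies(set(rel))]
    print("%d of %d candidate interpretations remain" % (len(models), 2 ** len(pairs)))

Several interpretations remain consistent with the axioms, so the symbol's semantics is constrained but not uniquely fixed: partial and implicitly defined.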


Filename: Sloman.ijcai85.ps.gz
Filename: Sloman.ijcai85.ps
Filename: Sloman.ijcai85.pdf
Filename: Sloman.ijcai85.txt (Plain text original)
Title: What enables a machine to understand?

in Proceedings 9th International Joint Conference on AI, pp. 995-1001, Los Angeles, August 1985.
Date: 1985
Author: Aaron Sloman

Abstract:
The 'Strong AI' claim that suitably programmed computers can manipulate symbols that THEY understand is defended, and conditions for understanding discussed. Even computers without AI programs exhibit a significant subset of characteristics of human understanding. To argue about whether machines can REALLY understand is to argue about mere definitional matters. But there is a residual ethical question.


Filename: Aaron.Sloman_Rep.Formalisms.ps.gz
Filename: Aaron.Sloman_Rep.Formalisms.ps
Filename: Aaron.Sloman_Rep.Formalisms.pdf
Author: A.Sloman
Title: Why we need many knowledge representation formalisms,

in Research and Development in Expert Systems, ed. M. Bramer, pp. 163-183, Cambridge University Press, 1985.
(Proceedings of the Expert Systems 85 conference. Also Cognitive Science Research Paper No. 52, Sussex University.)
Date: 1985 (Reformatted December 2005)

Abstract:

Against advocates of particular formalisms for representing ALL kinds of knowledge, this paper argues that different formalisms are useful for different purposes. Different formalisms imply different inference methods. The history of human science and culture illustrates the point that very often progress in some field depends on the creation of a specific new formalism, with the right epistemological and heuristic power. The same has to be said about formalisms for use in artificial intelligent systems. We need criteria for evaluating formalisms in the light of the uses to which they are to be put. The same subject matter may be best represented using different formalisms for different purposes, e.g. simulation vs explanation. If different notations and inference methods are good for different purposes, this has implications for the design of expert systems.

This is one of several sequels to the paper presented at IJCAI in 1971.
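
The point about inference methods can be made in a few lines (example invented for this index): the same geographical knowledge expressed as logical facts and a rule supports transitive deduction by search, while an analogical encoding by coordinate makes the same query a single comparison:

    # Illustrative sketch only; two formalisms for the same knowledge.
    # Logical formalism: ground facts plus a recursive rule.
    north_of_facts = {("Edinburgh", "Birmingham"), ("Birmingham", "Brighton")}

    def north_of(a, b):
        """Transitive inference by search over the fact set."""
        if (a, b) in north_of_facts:
            return True
        return any(x == a and north_of(y, b) for (x, y) in north_of_facts)

    # Analogical formalism: approximate latitudes; inference is comparison,
    # and "how much further north?" comes for free.
    latitude = {"Edinburgh": 55.9, "Birmingham": 52.5, "Brighton": 50.8}

    print(north_of("Edinburgh", "Brighton"))             # True, by search
    print(latitude["Edinburgh"] > latitude["Brighton"])  # True, by comparison

Each formalism makes some inferences cheap and others awkward, which is exactly the paper's reason for wanting evaluation criteria matched to purposes.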


BACK TO CONTENTS LIST

RETURN TO MAIN COGAFF INDEX FILE

See also the School of Computer Science Web page.

This file, designed to be lynx-friendly, is maintained by Aaron Sloman.
Email A.Sloman@cs.bham.ac.uk