PROJECT WEB DIRECTORY
PAPERS INSTALLED IN THE YEAR 2005 (APPROXIMATELY)
See also
PAPERS 2005 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/05.html
Maintained by Aaron Sloman -- who does
not respond to Facebook requests.
It contains an index to files in the Cognition and Affect
Project's FTP/Web directory produced or published in the year
2005. Some of the papers published in this period were produced
earlier and are included in one of the lists for an earlier period
http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003
This file last updated: 3 Oct 2007; 29 Jul 2010; 13 Nov 2010
In some cases other versions of the files can be provided on request. Email A.Sloman@cs.bham.ac.uk requesting conversion.
JUMP TO DETAILED LIST (After Contents)
Title: CoSy Papers and Presentations
Authors: Various
Title: Physicalism and the Bogey of Determinism (Originally published in 1974)
Author: Aaron Sloman
Title: 'Ought' and 'Better' (Originally published in 1970)
Author: Aaron Sloman
Title: AI in a New Millennium: Obstacles and Opportunities
Author: Aaron Sloman
Title: Building agents to understand infant attachment behaviour
Author: Dean Petters
Title: The Altricial-Precocial Spectrum for Robots
Authors: Aaron Sloman and Jackie Chappell
Title: Afterthoughts on Analogical Representations (1975)
Author: Aaron Sloman
File (in CoSy Directory) COSY-TR-0507 (PDF)
TITLE: CoSy Year 1 Deliverable DR.2.1: Requirements study for
representations
MAIN AUTHOR: Aaron Sloman (with contributions from
members of the CoSy team)
DATE: March 2005 (installed here 29 Jul 2010)
ABSTRACT:
This work is closely related to work on architectures, reported in DR.1.1. In this early deliverable on the first phase of work-package WP.2, we report on some of the hard unsolved problems we have identified on the basis of detailed analysis of some of the processes that will have to occur when the PlayMate and Explorer robots perform their tasks. The analysis used our scenario-driven research methodology. We introduce some preliminary characterisations of the key problems and some preliminary ideas for dealing with them, inspired in part by studies of cognition in humans and other animals.
We confirm the conjecture in the CoSy proposal that various kinds of representations are required for different sorts of sub-mechanisms (including for instance representations concerned with planning complex sequences of actions and representations used in producing and controlling fast and fluent movements). The different representations are in part related to different ontologies, since different sub-mechanisms acquire, manipulate and use information about different subject-matter. A substantial part of this report is therefore concerned with first draft, incomplete, ontologies that we expect our robots will need, some parts of which the robots will have to develop for themselves, especially ontologies concerned with objects and processes that have quite complex structures involving multi-strand relationships.
A particularly important requirement for a robot with 3-D manipulation capabilities is the ability to perceive and understand what we have labelled 'multi-strand' relationships (where multiple parts of complex objects are related, e.g. edges, corners and faces of two cubes), which cause multi-strand processes to occur when objects are moved, with several different relationships changing in parallel. Perceiving such processes seems to require something like a simulation process to occur. Moreover, this needs to happen at different levels of abstraction concurrently (some continuous, with high or low resolution, and some discrete, capturing 'qualitative' structural changes), for the same reason as many researchers have claimed that perception of static scenes involves multiple levels of abstraction. So we conclude that our robot is likely to require an architecture and mechanisms that support several concurrent simulations at different levels of abstraction, in registration with one another and (where appropriate) with the sensory data. It seems that a mechanism like this can also implement some of what is often referred to as spatial or visual reasoning, and could be relevant to perception and understanding of affordances.
We consider in particular requirements for a pre-linguistic robot that is capable of perceiving, acting in and to some extent reasoning about the world before being able to talk about it, and raise questions about how that might relate to learning that adds linguistic competence. We note that in animals there is wide variation between species that start with most of the ontology and representational competence they will ever need and those that somehow learn or develop what they need, and suggest that further study of those cases may yield clues regarding options for robots of different kinds. Most of this work has not yet been published. This is work-in-progress and much of it remains to be expanded, clarified and polished.
Table of Contents
1 Requirements study for representations
1.1 Background: representational issues in natural and artificial systems
1.2 Constraining the problem space to human-like robots
1.3 Some limits of human competence
1.4 Criteria of adequacy of representations
1.5 Varieties of representations in CoSy
1.6 Requirements for pre-linguistic spatial competence
1.7 Beyond current preoccupations in machine vision
1.8 The need for analysis
1.9 Examples from a possible tea-party scenario
1.10 Variations in tasks
1.11 Vision in action
1.12 Requirements for the cups-world task
1.13 Example sub-tasks
2 Ontology for an active robot
2.1 Introduction: the need for ontologies
2.2 Background, and relation to section on question-ontology
2.3 This is about pre-linguistic competence
2.4 Propositional components for a physically embedded information user
2.5 Types of entities that can be referred to
2.5.1 Physical object types
2.5.2 'Stuff' types
2.5.3 Location types
2.5.4 Other types [to be completed]
2.6 Attributes (of objects, locations, events, etc.)
2.7 Affordance-based object properties
2.8 Use of predicates vs attribute-value pairs
2.9 Object relations
2.10 Intrinsic relations: Relations based on attribute values
2.11 Relations based on shape
2.12 Extrinsic relations: Spatio-temporal relations
2.13 Multi-strand relations and multi-relation facts
2.14 Multi-strand relationships and causality
2.15 Local vs global spaces [to be extended]
2.16 Integration, zooming, etc.
2.17 Indeterminacy of spatial concepts
2.18 Affordance-based relations - and embodiment
2.19 Affordance-based and mathematical relations
2.20 Self-knowledge
2.21 Further details [to be reorganised]
2.22 What about a non-linguistic (pre-linguistic) agent?
2.23 Representations in reactive systems: speed, and fluency using implicit representations
2.24 Nature Nurture Issues
2.25 Some References on ontologies
3 Ontology for information gaps: questions to oneself or others
3.1 Self-knowledge may be about gaps in knowledge
3.2 The need for an abstract syntax
3.3 Driving idea
3.3.1 Questions for thinkers as well as communicators
3.3.2 Varieties of non-information-seeking types of questions
3.4 Definitions
3.4.1 Key ideas
3.4.2 Non-factual questions
3.4.3 Varieties of answers
3.4.4 Questions and propositions in non-linguistic (pre-linguistic) information users
3.4.5 Types of question structures and answer structures
3.5 Question forms
3.5.1 Yes-no questions: Proposition and its negation
3.5.2 Derived questions: operations on propositions with gaps
3.5.3 Some common question forms
3.5.4 More complex derived forms
3.5.5 Further ways of deriving questions from propositions after creating gaps
3.6 Categorical and hypothetical information gaps
3.7 Some References [to be extended]
4 Multi-layer perception and action sub-systems
4.1 Ontologies and representations in concurrently active sub-systems
5 Some general notes on representations
5.1 The concept of representation
5.2 Some previous work and work in progress
5.3 Recommendation for CoSy
5.3.1 Varieties of tasks
5.3.2 Tradeoffs between varieties of forms of representation
5.3.3 Varieties of criteria of assessment
6 Sources of meaning: symbol grounding and symbol tethering
6.1 Interdisciplinary inspiration
6.2 Biologically inspired 'altricial' learning systems
7 Meanings of 'ontology' and some history
7.1 Meanings of the word 'ontology'
7.2 Form vs content of some part of the world
7.3 Ontology Tools
7.4 Further information
8 Future Work
9 References
9.1 Reference documents
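NOTE (editorial illustration, not part of the deliverable):
The 'multi-strand relationship' idea in the abstract of DR.2.1 above can be conveyed by a minimal sketch. The Python toy below is deliberately simplified to one dimension, with invented names (Block, qualitative_relations, etc.): it only shows how a single continuous movement of one object changes several part-part relations ('strands') in parallel, which is the phenomenon the report argues requires concurrent continuous and qualitative simulations.

    # Illustrative toy only: two blocks on a line, each with named parts,
    # and a set of qualitative part-part relations ('strands') between them.
    from dataclasses import dataclass

    @dataclass
    class Block:
        name: str
        x: float            # position of the left face
        width: float = 1.0

        @property
        def left(self):
            return self.x

        @property
        def right(self):
            return self.x + self.width

    def qualitative_relations(a, b, tol=0.01):
        """Qualitative relations between named parts of blocks a and b."""
        rels = set()
        if abs(a.right - b.left) < tol:
            rels.add((a.name + '.right_face', 'touches', b.name + '.left_face'))
        if a.right < b.left - tol:
            rels.add((a.name + '.right_face', 'left_of', b.name + '.left_face'))
        if a.left < b.left - tol:
            rels.add((a.name + '.left_face', 'left_of', b.name + '.left_face'))
        return rels

    a, b = Block('A', 0.0), Block('B', 2.0)
    before = qualitative_relations(a, b)
    a.x = 1.0                      # one continuous movement of block A ...
    after = qualitative_relations(a, b)
    # ... changes more than one strand of the A-B relationship in parallel
    print(sorted(before.symmetric_difference(after)))

In the full 3-D case discussed in the report the parts would include edges, corners and faces of both objects, and the qualitative level would run in registration with continuous simulations and with sensory data.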
Filename: sloman-bogey.html
Filename: sloman-bogey.pdf
(incomplete PDF from OCR)
Filename: sloman-bogey-print.pdf
(A more complete PDF version, derived from the HTML version.)
Title: Physicalism and the Bogey of Determinism
Author: Aaron Sloman
Date Installed: 29 Dec 2005
Abstract:
Presented at an interdisciplinary conference on Philosophy of Psychology at the University of Kent in 1971. Published in the proceedings as A. Sloman, 'Physicalism and the Bogey of Determinism'
(along with Reply by G. Mandler and W. Kessen, and additional comments by Alan R. White, Philippa Foot and others, and replies to criticisms)
in Philosophy of Psychology, Ed S.C.Brown, London: Macmillan, 1974, pages 293--304. (Published by Barnes & Noble in USA.)
Commentary and discussion followed on pages 305--348. This paper rehearses some relatively old arguments about how any coherent notion of free will is not only compatible with, but depends on, determinism.
However, the mind-brain identity theory is attacked on the grounds that what makes a physical event an intended action A is that the agent interprets the physical phenomena as doing A. The paper should have referred to the monograph Intention (1957) by Elizabeth Anscombe (summarised here by Jeff Speaks), which discusses in detail the fact that the same physical event can have multiple (true) descriptions, using different ontologies.
My point is partly analogous to Dennett's appeal to the 'intentional stance', though that involves an external observer attributing rationality, along with beliefs and desires, to the agent. I am adopting the design stance, not the intentional stance, for I do not assume rationality in agents with semantic competence (e.g. insects), and I attempt to explain how an agent has to be designed in order to perform intentional actions; the design must allow the agent to interpret physical events (including events in its brain) in a way that goes beyond perceiving their physical properties. That presupposes semantic competence, which is to be explained in terms of how the machine or organism works, i.e. using the design stance, not by simply postulating rationality and assuming beliefs and desires on the basis of external evidence. Some of the ideas that were in the paper and in my responses to commentators were also presented in The Computer Revolution in Philosophy, including a version of this diagram (originally pages 344-345, in the discussion section), discussed in more detail in Chapter 6 of the book, and later elaborated as an architectural theory assuming concurrent reactive, deliberative and metamanagement processes, e.g. as explained in this 1999 paper Architecture-Based Conceptions of Mind, and in later papers.
The html paper preserves original page divisions.
(I may later add further notes and comments to this HTML version.)
Note added 3 May 2006
I have just found an online review of the whole book here, by Marius Schneider, O.F.M., The Catholic University of America, Washington, D.C., apparently written in 1975.
Filename: sloman-ought-and-better.html
Filename: ought-better.pdf
Filename:
http://www.cs.bham.ac.uk/research/cogaff/ought-and-better-jpegs
(scanned version)
Title: 'Ought' and 'Better'
Author: Aaron Sloman
Date Installed: 19 Sep 2005
Abstract:
Originally published as Aaron Sloman, "'Ought' and 'Better'", Mind, vol. LXXIX, no. 315, July 1970, pp. 385--394. This is a sequel to the 1969 paper on "How to derive 'Better' from 'Is'", also online at this web site. It presupposes the analysis of 'better' in the earlier paper, and argues that statements using the word 'ought' say something about which of a collection of alternatives is better than the others, in contrast with statements using 'must' or referring to 'obligations' or what is 'obligatory'. The underlying commonality between superficially different statements like 'You should take an umbrella with you' and 'The sun should come out soon' is explained, along with some other philosophical puzzles, e.g. concerning why 'ought' does not imply 'can', contrary to what some philosophers have claimed.
Curiously, the 'Ought' and 'Better' paper is mentioned at http://semantics-online.org/blog/2005/08/ in the section on David Lodge's novel "Thinks...", which links to the paper 'What to Do If You Want to Go to Harlem: Anankastic Conditionals and Related Matters' by Kai von Fintel and Sabine Iatridou (MIT), where 'Ought' and 'Better' is discussed.
Filename: sloman-ijcai05-manifesto.pdf
Title: AI in a New Millennium: Obstacles and Opportunities
Author: Aaron Sloman
Date Installed: 5 Sep 2005
Abstract:
This paper (a manifesto for long term AI research on integrated,
human-like physically embodied, robots) was originally Section 4 of the
introductory notes for the booklet prepared for the IJCAI-05 Tutorial on
Representation and Learning in Robots and Animals:
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/ijcai-booklet/
A summary of the manifesto was written in July 2005 by Linda World,
available here
Filename: sloman-world-ai-millenium.pdf
Title: AI in a New Millennium: Obstacles and Opportunities
Author: Aaron Sloman
(Paper summarised by Linda World).
Date Installed: 15 Jul 2005
Abstract:
This is a short summary, written by
Linda World, Senior Editor IEEE Computer Society, of
Aaron Sloman's introductory notes for the IJCAI-05 Tutorial
on Representation and Learning in Robots and Animals. See section 4 of
the booklet for the original version:
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/ijcai-booklet/
Linda World also wrote a profile on Aaron Sloman for the 'Histories and Futures' section in the July/Aug 2005 issue of IEEE Intelligent Systems.
Filename: sloman-croucher-warm-heart.pdf
Title: You don't need a soft skin to have a warm heart: Towards a
computational analysis of motives and emotions.
Authors: Aaron Sloman and
Monica Croucher
Originally a Cognitive Science Research Paper at Sussex University:
Sloman, Aaron and Monica Croucher, "You don't need a soft skin to have a warm heart: towards a computational analysis of motives and emotions," CSRP 004, 1981.
Date Installed: 17 Jun 2005 (Written circa 1980-81)
Abstract:
The paper introduces an interdisciplinary methodology for the study of
minds of animals, humans and machines, and, by examining some of the
pre-requisites for intelligent decision-making, attempts to provide a
framework for integrating some of the fragmentary studies to be found in
Artificial Intelligence.
The space of possible architectures for intelligent systems is very large. This essay takes steps towards a survey of the space, by examining some environmental and functional constraints, and discussing mechanisms capable of fulfilling them. In particular, we examine a subspace close to the human mind, by illustrating the variety of motives to be expected in a human-like system, and types of processes they can produce in meeting some of the constraints.
This provides a framework for analysing emotions as computational states and processes, and helps to undermine the view that emotions require a special mechanism distinct from cognitive mechanisms. The occurrence of emotions is to be expected in any intelligent robot or organism able to cope with multiple motives in a complex and unpredictable environment.
Analysis of familiar emotion concepts (e.g. anger, embarrassment, elation, disgust, pity, etc.) shows that they involve interactions between motives (e.g. wants, dislikes, ambitions, preferences, ideals, etc.) and beliefs (e.g. beliefs about the fulfilment or violation of a motive), which cause processes produced by other motives (e.g. reasoning, planning, execution) to be disturbed, disrupted or modified in various ways (some of them fruitful). This tendency to disturb or modify other activities seems to be characteristic of all emotions. In order fully to understand the nature of emotions, therefore, we need to understand motives and the types of processes they can produce.
This in turn requires us to understand the global computational architecture of a mind. There are several levels of discussion: description of methodology, the beginning of a survey of possible mental architectures, speculations about the architecture of the human mind, analysis of some emotions as products of the architecture, and some implications for philosophy, education and psychotherapy.
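NOTE (editorial illustration, not the authors' program):
The kind of interaction described in the abstract above can be conveyed by a minimal Python sketch: a currently pursued motive is disturbed when a new belief generates a more 'insistent' motive. All names, classes and numbers (Agent, adopt, notice, the insistence values) are invented for illustration; this is a single-agent toy showing disturbance of ongoing processing, not a model of any particular emotion.

    # Illustrative toy only: ongoing processing driven by one motive is
    # interrupted when a belief update produces a more insistent motive.
    import heapq

    class Agent:
        def __init__(self):
            self.pending = []        # heap of (-insistence, motive)
            self.current = None      # (insistence, motive) being pursued

        def adopt(self, motive, insistence):
            heapq.heappush(self.pending, (-insistence, motive))

        def notice(self, belief, motive=None, insistence=0):
            # A new belief may generate a new motive (e.g. danger detected).
            print('belief:', belief)
            if motive is not None:
                self.adopt(motive, insistence)

        def step(self):
            # Switch only if a pending motive is more insistent than the current one.
            if self.pending and (self.current is None
                                 or -self.pending[0][0] > self.current[0]):
                neg_ins, motive = heapq.heappop(self.pending)
                if self.current is not None:
                    print('  interrupting:', self.current[1])   # the disturbance
                    heapq.heappush(self.pending,
                                   (-self.current[0], self.current[1]))
                self.current = (-neg_ins, motive)
            if self.current is not None:
                print('  pursuing:', self.current[1])

    agent = Agent()
    agent.adopt('finish writing report', insistence=3)
    agent.step()
    agent.notice('smell of smoke', motive='check for fire', insistence=9)
    agent.step()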
Filename: petters-ijcai05.pdf
Title: Building agents to understand infant attachment behaviour
Author: Dean Petters
(School of Computer Science,
University of Birmingham)
Paper for the Modeling Natural Action Selection workshop at IJCAI 2005 in Edinburgh, July 30-31st
Date Installed: 8 Jun 2005
Abstract:
This paper reports on an autonomous agent simulation of infant
attachment behaviour. The behaviours simulated have been observed in
home environments and in a controlled laboratory procedure called the
Strange Situation Experiment. The Avoidant, Secure and Ambivalent styles
of behaviour seen in these studies are outlined, and then abstracted
to their core elements to act as a specification of requirements for the
simulation. A reactive agent architecture demonstrates that these
patterns of behaviour can be learnt from reinforcement signals without
recourse to deliberative mechanisms.
For background see http://www.cs.bham.ac.uk/~ddp/
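NOTE (editorial illustration, not Petters' simulation):
The general claim in the abstract, that a purely reactive agent can learn a pattern of behaviour from reinforcement signals without deliberative mechanisms, can be conveyed by a minimal Python sketch. The 'carer', actions, rewards and parameters below are invented for illustration only.

    # Illustrative toy only: a reactive agent learns action values from
    # reinforcement signals, with no planning or deliberation involved.
    import random

    random.seed(0)
    ACTIONS = ['approach_carer', 'play_alone']
    value = {a: 0.0 for a in ACTIONS}     # learned value of each action
    alpha, epsilon = 0.1, 0.1             # learning rate, exploration rate

    def reinforcement(action, carer_responsive=True):
        # Invented reward scheme: a responsive carer rewards approaches.
        if action == 'approach_carer':
            return 1.0 if carer_responsive else -0.5
        return 0.1

    for episode in range(500):
        # epsilon-greedy reactive choice based only on learned values
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(value, key=value.get)
        reward = reinforcement(action, carer_responsive=True)
        value[action] += alpha * (reward - value[action])

    print(value)   # with a responsive carer, 'approach_carer' ends up valued more highly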
Filename: summary-gc7.pdf
Title: Altricial self-organising information-processing systems
Abstract for International Workshop on The Grand Challenge in Non-Classical Computation, 18-19th April 2005, York, UK. Authors: Aaron Sloman and Jackie Chappell (School of Biosciences, University of Birmingham)
Abstract:
It is often thought that there is one key design principle, or at best a small set of design principles, underlying the success of biological organisms. Candidates include neural nets, 'swarm intelligence', evolutionary computation, dynamical systems, particular types of architecture or use of a powerful uniform learning mechanism, e.g. reinforcement learning. All of those support types of self-organising, self-modifying behaviours. But we are nowhere near understanding the full variety of powerful information-processing principles 'discovered' by evolution. By attending closely to the diversity of biological phenomena we may gain key insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems.
Filename: alt-prec-ijcai05.pdf
Title: The Altricial-Precocial Spectrum for Robots
In Proceedings IJCAI-05, pages 1187--1192, Edinburgh. Authors: Aaron Sloman and Jackie Chappell (School of Biosciences, University of Birmingham)
Abstract:
Several high level methodological debates among AI researchers, linguists, psychologists and philosophers appear to be endless, e.g. about the need for and nature of representations, about the role of symbolic processes, about embodiment, about situatedness, about whether symbol-grounding is needed, and about whether a robot needs any knowledge at birth or can start simply with a powerful learning mechanism. Consideration of the variety of capabilities and development patterns on the precocial-altricial spectrum in biological organisms will help us to see these debates in a new light. It seems that after evolution discovered how to make physical bodies that grow themselves, it discovered how to make virtual machines that grow themselves. Researchers attempting to design human-like, chimp-like or crow-like intelligent robots will need to understand how. Whether computers as we know them can provide the infrastructure for such systems is a separate question.
NOTE:
A sequel to this paper was an invited journal paper, published by the same authors in 2007, here.
Filename: sloman-afterthoughts.pdf
Filename: sloman-tinlap-1975.pdf
(original formatting: also
here --
with photocopying errors)
Title: Afterthoughts on Analogical Representations (1975)
(Derived from a scanned version)
Originally published in
Theoretical Issues in Natural Language Processing (TINLAP-1),
Eds. R. Schank & B. Nash-Webber,
pp. 431--439,
MIT, 1975.
Now available online at
http://acl.ldc.upenn.edu/T/T75/
Reprinted in
Readings in knowledge representation,
Eds. R.J. Brachman & H.J. Levesque,
Morgan Kaufmann,
1985.
Author: Aaron Sloman
Date installed: 28 Mar 2005
Abstract:
In 1971 I wrote a paper attempting to relate some old philosophical issues about representation and reasoning to problems in Artificial Intelligence. A major theme of the paper was the importance of distinguishing ``analogical'' from ``Fregean'' representations. I still think the distinction is important, though perhaps not as important for current problems in A.I. as I used to think. In this paper I'll try to explain why.
See also the School of Computer Science Web page.
This file is maintained by Aaron Sloman, and designed to be lynx-friendly and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk