THE UNIVERSITY OF BIRMINGHAM
School of Computer Science
THE COGNITION AND AFFECT PROJECT

PROJECT WEB DIRECTORY
PAPERS ADDED IN THE YEAR 2007 (APPROXIMATELY)

PAPERS 2007 CONTENTS LIST

NOTE


This file is http://www.cs.bham.ac.uk/research/projects/cogaff/07.html
Maintained by Aaron Sloman -- who does not respond to Facebook requests.
It contains an index to files in the Cognition and Affect Project's FTP/Web directory that were produced or published in the year 2007. Some of the papers published in this period were produced earlier and are included in one of the lists for an earlier period: http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents

A list of PhD and MPhil theses was added in June 2003

This file last updated: 30 Dec 2009; 13 Nov 2010; 7 Jul 2012; 26 Nov 2012


PAPERS (AND TALKS) IN THE COGNITION AND AFFECT DIRECTORY
Produced or published in 2007 (Approximately)
(Latest first)

Most of the papers listed here are in postscript and PDF format. More recent papers are in PDF only. A few are in HTML only.
For information on free browsers for these formats see http://www.cs.bham.ac.uk/~axs/browsers.html


The following Contents list (in reverse chronological order) contains links to locations in this file giving further details, including abstracts, and links to the papers themselves.


CONTENTS -- FILES 2007 (Latest First)

What follows is a list of links to more detailed information about each paper. From there you can select the actual papers, in various formats, e.g. PDF, postscript and some in html.

Note: Several of the items listed here were actually published several decades ago, but have only now been digitised and made available online.


DETAILS OF FILES AVAILABLE



CoSy Papers and Presentations
Many CogAff papers are now being added to the Birmingham CoSy (EU Robotics Project 2004-2008) Web site.


Filename: chappell-sloman-ijuc-07.pdf
Title: Natural and artificial meta-configured altricial information-processing systems

Authors: Jackie Chappell and Aaron Sloman
Date Installed: Nov 2006, Published 2007

Where published:

Invited contribution to a special issue of The International Journal of Unconventional Computing
Vol 2, Issue 3, 2007, pp. 211--239
Abstract:
The full variety of powerful information-processing mechanisms 'discovered' by evolution has not yet been re-discovered by scientists and engineers. By attending closely to the diversity of biological phenomena, we may gain new insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems. We analyse tradeoffs common to both biological evolution and engineering design, and propose a kind of architecture that grows itself, using, among other things, genetically determined meta-competences that deploy powerful symbolic mechanisms to achieve various kinds of discontinuous learning, often through play and exploration, including development of an 'exosomatic' ontology, referring to things in the environment --- in contrast with learning systems that discover only sensorimotor contingencies or adaptive mechanisms that make only minor modifications within a fixed architecture.

Keywords:
behavioural epigenetics, biologically inspired robot architectures, development of behaviour, exosomatic ontology, evolution of behaviour, nature/nurture tradeoffs, precocial-altricial spectrum, preconfigured/meta-configured competences sensorimotor contingencies.

NOTE:
This paper is a sequel to a paper published in proceedings of IJCAI 2005 by the same authors The Altricial-Precocial Spectrum for Robots.


Filename: sloman-aaai-consciousness.pdf
Title: Why Some Machines May Need Qualia and How They Can Have Them:
Including a Demanding New Turing Test for Robot Philosophers

Invited presentation for AAAI Fall Symposium 2007
AI and Consciousness: Theoretical Foundations and Current Approaches
(Symposium Web site; Supplementary Web Site)
Author: Aaron Sloman
Date Installed: 3 Sep 2008 (Previously on CoSy site)

Abstract:

This paper extends three decades of work arguing that instead of focusing only on (adult) human minds, we should study many kinds of minds, natural and artificial, and try to understand the space containing all of them, by studying what they do, how they do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context, in part because current ontologies for specifying and comparing designs are inconsistent and inadequate. A methodology for making progress is summarised and a novel requirement proposed for human-like philosophical robots, namely that a single generic design, in addition to meeting many other more familiar requirements, should be capable of developing different and opposed viewpoints regarding philosophical questions about consciousness, and the so-called hard problem. No designs proposed so far come close.

See also this short talk at Bielefeld on 10th October 2007

http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#bielefeld
'Why robot designers need to be philosophers'



Filename: sloman-aaai-representation.pdf
Title: Diversity of Developmental Trajectories in Natural and Artificial Intelligence

Invited presentation for AAAI Fall Symposium 2007
Computational Approaches to Representation Change during Learning and Development.
(Symposium Web site)
Author: Aaron Sloman
Date Installed: 3 Sep 2008 (Previously on CoSy site)

Abstract:

There is still much to learn about the variety of types of learning and development in nature and the genetic and epigenetic mechanisms responsible for that variety. This paper is one of a collection exploring ideas about how to characterise that variety and what AI researchers, including robot designers, can learn from it. This requires us to understand important features of the environment. Some robots and animals can be pre-programmed with all the competences they will ever need (apart from fine tuning), whereas others will need powerful learning mechanisms. Instead of using only completely general learning mechanisms, some robots, like humans, need to start with deep, but widely applicable, implicit assumptions about the nature of the 3-D environment, about how to investigate it, about the nature of other information users in the environment and about good ways to learn about that environment, e.g. using creative play and exploration. One feature of such learning could be learning more about how to learn in that sort of environment. What is learnt initially about the environment is expressible in terms of an innate ontology, using innately determined forms of representation, but some learning will require extending the forms of representation and the ontology used. Further progress requires close collaboration between AI researchers, biologists studying animal cognition and biologists studying genetics and epigenetic mechanisms.


Available in two formats:
Filename: sloman-rogators.pdf (Scanned original, about 13MB PDF)
Filename: sloman-rogators.html (Digitised and annotated HTML version, about 55KB)
Title: Functions and Rogators (1965)

Author: Aaron Sloman
Date Installed: 23 Dec 2007

This paper was originally presented at a meeting of the Association for Symbolic Logic held in St. Anne's College, Oxford, England from 15-19 July 1963 as a NATO Advanced Study Institute with a Symposium on Recursive Functions sponsored by the Division of Logic, Methodology and Philosophy of Science of the International Union of the History and Philosophy of Science.

A summary of the meeting by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, with abstracts of papers presented, including this one, was published in The Journal of Symbolic Logic, Vol. 28, No. 3 (Sep., 1963), pp. 262-272, accessible online here.

The full paper was published in the conference proceedings:

Aaron Sloman 'Functions and Rogators', in
Formal Systems and Recursive Functions:
Proceedings of the Eighth Logic Colloquium Oxford, July 1963
Eds J N Crossley and M A E Dummett
North-Holland Publishing Co (1965), pp. 156--175

This paper extends Frege's concept of a function to "rogators", which are like functions in that they take arguments and produce results, but are unlike functions in that their results can depend on the state of the world, in addition to which arguments they are applied to.
It was scanned in and digitised in December 2007.
This paper was described by David Wiggins as 'neglected but valuable' in his 'Sameness and Substance Renewed' (2001).

Note: the key ideas were in this Oxford DPhil Thesis (Aaron Sloman, 1962): Knowing and Understanding

Abstract:
(From the summary by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, 1963)
Frege, and others, have made extensive use of the notion of a function, for example in analysing the role of quantification, the notion of a function being defined, usually, in the manner familiar to mathematicians, and illustrated with mathematical examples. On this view functions satisfy extensional criteria for identity. It is not usually noticed that in non-mathematical contexts the things which are thought of as analogous to functions are, in certain respects, unlike the functions of mathematics. These differences provide a reason for saying that there are entities, analogous to functions, but which do not satisfy extensional criteria for identity. For example, if we take the supposed function 'x is red' and consider its value (truth or falsity) for some such argument as the lamp post nearest my front door, then we see that what the value is depends not only on which object is taken as argument, and the 'function', but also on contingent facts about the object, in particular, what colour it happens to have. Even if the lamp post is red (and the value is truth), the same lamp post might have been green, if it had been painted differently. So it looks as if we need something like a function, but not extensional, of which we can say that it might have had a value different from that which it does have. We cannot say this of a function considered simply as a set of ordered pairs, for if the same argument had had a different value it would not have been the same function. These non-extensional entities are described as 'rogators', and the paper is concerned to explain what the function-rogator distinction is, how it differs from certain other distinctions, and to illustrate its importance in logic, from the philosophical point of view.
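The function/rogator distinction in the abstract can be sketched in code. This is a hypothetical illustration, not from the paper: the `world` dictionary and the names `successor` and `is_red` are invented for the example, with mutable state standing in for contingent facts about the world.

```python
# An ordinary (extensional) function: its value is fixed by its argument alone.
def successor(n):
    return n + 1

# A rogator-like entity: it takes an argument and yields a value, but the
# value also depends on contingent facts about the world, modelled here
# as mutable state.
world = {"lamp_post": "red"}  # the colour the lamp post happens to have


def is_red(obj):
    """'x is red' treated as a rogator: its truth-value for a given object
    depends on what colour that object happens to have."""
    return world[obj] == "red"


print(is_red("lamp_post"))    # True, as things stand
world["lamp_post"] = "green"  # the same lamp post, painted differently
print(is_red("lamp_post"))    # False: same argument, different value
```

A function considered simply as a set of ordered pairs could not behave this way: if the same argument had a different value, it would be a different function.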


Filename: bbs-chimps-1978.pdf
Filename: bbs-chimps-1978.html
Title: What About Their Internal Languages?

Author: Aaron Sloman
Date Installed: 13 Dec 2007 (Originally published 1978)

Abstract:

Commentary on three articles published in Behavioral and Brain Sciences Journal 1978, 1 (4)
1. Premack, D., Woodruff, G. Does the chimpanzee have a theory of mind? BBS 1978 1 (4): 515.
2. Griffin, D.R. Prospects for a cognitive ethology. BBS 1978 1 (4): 527.
3. Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S. Linguistically-mediated tool use and exchange by chimpanzees (Pan Troglodytes). BBS 1978 1 (4): 539.
Despite the virtues of the target articles, I find something sadly lacking: an awareness of deep problems and a search for deep explanations.

Are the authors of these papers merely concerned to collect facts? Clearly not: they are also deeply concerned to learn the extent of man's uniqueness in the animal world, to refute behaviourism, and to replace anecdote with experimental rigour. But what do they have to say to someone who doesn't care whether humans are unique, who believes that behaviourism is either an irrefutable collection of tautologies or a dead horse, and who already is deeply impressed by the abilities of cats, dogs, chimps, and other animals, but who constantly wonders: HOW DO THEY DO IT?

My answer is that the papers do not have much to say about that: for that, investigation of designs for working systems is required, rather than endless collection of empirical facts, interesting as those may be.


Filename: sloman-explain-necessity.pdf (132 KBytes, via latex from OCR -- PDF)
Filename: sloman-explain-necessity.html (44 KBytes, via latex from OCR HTML)
Filename: sloman-ExplainNecessity.pdf (11.4 MB Scanned PDF from original)
Title: Explaining Logical Necessity

Author: Aaron Sloman
Date Installed: 4 Dec 2007 (Published originally in 1968); Updated 19 Dec 2009

In Proceedings of the Aristotelian Society, 1968/9, Volume 69, pp. 33--50.

Note: the key ideas were in Aaron Sloman's Oxford DPhil Thesis (1962): Knowing and Understanding

Abstract: (From the introductory section)

Summary:
I: Some facts about logical necessity stated.
II: Not all necessity is logical.
III: The need for an explanation.
IV: Formalists attempt unsuccessfully to reduce logic to syntax.
V: The no-sense theory of Wittgenstein's Tractatus merely reformulates
the problem.
VI: Crude conventionalism is circular.
VII: Extreme conventionalism is more sophisticated.
VIII: It yields some important insights.
IX: But it ignores the variety of kinds of proof.
X: Proofs show why things must be so, but different proofs show different things. Hence there can be no general explanation of necessity.

I An adequate theory of meaning and truth must account for the following facts, whose explanation is the topic, though not the aim, of the paper.

(i) Different signs (e.g., in different languages) may express the same proposition.

(ii) The syntactic and semantic rules in virtue of which sentences are able to express contingent propositions also permit the expression of necessary propositions and generate necessary relations between contingent propositions. E.g. although 'It snows in Sydney or it does not snow in Sydney' can be verified empirically (since showing one disjunct to be true would be an empirical verification, just as a proposition of the form 'p and not-p' can be falsified empirically), the empirical enquiry can be short-circuited by showing what the result must be.

(iii) At least some such restrictions on truth-values, or combinations of truth-values (e.g., when two or more contingent propositions are logically equivalent, or inconsistent, or when one follows from others), result from purely formal, or logical, or topic-neutral features of the construction of the relevant propositions, features which have nothing to do with precisely which concepts occur, or which objects are referred to. Hence we call some propositions logically true, or logically false, and say some inferences are valid in virtue of their logical form, which prevents simultaneous truth of premisses and falsity of conclusion.

(iv) The truth-value-restricting logical forms are systematically inter-related so that the whole infinite class of such forms can be recursively generated from a relatively small subset, as illustrated in axiomatisations of logic.

Subsequent discussion will show these statements to be over-simple. Nevertheless, they will serve to draw attention to the range of facts whose need of explanation is the starting point of this paper. They have deliberately been formulated to allow that there may be cases of non-logical necessity.


Filename: sloman-oii-2007.pdf
Title: Requirements for Digital Companions: It's harder than you think (OUT OF DATE)

See the final version (May 2009) here.
Author: Aaron Sloman
Date Installed: 30 Nov 2007

Abstract:

Position Paper for Workshop on Artificial Companions in Society:
Perspectives on the Present and Future
Organised by the Companions project.
Oxford Internet Institute (25th--26th October, 2007)
A closely related slide presentation is here.

Presenting some of the requirements for a truly helpful, as opposed to merely engaging (or annoying) artificial companion, with arguments as to why meeting those requirements is way beyond the current state of the art in AI.

Contents
1  Introduction                                         2
   1.1 Functions of DCs . . . . . . . . . . . . . . .   2
   1.2 Motives for acquiring DCs . . . . . . . . . .    3
2 Categories of DC use and design that interest me.     4
3 Problems of achieving the enabling functions          4
   3.1 Kitchen mishaps . . . . . . . . . . . . . . .    5
   3.2 Alternatives to canned responses . . . . . .     5
   3.3 Identifying affordances and searching for things
        that provide them  . . . . . . . . . . . . .    5
   3.4 More abstract problems . . . . . . . . . . .     6
4 Is the solution statistical?                          6
   4.1 Why do statistics-based approaches work at all?  7
   4.2 What's needed . . . . . . . . . . . . .          7
5 Can it be done?                                       8
6  Rights of intelligent machines                       8
7  Risks of premature advertising                       9
References                                              9


Filename: sloman-sunbook.pdf
Title: Putting the Pieces Together Again (Preprint)

Author: Aaron Sloman

In The Cambridge Handbook of Computational Psychology
Ed. Ron Sun, Cambridge University Press (2008)
Paperback version.
Hardback version.
Date Installed: 20 Oct 2007
(Details here updated: 22 Mar 2008)

Abstract:

This is a 'preprint' for the final chapter of a Handbook of Computational Psychology which is currently in press. The differences between this and the version to be published include British vs American spelling and punctuation. This version also has a few footnotes that had to be excluded. For some reason the publisher did not want abstracts for each chapter, so there is no official abstract. The preprint version also includes a table of contents for the chapter (copied below).

Overview
Instead of surveying achievements of AI and computational Cognitive Science as might be expected, this chapter complements the Editor's review of requirements for work on integrated systems in Chapter 1, by presenting a personal view of some of the major unsolved problems, and obstacles to solving them. It attempts to identify some major gaps, and to explain why progress has been much slower than many people expected. It also includes some recommendations for improving progress and for countering the fragmentation and factionalism of the research community.

It is relatively easy to identify long term ambitions in vague terms, e.g. the aim of modelling human flexibility, human learning, human cognitive development, human language understanding or human creativity; but taking steps to fulfil the ambitions is fraught with difficulties. So progress in modelling human and animal cognition is slow despite many impressive narrow-focus achievements, including those reported in earlier chapters.

An attempt is made to explain why progress in producing realistic models of human and animal competences is slow, namely (a) the great difficulty of the problems, (b) failure to understand the breadth, depth and diversity of the problems, (c) the fragmentation of the research community and (d) social and institutional pressures against risky multi-disciplinary, long-term research. Advances in computing power, theory and techniques will not suffice to overcome these difficulties. Partial remedies are offered, namely identifying some of the unrecognised problems and suggesting how to plan research on the basis of `backward-chaining' from long term goals, in ways that may, perhaps, help warring factions to collaborate and provide new ways to select targets and assess progress.

Contents of the Chapter

1 Introduction                                                                       1
  1.1 The scope of cognitive modelling . . . . . . . . . . . . . . . . . . . . .     2
  1.2 Levels of analysis and explanation . . . . . . . . . . . . . . . . . . . . .   2
2 Difficulties and how to address them.                                              3
  2.1 Institutional obstacles . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
  2.2 Intrinsic difficulties in making progress . . . . . . . . . . . . . . . . . .  4
3 Failing to see problems: ontological blindness                                     4
4 What are the functions of vision?                                                  5
  4.1 The importance of mobile hands . . . . . . . . . . . . . . . .    . . . . . .  6
  4.2 Seeing processes, affordances and empty spaces . . . . . . .      . . . . . .  8
  4.3 Seeing without recognising objects . . . . . . . . . . . . . . .  . . . . . .  9
  4.4 Many developmental routes to related cognitive competences        . . . . . . 10
  4.5 The role of perception in ontology extension . . . . . . . . .    . . . . . . 10
5 Representational capabilities                                                     11
  5.1 Is language for communication? . . . . . . . . . . . . . . . . . . . . . .    12
  5.2 Varieties of complexity: 'Scaling up' and 'scaling out' . . . . . . . . . .   14
  5.3 Humans scale out, not up . . . . . . . . . . . . . . . . . . . . . . . . .    16
6 Are humans unique?                                                                17
  6.1 Altricial and precocial skills in animals and robots . . . . . . . . . . . .  17
  6.2 Meta-semantic competence . . . . . . . . . . . . . . . . . . . . . . . . .    19
7 Using detailed scenarios to sharpen vision                                        20
   7.1 Sample Competences to be Modelled  . . . . . . . . . . . . . . . . . . .     20
   7.2 Fine-grained Scenarios are Important . . . . . . . . . . . . . . . . . . .   21
   7.3 Behavior specifications are not enough . . . . . . . . . . . . . . . . . .   22
8 Resolving fruitless disputes by methodological `lifting'                          22
   8.1 Analyse before you choose . . . . . . . . . . . . . . . . . . . . . . . . .  23
   8.2 The need to survey spaces of possibilities . . . . . . . . . . . . . . . . . 24
   8.3 Towards an ontology for types of architectures . . . . . . . . . . . . . .   24
9 Assessing scientific progress                                                     25
   9.1 Organising questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
   9.2 Scenario-based backward chaining research    . . . . . . . . . . . . . . . . 27
   9.3 Assessing (measuring?) progress . . . . . .  . . . . . . . . . . . . . . . . 27
   9.4 Replacing rivalry with collaboration . . . . . . . . . . . . . . . . . . . . 28
10 Conclusion                                                                       29
References                                                                          30
--------------------------------------------------------------- 
NOTE:
This chapter overlaps with various other things including


Filename: challenge-penrose.pdf
Title: Perception of structure 2: Impossible Objects

Author: Aaron Sloman
Date Installed: 15 May 2007

Abstract:

This is a sequel to the presentation of a challenge to vision researchers here, about visual perception. This sequel discusses some detailed requirements for visual mechanisms related to how typical (adult) humans see pictures of `impossible objects'.

Many people have seen the picture by M.C. Escher representing a watermill, people, and two towers. It is simultaneously a work of art, a mathematical exercise and a probe into the human visual system. You probably see a variety of 3-D structures of various shapes and sizes, some in the distance and some nearby, some familiar, like flights of steps and a water wheel, others strange, e.g. some things in the `garden'. There are many parts you can imagine grasping, climbing over, leaning against, walking along, picking up, pushing over, etc.: you see both structure and affordances in the scene. Yet all those internally consistent and intelligible details add up to a multiply contradictory global whole. What we see could not possibly exist. The implications of this sort of thing, are discussed, with examples.

So the existence of pictures of impossible objects shows (a) that what we see is not necessarily internally consistent, even when we see 3-D structures and processes and (b) that detecting the impossibility does not happen automatically: it requires extra work, and may sometimes be too difficult. This has implications for forms of representation in 3-D vision, in particular that scene perception cannot involve building a model of the scene, since models cannot be inconsistent.
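The point that locally consistent details can add up to a globally impossible whole admits a toy computational illustration. This is a hypothetical sketch, not from the paper: pairwise 'in front of' judgements, each fine on its own, are represented as directed edges, and a cycle makes the global depth ordering impossible; detecting it takes extra work beyond registering the local details.

```python
def find_cycle(edges):
    """Depth-first search for a cycle in 'in-front-of' relations.
    Each edge (a, b) means 'a is in front of b', locally consistent on its own."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    def visit(node, path):
        if node in path:                      # back to an ancestor: a cycle
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            found = visit(nxt, path + [node])
            if found:
                return found
        return None

    for start in graph:
        cycle = visit(start, [])
        if cycle:
            return cycle
    return None


# Penrose-triangle-like judgements: each bar is seen in front of the next
edges = [("bar1", "bar2"), ("bar2", "bar3"), ("bar3", "bar1")]
print(find_cycle(edges))  # a cycle: no consistent depth ordering exists
```

Each edge could come from a perfectly good local interpretation; only the extra cycle-detection step reveals the global impossibility.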

For additional challenges for vision researchers see
http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane
Challenge for Vision: Seeing a Toy Crane


Filename: challenge.pdf
Title: Perception of structure: Anyone Interested?

Author: Aaron Sloman
Date Installed: 15 May 2007 (Written Feb 2005)

Abstract:

This is not strictly a paper, but a short slide presentation making a point about the state of vision research, produced when making plans for a robot with vision and manipulation capabilities (the CoSy Playmate).

I have the impression that most of the research work being done on vision in AI is concerned with:

What is missing from the above? A sequel to this is the discussion of impossible objects, above.


Filename: viezzer-thesis/thesis/main.pdf (PDF)
Filename: viezzer-thesis/thesis/main.ps (Postscript)
Abstract and further information
Title: Autonomous concept formation: An architecture-based analysis (PhD Thesis, 2007)

Author: Manuela Viezzer
Date Installed: 6 May 2007

Abstract:

Abstract, synopsis and program code available here


Filename: sloman-1962 (html overview and PDF chapters)
Title: Oxford DPhil Thesis (1962): Knowing and Understanding
Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth

Author: Aaron Sloman

This thesis was scanned in and made generally available by Oxford University Research Archive in the form of PDF versions of the chapters, in 2007. The text is only in image form and is viewable and printable, but not searchable.

The chapters have been copied here for ease of access, along with more detailed information about the contents.

The PDF files can also be obtained via this 'permanent ID'
http://ora.ox.ac.uk/objects/uuid:cda7c325-e49f-485a-aa1d-7ea8ae692877
Old link (broken?) http://ora.ouls.ox.ac.uk:8081/10030/928
(which is often extremely slow to respond.)

Date Installed: 2 May 2007

Abstract:

The avowed aim of the thesis is to show that there are some synthetic necessary truths, or that synthetic apriori knowledge is possible. This is really a pretext for an investigation into the general connection between meaning and truth, or between understanding and knowing, which, as pointed out in the preface, is really the first stage in a more general enquiry concerning meaning. (Not all kinds of meaning are concerned with truth.) After the preliminaries (chapter one), in which the problem is stated and some methodological remarks made, the investigation proceeds in two stages. First there is a detailed inquiry into the manner in which the meanings or functions of words occurring in a statement help to determine the conditions in which that statement would be true (or false). This prepares the way for the second stage, which is an inquiry concerning the connection between meaning and necessary truth (between understanding and knowing apriori). The first stage occupies Part Two of the thesis, the second stage Part Three. In all this, only a restricted class of statements is discussed, namely those which contain nothing but logical words and descriptive words, such as "Not all round tables are scarlet" and "Every three-sided figure is three-angled". (The reasons for not discussing proper names and other singular definite referring expressions are given in Appendix I.)

Some of the ideas developed here were expanded in


Filename: sloman-cognitive-modelling-chapter.pdf
Title: Putting the pieces together again

Author: Aaron Sloman

This is an out of date early draft of chapter for a handbook of cognitive modelling. The final chapter is available here.

Date Installed: 27 Apr 2007


Filename: sloman-space-of-minds-84.pdf
Filename: sloman-space-of-minds-84.html (HTML)
Title: The structure of the space of possible minds

Author: Aaron Sloman

Originally published in The Mind and the Machine: philosophical aspects of Artificial Intelligence,
ed. Stephen Torrance, Ellis Horwood, 1984, pp 35-42.
Date Installed: 13 Jan 2007 (Originally published 1984)

Abstract: (Extract from text)

Describing this structure is an interdisciplinary task I commend to philosophers. My aim for now is not to do it -- that's a long term project -- but to describe the task. This requires combined efforts from several disciplines including, besides philosophy: psychology, linguistics, artificial intelligence, ethology and social anthropology.

Clearly there is not just one sort of mind. Besides obvious individual differences between adults there are differences between adults, children of various ages and infants. There are cross-cultural differences. There are also differences between humans, chimpanzees, dogs, mice and other animals. And there are differences between all those and machines. Machines too are not all alike, even when made on the same production line, for identical computers can have very different characteristics if fed different programs. Besides all these existing animals and artefacts, we can also talk about theoretically possible systems.


Filename: sloman-transformations.pdf
Title: Transformations of Illocutionary Acts (1969)

Author: Aaron Sloman

First published in Analysis, Vol 30, No 2, December 1969, pp. 56-59
Date Installed: 10 Jan 2007

Abstract: (extracts from paper)

This paper discusses varieties of negation and other logical operators when applied to speech acts, in response to an argument by John Searle.

In his book Speech Acts (Cambridge University Press, 1969), Searle discusses what he calls 'the speech act fallacy' (pp. 136ff), namely the fallacy of inferring from the fact that

(1) in simple indicative sentences, the word W is used to perform some speech-act A (e.g. 'good' is used to commend, 'true' is used to endorse or concede, etc.)
the conclusion that
(2) a complete philosophical explication of the concept W is given when we say 'W is used to perform A'.
He argues that as far as the words 'good', 'true', 'know' and 'probably' are concerned, the conclusion is false because the speech-act analysis fails to explain how the words can occur with the same meaning in various grammatically different contexts, such as interrogatives ('Is it good?'), conditionals ('If it is good it will last long'), imperatives ('Make it good'), negations, disjunctions, etc.

The paper argues that even if conclusion (2) is false, Searle's argument against it is inadequate because he does not consider all the possible ways in which a speech-act might account for non-indicative occurrences.

In particular, there are other things we can do with speech acts besides performing them and predicating their performance, e.g. besides promising and expressing the proposition that one is promising. E.g. you can indicate that you are considering performing act F but are not yet prepared to perform it, as in 'I don't promise to come'. So the analysis proposed can be summarised thus:

If F and G are speech acts, and p and q propositional contents or other suitable objects, then:

o Utterances of the structure 'If F(p) then G(q)' express provisional commitment to performing G on q, pending the performance of F on p.
o Utterances of the form 'F(p) or G(q)' would express a commitment to performing (eventually) one or other or both of the two acts, though neither is performed as yet.
o The question mark, in utterances of the form 'F(p)?' instead of expressing some new and completely unrelated kind of speech act, would merely express indecision concerning whether to perform F on p together with an attempt to get advice or help in resolving the indecision.
o The imperative form 'Bring it about that ...' followed by a suitable grammatical transformation of F(p) would express the act of trying to get (not cause) the hearer to bring about that particular state of affairs in which the speaker would perform the act F on p (which is not the same as simply bringing it about that the speaker performs the act).
It is not claimed that 'not', 'if', etc., are always actually used in accordance with the above analyses, merely that this is a possible type of analysis which (a) allows a word which in simple indicative sentences expresses a speech act to contribute in a uniform way to the meanings of other types of sentences and (b) allows signs like 'not', 'if', the question construction, and the imperative construction, to have uniform effects on signs for speech acts. This type of analysis differs from the two considered and rejected by Searle.

Further, if one puts either assertion or commendation or endorsement in place of the speech acts F and G in the above schemata, then the results seem to correspond moderately well with some (though not all) actual uses of the words and constructions in question. With other speech acts, the result does not seem to correspond to anything in ordinary usage: for instance, there is nothing in ordinary English which corresponds to applying the imperative construction to the speech act of questioning, or even commanding, even though if this were done in accordance with the above schematic rules the result would in theory be intelligible.
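The schemata above are quasi-formal: each logical construction is claimed to contribute uniformly to any speech act it is applied to. As an illustrative sketch only (not part of the paper, and with names invented for the purpose), the uniform-contribution idea can be modelled by treating a speech act as a force applied to a content, and each construction as an operator on such acts:

```python
from dataclasses import dataclass

# A speech act: a force (e.g. PROMISE, COMMEND) applied to a content p.
@dataclass(frozen=True)
class Act:
    force: str
    content: str

# Each construction operates uniformly on acts, per the schemata:

def conditional(f: Act, g: Act) -> str:
    # 'If F(p) then G(q)': provisional commitment to G(q), pending F(p)
    return (f"provisional commitment to {g.force}({g.content!r}), "
            f"pending {f.force}({f.content!r})")

def disjunction(f: Act, g: Act) -> str:
    # 'F(p) or G(q)': commitment to eventually perform one or both acts
    return (f"commitment to eventually perform {f.force}({f.content!r}) "
            f"or {g.force}({g.content!r}), or both")

def question(f: Act) -> str:
    # 'F(p)?': indecision about performing F on p, inviting help
    return f"indecision about performing {f.force}({f.content!r})"

promise = Act("PROMISE", "I will come")
commend = Act("COMMEND", "the plan")
print(conditional(promise, commend))
print(question(promise))
```

The point of the sketch is only that the same operator definitions apply whatever force is substituted for F and G, which is exactly the uniformity Searle's two rejected analyses fail to deliver.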


Filename: sloman-new-bodies.pdf (PDF)
Filename: sloman-new-bodies.html (HTML)
Title: New Bodies for Sick Persons: Personal Identity Without Physical Continuity

Author: Aaron Sloman

First published in Analysis, Vol 32, No 2, December 1971, pages 52-55
Date Installed: 9 Jan 2007 (Originally Published 1971)

Abstract: (Extracts from paper)

In his recent Aristotelian Society paper ('Personal identity, personal relationships, and criteria', in Proceedings of the Aristotelian Society, 1970-71, pp. 165-186), J. M. Shorter argues that the connexion between physical identity and personal identity is much less tight than some philosophers have supposed, and, in order to drive a wedge between the two sorts of identity, he discusses logically possible situations in which there would be strong moral and practical reasons for treating physically discontinuous individuals as the same person. I am sure his main points are correct: the concept of a person serves a certain sort of purpose and in changed circumstances it might be able to serve that purpose only if very different, or partially different, criteria for identity were employed.

Moreover, in really bizarre, but "logically" possible, situations there may be no way of altering the identity-criteria, nor any other feature of the concept of person, so as to enable the concept to have the same moral, legal, political and other functions as before: the concept may simply disintegrate, so that the question 'Is X really the same person as Y or not?' has no answer at all. For instance, this might be the case if bodily discontinuities and reduplications occurred very frequently. To suppose that the "essence" of the concept of a person, or some set of general logical principles, ensures that questions of identity always have answers in all possible circumstances, is quite unjustified.

In order to close a loophole in Shorter's argument I describe a possible situation in which both physical continuity and bodily identity are clearly separated from personal identity. Moreover, the example does not, as Shorter's apparently does, assume the falsity of current physical theory.

It will be a long time before engineers make a machine which will not merely copy a tape recording of a symphony, but also correct poor intonation, wrong notes, or unmusical phrasing. An entirely new dimension of understanding of what is being copied is required for this. Similarly, it may take a further thousand years, or more, before the transcriptor is modified so that when a human body is copied the cancerous or other diseased cells are left out and replaced with normal healthy cells. If, by then, the survival rate for bodies made by this modified machine were much greater than for bodies from which tumours had been removed surgically, or treated with drugs, then I should have little hesitation, after being diagnosed as having incurable cancer, in agreeing to have my old body replaced by a new healthy one, and the old one destroyed before I recovered from the anaesthetic. This would be neither suicide nor murder.


Filename: sloman-necessary.pdf (PDF)
Title: 'NECESSARY', 'A PRIORI' AND 'ANALYTIC'

Author: Aaron Sloman
Date Installed: 9 Jan 2007 (Published 1965)

First published in Analysis, Vol 26, No 1, pp. 12-16, 1965.
Abstract (actually the opening paragraph of the paper):
It is frequently taken for granted, both by people discussing logical distinctions and by people using them, that the terms 'necessary', 'a priori', and 'analytic' are equivalent, that they mark not three distinctions, but one. Occasionally an attempt is made to establish that two or more of these terms are equivalent. However, it seems to me far from obvious that they are or can be shown to be equivalent, that they cannot be given definitions which enable them to mark important and different distinctions. Whether these different distinctions happen to coincide or not is, as I shall show, a further question, requiring detailed investigation. In this paper, an attempt will be made to show in a brief and schematic way that there is an open problem here and that it is extremely misleading to talk as if there were only one distinction.

BACK TO CONTENTS LIST


NOTE


Older files in this directory (pre-2007) are accessible via the main index


RETURN TO MAIN COGAFF INDEX FILE

See also the School of Computer Science Web page.

This file is maintained by Aaron Sloman, and designed to be lynx-friendly, and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk