School of Computer Science


(Some of the documents were inserted under the year they were added to this directory.)



This file is maintained by Aaron Sloman.
It contains an index of files in the Cognition and Affect Project's FTP/Web directory that were produced or published in the years 1962-1980. Some of the papers published in this period were produced earlier and are included in one of the lists for an earlier period. Some older papers recently digitised have also been included.

A list of PhD and MPhil theses was added in June 2003

This file Last updated: 10 Jun 2012; 7 Jul 2012; 24 Mar 2014

Produced or published in 1962-1980 (Approximately)
(Latest first)

Many of the papers listed here are in postscript and PDF format. Some, especially more recent additions, are in PDF only.

The following Contents list (in reverse chronological order) contains links to locations in this file giving further details, including abstracts, and links to the papers themselves.


(1962 - 1980)

CONTENTS -- FILES 1962-1980 (Latest First)

What follows is a list of links to more detailed information about each paper. From there you can select the actual papers, in various formats, e.g. PDF, postscript and some in html.




  • Filename: sloman-ullman-gibson-bbs.pdf (PDF)
    Title: What kind of indirect process is visual perception?

    Author: Aaron Sloman
    Date: Originally published 1980. Added here 28 Sep 2012

    Where published:

    In: Open Peer Commentary on Shimon Ullman: 'Against Direct Perception'
    Behavioral and Brain Sciences Journal, (BBS) (1980) 3, pp. 401-404

    The whole publication, including commentaries, is:

    S. Ullman, Against direct perception
    The Behavioral And Brain Sciences (1980) 3, 373-415
    No abstract in paper. Will add a summary here later.

    Compare my more recent discussion of Gibson:
    Aaron Sloman, What's vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond,
    Online tutorial presentation, Sep, 2011 (with later updates).

  • Filename: sloman-croucher-searle.pdf
    Title: How to turn an information processor into an understander
    Commentary on John R. Searle: Minds, brains, and programs

    Authors: Aaron Sloman and Monica Croucher
    Date: Originally published 1980, installed here 9 Oct 2012

    Where published:

    Commentary on 'Minds, brains, and programs' by John R. Searle
    in The Behavioral and Brain Sciences Journal (BBS) (1980) 3, 417-457
    This commentary: pages 447-448


    Searle's delightfully clear and provocative essay contains a subtle mistake, which is also often made by AI researchers who use familiar mentalistic language to describe their programs. The mistake is a failure to distinguish form from function.

    That some mechanism or process has properties that would, in a suitable context, enable it to perform some function, does not imply that it already performs that function. For a process to be understanding, or thinking, or whatever, it is not enough that it replicate some of the structure of the processes of understanding, thinking, and so on. It must also fulfil the functions of those processes. This requires it to be causally linked to a larger system in which other states and processes exist. Searle is therefore right to stress causal powers. However, it is not the causal powers of brain cells that we need to consider, but the causal powers of computational processes. The reason the processes he describes do not amount to understanding is not that they are not produced by things with the right causal powers, but that they do not have the right causal powers, since they are not integrated with the right sort of total system.

  • Sussex Papers from Proceedings AISB 1980 Conference
  • AMSTERDAM 1st to 4th July, 1980
    Programme Chairman: Steven Hardy
    General Chairman: Bob Wielinga
    Programme Committee:
    Mike Brady (MIT)
    Steven Hardy (Sussex)
    Joerg Siekmann (Karlsruhe)
    Karen Sparck-Jones (Cambridge)
    Bob Wielinga (Amsterdam)
    Richard Young (Cambridge)
    Sponsored by:
    The Society for the Study of Artificial Intelligence and the Simulation of Behaviour
    _______________________________________________________________________

  • Using models to augment rule-based programs (PDF)
    Stephen W Draper
    Cognitive Studies Programme, University of Sussex, Brighton, Sussex, U.K.
    pages 86--91
    Installed here: 23 Mar 2014

    This paper discusses the design of a program that tackles the ambiguity
    resulting from the interpretation of line-drawings by means of geometric
    constraints alone. It does this by supplementing its basic geometric
    reasoning by means of a set of models of various sizes. Earlier programs
    are analysed in terms of models, and three different functions for models
    are distinguished. Finally, principles for selecting models for the present
    purpose are related to the concept of a "mapping event" between the picture
    and scene domains.

  • Intermediate Descriptions in "POPEYE" (PDF)
    David Owen,
    Cognitive Studies Programme, University of Sussex, Brighton, BN1 9QN England.
    pages 223--228
    Installed here: 23 Mar 2014

    Some ideas are presented, derived from work on the POPEYE vision project,
    concerning the nature and use of different kinds of intermediate picture
    descriptions. It is suggested that there are "natural elements" in terms of
    which stored models should be defined and that it is of prime importance to
    search for those intermediate picture descriptions which are most
    characteristic of the expression of such elements.

  • Why Visual Systems Process Sketches (PDF)
    Why Visual Systems Process Sketches (Text, extracted from PDF)
    Aaron Sloman and David Owen
    Cognitive Studies Programme, University of Sussex, Brighton, BN1 9QN England.
    Pages 244--253
    Installed here: 23 Mar 2014

    Why do people interpret sketches, cartoons, etc. so easily? A theory is
    outlined which accounts for the relation between ordinary visual perception
    and picture interpretation. Animals and versatile robots need fast,
    generally reliable and "gracefully degrading" visual systems. This can be
    achieved by a highly-parallel organisation, in which different domains of
    structure are processed concurrently, and decisions made on the basis of
    incomplete analysis. Attendant risks are diminished in a "cognitively
    friendly world" (CFW). Since high levels of such a system process
    inherently impoverished and abstract representations, it is ideally suited
    to the interpretation of pictures.


  • Filename: sloman.primacy.inner.language.pdf
  • Filename: sloman.primacy.inner.language.txt (Plain text)

    Title: The primacy of non-communicative language
    Author: Aaron Sloman

    In The Analysis of Meaning, Proceedings 5,
    (Invited talk for ASLIB Informatics Conference, Oxford, March 1979)
    ASLIB and British Computer Society, London, 1979.
    Eds M. MacCafferty and K. Gray, pages 1--15.
    Date: Originally published 1979. Added here 2 Dec 2000


    How is it possible for symbols to be used to refer to or describe things? I shall approach this question indirectly by criticising a collection of widely held views of which the central one is that meaning is essentially concerned with communication. A consequence of this view is that anything which could be reasonably described as a language is essentially concerned with communication. I shall try to show that widely known facts, for instance facts about the behaviour of animals, and facts about human language learning and use, suggest that this belief, and closely related assumptions (see A1 to A3, in the paper) are false. Support for an alternative framework of assumptions is beginning to emerge from work in Artificial Intelligence, work concerned not only with language but also with perception, learning, problem-solving and other mental processes. The subject has not yet matured sufficiently for the new paradigm to be clearly articulated. The aim of this paper is to help to formulate a new framework of assumptions, synthesising ideas from Artificial Intelligence and Philosophy of Science and Mathematics.

    See also Title: What About Their Internal Languages? (1978 -- below)

    This theme is developed in several later papers and presentations, over several decades, e.g.
    Talk 111: Two Related Themes (intertwined) (2015)
    What are the functions of vision? How did human language evolve?
    (Languages are needed for internal information processing, including visual processing)

  • Filename: sloman-epist-ai.pdf
    Title: Epistemology and Artificial Intelligence

    Author: Aaron Sloman
    Date Installed: 29 Aug 2009; Moved here 17 Apr 2019

    Where published:

    In Donald Michie (Editor) Expert Systems in the Microelectronic Age (Edinburgh University Press, 1979)


    A brief introduction to the main problems of epistemology as understood by philosophers and an explanation of (a) why they are relevant to AI, and (b) how they are transformed in the context of AI as the science of natural and artificial intelligent systems.


  • 1978 Book:

    Philosophy, Science and Models of Mind.

    Author: Aaron Sloman
                 (University of Sussex. At the University of Birmingham since 1991.)

    Date installed: 29 Sep 2001
    Last updated: August 2019

    Abstract: See the book contents list

    Published 1978. Revised versions: August 2016, August 2018

  • Latest version available for download at no cost from these locations: (HTML) (PDF)

    The PDF version is more suitable for printing, and shows page structure better, but loses some of the detail, e.g. some text indentation. The PDF version should have contents in a side-panel, e.g. if viewed in XPDF or Acrobat Reader, but not if viewed "embedded" in a web browser, e.g. Firefox or Chrome.
    The page numbers of the PDF version are likely to change after further edits. For citations use section numbers/headings rather than page numbers.
    (Published free, with a Creative Commons Licence: details below.)

  • OCR-transcribed index pages were added to the main HTML and PDF version in August 2016, though they probably contain some undetected OCR errors. Non-searchable scanned index pages are also available (scanned from pages 288-304):
    (The index pages are images scanned from the book, and therefore bulky.)

  • Afterthoughts page
    A web page has been set up which will gradually acquire pointers to documents published after 1978 and current work in progress extending the key ideas in the book (HTML and PDF):
    Only intermittently updated. For related work post-2013, see:


    The book was originally published by Harvester Press and Humanities Press in 1978, but has been out of print for many years. It is now available online free of charge, with additional notes and some re-drawn images, under a Creative Commons licence.

    The original was photocopied by Manuela Viezzer in 2000, then scanned in by Sammy Snow. A lot of work remained to be done, correcting OCR errors and re-drawing the diagrams (for which I used the 'tgif' package on Linux). Since then most chapters have had additional notes and comments added, all clearly marked as new additions. In July 2015 the separate parts (except for the index) were combined to one integrated document with internal cross-references and made available in html and pdf formats listed above.

    Some reviews of the 1978 version are listed below and in this document (also pdf)

    - After the book had been scanned, a collection of separate chapters was made available at this web site (originally HTML only, then PDF versions were added). Those have now been merged into the new integrated version above.

    - Note added 10 Aug 2015
    I have discovered that a 2012 version of this book has been made available on the Archive.Org web site (a non-profit organisation building an internet library). The book is available there in various formats:
    I don't know whether that archived version will ever be updated.

    - There is an out of date version online at the eprints web site (PDF) of ASSC
    (Association for the Scientific Study of Consciousness).

    - Kindle Ebook Version: added 18 Dec 2011
    Sergei Kaunov converted the online version available in 2011 to Amazon Kindle format. (Alas, now out of date.) It is available for download, at a very low cost (the minimum allowed by Amazon), from:

    Product description added by Sergei Kaunov:

    "It is a 1978 book on Artificial Intelligence by Aaron Sloman, professor of Birmingham University. It's not just interesting or representative, it is remarkable for the fact that through the years passed, from the time the book was written, it found its realization in real life like a step-by-step plan. The book mainly consists of philosophical and engineering ideas on intelligence (not only artificial) and relevant topics. After the decades passed we can compare prognoses and current state of the art which shows how these ideas meet its implementation today. Such confirmations highly raise the value of other, more abstract, and further conclusions we can get from the book. One of the core ideas of the book is the described approach which was applied and developed by author at the Birmingham university for AI construction and study of intellect, and the once started work is still on. So insights provided by this book show the beginning of coherent and complex actual research on human and artificial intelligence.
    It is a rare kind of scientific or philosophical book which become more valuable with time."

    - Kindle Mobi-file: created by Sergei Kaunov
    - Epub-file: created by Sergei Kaunov

    Reviews by Douglas Hofstadter and Steven Stich

    Added 4 Oct 2007: Hofstadter Review
    I have discovered that a review of 'The Computer Revolution in Philosophy'
    by Douglas Hofstadter is available online:
    Volume 2, Number 2, March 1980 (Copyright 1980 American Mathematical Society)
    The computer revolution in philosophy: Philosophy, science and models of mind
    by Aaron Sloman, Harvester Studies in Cognitive Science Humanities Press,
    Atlantic Highlands, N. J., 1978, xvi + 304 pp., cloth, $22.50.
    Reviewed by Douglas R. Hofstadter

    (The review rightly criticises some of the unnecessarily aggressive tone and
    throw-away remarks, but also gives the most thorough assessment of the main
    ideas of the book that I have seen.
    Like many reviewers and AI researchers, Hofstadter (like Stich, see below) regards the philosophy
    of science in the first part of the book, e.g. Chapter 2, as relatively uninteresting, whereas I think
    understanding those issues is central to understanding how human minds work as they learn
    about the world and about themselves, and also central to any good philosophy of science.)

    Added 23 Jul 2015: Stich Review
    A review of this book was published by Stephen P. Stich in 1981.

    The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind,
    by Aaron Sloman. Reviewed by Stephen P. Stich in
    The Philosophical Review,
    Vol. 90, No. 2 (Apr., 1981), pp. 300-307

    That review has now been made available, with the author's permission, here:

    The review (like Hofstadter's review) criticised the notion of 'Explaining possibilities' as one of the aims of science and my use of Artificial Intelligence as an example, in Chapter 2.

    Response to reviews
    A partial response to the reviews by Stich and Hofstadter is available here:
    Construction kits as explanations of possibilities (generators of possibilities) (Work in progress.)

  • Filename: bbs-chimps-sloman-1978.html
  • Filename: bbs-chimps-sloman-1978.pdf
    Title: What About Their Internal Languages?

    Author: Aaron Sloman
    Date Installed: 13 Dec 2007 (Originally published 1978)


    Commentary on three articles published in Behavioral and Brain Sciences Journal 1978, 1 (4)
    1. Premack, D., Woodruff, G. Does the chimpanzee have a theory of mind? BBS 1978 1 (4): 515.
    2. Griffin, D.R. Prospects for a cognitive ethology. BBS 1978 1 (4): 527.
    3. Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S. Linguistically-mediated tool use and exchange by chimpanzees (Pan Troglodytes). BBS 1978 1 (4): 539.
    Despite the virtues of the target articles, I find something sadly lacking: an awareness of deep problems and a search for deep explanations.

    Are the authors of these papers merely concerned to collect facts? Clearly not: they are also deeply concerned to learn the extent of man's uniqueness in the animal world, to refute behaviourism, and to replace anecdote with experimental rigour. But what do they have to say to someone who doesn't care whether humans are unique, who believes that behaviourism is either an irrefutable collection of tautologies or a dead horse, and who already is deeply impressed by the abilities of cats, dogs, chimps, and other animals, but who constantly wonders: HOW DO THEY DO IT?

    My answer is that the papers do not have much to say about that: for that, investigation of designs for working systems is required, rather than endless collection of empirical facts, interesting as those may be.

    See also The primacy of non-communicative language (Above)

  • Filename: sloman-et-al-78.pdf
    Title: Representation and Control in Vision

    Authors: Aaron Sloman, David Owen, Geoffrey Hinton, Frank O'Gorman
    Date Installed: 10 Jun 2012 (Published 1978)

    Where published:

    in Proceedings AISB/GI Conference, 18-20th July 1978,
    Hamburg, Germany
    Programme Chair: Derek Sleeman
    Program Committee:
        Alan Bundy (Edinburgh)
        Steve Hardy (Sussex)
        H.-H. Nagel (Hamburg)
        Jacques Pitrat (Paris)
        Derek Sleeman (Leeds)
        Yorick Wilks (Essex)
    General chair: K.-H. Nagel

    Published by: SSAISB and GI


    (Extract from text)
    Vision work in AI has made progress with relatively small problems. We are not aware of any system in which many different kinds of knowledge co-operate. Often there is essentially one kind of structure, e.g. a network of lines or regions, and the problem is simply to segment it, and/or to label parts of it. Sometimes models of known objects are used to guide the analysis and interpretation of an image, as in the work of Roberts (1965), but usually there are few such models, and there isn't a very deep hierarchy of objects composed of objects composed of objects....
    By contrast, recent speech understanding systems, like HEARSAY (Lesser 1977, Hayes-Roth 1977), deal with more complex kinds of interactions between different sorts of knowledge. They are still not very impressive compared with people, but there are some solid achievements. Is the lack of similar success in vision due to inherently more difficult problems?
    Some vision work has explored interactions between different kinds of knowledge, including the Essex coding-sheet project (Brady, Bornat 1976) based on the assumption that provision for multiple co-existing processes would make the tasks much easier. However, more concrete and specific ideas are required for sensible control of a complex system, and a great deal of domain-specific descriptive know-how has to be explicitly provided for many different sub-domains.

    The POPEYE project was an attempt to study ways of putting different kinds of visual knowledge together in one system.

    Chapter 9 of The Computer Revolution in Philosophy provides further information about the Popeye system.

  • Filename: sloman-on-pylyshyn-bbs-1978.html
    Filename: sloman-on-pylyshyn-bbs-1978.pdf
    Title: Artificial Intelligence and Empirical Psychology

    Author: Aaron Sloman
    Date Installed: 9 Oct 2012 (Originally published 1978) (Html 8 Feb 2016)
    (Punctuation fixed 4 May 2015)
    Commentary on Z. Pylyshyn:
    Computational models and empirical constraints
    Behavioral and Brain Sciences, Vol 1, Issue 1, March 1978, pp. 91--99

    This commentary: pp. 115--116



  • Filename: sloman-hardy-gestalt-experiences-1976.pdf
    (Derived from scanned conference proceedings.)
    Title: Giving a Computer Gestalt Experiences

    Authors: Aaron Sloman and Steven Hardy
    Originally published in
    Proceedings Summer Conference on Artificial Intelligence
    AISB-2, July 12-14th 1976,
    pp. 242-255.
    Editor Mike Brady
    Abstract (From first page)
    POPEYE is a vision program currently being developed by a small group at Sussex University. The aim is to explore the problems of interpreting messy and complex pictures of familiar objects. Familiarity is important because knowledge of the objects helps to overcome the problems of dealing with noise and ambiguities. Pictures are presented to POPEYE in the form of a two-dimensional binary array, representing scenes containing overlapping letters made of "bars". Pictures are generated by programs either from descriptions or with the aid of an interactive graphics terminal. We are using POP2, the programming language developed for A.I. at Edinburgh University. However, we have found it useful to extend the language, and this paper describes some of the extensions. POPEYE's domain-specific knowledge will be described on another occasion. POPEYE should process the pictures in a sensible, flexible way, so that the main features to have emerged at any time can redirect the flow of attention. This applies at all levels.

    Further details of the program were summarised in Chapter 9 of The Computer Revolution in Philosophy, available online.


  • Filename: sloman-afterthoughts.pdf
    (Via LaTeX: derived from a scanned version)
  • Filename: sloman-tinlap-1975.pdf
    (Original formatting -- but with photocopying errors: here)
    Title: Afterthoughts on Analogical Representations (1975)

    Author: Aaron Sloman
    Originally published in Theoretical Issues in Natural Language Processing (TINLAP-1), Eds. R. Schank & B. Nash-Webber, pp. 431--439, MIT,
    Now available online
    Reprinted in Readings in knowledge representation, Eds. R.J. Brachman & H.J. Levesque, Morgan Kaufmann, 1985.
    Date installed: 28 Mar 2005


    In 1971 I wrote a paper (in IJCAI 1971, reprinted in AIJ 1971) attempting to relate some old philosophical issues about representation and reasoning to problems in Artificial Intelligence. A major theme of the paper was the importance of distinguishing "analogical" from "Fregean" representations. I still think the distinction is important, though perhaps not as important for current problems in A.I. as I used to think. In this paper I'll try to explain why.


  • Filename: sloman-bogey.html (HTML)
  • Filename: sloman-bogey.pdf (incomplete PDF from OCR)
  • Filename: sloman-bogey-print.pdf
    (A more complete, PDF version, derived from the html version.)

    Title: Physicalism and the Bogey of Determinism

    Author: Aaron Sloman
    Date: Published 1974, installed here 29 Dec 2005


    Presented at an interdisciplinary conference on Philosophy of Psychology at the University of Kent in 1971. Published in the proceedings, as
    A. Sloman, 'Physicalism and the Bogey of Determinism'
    (along with Reply by G. Mandler and W. Kessen, and additional comments by Alan R. White, Philippa Foot and others, and replies to criticisms)
    in Philosophy of Psychology, Ed. S.C. Brown, London: Macmillan, 1974, pages 293--304. (Published by Barnes & Noble in USA.)
    Commentary and discussion followed on pages 305--348.

    This paper rehearses some relatively old arguments about how any coherent notion of free will is not only compatible with but depends on determinism.
    However the mind-brain identity theory is attacked on the grounds that what makes a physical event an intended action A is that the agent interprets the physical phenomena as doing A. The paper should have referred to the monograph Intention (1957) by Elizabeth Anscombe (summarised here by Jeff Speaks), which discusses in detail the fact that the same physical event can have multiple (true) descriptions, using different ontologies.
    My point is partly analogous to Dennett's appeal to the 'intentional stance', though that involves an external observer attributing rationality along with beliefs and desires to the agent. I am adopting the design stance not the intentional stance, for I do not assume rationality in agents with semantic competence (e.g. insects), and I attempt to explain how an agent has to be designed in order to perform intentional actions; the design must allow the agent to interpret physical events (including events in its brain) in a way that is not just perceiving their physical properties. That presupposes semantic competence which is to be explained in terms of how the machine or organism works, i.e. using the design stance, not by simply postulating rationality and assuming beliefs and desires on the basis of external evidence.

    Some of the ideas that were in the paper and in my responses to commentators were also presented in The Computer Revolution in Philosophy, including a version of this diagram (originally pages 344-345, in the discussion section below), discussed in more detail in Chapter 6 of the book, and later elaborated as an architectural theory assuming concurrent reactive, deliberative and metamanagement processes, e.g. as explained in this 1999 paper Architecture-Based Conceptions of Mind, and later papers.
    The html paper preserves original page divisions.
    (I may later add further notes and comments to this HTML version.)

    Note added 3 May 2006
    An online review of the whole book is available here, by Marius Schneider, O.F.M., The Catholic University of America, Washington, D.C., apparently written in 1975.

  • Filename: sloman-aisb-1974.pdf (Scanned from original: about 1.8MB)
    Title: On learning about numbers: Some problems and speculations
    In Proceedings AISB Conference 1974, University of Sussex, pp. 173--185,

    A slightly revised version (with clearer diagrams) was published as Chapter 8 of the 1978 book: The Computer Revolution in Philosophy

    Author: Aaron Sloman

    Date: Published/Presented 1974, installed here 3 Jan 2010.


    The aim of this paper is methodological and tutorial. It uses elementary number competence to show how reflection on the fine structure of familiar human abilities generates requirements exposing the inadequacy of initially plausible explanations. We have to learn how to organise our common sense knowledge and make it explicit, and we don't need experimental data as much as we need to extend our model-building know-how.




  • Filename: sloman-new-bodies.pdf (PDF)
  • Filename: sloman-new-bodies.html (HTML)
    Title: New Bodies for Sick Persons: Personal Identity Without Physical Continuity

    Author: Aaron Sloman
    First published in Analysis, vol. 32, no. 2, December 1971, pages 52--55
    Date Installed: 9 Jan 2007 (Originally Published 1971)

    Abstract: (Extracts from paper)

    In his recent Aristotelian society paper ('Personal identity, personal relationships, and criteria' in Proceedings of the Aristotelian Society, 1970-71, pp. 165--186), J. M. Shorter argues that the connexion between physical identity and personal identity is much less tight than some philosophers have supposed, and, in order to drive a wedge between the two sorts of identity, he discusses logically possible situations in which there would be strong moral and practical reasons for treating physically discontinuous individuals as the same person. I am sure his main points are correct: the concept of a person serves a certain sort of purpose and in changed circumstances it might be able to serve that purpose only if very different, or partially different, criteria for identity were employed. Moreover, in really bizarre, but "logically" possible, situations there may be no way of altering the identity-criteria, nor any other feature of the concept of person, so as to enable the concept to have the same moral, legal, political and other functions as before: the concept may simply disintegrate, so that the question 'Is X really the same person as Y or not?' has no answer at all. For instance, this might be the case if bodily discontinuities and reduplications occurred very frequently. To suppose that the "essence" of the concept of a person, or some set of general logical principles, ensures that questions of identity always have answers in all possible circumstances, is quite unjustified.

    In order to close a loophole in Shorter's argument I describe a possible situation in which both physical continuity and bodily identity are clearly separated from personal identity. Moreover, the example does not, as Shorter's apparently does, assume the falsity of current physical theory.

    It will be a long time before engineers make a machine which will not merely copy a tape recording of a symphony, but also correct poor intonation, wrong notes, or unmusical phrasing. An entirely new dimension of understanding of what is being copied is required for this. Similarly, it may take a further thousand years, or more, before the transcriptor is modified so that when a human body is copied the cancerous or other diseased cells are left out and replaced with normal healthy cells. If, by then, the survival rate for bodies made by this modified machine were much greater than for bodies from which tumours had been removed surgically, or treated with drugs, then I should have little hesitation, after being diagnosed as having incurable cancer, in agreeing to have my old body replaced by a new healthy one, and the old one destroyed before recovering from the anaesthetic. This would be no suicide, nor murder.

  • Filename: sloman-analogical-1971 (HTML)
    Includes short history of the paper at the beginning.
  • Filename: sloman-analogical-1971.pdf (PDF derived from HTML)
  • Filename: sloman-ijcai-71.pdf
    (PDF Original format scanned from IJCAI Proceedings)

  • Filename: sloman-71-AIJ.pdf (PDF: version in AI Journal)

    Title: Interactions between Philosophy and Artificial Intelligence: The role of intuition and non-logical reasoning in intelligence,
    Author: Aaron Sloman

    Originally published in:

    Proceedings IJCAI 1971
    (proceedings also available here),
    then reprinted in
    Artificial Intelligence, vol 2, 1971,
    then in
    J.M. Nicholas, ed., Images, Perception, and Knowledge, Dordrecht-Holland: Reidel, 1977

    This was later revised as Chapter 7 of The Computer Revolution in Philosophy (1978) - listed above.

    Date added: 12 May 2004


    This paper echoes, from a philosophical standpoint, the claim of McCarthy and Hayes that Philosophy and Artificial Intelligence have important relations. Philosophical problems about the use of 'intuition' in reasoning are related, via a concept of analogical representation, to problems in the simulation of perception, problem-solving and the generation of useful sets of possibilities in considering how to act. The requirements for intelligent decision-making proposed by McCarthy and Hayes in Some Philosophical Problems from the Standpoint of Artificial Intelligence (1969) are criticised as too narrow, because they allowed for the use of only one formalism, namely logic. Instead general requirements are suggested showing the usefulness of other forms of representation.

    There were several sequels to this paper including the Afterthoughts paper written in 1975, some further developments regarding ontologies and criteria for adequacy in a 1984-5 paper and several other papers mentioned in the section on diagrammatic/visual reasoning here.

    Response by Pat Hayes
    A much cited paper by Hayes discussing issues raised in the 1971 paper and elsewhere was presented at the AISB Conference at Sussex University in 1974, and later reprinted in the collection mentioned below. In view of its general significance and unavailability online I have included the 1974 Conference version here, with the permission of the author.

    File: hayes-aisb-1974-prob-rep.pdf (PDF)
    Patrick J. Hayes "Some Problems and Non-Problems in Representation Theory"
    in Proceedings AISB Summer Conference, 1974
    University of Sussex

    Reprinted in: Readings in knowledge representation,
    Eds. R.J. Brachman and H.J. Levesque, Morgan Kaufmann, Los Altos, California, 1985

    Related work includes:

  • Filename: sloman-tarski-liar.pdf (PDF)
  • Filename: sloman-tarski-liar.html (HTML)

    Title: Tarski, Frege and the Liar Paradox

    Originally in Philosophy, Vol XLVI, pages 133-147, 1971
    Author: Aaron Sloman
    Date installed: 16 Oct 2003


    The paper attempts to resolve a variety of logical and semantic paradoxes on the basis of Frege's ideas about compositional semantics: i.e. complex expressions have a reference that depends on the references of the component parts and the mode of composition, which determines a function from the lowest level components to the value for the whole expression. The paper attempts to show that it is inevitable within this framework that some syntactically well formed expressions will fail to have any reference, even though they may have a well defined sense. This can be compared with the ways in which syntactically well-formed programs in programming languages may fail to terminate or in some other way fail semantically and produce run-time errors.

    The paper suggests that this view of paradoxes, including the paradox of the Liar, is superior to Tarski's analysis which required postulating a hierarchy of meta-languages. We do not need such a hierarchy to explain what is going on or to deal with the fact that such paradoxes exist. Moreover, the hierarchy would not necessarily be useful for an intelligent agent, compared with languages that contain their own meta-language, like the one I am now using.
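    The comparison with programming languages can be made concrete. The following Python sketch (my illustration, not part of the paper) shows syntactically well-formed code whose evaluation nevertheless fails to yield a value, paralleling well-formed expressions that have a sense but no reference:

    ```python
    # A syntactically well-formed expression may still fail to denote a value.
    # 'reciprocal(0)' has a clear sense (a defined evaluation procedure), but
    # evaluating it produces no value -- only a run-time error.
    def reciprocal(x):
        return 1 / x

    try:
        value = reciprocal(0)      # well-formed, but semantically it fails
    except ZeroDivisionError:
        value = None               # the expression did not refer to anything

    # A well-formed program can also fail by never terminating: again a
    # well-defined sense, but no value is ever produced.
    def search_forever():
        n = 0
        while True:                # no halting condition: no value results
            n += 1

    assert value is None
    assert reciprocal(4) == 0.25   # with a suitable argument, a value exists
    ```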


  • Filename: sloman-ought-and-better.html
  • Filename: ought-better.pdf
  • Filename: (scanned version)
    Title: 'Ought' and 'Better'

    Author: Aaron Sloman
    Date Installed: 19 Sep 2005


    Originally published as Aaron Sloman, ''Ought' and 'Better'', Mind, vol LXXIX, No 315, July 1970, pp 385--394.

    This is a sequel to the 1969 paper on "How to derive 'Better' from 'Is'" also online at this web site. It presupposes the analysis of 'better' in the earlier paper, and argues that statements using the word 'ought' say something about which of a collection of alternatives is better than the others, in contrast with statements using 'must' or referring to 'obligations', or what is 'obligatory'. The underlying commonality between superficially different statements like 'You should take an umbrella with you' and 'The sun should come out soon' is explained, along with some other philosophical puzzles, e.g. concerning why 'ought' does not imply 'can', contrary to what some philosophers have claimed.

    Curiously, the 'Ought' and 'Better' paper is mentioned in the section on David Lodge's novel "Thinks...", which includes a reference to 'What to Do If You Want to Go to Harlem: Anankastic Conditionals and Related Matters' by Kai von Fintel and Sabine Iatridou (MIT), a paper that includes a discussion of 'Ought' and 'Better'.


  • Filename: sloman-transformations.pdf
    Title: Transformations of Illocutionary Acts (1969)

    Author: Aaron Sloman
    First published in Analysis Vol 30 No 2, December 1969 pages 56-59
    Date Installed: 10 Jan 2007

    Abstract: (extracts from paper)

    This paper discusses varieties of negation and other logical operators when applied to speech acts, in response to an argument by John Searle.

    In his book Speech Acts (Cambridge University Press, 1969), Searle discusses what he calls 'the speech act fallacy' (pp. 136 ff.), namely the fallacy of inferring from the fact that

    (1) in simple indicative sentences, the word W is used to perform some speech-act A (e.g. 'good' is used to commend, 'true' is used to endorse or concede, etc.)
    the conclusion that
    (2) a complete philosophical explication of the concept W is given when we say 'W is used to perform A'.
    He argues that as far as the words 'good', 'true', 'know' and 'probably' are concerned, the conclusion is false because the speech-act analysis fails to explain how the words can occur with the same meaning in various grammatically different contexts, such as interrogatives ('Is it good?'), conditionals ('If it is good it will last long'), imperatives ('Make it good'), negations, disjunctions, etc.

    The paper argues that even if conclusion (2) is false, Searle's argument against it is inadequate because he does not consider all the possible ways in which a speech-act might account for non-indicative occurrences.

    In particular, there are other things we can do with speech acts besides performing them and predicating their performance, e.g. besides promising and expressing the proposition that one is promising. For example, you can indicate that you are considering performing act F but are not yet prepared to perform it, as in 'I don't promise to come'. So the analysis proposed can be summarised thus:

    If F and G are speech acts, and p and q propositional contents or other suitable objects, then:

    o Utterances of the structure 'If F(p) then G(q)' express provisional commitment to performing G on q, pending the performance of F on p
    o Utterances of the form 'F(p) or G(q)' would express a commitment to performing (eventually) one or other or both of the two acts though neither is performed as yet.
    o The question mark, in utterances of the form 'F(p)?' instead of expressing some new and completely unrelated kind of speech act, would merely express indecision concerning whether to perform F on p together with an attempt to get advice or help in resolving the indecision.
    o The imperative form 'Bring it about that . .' followed by a suitable grammatical transformation of F(p) would express the act of trying to get (not cause) the hearer to bring about that particular state of affairs in which the speaker would perform the act F on p (which is not the same as simply bringing it about that the speaker performs the act).
    It is not claimed that 'not', 'if', etc., always are actually used in accordance with the above analyses, merely that this is a possible type of analysis which (a) allows a word which in simple indicative sentences expresses a speech act to contribute in a uniform way to the meanings of other types of sentences and (b) allows signs like 'not', 'if', the question construction, and the imperative construction, to have uniform effects on signs for speech acts. This type of analysis differs from the two considered and rejected by Searle. Further, if one puts either assertion or commendation or endorsement in place of the speech acts F and G in the above schemata, then the results seem to correspond moderately well with some (though not all) actual uses of the words and constructions in question. With other speech acts, the result does not seem to correspond to anything in ordinary usage: for instance, there is nothing in ordinary English which corresponds to applying the imperative construction to the speech act of questioning, or even commanding, even though if this were done in accordance with the above schematic rules the result would in theory be intelligible.
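    As a rough illustration only (this encoding is my own, not the paper's), the schematic rules above can be written down as functions mapping speech acts to the utterance-types they generate:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SpeechAct:
        force: str      # e.g. 'promise', 'assert', 'commend'
        content: str    # the propositional content p

    def conditional(f: SpeechAct, g: SpeechAct) -> str:
        # 'If F(p) then G(q)': provisional commitment to G on q, pending F on p
        return (f"provisional commitment to {g.force}({g.content}), "
                f"pending {f.force}({f.content})")

    def disjunction(f: SpeechAct, g: SpeechAct) -> str:
        # 'F(p) or G(q)': commitment to eventually perform one or both acts
        return (f"commitment to eventually {f.force}({f.content}) "
                f"or {g.force}({g.content})")

    def question(f: SpeechAct) -> str:
        # 'F(p)?': indecision about whether to perform F on p, inviting advice
        return f"indecision about whether to {f.force}({f.content}), inviting advice"

    utterance = conditional(SpeechAct("promise", "p"), SpeechAct("assert", "q"))
    ```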

  • Filename:
  • Filename:
    Title: How to derive "better" from "is",

    Author: Aaron Sloman
    Originally published as: A. Sloman, 'How to derive "better" from "is"', American Philosophical Quarterly,
    Vol 6, Number 1, Jan 1969, pp 43--52.
    Date Installed here: 23 Oct 2002


    ONE type of naturalistic analysis of words like "good," "ought," and "better" defines them in terms of criteria for applicability which vary from one context to another (as in "good men," "good typewriter," "good method of proof"), so that their meanings vary with context. Dissatisfaction with this "crude" naturalism leads some philosophers to suggest that the words have a context-independent non-descriptive meaning defined in terms of such things as expressing emotions, commanding, persuading, or guiding actions.

    There are well-known objections to both approaches, and the aim of this paper is to suggest an alternative which has apparently never previously been considered, for the very good reason that at first sight it looks so unpromising, namely the alternative of defining the problematic words as logical constants.

    This should not be confused with the programme of treating them as undefined symbols in a formal system, which is not new. In this essay an attempt will be made to define a logical constant "Better" which has surprisingly many of the features of the ordinary word "better" in a large number of contexts. It can then be shown that other important uses of "better" may be thought of as derived from this use of the word as a logical constant.

    The new symbol is a logical constant in that its definition (i.e., the specification of formation rules and truth-conditions for statements using it) makes use only of such concepts as "entailment," "satisfying a condition," "relation," "set of properties," which would generally be regarded as purely logical concepts. In particular, the definition makes no reference to wants, desires, purposes, interests, prescriptions, choice, non-descriptive uses of language, and the other paraphernalia of non-naturalistic (and some naturalistic) analyses of evaluative words.

    (However, some of those 'paraphernalia' can be included in arguments/subjects to which the complex relational predicate 'better' is applied.)

    NOTE Added 7 Nov 2013
    I was under the impression that no philosophers had ever paid any attention to this
    paper. I've just discovered a counterexample:
        Paul Bloomfield 'Prescriptions Are Assertions: An Essay On Moral Syntax'
        American Philosophical Quarterly Vol 35, No 1, January 1998


  • Filename: sloman-explain-necessity.pdf
    (132 KBytes, via latex from OCR -- PDF)
  • Filename: sloman-ExplainNecessity.pdf
    (11.4 MB Scanned PDF from original)
    Title: Explaining Logical Necessity

    Author: Aaron Sloman
    Date Installed: 4 Dec 2007 (Published originally in 1968); Updated 19 Dec 2009
    in Proceedings of the Aristotelian Society, 1968/9, Volume 69, pp 33--50.

    Note: some of the key ideas were in Aaron Sloman's Oxford DPhil Thesis (1962): Knowing and Understanding

    Abstract: (From the introductory section)

    I: Some facts about logical necessity stated.
    II: Not all necessity is logical.
    III: The need for an explanation.
    IV: Formalists attempt unsuccessfully to reduce logic to syntax.
    V: The no-sense theory of Wittgenstein's Tractatus merely reformulates
    the problem.
    VI: Crude conventionalism is circular.
    VII: Extreme conventionalism is more sophisticated.
    VIII: It yields some important insights.
    IX: But it ignores the variety of kinds of proof.
    X: Proofs show why things must be so, but different proofs show different things. Hence there can be no general explanation of necessity.

    An adequate theory of meaning and truth must account for the following facts, whose explanation is the topic, though not the aim, of the paper.

    (i) Different signs (e.g., in different languages) may express the same proposition.

    (ii) The syntactic and semantic rules in virtue of which sentences are able to express contingent propositions also permit the expression of necessary propositions and generate necessary relations between contingent propositions. E.g. although 'It snows in Sydney or it does not snow in Sydney' can be verified empirically (since showing one disjunct to be true would be an empirical verification, just as a proposition of the form 'p and not-p' can be falsified empirically), the empirical enquiry can be short-circuited by showing what the result must be.

    (iii) At least some such restrictions on truth-values, or combinations of truth-values (e.g., when two or more contingent propositions are logically equivalent, or inconsistent, or when one follows from others), result from purely formal, or logical, or topic-neutral features of the construction of the relevant propositions, features which have nothing to do with precisely which concepts occur, or which objects are referred to. Hence we call some propositions logically true, or logically false, and say some inferences are valid in virtue of their logical form, which prevents simultaneous truth of premisses and falsity of conclusion.

    (iv) The truth-value-restricting logical forms are systematically inter-related so that the whole infinite class of such forms can be recursively generated from a relatively small subset, as illustrated in axiomatisations of logic.

    Subsequent discussion will show these statements to be over-simple. Nevertheless, they will serve to draw attention to the range of facts whose need of explanation is the starting point of this paper. They have deliberately been formulated to allow that there may be cases of non-logical necessity.
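    Points (ii) and (iii) above, that some truth-values are fixed by logical form alone, can be checked mechanically. The following Python sketch (an illustration I have added, not part of the paper) enumerates every valuation of a Boolean formula:

    ```python
    from itertools import product

    def tautology(formula, n):
        """True iff the n-variable Boolean formula holds under every valuation."""
        return all(formula(*vals) for vals in product([False, True], repeat=n))

    # 'It snows in Sydney or it does not snow in Sydney': true in virtue of
    # form, whatever the weather, so the empirical enquiry can be short-circuited.
    p_or_not_p = lambda p: p or not p
    assert tautology(p_or_not_p, 1)

    # 'p and not-p' is false under every valuation: a logical falsehood.
    p_and_not_p = lambda p: p and not p
    assert not any(p_and_not_p(v) for v in (False, True))

    # A merely contingent proposition is true under some valuations only.
    assert not tautology(lambda p: p, 1)
    ```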


  • Filename: sloman-predictive-policies.pdf (PDF)
    Title: Predictive Policies: What makes some policies better than others?,

    Author: Aaron Sloman
    Date Installed: 25 May 2015 (Published 1967)

    Where published: Aristotelian Society Supplementary Volume, 41, pp. 77--94, Wiley-Blackwell, 1967,
    (Part of Symposium with R. McGowan.)


    (Extract from first page of paper.)
    Mr. McGowan's paper seems to have two main aims, first, to say what an inductive inference policy is and how it differs from alternative non-deductive policies, and secondly, to show that the inductive policy is better, or more rational, than the alternatives. I shall criticise his characterisation of induction, his arguments to show its superiority, and some of his undiscussed assumptions. Finally, I shall take the risk of discussing the nature of attempts to justify induction and suggesting some lines of further enquiry, based on an analysis of the logic of "better". I start with some comments on Mr. McGowan's preliminary discussion, before turning to his recursive characterisation of an inductive inference policy.

    (Extract from final paragraph:)
    I have criticised some of the details of his argument and put forward the counter-claim that policies based on his weaker principles of "desistence" ("all observed regularities come to an end, and sooner rather than later") are better at avoiding contradictions and conform to past experience more closely than policies based on his strong principle of persistence. Accordingly, some modifications of his predictive rules have been suggested. Perhaps most importantly of all, I have argued that the assertion that one policy or inference is better or more rational than another is an incomplete assertion until a basis of comparison has been specified, since different policies may be better or more rational in relation to different bases, and I have indicated some possible approaches for further investigation of this point. A final line of investigation which should be mentioned is the problem of deciding which of two bases of comparison is better relative to some higher-order basis of comparison, a problem which may turn out to be very important in connexion with justifications of predictive policies. It seems that I have asked more questions than I have answered. Perhaps formulating them will help someone more familiar with the field than I am to find interesting answers.
    See 'How to derive "Better" from "is"', (1969)



  • Title: Functions and Rogators (1965)
    Author: Aaron Sloman

    Available in three formats:

    Date Installed: 23 Dec 2007; Updated 5 Apr 2016

    This paper was originally presented at a meeting of the Association for Symbolic Logic held in St. Anne's College, Oxford, England from 15-19 July 1963 as a NATO Advanced Study Institute with a Symposium on Recursive Functions sponsored by the Division of Logic, Methodology and Philosophy of Science of the International Union of the History and Philosophy of Science.

    A summary of the meeting by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, with abstracts of papers presented, including this one, was published in The Journal of Symbolic Logic, Vol. 28, No. 3 (Sep., 1963), pp. 262-272, accessible online here.

    The full paper was published in the conference proceedings:
    Aaron Sloman 'Functions and Rogators', in
    Formal Systems and Recursive Functions:
    Proceedings of the Eighth Logic Colloquium Oxford, July 1963
    Eds J N Crossley and M A E Dummett
    North-Holland Publishing Co (1965), pp. 156--175

    This paper extends Frege's concept of a function to "rogators", which are like functions in that they take arguments and produce results, but are unlike functions in that their results can depend on the state of the world, in addition to which arguments they are applied to.

    It was scanned in and digitised in December 2007. The html version was re-formatted on 5 Apr 2016 and a corresponding "lightweight" PDF version derived from it. The original 15MB scanned PDF file is now sloman-rogators-orig.pdf

    The key ideas were originally presented in the author's Oxford DPhil Thesis (Aaron Sloman, 1962): Knowing and Understanding
    (Now online).

    This paper was described by David Wiggins as 'neglected but valuable' in his 'Sameness and Substance Renewed' (2001).

    (An abstract was also published in the 1963 summary by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley.)
    Frege, and others, have made extensive use of the notion of a function, for example in analysing the role of quantification, the notion of a function being defined, usually, in the manner familiar to mathematicians, and illustrated with mathematical examples. On this view functions satisfy extensional criteria for identity. It is not usually noticed that in non-mathematical contexts the things which are thought of as analogous to functions are, in certain respects, unlike the functions of mathematics. These differences provide a reason for saying that there are entities, analogous to functions, but which do not satisfy extensional criteria for identity. For example, if we take the supposed function 'x is red' and consider its value (truth or falsity) for some such argument as the lamp post nearest my front door, then we see that what the value is depends not only on which object is taken as argument, and the 'function', but also on contingent facts about the object, in particular, what colour it happens to have. Even if the lamp post is red (and the value is truth), the same lamp post might have been green, if it had been painted differently. So it looks as if we need something like a function, but not extensional, of which we can say that it might have had a value different from that which it does have. We cannot say this of a function considered simply as a set of ordered pairs, for if the same argument had had a different value it would not have been the same function. These non-extensional entities are described as 'rogators', and the paper is concerned to explain what the function-rogator distinction is, how it differs from certain other distinctions, and to illustrate its importance in logic, from the philosophical point of view.
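    A loose programming analogy (my own, and not in the paper) may help: a mathematical function is fixed by its argument-value pairs, whereas a rogator's value for the same argument can vary with contingent facts about the world:

    ```python
    # An extensional function: its value depends only on its argument, so it
    # can be identified with a set of ordered pairs.
    def successor(n):
        return n + 1

    # A rogator-like entity: its value depends on the argument AND on the
    # state of the world, modelled here as mutable state.
    world = {"lamp post": "red"}

    def is_red(obj):
        return world[obj] == "red"

    assert is_red("lamp post") is True     # the lamp post happens to be red
    world["lamp post"] = "green"           # the same object, repainted
    assert is_red("lamp post") is False    # same argument, different value
    ```

    The point of the analogy: `successor` could not have had a different value for the same argument and still been the same function, but `is_red` might have, which is just what the paper says of rogators.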

  • Filename: sloman-necessary.pdf (PDF)
    Filename: sloman-necessary.html (HTML)
    Title: 'Necessary', 'A Priori' and 'Analytic'

    Author: Aaron Sloman
    Date Installed: 9 Jan 2007 (Published 1965)
    First published in Analysis vol 26, No 1, pp 12-16 1965.
    Abstract (actually the opening paragraph of the paper):
    It is frequently taken for granted, both by people discussing logical distinctions and by people using them, that the terms 'necessary', 'a priori', and 'analytic' are equivalent, that they mark not three distinctions, but one. Occasionally an attempt is made to establish that two or more of these terms are equivalent. However, it seems to me far from obvious that they are or can be shown to be equivalent, that they cannot be given definitions which enable them to mark important and different distinctions. Whether these different distinctions happen to coincide or not is, as I shall show, a further question, requiring detailed investigation. In this paper, an attempt will be made to show in a brief and schematic way that there is an open problem here and that it is extremely misleading to talk as if there were only one distinction.


  • Filename: rules-premisses.html (HTML)
  • Filename: rules-premisses.pdf (PDF)
    Title: Rules of inference, or suppressed premisses? (1964)

    Author: Aaron Sloman
    Date Installed: 31 Dec 2006
    First published in Mind Volume LXXIII, Number 289 Pp. 84-96, 1964.

    Abstract (actually the opening paragraph of the paper):

    In ordinary discourse we often use or accept as valid, arguments of the form "P, so Q", or "P, therefore Q", or "Q, because P" where the validity of the inference from P to Q is not merely logical: the statement of the form "If P then Q" is not a logical truth, even if it is true. Inductive inferences and inferences made in the course of moral arguments provide illustrations of this. Philosophers, concerned about the justification for such reasoning, have recently debated whether the validity of these inferences depends on special rules of inference which are not merely logical rules, or on suppressed premisses which, when added to the explicit premisses, yield an argument in which the inference is logically, that is deductively, valid. In a contribution to MIND ("Rules of Inference in Moral Reasoning", July 1961), Nelson Pike describes such a debate concerning the nature of moral reasoning. Hare claims that certain moral arguments involve suppressed deductive premisses, whereas Toulmin analyses them in terms of special rules of inference, peculiar to the discourse of morality. Pike concludes that the main points so far made on either side of the dispute are "quite ineffective" (p. 391), and suggests that the problem itself is to blame, since the reasoning of the "ordinary moralist" is too rough and ready for fine logical distinctions to apply (pp. 398-399). In this paper an attempt will be made to take his discussion still further and explain in more detail why arguments in favour of either rules of inference or suppressed premisses must be ineffective. It appears that the root of the trouble has nothing to do with moral reasoning specifically, but arises out of a general temptation to apply to meaningful discourse a distinction which makes sense only in connection with purely formal calculi.

  • Filename: colour-incompatibilities.pdf
    Title: Colour Incompatibilities and Analyticity

    Author: Aaron Sloman

    Date Installed: 6 Jan 2010; Published 1964

    Where published:

    Analysis, Vol. 24, Supplement 2. (Jan., 1964), pp. 104-119.

    Abstract: (Opening paragraph)

    The debate about the possibility of synthetic necessary truths is an old and familiar one. The question may be discussed either in a general way, or with reference to specific examples. This essay is concerned with the specific controversy concerning the incompatibility of colours, or colour concepts, or colour words. The essay is mainly negative: I shall neither assume, nor try to prove, that colours are incompatible, or that their incompatibility is either analytic or synthetic, but only that certain more or less familiar arguments intended to show that incompatibility relations between colours are analytic fail to do so. It will follow from this that attempts to generalise these arguments to show that no necessary truths can be synthetic will be unsuccessful, unless they bring in quite new sorts of considerations. The essay does, however, have a positive purpose, namely the partial clarification of some of the concepts employed by philosophers who discuss this sort of question, concepts such as 'analytic' and 'true in virtue of linguistic rules'. Such clarification is desirable since it is often not at all clear what such philosophers think that they have established: the usage of these terms by philosophers is often so loose and divergent that disagreements may be based on partial misunderstanding. The trouble has a three-fold source: the meaning of 'analytic' is unclear, the meaning of 'necessary' is unclear, and it is not always clear what these terms are supposed to be applied to. (E.g. are they sentences, statements, propositions, truths, knowledge, ways of knowing, or what?) Not all of these confusions can be eliminated here, but an attempt will be made to clear some of them away by giving a definition of 'analytic' which avoids some of the confused and confusing features of Kant's exposition without altering the spirit of his definition.


  • Title: Abstract of Functions and Rogators (1965)
    Author: Aaron Sloman
    Date Installed: 23 Dec 2007

    A summary of the 1963 Logic Colloquium by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, with abstracts of papers presented, including my 'Functions and Rogators', was published in The Journal of Symbolic Logic, Vol. 28, No. 3 (Sep., 1963), pp. 262-272, accessible online here.


  • Filename: sloman-1962 (HTML overview.)
    Title: Oxford University DPhil Thesis (1962): Knowing and Understanding
         Relations between meaning and truth, meaning and necessary truth,
         meaning and synthetic necessary truth
    Author: Aaron Sloman

    -- PDF version (transcribed, searchable version, 2.1MB)
         (Added 2016, then revised several times, fixing multiple transcription errors.)
    -- HTML version (transcribed, searchable version, 669KB)
         (Added 6 Jan 2018 and later revised)
         (Plain text, i.e. no italics/underlining, but with figures added, on pages 287, 288, 307)

    Since late 2018, the transcribed, searchable PDF version is also available at the Oxford site, along with the original, 74.1MB, scanned version, linked below.

    Thesis Abstract:

    The aim of the thesis is to show that there are some synthetic necessary truths, or that synthetic apriori knowledge is possible. This is really a pretext for an investigation into the general connection between meaning and truth, or between understanding and knowing, which, as pointed out in the preface, is really the first stage in a more general enquiry concerning meaning. (Not all kinds of meaning are concerned with truth.) After the preliminaries (chapter one), in which the problem is stated and some methodological remarks made, the investigation proceeds in two stages. First there is a detailed inquiry into the manner in which the meanings or functions of words occurring in a statement help to determine the conditions in which that statement would be true (or false). This prepares the way for the second stage, which is an inquiry concerning the connection between meaning and necessary truth (between understanding and knowing apriori). The first stage occupies Part Two of the thesis, the second stage Part Three. In all this, only a restricted class of statements is discussed, namely those which contain nothing but logical words and descriptive words, such as "Not all round tables are scarlet" and "Every three-sided figure is three-angled". (The reasons for not discussing proper names and other singular definite referring expressions are given in Appendix I.)

    Everything now available is derived from the carbon copy of the manually typed thesis deposited at the Oxford Bodleian library in May 1962. I believe this was the first manually typed thesis in Oxford to be scanned and made available online, in 2007. The original scanned version was made freely available by Oxford University Research Archive (ORA, at the Bodleian library) in the form of PDF versions of the chapters. Those PDF files, totalling over 70MB, contained the scanned image content and were readable and printable, but not searchable by computer.

    The scanned document, based on the carbon copy of the typed thesis has slightly fuzzy text, which is easy for humans to read, but seems to defeat OCR technology.

    So, in 2016, at the instigation of my former student, Luc Beaudoin, an Indian company (Hitech) was engaged to retype the remaining chapters. After much tedious checking and editing to correct transcription errors and omissions (including much help from Luc and his partner) all the chapters are now (December 2018) available as a free online book, in searchable PDF and HTML formats.

    The original PDF files scanned without OCR totalled about 74Mbytes and were not searchable by computer. The transcribed PDF version installed in 2018, linked below, is searchable and much smaller, about 2.1Mbytes. There is also a new HTML version, derived from the transcribed chapters, also linked below.

    Oxford ORA versions (since December 2018)
    In December 2018 the scanned chapters that had originally been made available as separate pdf files in Oxford were concatenated into a single (non-searchable) PDF file (74.1MB), now available both here and in Oxford.

    The Oxford site now also provides the much smaller transcribed PDF version of the searchable thesis linked above, as well as the scanned 74.1MByte non-searchable version.

    More about the thesis
    More detailed information about the thesis is also available here, including information about the contents, the background to the thesis, and some references to later developments and publications.

    Later work
    Some of the ideas developed here were later presented, and in some cases expanded, in the following publications.



    Other files in this directory are accessible via the main index


    See also the School of Computer Science Web page.

    This file is maintained by Aaron Sloman.