Afterthoughts (HTML and PDF) on
THE COMPUTER REVOLUTION IN PHILOSOPHY
Philosophy, science and models of mind.
The book was originally published in 1978.
Free online revised version available since July 2015 (HTML and PDF)
(Some notes, references and re-formatting added since then.)
The Computer Revolution in Philosophy (CRP) was published at the instigation of colleague and friend Margaret Boden, by Harvester Press (UK) and Humanities Press (USA) in 1978, when the author was at the University of Sussex. It has been out of print for some time, though used copies are still offered for sale.
Thanks to much appreciated labours of colleagues as described below, versions of most parts of the book have been available online since 2001, initially as separate HTML files (with many scanning and OCR errors that were gradually corrected), and later converted to PDF. A few notes and comments were added from time to time.
During 2015 work began on combining all the chapters into a single document with internal cross references, with some minor new corrections and a few comments that had previously been inserted in the electronic versions of the chapters. The new integrated version of the book, including an integrated table of contents, was made freely available in August 2015 in two formats, HTML and PDF, with a Creative Commons licence:
The table of contents of the integrated online version,
included in both formats, can
be viewed here:
An early online version of this book did not include the original index (pages 288-304). A searchable version of the index was added in 2016. It is also available as a separate collection of images in two formats:
The need for an index is reduced by the fact that the electronic versions can easily be searched for key words or phrases.
the multi-format version at 'archive.org':
the Amazon Kindle version kindly produced by Sergei Kaunov (with a very kind review) in 2011:
and others referenced here:
From time to time various corrections and additions were made. The additions were indicated in the text, though minor corrections were not all labelled. In some cases the additions were notes at ends of chapters, though a few were inserted (usually clearly indicated) in the main text, adding to or (hopefully) clarifying what was said.
Fourteen years later, all the various parts except the name/subject index (which is still in scanned image mode, not text mode) were combined to form a single HTML file, along with a derived PDF file -- thanks to two excellent (free) Linux programs, html2ps and ps2pdf.
The 1978 book had many flaws of presentation, some of them formatting flaws (e.g. arbitrary changes of font or indentation), and others to do with the fact that the ideas needed far more development. I hope the new version improves on the format at least. This document attempts to improve on some of the flaws of presentation of ideas in the original book, and to summarise some of the work that grew out of the book over the next few decades, with pointers to relevant publications.
One of the features of the book that seemed to annoy some readers and reviewers was that it made no attempt to survey the achievements of Artificial Intelligence, and instead focused on gaps and unsolved problems, along with suggestions for making progress, often requiring long term research. Perhaps I was foolish, but I assumed that excellent introductory overviews and edited collections of papers produced by others, especially Margaret Boden (Artificial Intelligence and Natural Man, also published in 1978), and collections edited by Feigenbaum and Feldman, Minsky, McCarthy and others, provided adequate introductions to the rapidly expanding field.
Perhaps the worst fault was the arrogance and intolerance displayed in several places, where I reacted to theories I felt were shallow, arguments that I thought were inconclusive, and prejudices against new forms of explanation, including prejudices against computational answers to philosophical questions and explanations making use of new ideas about virtual machinery.
I was also strongly opposed to a very popular view of science (which is still too popular), deeply influenced by Popper's falsificationism: the belief that only empirically falsifiable statements can be part of scientific knowledge. I felt this did not do justice to key features of many great scientific advances. Popper himself eventually recognized flaws in the requirement that scientific theories be empirically falsifiable, for example when he recognized the great scientific merit of Darwin's theory of natural selection, and even began to speculate about evolutionary mechanisms himself, e.g. in Popper (1978).
Despite the arrogance and intolerance of the style, and the apparent difficulty most readers had in appreciating Chapter 2, on which everything else rested, some readers, including the reviewers referenced below, recognized that important new ideas were being discussed in the book. Those ideas have been developed further in other publications since 1978, but some students and researchers may find it useful to have the earliest versions of the ideas readily available in this free edition.
Not all the hopes at the time of writing have been justified. In particular, there remain important features of animal perception that are closely related to human abilities to make mathematical discoveries, that, as far as I know, have not yet been modelled in AI systems and seem to resist such modelling for reasons that are not entirely clear. For example, Chapter 7 of CRP attempted to demonstrate the importance of non-Fregean (i.e. "analogical") forms of representation in human intelligence, but good working implementations of the ideas proposed do not yet exist (as far as I know). As a result we cannot yet model the processes of discovery that originally led to Euclid's Elements about 2,500 years ago. Some of the required visual mechanisms are illustrated and discussed in:
But in humans, and presumably some other animals, the fact of choosing one desire does not make other desires inactive. A "rejected" desire can remain as strong as it was despite the other desire having been selected for action, and the rejected desire can go on interfering with reasoning and actions. It is fairly obvious how simple forms of persistent conflicts might be implemented in neural nets, but it is not clear how the richness of human experiences of conflict and indecision can be explained, nor what happens to decisively rejected desires. An incomplete discussion of these ideas can be found in a draft online presentation here (work in progress):
I have therefore included, alongside the new online edition of the book, a paper written in part in response to the criticisms of Chapter 2 in reviews by Steven Stich and Douglas Hofstadter (both of whom apparently liked other aspects of the book). The new paper (new in November 2014) can be found here:
NOTE ADDED 19 Oct 2015
A tentative discussion paper shows a connection between having an explanation of a collection of related possibilities and a useful strategy for assessing competences based on those possibilities.
After further methodological preliminaries, the rest of the book presented (sometimes tentative) examples illustrating how AI (including computational linguistics) could advance our ability to explain (and sometimes predict) possibilities, as theories in physics and chemistry had done previously. I deliberately chose phenomena that had not yet been satisfactorily explained to show how developments in AI might advance understanding, rather than presenting examples of past achievements in AI, as several other authors had done.
For example, Chapter 6 attempted to explain (only in outline) how a machine with a certain sort of mind (more specifically: with a certain sort of information-processing architecture, with changing, interacting, concurrently active, components, directly or indirectly linked to sensors and effectors) could deal creatively with a complex, changing and locally unpredictable universe. Those architectural ideas were later elaborated with the help of a succession of PhD students and colleagues, after I moved to Birmingham in 1991, and started the "Cognition and Affect (CogAff)" project, initially called "Attention and Affect" and based on collaboration with Glyn Humphreys (then head of psychology in Birmingham). Some of the ideas in the CogAff project are summarised in another document, here: http://www.cs.bham.ac.uk/research/projects/cogaff/#overview and further developed in later presentations and papers, including an overview of Virtual Machine Functionalism (VMF).
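The architectural idea -- changing, interacting, concurrently active components directly or indirectly linked to sensors and effectors -- can be suggested by a minimal sketch. This is purely illustrative (it is not the CogAff implementation, and all the names and the toy sense/decide/act components are invented for the example):

```python
# Purely illustrative sketch (NOT the CogAff implementation) of an
# architecture with interacting components linked to sensors and
# effectors, communicating through a shared workspace.

class Component:
    def __init__(self, name):
        self.name = name
    def step(self, workspace):
        """Run one cycle: read from and write to the shared workspace."""
        raise NotImplementedError

class Perceiver(Component):
    def step(self, workspace):
        # Directly linked to a (simulated) sensor.
        workspace["percept"] = workspace["sensor"]()

class Deliberator(Component):
    def step(self, workspace):
        # Proposes an action based on the current percept.
        if workspace.get("percept") == "obstacle":
            workspace["intention"] = "turn"
        else:
            workspace["intention"] = "advance"

class Actor(Component):
    def step(self, workspace):
        # Directly linked to an effector.
        if "intention" in workspace:
            workspace["effector"](workspace["intention"])

def run(components, workspace, cycles):
    """Interleave the components, approximating concurrent activity."""
    for _ in range(cycles):
        for c in components:
            c.step(workspace)

actions = []
workspace = {"sensor": lambda: "obstacle", "effector": actions.append}
run([Perceiver("p"), Deliberator("d"), Actor("a")], workspace, 2)
print(actions)  # ["turn", "turn"]
```

A real architecture of the kind the chapter describes would of course have many more layers, genuinely concurrent components, and mechanisms for self-monitoring; the sketch only shows the bare idea of components interacting via shared state while coupled to sensors and effectors.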
However, most psychologists and neuroscientists paid no attention, though I later learnt that cognitive psychologists and various researchers on mental disorders had chosen the label "executive" for the mechanisms and processes that I called "deliberative".
Chapter 7 attempted to explain how reasoning, using physical mechanisms or mental (virtual machine) systems, could validly and fruitfully use non-Fregean forms of representation, including both maps and diagrams depicting physical machines. Fregean representations are composed entirely of functions (including higher order functions) applied to arguments (which could also be functions). I call such representations "Fregean" because Gottlob Frege was the first person (as far as I know) to identify the full generality of the concept of a function, including showing that predicates and relation words in ordinary language can be treated as functions whose results are truth values (i.e. true or false), and that the universal and existential quantifiers ("all" and "some", or "there exists") can be interpreted as higher order functions applied to predicates and relations. (My 1962 DPhil thesis generalised this to allow "the state of the world" to be an additional implicit argument.)
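Frege's analysis can be made concrete with a small, purely illustrative Python sketch (not from the book): predicates and relations become functions whose results are truth values, and the quantifiers become higher order functions applied to predicates over a domain.

```python
# Illustrative sketch of Frege's function/argument analysis:
# predicates are functions from objects to truth values, and
# quantifiers are higher-order functions applied to predicates.

def forall(pred, domain):
    """Universal quantifier: a higher-order function taking a predicate."""
    return all(pred(x) for x in domain)

def exists(pred, domain):
    """Existential quantifier, likewise a higher-order function."""
    return any(pred(x) for x in domain)

# A predicate: a function whose result is a truth value.
def even(n):
    return n % 2 == 0

# A relation treated the same way, as a two-argument function.
def less_than(x, y):
    return x < y

domain = range(10)
print(forall(even, domain))   # False: not every number in 0..9 is even
print(exists(even, domain))   # True: some number in 0..9 is even
# Nested quantification: "for every x there is a larger y (in 0..19)".
print(forall(lambda x: exists(lambda y: less_than(x, y), range(20)), domain))  # True
```

Note that everything here is built from function application alone, which is exactly what makes the representation "Fregean"; an analogical representation such as a map or diagram is not composed that way.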
Chapter 7 was originally a slightly modified version of a paper on Fregean vs analogical representations presented in 1971 at the 2nd International Joint Conference on Artificial Intelligence (IJCAI), and published later that year in the journal Artificial Intelligence (vol 2, issues 3-4, pp 209-225, 1971). The paper was primarily a criticism of the claim by McCarthy and Hayes (1969) that a notation based on first order logic would be adequate for an intelligent robot. (They distinguished three levels of adequacy: Metaphysical, Epistemological and Heuristic.) My paper (and the version in Chapter 7) aimed to show that whereas the logical notations were Fregean (i.e. using function/argument relationships), in some cases a different sort of notation, which I called "analogical" -- one in which properties of and relations within the representation represent properties of and relations within what is depicted, possibly in a context-sensitive manner -- would have advantages, especially heuristic advantages.
Many other thinkers made related distinctions, both before and after that paper, though in many cases the subdivision was mistakenly described as involving continuity vs discontinuity, or a vaguely defined distinction between symbolising and resembling.
Often the role of analogical representations was confused with use of isomorphism or similarity between representation and things represented (ignoring all the dissimilarities between 2-D images and 3-D structures they represent, discussed in the chapter). The role of non-Fregean representations in mathematical discovery and reasoning remains largely unexplained, and human performances are still not even closely approximated by AI systems developed so far. Several examples are presented in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html and related documents.
The themes in Chapters 7 and 8 are related to work begun two decades earlier on
my Oxford DPhil thesis, completed in 1962:
Knowing and Understanding:
Relations between meaning and truth, meaning and necessary truth
I was too academically naive to realise that the thesis should be published in book form. However, thanks to much help from Luc Beaudoin, it was eventually digitised with searchable text in 2016, and is now available online here in plain text and PDF formats:
That work is still in progress. E.g. the key questions about the nature of still
unexplained mathematical discoveries made centuries ago by Euclid, Archimedes
and many others are discussed in this conference presentation.
I learnt in 2018 that Alan Turing had made related points in his PhD thesis (completed in 1938), where he distinguished mathematical intuition and mathematical ingenuity and claimed that computers (e.g. Turing machines) were capable only of mathematical ingenuity, not mathematical intuition, though he did not say why. His claim is discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html (also PDF).
Chapter 8 attempted to explain (in very sketchy outline) how a young learner could begin to learn about numbers by learning about one-one correspondences -- a dependence pointed out by David Hume, and exploited in the great work of Frege, Russell and others attempting to reduce arithmetic to logic. That work completely ignored the mental mechanisms required for uses of number concepts, a gap the chapter aimed to fill, at least in outline.
The chapter explained various conjectured procedural and representational mechanisms available for using numbers in counting and reasoning activities, including mechanisms for concurrent execution of procedures (e.g. pointing at objects while reciting number names), and mechanisms for observing and controlling concurrently running sub-systems. It attempted to show how a child (for example) could begin to use a memorised sequence of arbitrary symbols, to perform a variety of tasks involving setting up one-to-one correspondences over time, or correspondences in spatial configurations, and then go on later to discover "theorems" about numbers and new procedures for using them.
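The core mechanism can be hinted at in a tiny Python sketch. This is not the book's proposal, only an illustration of the idea that counting sets up a one-one correspondence between a memorised sequence of arbitrary symbols and a collection of objects, and that Hume's criterion of same cardinality falls out of it (all names here are invented for the example):

```python
# Illustrative sketch: counting as building a one-one correspondence
# between a memorised sequence of arbitrary symbols and some objects.

NUMBER_NAMES = ["one", "two", "three", "four", "five", "six", "seven"]

def count(objects):
    """Pair each object with the next number name, mimicking a child
    pointing at objects while reciting the memorised sequence.
    The last name recited gives the cardinality."""
    pairing = []              # the one-one correspondence built up over time
    names = iter(NUMBER_NAMES)
    last = None
    for obj in objects:
        last = next(names)            # recite the next symbol...
        pairing.append((obj, last))   # ...while "pointing" at an object
    return last, pairing

def same_number(group_a, group_b):
    """Two collections have the same cardinal number iff their members
    can be put in one-one correspondence (Hume's point)."""
    return count(group_a)[0] == count(group_b)[0]

last, pairing = count(["cup", "spoon", "plate"])
print(last)      # "three"
print(pairing)   # [("cup", "one"), ("spoon", "two"), ("plate", "three")]
print(same_number(["a", "b"], ["x", "y"]))  # True
```

The sketch deliberately omits what the chapter emphasised as the hard part: the mechanisms for running the reciting and pointing procedures concurrently, and for monitoring and controlling them, which is where the real explanatory work lies.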
Unlike many psychological investigations of uses of number concepts this chapter emphasised the central importance of one-one correlations, and the variety of mechanisms for detecting and using them, mostly ignored by psychologists and neuroscientists as far as I know. One researcher who seems to have independently developed a closely related approach to explaining number competences is a theoretical linguist, Heike Wiese (2007). There is of course a great deal more to mathematical competences than understanding and use of cardinal numbers, including topological and geometric competences that still (in 2015) have not been adequately modelled or explained.
Chapter 9 (partly inspired by the work of Max Clowes on human visual perception, whose ideas about "domains" I generalised by allowing more concurrently active interpretative domains, illustrated by the operation of the POPEYE program) explained in outline how visual perception could make use of a combination of bottom-up, top-down, and middle-out processing, straddling a variety of structural domains (not necessarily based on the kind of 3-D to 2-D projection proposed by Marr and others as essential to biological vision). Unlike some of the promoters and defenders of AI I thought that many of the problems were very difficult, especially problems in machine vision. Section 9.12 ended with the remark
Such tasks still (in 2019) remain well beyond the competences of robots, and I believe that the current approaches to intelligent robotics based on vast amounts of statistical learning will merely give the impression of closing the gap between natural and artificial vision systems, without actually doing so. In particular, it will not allow robots to replicate the ancient forms of spatial reasoning that led to the mathematical discoveries reported by Archimedes, Euclid, Zeno and many others, or even the spatial intelligence of crows and squirrels. (I suspect that will require new forms of computation, a topic discussed in some online papers, e.g. this incomplete, highly speculative paper: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html)
Chapter 10 began to speculate about how the sorts of mechanisms discussed in the book and others being developed in computer science and AI might account for some aspects of human consciousness, including explaining how some aspects of an individual's mind may be inaccessible to that individual while others are not. The chapter built on some of the architectural ideas in Chapter 6 and later chapters. Later work developed the ideas in the context of the Cogaff project mentioned in that chapter. Further features of consciousness, including evidence for the existence of qualia were later explained in terms of layers of virtual machinery combined with meta-cognitive mechanisms, e.g. in Sloman and Chrisley (2003).
The Epilogue, a late addition anticipating by several decades some of the concerns about the so-called "AI Singularity", explained (semi-seriously) in outline how super-intelligent machines might improve our planet by limiting the freedom of humans, whose morals, knowledge and intelligence, on the whole, leave much to be desired. People who worry about "the singularity" (e.g. in 2019) don't understand how far current Artificial Intelligence lags behind human (and squirrel and toddler) intelligence in crucial ways. All the worries people have should be directed at the humans who design and deploy such machines, just as they should be concerned about all the other potentially dangerous machines (e.g. aeroplanes able to drop bombs) designed by humans.
It's above all the humans who design, deploy and use the machines that need to be controlled (or at least well educated), not the machines (at present).
The Postscript, another late addition, attempted to explain how a computer programming language, like human languages, could be a rich and powerful tool yet allow the specification of contradictions arising from useless non-terminating procedures, analogous to the expression of Russell's Paradox (and others) in human languages, a point developed in more detail in Sloman (1971).
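The analogy can be made vivid with a short, purely illustrative sketch (not from the Postscript): if a "set" is represented by its membership predicate, Russell's "set of all sets that are not members of themselves" becomes an ordinary procedure, and asking whether it is a member of itself yields not a truth value but a computation that cannot terminate.

```python
# Illustrative sketch: representing a "set" by its membership predicate
# (a function from candidates to truth values) lets Russell's paradox be
# written as a short procedure. Applying it to itself produces a
# non-terminating computation (cut off here by Python's recursion limit)
# instead of a truth value -- the programming-language analogue of the
# paradox.

def russell(s):
    """Membership test for the 'Russell set': s is in it iff s is not in s."""
    return not s(s)

try:
    russell(russell)         # "Is the Russell set a member of itself?"
    answer = "terminated"    # never reached
except RecursionError:
    answer = "no truth value: the procedure does not terminate"

print(answer)
```

A Turing machine running the analogous procedure would simply loop forever; Python's recursion limit just makes the non-termination observable in finite time.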
I have tried to indicate how most of the ideas linking AI and Philosophy (and directly or indirectly also psychology, neuroscience, and biology) discussed in the book are still under development, with many problems still unsolved. The Turing-inspired Meta-Morphogenesis project, summarised in the next section, vastly expanded the scope of the research.
The most ambitious version of this crystallized as a goal late in 2011, when I was asked to contribute to a book being put together for the Turing Centenary, and the editors somehow decided that I should comment on Turing's 1952 paper "The chemical basis of morphogenesis", now one of his most influential papers among scientists. It is a fine example of a great scientist proposing an explanation of a class of possibilities (though I don't claim to have taken in all the mathematical details). But it led me to ask "What might Turing have done had he not died two years after publication of that paper?"
That question led me to propose the Meta-Morphogenesis Project as an answer.
The project aims to identify the many changes in information processing produced by biological evolution and its products since the very simplest life forms (or pre-life forms), and to propose explanations of how they are possible, though at present that is beyond our reach for many of the important examples.
New developments in biological information processing do not occur in accordance
with some predictive law specifying biological necessities or regularities. But
the facts show that an enormous variety of possibilities has been realised, and
if we understand how those forms of information processing came into being and
what made them possible, that will help us see that what actually exists is part
of a much larger realm of possibilities that science needs to explain. In
some ways this is similar to, though in the long run far more complex than, the
explanation of the possibility of a wide variety of chemical elements with
systematically varying physical and chemical properties,
provided by the work of Mendeleev, Moseley and others.
One of the key ideas that has come out of that investigation is the idea of a construction kit: evolution and its products make use of many sorts of construction kit. And at any time the products of the construction kits that have so far developed can make possible the development of new kinds of construction kit. This is very obvious in the history of technology. But the depth, variety and complexity of the biological phenomena are far greater, and much less well understood -- especially the implication that natural selection is a process that makes and uses mathematical discoveries, albeit blindly, as discussed in this draft paper:
Draft documents on the Meta-Morphogenesis project (a potentially huge project,
still largely unnoticed), the role of construction kits, the implicit
mathematical discoveries made by biological evolution and by its products, and
links to further work in that area can be found in these documents and in online
papers that they refer to:
Exploring design space and niche space, Invited keynote for 5th Scandinavian Conference on AI, Trondheim, May 1995
The "Semantics" of Evolution: Trajectories and Trade-offs in Design Space and Niche Space. Invited talk at 6th Iberoamerican Conference on AI (IBERAMIA), 1998
Architecture-Based Conceptions of Mind (Final version) (Invited talk) in Proceedings 11th International Congress of Logic, Methodology and Philosophy of Science,
The SimAgent toolkit
Discussion of a method for trisecting an arbitrary angle in a plane surface, apparently contradicting the proofs of impossibility. This has implications for philosophy/foundations of mathematics.
Compare Chapters 7 and 8 of CRP
Since 1978, I have given a number of talks and conference presentations on the nature of mathematics, and the roles of mathematical discoveries in human and animal intelligence. Several of the talks in this directory discuss aspects of that topic: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
An incomplete, still growing, discussion of the role of "Toddler theorems" in
child development is here:
TO BE REVISED AND CONTINUED
Created as a template: 29 Jul 2015
Updated: 10 Aug 2015; 1 Sep 2015; 23 Dec 2015