School of Computer Science, The University of Birmingham

What Singularity?
Notes on why I (mostly) ignore the much discussed AI Singularity.

(EARLY DRAFT PLACE-HOLDER: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham
(Philosopher in a Computer Science department)

Installed: 2 Aug 2016
Last updated: 12 Jan 2019 (Added reference to video)
This paper is available online; a PDF version may be added later.

A partial index of discussion notes is in

Why I don't waste my time on the AI singularity
(These notes may be expanded later)

I ignore the so-called "AI-Singularity" about which much has been written that is ill-informed about the science, usually over-optimistic, occasionally over-pessimistic, and to me mostly uninteresting. (Good science fiction writers are often deeper and more interesting, e.g. [Forster 1909].)

For anyone who really wants a short sharp summary and critique, a good start could be the final chapter of [Boden 2016], or even the whole book.

Some thoughts relevant to the possibility of super-intelligent machines were published in the (semi-serious) Epilogue to my 1978 book (The Computer Revolution in Philosophy):
The Epilogue included this prediction:
     "There will, of course, be a Society for the Liberation of Robots,...".
I have been informed (by Luke Muehlhauser) that this prediction has already come true. One such society is even older:

Discussions of the supposedly approaching singularity often make unjustified assumptions, such as the assumption that neurons are the units of processing in brains, and that therefore, as the number of transistors in a computer approaches the number of neurons in a human brain, we must be close to achieving human-like artificial intelligence.

If chemistry-based computation is the main basis of information processing in brains (as suggested by Turing in his 1950 Mind paper), then the number of transistors (or successors to transistors) required for accurate brain modelling or simulation could be several orders of magnitude larger than the number of neurons, i.e. far larger than many have supposed. In that case very much longer times, perhaps centuries or millennia rather than decades, may be required to replicate brain functions in human-made systems. John von Neumann recognised that possibility in his brief speculations in [von Neumann 1958] (written in 1956 for the Silliman Memorial Lectures and published in 1958), which include discussion of possible limits to the feasibility of computational replicas of brains. The book was written while he was dying of cancer, and may therefore be incomplete in important ways.

The debate about numbers is summarised by Tuck Newport in his short book [Newport 2015], Brains and Computers: Amino Acids versus Transistors, in which he points out some of the consequences for AI if most human intelligent capabilities are implemented at a molecular level inside neurones. If that is correct, it defuses hardware-based arguments about how soon the hoped-for, or feared, singularity can be expected.
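The orders-of-magnitude point above can be made concrete with a rough, purely illustrative calculation. All the figures below are assumptions made for the sake of the arithmetic: the neuron count is a commonly cited rough estimate, and the molecular factor is invented simply to show how the argument scales; nothing here comes from Newport's book or from brain measurements.

```python
import math

# Purely illustrative back-of-envelope arithmetic; all figures are rough
# assumptions, not measurements.
NEURONS = 8.6e10         # commonly cited rough estimate of neurons in a human brain
MOLECULAR_FACTOR = 1e4   # assumed number of relevant molecular-level units per neuron

units_if_neural = NEURONS                        # units to model if neurons are the units
units_if_molecular = NEURONS * MOLECULAR_FACTOR  # units if processing is sub-neural

# Extra hardware doublings needed under the molecular view, assuming one
# transistor per modelled unit and one doubling of capacity every two years
# (a crude Moore's-law style assumption):
extra_doublings = math.log2(units_if_molecular / units_if_neural)
extra_years = 2 * extra_doublings

print(f"extra doublings needed: {extra_doublings:.1f}")
print(f"extra years at that rate: {extra_years:.1f}")
```

Even with a modest assumed factor of 10^4 the gap is a few extra decades of hardware growth; if the relevant units are individual molecules the factor could be many orders of magnitude larger, pushing the timescale toward centuries or more, quite apart from the question of whether transistors can do what the chemistry does at all.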

Additional science/philosophy-based arguments against an imminent singularity, closer to my own research interests, refer to deep aspects of human and animal cognition that are mostly ignored by AI/robotics researchers, psychologists, cognitive scientists, and neuroscientists, including human abilities to make discoveries about, and to reason about, possibilities and necessities, as in ancient discoveries in geometry and topology.

These mathematical discoveries, some of which are very close to discoveries made by pre-verbal human toddlers, have features that are completely ignored by AI researchers, the majority of whom now seem to focus mainly on machines that learn about probabilities, rather than possibilities, impossibilities and necessities.

Perhaps the most spectacular examples come from the (mostly unknown) pre-history of ancient mathematics (especially geometry and topology), features of which are echoed in the achievements of non-human intelligent animals and pre-verbal human toddlers. Examples are given in several papers on this web site, e.g.

Another Singularity: The Singularity of Cognitive Catchup
Notes to be added.


[Boden 2016]
Margaret A Boden (2016), AI: Its nature and future, OUP, Oxford.
[Forster 1909]
E.M. Forster, 1909, The Machine Stops, (Full Text)
The Machine Stops was first published in The Oxford and Cambridge Review in 1909.
Note added 12 Jan 2019
I have just discovered that there is a truncated (10-minute) video version of this story online here
"A 10-minute adaptation of E.M. Forster's 1909 tale, "The Machine Stops.""
Apparently it was a student production for a competition, and won a prize.
One of the comments states: "This was a pretty decent distillation of a much longer story. Well done!"

[Newport 2015]
Tuck Newport (2015),
Brains and Computers: Amino Acids versus Transistors,
Available only as a (very short) e-book:
[Sloman 1978]
Aaron Sloman (1978, revised),
The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind
Harvester Press (and Humanities Press), 1978, Hassocks, Sussex,
Digitised, expanded, with links to reviews, later developments, etc, here:
The 1978 Epilogue contained (semi-serious) comments related to a possible future AI takeover, in effect criticising, in advance, later discussions of this topic:
[von Neumann 1958]
John von Neumann, The Computer and the Brain (Silliman Memorial Lectures), 1958 (Yale University Press, 2012, 3rd Edition, with Foreword by Ray Kurzweil).

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham