
What Singularity?
Notes on why I (mostly) ignore the much discussed AI Singularity.

(EARLY DRAFT PLACE-HOLDER: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham
(Philosopher in a Computer Science department)

Installed: 2 Aug 2016
Last updated: XXX
This paper is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ai-singularity.html
A PDF version may be added later.

A partial index of discussion notes is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html

Why I don't waste my time on the AI singularity
(These notes may be expanded later)

I ignore the so-called "AI Singularity": much of what has been written about it is ill-informed about the science, usually over-optimistic, occasionally over-pessimistic, and to me mostly uninteresting. (Good science fiction writers are often deeper and more interesting, e.g. [Forster 1909].)

For anyone who really wants a short sharp summary and critique, a good start could be the final chapter of [Boden 2016], or even the whole book.

Some thoughts relevant to the possibility of super-intelligent machines were published in the (semi-serious) Epilogue to my 1978 book (The Computer Revolution in Philosophy): http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#epilogue.
It included a prediction: "There will, of course, be a Society for the Liberation of Robots,...".
I was recently informed (by Luke Muehlhauser) that this prediction has already come true, e.g. http://petrl.org/. An even older example is http://www.aspcr.com/.

Discussions of the supposedly approaching singularity often make unjustified assumptions, such as the assumption that neurons are the units of processing in brains, so that when the number of transistors in a computer approaches the number of neurons in a human brain, we will be close to achieving human-like artificial intelligence.

If chemistry-based computation is an important part of information processing in brains (as suggested by Turing in his 1950 Mind paper), then the number of transistors (or successors to transistors) required for accurate brain modelling or simulation could be several orders of magnitude larger than the number of neurons, and many more decades, or perhaps centuries or millennia, may be required to replicate brain functions in human-made systems. John von Neumann recognised such limits: his brief speculations in [von Neumann 1958] (written in 1956 for the Silliman Memorial Lectures and published in 1958, while he was dying of cancer, and therefore possibly incomplete in important ways) include discussion of possible limits to the feasibility of computational replicas of brains.
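The scale argument can be made concrete with a rough, hypothetical calculation. All numbers below are illustrative assumptions of mine, not measurements from the literature: a commonly cited rough figure for human neurons, an order-of-magnitude guess at transistors per large chip, and purely hypothetical values for how many transistors emulating one neuron's chemistry might take.

```python
# Back-of-envelope sketch (illustrative numbers only, not measurements).
# If each neuron's chemistry-based processing needed many transistors to
# emulate, the hardware gap grows by that factor.

human_neurons = 8.6e10         # commonly cited rough estimate of neuron count
transistors_per_chip = 5e10    # order-of-magnitude guess for a large modern chip

# Hypothetical assumption: emulating one neuron's intracellular chemistry
# requires N transistors, for several guesses at N.
for transistors_per_neuron in (1, 1_000, 1_000_000):
    required = human_neurons * transistors_per_neuron
    chips = required / transistors_per_chip
    print(f"{transistors_per_neuron:>9} transistors/neuron -> "
          f"{required:.1e} transistors (~{chips:.1e} chips)")
```

Under these made-up assumptions, a factor of a million per neuron turns a roughly chip-scale problem into one needing millions of chips, which is the sense in which transistor-count comparisons with neuron counts settle nothing.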

The debate about numbers is summarised by Tuck Newport in his little book Brains and Computers: Amino Acids versus Transistors
https://www.amazon.com/Brains-Computers-Amino-versus-Transistors-ebook/dp/B00OQFN6LA/

These considerations defuse hardware-based arguments about how soon the hoped-for, or feared, singularity can be expected.

Additional science/philosophy-based arguments against an imminent singularity, closer to my own research interests, appeal to deep aspects of human and animal cognition that are mostly ignored by AI/robotics researchers, psychologists, cognitive scientists, and neuroscientists. These include human abilities to make discoveries about, and to reason about, possibilities and necessities, as in the ancient discoveries in geometry and topology.

These mathematical discoveries, which I think are very close to discoveries made by pre-verbal human toddlers, have features that are completely ignored by AI researchers, the majority of whom now seem to focus mainly on machines that learn about probabilities rather than possibilities, impossibilities and necessities.
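The contrast between learning probabilities and grasping necessities can be illustrated with a toy example of my own (not taken from the papers cited below): the classic mutilated-chessboard problem. Statistical sampling of attempted tilings could only ever suggest that a tiling is unlikely, whereas a parity argument proves it impossible, since every domino must cover one black and one white square.

```python
# Toy illustration (my example): reasoning about impossibility via parity,
# rather than estimating probabilities from samples.

def colour_counts(squares):
    """Count black and white squares under standard chessboard colouring."""
    black = sum(1 for (r, c) in squares if (r + c) % 2 == 0)
    return black, len(squares) - black

# 8x8 board with two opposite corners removed; both corners are the
# same colour, so the removal unbalances the colour counts.
board = {(r, c) for r in range(8) for c in range(8)} - {(0, 0), (7, 7)}
black, white = colour_counts(board)
print(black, white)    # -> 30 32
# Every domino covers one black and one white square, so unequal counts
# entail that NO perfect domino tiling exists -- a necessity, not a
# statistical regularity.
print(black != white)  # -> True
```

No search over tilings is performed: the impossibility follows from the structure of the problem, which is the kind of reasoning about necessity that probability-learning systems do not capture.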

Perhaps the most spectacular examples come from the (mostly unknown) pre-history of ancient mathematics (especially geometry and topology), features of which are echoed in the achievements of non-human intelligent animals and pre-verbal human toddlers. Examples are given in several papers on this web site, e.g.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

Another Singularity: The Singularity of Cognitive Catchup
Notes to be added.


REFERENCES AND LINKS FOR FUTURE

[Boden 2016]
Margaret A Boden (2016), AI: Its nature and future, OUP, Oxford. https://www.amazon.co.uk/AI-nature-future-Margaret-Boden/dp/0198777981
[Forster 1909]
E.M. Forster, 1909, The Machine Stops, http://archive.ncsa.illinois.edu/prajlich/forster.html (Full Text) https://en.wikipedia.org/wiki/The_Machine_Stops
The Machine Stops was first published in The Oxford and Cambridge Review in 1909.

[von Neumann 1958]
John von Neumann, The Computer and the Brain (Silliman Memorial Lectures), Yale University Press, 1958. (3rd edition, 2012, with Foreword by Ray Kurzweil.)

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham