(Comments and criticism welcome: Send to a.sloman[at]cs.bham.ac.uk)

Alan Turing's 1938 thoughts on intuition vs ingenuity
in mathematical reasoning

Did he unwittingly re-discover key ideas first presented
in Immanuel Kant's philosophy of mathematics?

Followed by some comments and questions below.

Posted here December 2018 by Aaron Sloman (Work in progress)

Note 1: This originally started with Turing's 1938 summary, followed by a short explanation of how I thought it related to various questions about the nature of mathematics. Gradually I added more explanatory comments as I reflected on the implications of what Turing had written, and within a few weeks the document had grown much larger. This is now part of the Meta-Morphogenesis project, Sloman(2012-...). Later it will be re-organised.

A summary of core features of Kant's philosophy of mathematics (as I understand it) is available, for comparison with Turing's claims about mathematical intuition: Sloman(2018c).

Turing submitted his PhD thesis at Princeton University in 1938. In 2014 it was transcribed to LaTeX/PDF by Armando B. Matos (Artificial Intelligence and Computer Science Laboratory, Universidade do Porto, Portugal), available here: http://www.dcc.fc.up.pt/~acm/turing-phd.pdf
It was originally published as A.M. Turing, Systems of Logic Based on Ordinals, in Proc. London Mathematical Society, Series 2, Vol. 45, pp. 161-228, 1939.
A part of the thesis, including the remarks on intuition and ingenuity, was also included in
     Alan Turing: His Work and Impact
     Editors: S. B. Cooper and J. van Leeuwen
     eBook ISBN: 9780123870124, Hardcover ISBN: 9780123869807
     Imprint: Elsevier Science, Published 3rd May 2013

(I am grateful to Francesco Beccuti for drawing my attention to the fact that Turing had distinguished intuition and ingenuity in 1938 http://www.fbeccuti.it/, and to Timothy Chow http://timothychow.net/ for criticisms of an earlier draft. Neither can be assumed to agree with any of my claims.)


Turing on intuition vs ingenuity
     Extract from Section 11 of Systems of Logic Based on Ordinals By A. M. Turing, 1938
     Brief notes added by A.S.

Comments on Turing's distinction between intuition and ingenuity
Note added 7 Dec 2018: Piccinini on Turing's distinction.
Human mathematics is not something fixed
Debates about what should be included as Mathematics
What is the effect of using modern logic to prove geometrical discoveries?
     Note: Pre-Shannon information
Psychology and Neuroscience research
Modal features of mathematical discoveries
Prime numbers -- adored by mathematicians and cryptographers
Logicisation changes the subject
Euclidean spaces also have topological properties
What brain mechanisms are required?
Intuitions with infinite power/scope
Properties of polyhedral 3D shapes
Turing seems not to have believed the "strong" Church-Turing thesis
Other limits of Turing machines
"... cannot seriously be doubted"
Mathematical results do not rest on empirical observations
Euclidean geometry minus the parallel axiom
An example: Pardoe's proof of the triangle sum theorem
Contrast with logicist AI
Confusions about analogical representations
Do we need to investigate chemistry-based forms of computation?
Connections with process perception
Recurring Themes
Not possible-worlds semantics
An alternative view regarding Turing's later thought (Hodges)
Related documents
Examples discussed online

Turing on intuition vs ingenuity
Extract from Section 11 of
Systems of Logic Based on Ordinals
By A. M. Turing, 1938

11. The purpose of ordinal logics. (Page 106)
Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two faculties[*], which we may call intuition and ingenuity. The activity of the intuition consists in making spontaneous judgments which are not the result of conscious trains of reasoning. These judgments are often but by no means invariably correct (leaving aside the question what is meant by "correct"). Often it is possible to find some other way of verifying the correctness of an intuitive judgement. We may, for instance, judge that all positive integers are uniquely factorizable into primes; a detailed mathematical argument leads to the same result. This argument will also involve intuitive judgments, but they will be less open to criticism than the original judgement about factorization. I shall not attempt to explain this idea of "intuition" any more explicitly.
[*] (Turing) We are leaving out of account that most important faculty which distinguishes topics of interest from others; in fact, we are regarding the function of the mathematician as simply to determine the truth or falsity of propositions.

The exercise of ingenuity in mathematics consists in aiding the intuition through suitable arrangements of propositions, and perhaps geometrical figures or drawings. It is intended that when these are really well arranged the validity of the intuitive steps which are required cannot seriously be doubted.

The parts played by these two faculties differ of course from occasion to occasion, and from mathematician to mathematician. This arbitrariness can be removed by the introduction of a formal logic. The necessity for using the intuition is then greatly reduced by setting down formal rules for carrying out inferences which are always intuitively valid. When working with a formal logic, the idea of ingenuity takes a more definite shape. In general a formal logic, will be framed so as to admit a considerable variety of possible steps in any stage in a proof. Ingenuity will then determine which steps are the more profitable for the purpose of proving a particular proposition. In pre-Goedel times it was thought by some that it would probably be possible to carry this programme to such a point that all the intuitive judgments of mathematics could be replaced by a finite number of these rules. The necessity for intuition would then be entirely eliminated.

In our discussions, however, we have gone to the opposite extreme and eliminated not intuition but ingenuity[+], and this in spite of the fact that our aim has been in much the same direction. We have been trying to see how far it is possible to eliminate intuition, and leave only ingenuity. We do not mind how much ingenuity is required, and therefore assume it to be available in unlimited supply. In our metamathematical discussions[**] we actually express this assumption rather differently.

[**] (Sloman) If any reader knows which are the relevant "metamathematical discussions" mentioned by Turing as expressing his assumption "rather differently", I would be grateful for the information.
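Turing's factorization example can be given a mechanical gloss. The Python sketch below (my illustration, not Turing's) computes prime factorizations by trial division and checks, for small integers, that each number is rebuilt exactly from its factors. Note that it verifies only the existence of a factorization for the cases tested; the uniqueness guaranteed by the Fundamental Theorem of Arithmetic still requires a "detailed mathematical argument" of the kind Turing mentions.

```python
from math import prod

def prime_factors(n):
    """Factorise n > 1 by trial division; returns its prime factors in non-decreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

# Mechanical check of Turing's example for small integers: every n
# is rebuilt exactly from its (deterministically computed) prime factors.
for n in range(2, 10000):
    fs = prime_factors(n)
    assert prod(fs) == n
    assert all(prime_factors(p) == [p] for p in fs)   # each factor is itself prime
```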

Brief notes added by A.S.
(More detailed comments are below.)
By claiming to have "eliminated" ingenuity here, Turing seems to mean that products of human ingenuity can also be produced by a suitably designed or programmed computing machine. For example, an automated geometry theorem prover, such as the one reported in Gelernter et al. (1964), and its many successors, will usually start from a logical formalisation of some or all of Euclid's axioms and postulates (e.g. Hilbert's axiomatisation) and formally derive logically formulated versions of theorems in Euclid's Elements. Another early example was Goldstein(1973).

For the ancient mathematicians, however, the axioms and postulates were not arbitrarily adopted starting points for chains of reasoning: they were all significant discoveries, based on mechanisms that can be described, following Turing (and, much earlier, Kant), as using "mathematical intuition". Turing seems to be contrasting the use of mathematical intuition, whose nature is unspecified, with the application of symbol-manipulating algorithms to logical axioms and inference rules expressed in a formal logical language, such as predicate calculus.

The original forms of spatial intuition about combinations of lines, circles, circular arcs, etc. that first produced mathematical discoveries can then be replaced by logical and algebraic manipulations of symbolic expressions, together with reasoning about Cartesian-coordinate-based representations of geometry, especially after David Hilbert produced his axiomatisation of Euclidean geometry in 1899.

Some online discussions of standard and non-standard examples of spatial (geometrical/topological) reasoning and discovery are referenced below.

Turing seems to have thought that the original discoveries reported in Euclid's Elements were products of mathematical intuition. Those discoveries (and some newer examples below) seem to me, and seemed to Immanuel Kant (1781), to be strong candidates for being described as results of mathematical intuition (especially spatial intuition), although Turing gives no examples in the quoted text, nor, as far as I can tell, in other things he wrote around that time.

However, Euclid's axioms were not arbitrary postulates, and (by definition of "axiom") were not derived from other axioms by logical reasoning: they were all ancient mathematical discoveries. Other sets of axioms discovered more recently, e.g. Tarski's axioms, have been shown to suffice to generate all, or important subsets of, Euclidean geometry; but there are also extensions to Euclidean geometry, some of which were known to ancient mathematicians, that are not included in Euclid's Elements, nor in Hilbert's or Tarski's axiomatisations.

Brain mechanisms required for those ancient discoveries are still unknown, as discussed briefly below. Moreover, there are geometrical axioms or constructions that are not derivable from Euclidean geometry, e.g. the neusis construction mentioned below, along with Mary Pardoe's construction, which supports a proof of the triangle sum theorem without reference to parallel lines. I'll argue (inconclusively) below that the brain mechanisms required for the ancient mathematical discoveries and more recent extensions are related to Immanuel Kant's claims about mathematical knowledge as being non-empirical, non-analytic and non-contingent, alternatively expressed as a priori, synthetic and necessary, as explained in the companion document, Sloman(2018c), which elaborates on the summary in Sloman(1965), derived from my DPhil thesis Sloman(1962).

Note: At first I thought that Turing's claim to have "eliminated not intuition but ingenuity" was a typographical error and he had intended to say the opposite. But his claim to have "gone to the opposite extreme" from replacing "all the intuitive judgments of mathematics ... by a finite number of these rules" implies that he was talking about eliminating not the intuitive judgements, but the use of human-like ingenuity. Human ingenuity is replaced by the "mechanised" ingenuity of a class of non-human reasoning machines about which nothing was known before the twentieth century, although Babbage had anticipated some of the key ideas a century earlier. As explained below, Andrew Hodges, Turing's main biographer, implies that my interpretation of Turing's thinking around 1938 is along the right lines, but suggests that he changed his mind later and abandoned his ideas about mathematical intuition.

Comments on Turing's distinction between intuition and ingenuity

The above extract suggests that Turing thought of Turing machines, and by implication digital computers invented later, as capable of applying mathematical techniques (using ingenuity), but lacking in mathematical intuition (insight?).

He seemed to be suggesting that computers lack a capability that, in humans, guides ingenuity by identifying some powerful "starting propositions" (axioms?), and additional propositions (theorems?) for which ingenuity can be used to find formal proofs starting from those axioms. Presumably all such proofs would contain only steps that conform to logical patterns of valid inference. The steps could be found and checked by computers using "automated ingenuity".

If I have interpreted Turing correctly he would describe current (logic-based) automated geometry theorem provers as using ingenuity but not intuition. Examples of such early automated geometrical reasoners by Gelernter and Goldstein were referenced in Note [+] above. However, automated theorem provers have developed enormously since then, and there are now far more advanced geometry theorem provers, some reported in Ida and Fleuriot(2012).

Computer-based AI reasoners of that general sort are able to derive theorems in Euclidean geometry by constructing (and checking) proofs based on modern, logical, formulations of Euclid's axioms and postulates (e.g. Hilbert's or Tarski's axiomatisation), but they cannot replicate the original discovery processes based on mathematical intuition (using still unknown cognitive mechanisms in brains), that somehow enabled ancient mathematicians to discover Euclid's axioms, and centuries later Hilbert's and Tarski's axioms (among others).

[Note added 16 Jun 2019:
   I have today corrected a dreadful formatting error in this paragraph that made
   all occurrences of XXX and YYY invisible in the online version of this document]

If the Euclidean theorems are all stated in this conditional form
     IF XXX then YYY
where XXX is a conjunction of all the axioms and postulates in Euclid's Elements, then the status of each such theorem, and of its consequent YYY, depends on what is in the axioms. If the axioms are all expressed in standard logical notation (e.g. predicate calculus), along with some abbreviative definitions, and the theorems are provable using only logic, then, in an extended interpretation of Kant's ideas, the consequents YYY are all analytic. Synthetic conclusions, by contrast, require either additional axioms, or some form of reasoning that is not purely logical, but makes use of insights into properties of space, for example.

For a defence of the use of diagrams in mathematics (but without reference to information processing mechanisms required) see Manders (1998), (2008), discussed in Hamami and Mumma(2013).

Will future automated theorem provers, perhaps using new (non-digital?) computing technology, be able to replicate, and perhaps extend, the originally used mechanisms of "intuition-based" mathematical reasoning?

Adequate answers to these questions will need to include specifications of
   -- the information processing functions used in such mathematical discovery processes,
   -- the physical mechanisms that implement those functions and
   -- explanations of how the mechanisms support the functions.

The evolutionary precursors of those cognitive mechanisms, and the physical/chemical brain mechanisms that make them possible, are unknown, though I suggest that some of them are closely related to mechanisms involved in perception of spatial structures and processes and in intelligent control of spatial actions in humans and other species. In humans, the mechanisms begin developing well before language is used for communication. But they must use powerful internal languages supporting structural variability and compositional semantics, as noted in Sloman(1978b), Mumford(2016).

In particular, those human visualisation abilities may have grown out of biologically older spatial reasoning mechanisms that are shared with several other intelligent species, especially mechanisms required to support visual control of actions of many sorts, including approaching, avoiding, chasing, grasping, twisting, pulling, pushing, bending, biting, peeling, breaking, and throwing objects. The ancient geometers may also have (unconsciously) used precursors of modern logical formalisms, including precursors of the Universal and Existential quantifiers.

So, using Turing's terminology, the logical formulations of both non-logical axioms (e.g. axioms about points, lines, circles, etc.) and previously discovered logically valid rules of inference usable by logical mechanisms in brains were all originally based on discoveries made using mathematical intuition. (He was not the first to have such ideas, but the earlier mathematical logicians did not attempt to design and build machines able to replicate the operations of logical mechanisms in brains, as far as I know.)

It is sometimes suggested, at least implicitly, that true mathematical reasoning and discovery did not begin until modern logical notations and mechanisms became available, in the 19th and 20th centuries. This ignores the fact that the earlier discoveries and proofs were regarded as important contributions to mathematics for centuries before the discovery of modern logic.

Many of those ancient mathematical discoveries are still in regular use by scientists, engineers, architects and mathematicians around the planet. (It is arguable that Euclid's Elements is the most important book ever written, at least on this planet.)

So Turing's claim that such logical mechanisms have "eliminated" ingenuity, mentioned [+]above, is merely a claim that the notation, axioms and rules of modern symbolic logic suffice to replace human ingenuity, after human mathematical intuition has been used to identify concepts, problems, algorithms (methods of reasoning) and some non-formal proofs.

As a result of that transition, the axioms and inference rules of geometry together identify all the truths (of the domain covered by the axioms). Finding particular examples of those truths can be done simply by (blindly) following standard logical inference rules, starting from the axioms: a task that digital computers can be programmed to do. (I'll ignore here the meta-mathematical discussions and debates about alternative collections of axioms and rules, some more powerful than others. These debates have also influenced the development of computer science and AI.)

Suitable working machines did not exist in 1938, although the design specified by Babbage in the previous century, if fully implemented, would have met the requirements.

Note added 7 Dec 2018 Piccinini on Turing's distinction
I have discovered that Gualtiero Piccinini (2003) provides a useful detailed discussion of Turing's distinction between intuition and ingenuity but does not relate it to Kant's views on mathematics or the questions raised here.

Human mathematics is not something fixed
Mathematics is not a frozen discipline: the intuitions of human mathematicians constantly generate new mathematical domains, problems, and forms of reasoning. Turing seems to have believed, in 1938, that these highly creative intuition-based discovery processes cannot be replicated using computers as we know them -- machines that operate on sequences of discrete structures. Examples of such machines include modern digital computers, Turing machines, implementations of Alonzo Church's Lambda Calculus, Emil Post's Production systems, and other mechanisms that have all been proved to be equivalent in power.

What has not been proved is that the (still unknown) mechanisms in human brains -- perhaps sub-neural chemical mechanisms -- that have made all human mathematics possible so far, are also in that equivalence class. If Turing's comments on mathematical intuition are correct, they are not. I.e. in 1938 he apparently did not believe what later came to be called "The strong Church-Turing thesis/conjecture" mentioned below.

The rest of this document elaborates on some of the implications of these thoughts. However, it should be remembered that in 1938 (when I was 2 years old and, like all normal children, had already learnt much about spatial structures and processes without being able to describe any of it!) Turing was still very young, not yet 26 years old. I have no idea whether he later retracted any of his 1938 ideas. Whether he changed his mind or not, the 1938 comments, though very brief, seem to me to be connected with some very deep points about the nature of mathematical discovery that have not been widely appreciated. I'll provide illustrative examples below. I suspect he had unknowingly replicated some of Immanuel Kant's insights about mathematical discovery processes Kant(1781), summarised in Sloman(2018c).

Debates about what should be included as Mathematics
I don't know whether Turing was aware of ancient disputes about whether additional axioms should be included in geometry as it was studied and taught, such as axioms allowing use of the "neusis" construction, which makes it easy to trisect an arbitrary angle -- something impossible for most angles in "pure" Euclidean geometry. The discovery of the neusis construction, whose possibility is not derivable in Euclidean geometry, must have involved the sort of capability Turing referred to as "mathematical intuition". The construction is demonstrated and discussed here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html (also pdf).
and http://en.wikipedia.org/wiki/Neusis_construction.
(Mary Pardoe's proof of the triangle sum theorem, discovered when she was teaching geometry in the early 1970s, referenced below, seems to be related to the neusis construction, though much less powerful.)

Although modern formal/symbolic logic was not available to Euclid, his attempt to assemble geometric "starting statements", from which all the rest of geometry could be derived using only general-purpose forms of reasoning, can be seen as dimly anticipating the modern use of logical formalisms, axioms, and rules of inference that mathematicians eventually developed about two millennia later, although he used ancient Greek words and syntactic constructs.

Readers who have never studied Euclidean geometry may find useful a very brief, simplified presentation on Euclid and his role in history, introduced by Liliana de Castro.

What is the effect of using modern logic to prove geometrical discoveries?
Once a body of knowledge has been encapsulated using axioms expressed in a logical notation, the availability of modern logical know-how makes it possible to derive consequences from the axioms -- removing the need for further use of mathematical intuition concerning spatial structures and processes. A logical theorem prover, whether human or mechanised, needs only to understand relationships between discrete symbols in axioms, rules and derivations. It can search for, and find, proofs and theorems, without knowing anything about what the symbols it manipulates refer to.

In particular, even if the axioms use symbols like "point", "line", "surface", "length", "area", "volume", and "continuous", logical theorem provers using those axioms do not need to know anything about spaces containing points, lines, surfaces, volumes, and continuous structures and processes. They simply compare formulae in the axioms or partial proofs with previously provided axioms and definitions or partial proofs, in deciding what to do next.
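The purely formal manipulation described above can be illustrated with a toy forward-chaining deriver. The miniature below is my invention, not a real theorem prover, and its "axioms" and "rules" are made up for illustration: the program treats formulae such as "on(a,L1)" as uninterpreted strings, deriving new formulae solely by matching them against previously derived ones, knowing nothing about points or lines.

```python
# A toy forward-chaining deriver. It knows nothing about what "on" or
# "collinear" mean: it derives consequences purely by checking whether the
# premise strings of a rule have already been derived.
# The axioms and rules are illustrative inventions, not Hilbert's or Tarski's.

rules = [
    # (premises, conclusion): if all premises are derived, add the conclusion
    (("on(a,L1)", "on(b,L1)"), "collinear(a,b)"),
    (("collinear(a,b)",), "collinear(b,a)"),
]
axioms = {"on(a,L1)", "on(b,L1)"}

derived = set(axioms)
changed = True
while changed:                      # keep applying rules until nothing new appears
    changed = False
    for premises, conclusion in rules:
        if all(p in derived for p in premises) and conclusion not in derived:
            derived.add(conclusion)
            changed = True

# The machine has "proved" collinear(b,a) by pure symbol matching.
print(sorted(derived))
```

Everything the program does is comparison and copying of character strings; any spatial interpretation of its output is supplied entirely by the human reader.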

In contrast, if a video camera is connected to a computer that contains an axiomatisation of Euclidean geometry and a theorem prover, a great deal of additional work is required to enable the machine to describe structures and processes in video images in terms of lines, points, planes and the other concepts of Euclidean geometry. (As Kenneth Craik 1943 pointed out, explaining how straightness can be represented in a tangled network of neurons is a difficult challenge. As far as I know he did not meet Turing before he was killed in a road accident.)

It is not clear what else would need to be added to the machine to enable it to go on to discover extensions to Euclidean geometry that ancient mathematicians discovered, such as the neusis construction, or origami geometry, or the proof of the triangle sum theorem discovered by Mary Pardoe, explained below.

Even more difficult, and as far as I know not yet achievable using current AI tools, is designing a mobile robot with manipulators and video cameras that is capable of making the discoveries about geometry that were summarised in Euclid's Elements.

A human or machine deriving theorems in geometry using only axioms expressed in a logical formalism, and logical (or logical and arithmetical) reasoning abilities (logical ingenuity?), does not need to know anything about the space we perceive and act in, and its contents. It needs only to know about the formulae used to state conclusions, and the rules of inference that allow new truths to be derived from old ones. But that would not allow a machine to explore spatial locations and their contents.

After suitable axioms and definitions have been provided by a programmer, the machine may give the impression of understanding Euclidean geometry by typing out answers to questions, even though it cannot use that ability in controlling actions or describing things in view. What such a machine types out has no meaning for it apart from its role in a space of possible syntactic manipulations, unless the machine also understands connections between such formulae, entities observable in the environment, and possible actions in the environment and their effects.

I am not here endorsing the claim made by many (e.g. John Searle in his "Chinese room" argument and various online video lectures and discussions) that computers cannot deal with semantics, only syntax. That claim ignores the fact that the ability of computers to use ingenuity to find formal proofs (or refutations) depends on semantic abilities to refer to and reason about formal (e.g. logical, algebraic) structures, and in some cases machine instructions and memory locations. Learning to design, build, test, document, debug, modify and explain the operations of virtual machines running in computers should be a standard part of a philosophical education, as I rashly, over-optimistically, predicted it would soon be in Sloman(1978). Alas, instead, philosophers are still mostly taught to discuss "the singularity" or whether machines can have "qualia", without the required conceptual tools.

The possibility of reducing mathematical reasoning to such formal operations was perhaps first proposed by Leibniz, but the modern claim that this can be done for geometry is due to David Hilbert, using the formal logical apparatus developed in the 19th century. Some of the issues, including the (obscure) disagreement between Hilbert and Frege, are discussed in the literature on the Frege-Hilbert correspondence.

It seems clear that for an animal or machine to be able to reason about spatial structures and processes it needs to be able to associate rich contents relating to varieties of space-occupants, their relationships and interactions, and the formation and successful execution of many different goals involving spatial structures. Understanding syntactic and inferential relationships between symbols used internally is not enough.

A considerable amount has been published on these topics by philosophers and historians of mathematics. But until the 20th century, the biological information processing mechanisms that underpin human capabilities were often ignored (although Kant implicitly referred to them). Automated theorem-proving mechanisms have been proposed, discussed and implemented on computers, by AI researchers, usually based on logical theorem provers. But those researchers usually make no claims regarding whether their mechanisms have any connection with the mechanisms at work in the brains/minds of ancient mathematicians who made the original remarkable discoveries. Had Turing known about those AI achievements, his comments in 1938 suggest that he would have described such theorem provers as modelling human mathematical ingenuity, leaving mathematical intuition/insight unexplained.

Note: Pre-Shannon information
In referring to biological information processing, I use "information" in the standard pre-Shannon sense, not the technical sense introduced by Shannon, which is mostly concerned with measurable features of information-bearers rather than the types and uses of the information contents they can "bear" (express, carry, communicate, store, etc.).

The ordinary, pre-Shannon sense of "information" is much older, illustrated by the use of the word in Jane Austen's novel Pride and Prejudice, published in 1813, discussed in Sloman(2013-2018). In what follows I hope it is clear whether I am referring to information content (e.g. a fact about numbers, lines, circles, planes, brains, sensors, goals, etc.) or information-bearers (e.g. logical formulae or bit patterns stored in and manipulated by brains or computers). Turing's thesis preceded both Shannon's publication and Schrödinger(1944), which in some ways anticipated Shannon's ideas about measuring information bearers. So it is perhaps not surprising that he does not provide much detail regarding intuition's role as a mechanism for processing information, and how the mechanism (or mechanisms) differ from those required for exercising ingenuity. I suspect that what he meant by "mathematical intuition" was more concerned with information contents, whereas he was using "mathematical ingenuity" to refer to operations on information bearers.

Psychology and Neuroscience research
What brain/mind mechanisms made those ancient discoveries (including recognition of necessary truths and necessary consequences) possible? Ancient mathematicians (e.g. Archimedes) were aware that surfaces of a cone or sphere or ellipsoid are not Euclidean. So I suppose some of them would not have been terribly surprised to learn what was only discovered long after Euclid, namely that the parallel postulate was not a truth about physical space, though it can be regarded as implicitly defining the domain of Euclidean geometry in contrast with other geometries, e.g. the geometry of the surface of a sphere.

The original discovery that a geometry can be Euclidean was a major mathematical discovery. Did it use what Turing called intuition? What brain mechanisms could have enabled all these discoveries, including discovery of the possibility of non-Euclidean spaces?

Modern logic-based automated geometry theorem provers use recently invented formalisms and inference rules that were unknown to the ancient geometers. Ancient thinking about geometry made essential use of spatial reasoning (mathematical intuition?) aided by diagrams, either drawn in an external physical medium, e.g. sand, or a suitable surface such as slate, or created in the minds of mathematicians. My own education in geometry, in the 1950s, used pencil and paper, or chalk on blackboards: plus much imagining.

These processes and mechanisms are very different from recent logic-based forms of reasoning that use only strings of discrete symbols, or 2D configurations of discrete symbols, and mechanisms for manipulating them -- previously only human brains, but computers also now.

Here's an example of a form of reasoning that's very different from logical thinking: I suspect you can visualise a planar configuration containing a circle and a closed polygon, which overlap, forming a common portion whose boundary is partly curved and partly made up of one or more straight lines. There are infinitely many such configurations, and however many you imagine you can probably think of a variation that will produce a new imagined configuration. What "imagine" means here, how brains do imagining, and what sorts of machines can replicate such imagining require further discussion, touched on below.

Here's a slightly harder, deeper question: consider all possible configurations in which a circle and a convex polygon with N sides and N corners overlap: what's the maximum possible number of points in common to the circle and polygon, i.e. points where their boundaries cross or touch?

A different question: is it possible for a simple (non-self-crossing) convex polygon to have M sides and N vertices, where M and N are not equal? How do brains represent such an impossibility, as required for understanding the question? (Impossibility is not to be confused with a minuscule probability.) What brain mechanisms enable discovery and proof of impossibility? I know of no attempt to prove that neural nets can discover or represent impossibility (or necessity). The question is normally ignored by people who apparently don't understand (as Kant did, and apparently Turing also) the requirements for explaining human mathematical competences, as opposed to abilities to learn empirical generalisations.

Mechanisms have been proposed (usually with far less precision and testability than computer programs) by psychologists and neuroscientists who study evidence based on brain research and laboratory or classroom observations. But their evidence is (in my experience) usually based on deeply inadequate prior analyses of what needs to be explained: they consider only a very tiny subset of the phenomena that need to be explained. Almost all the researchers in psychology and neuroscience I have encountered fail to mention the need to explain mathematical abilities to reason about and discover impossibilities and necessities. (Piaget was an exception.)

Regarding the circle and N-sided polygon question above: after writing down a collection of suitable axioms you may find that it is easy for a logic-based reasoner to deduce the answer using advanced "computer ingenuity" (including some use of mathematical induction to cover the infinite variety of possible convex polygonal shapes?).

But humans above a certain age, with some prior experience of thinking about straight lines, polygons, and circles, can fairly easily figure out, using a combination of visual imagination and logical reasoning, that the maximum possible number of intersection points is 2N. I suspect no current theory in psychology or neuroscience can explain how a brain can work out such a conclusion about infinitely many different geometrical configurations. Notice that the conclusion is about what is necessarily the case: it is not a conclusion about a very high probability.
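For particular values of N the 2N bound can at least be exhibited computationally: each straight side can meet a circle in at most two points, and a circle squeezed between the inscribed and circumscribed circles of a regular N-gon crosses every side exactly twice. The Python sketch below is an illustration of that extreme case, not a proof and certainly not a model of the intuitive reasoning discussed above; all function names (`circle_segment_intersections`, `max_crossings`) are invented here for illustration.

```python
import math

def circle_segment_intersections(p, q, radius):
    """Count intersections of segment p-q with a circle of the given
    radius centred at the origin, by solving the quadratic
    |p + t(q - p)|^2 = radius^2 for t in [0, 1]."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    a = dx * dx + dy * dy
    b = 2 * (p[0] * dx + p[1] * dy)
    c = p[0] ** 2 + p[1] ** 2 - radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0
    ts = [(-b - math.sqrt(disc)) / (2 * a),
          (-b + math.sqrt(disc)) / (2 * a)]
    return sum(1 for t in ts if 0 <= t <= 1)

def max_crossings(n):
    """Regular n-gon with circumradius 1; a circle whose radius lies
    strictly between the inradius cos(pi/n) and 1 crosses every side
    exactly twice, achieving the maximum of 2n intersection points."""
    verts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
             for k in range(n)]
    radius = (math.cos(math.pi / n) + 1) / 2
    return sum(circle_segment_intersections(verts[k], verts[(k + 1) % n], radius)
               for k in range(n))

for n in (3, 4, 5, 12):
    print(n, max_crossings(n))  # 2*n in every case
```

Note the contrast this makes vivid: the program confirms 2N only for the values of N it is actually given, whereas the intuitive argument covers all N and all convex polygons at once.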

Most of the scientists studying mathematical cognition, and even some philosophers, alas, have neither studied nor re-discovered Kant's important observations on the nature of mathematical knowledge (as non-empirical, non-analytic, and non-contingent), summarised and explained in the companion piece Sloman(2018c).

I think the quotation above from Turing's thesis indicates that he had made a discovery similar to, but not as well articulated as, Kant's.

But most psychologists and neuroscientists consider only a tiny subset of relatively shallow features of naive mathematical discovery and reasoning, such as abilities to answer relatively simple questions about numbers of relatively small sets of objects.

Then grand conclusions (e.g. about innateness of mathematical abilities and concepts) are based on flimsy evidence, supported by no demonstration or description of working AI mechanisms with the required explanatory powers. Not all psychologists are guilty. E.g. Rips et al, 2008 were aware of the problem, but as far as I can tell, provided no satisfactory explanatory mechanism.

Having benefited from learning to program (spending 1972-3 as a visiting researcher at Edinburgh University), I was later able in Chapter 8 of Sloman(1978) to make some (admittedly shallow and incomplete) suggestions about possible implementations of competences based on numerical tasks explicitly using 1-1 correspondences.

Most tasks involving whole numbers require an explicit or implicit understanding of 1-1 correspondences, although after a while the foundations tend to be ignored and only shallow consequences remembered and used, e.g. addition and multiplication tables.

But I still don't know, and I don't know of anyone who does know, what brain mechanisms enable human mathematical learners to grasp that 1-1 correspondence is necessarily transitive and symmetric, no matter how many entities are in the sets being matched.

But for that competence, our current understanding of numbers and many of their uses would be impossible. Piaget's work (in 1952) suggested that children were not able to understand these features of 1-1 correspondence until they were close to 6 years old. What brain mechanisms make that possible? Why do they develop so late?

Do any other animals have them?

Modal features of mathematical discoveries
To summarise one of the main points I have been labouring: many researchers into mathematical cognition seem to be unaware of an important requirement for adequacy of explanatory models pointed out by Kant in 1781, namely that any good theory of mathematical cognition should explain how it is possible for mathematical discoveries to include discovery of impossibilities and necessities (which are not extremes on probability scales). Jean Piaget was one of the few exceptions: Piaget (1981,1983). He had read Kant.

Prime numbers -- adored by mathematicians and cryptographers
Understanding that a number N is prime requires being able to detect that it is impossible to find two smaller numbers whose product is N. How can brains detect or represent impossibilities, or necessary connections of this sort?
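For any particular N a computer can detect this impossibility by exhaustive search, precisely because only finitely many candidate factor pairs exist. A minimal sketch (the function name is invented for illustration):

```python
def is_prime(n):
    """Exhaustively check that no smaller number greater than 1
    divides n -- equivalently, that no two smaller numbers have
    product n. Feasible only because the search space is finite."""
    return n > 1 and all(n % d != 0 for d in range(2, n))

print(is_prime(7))   # no factorisation exists
print(is_prime(9))   # 3 * 3 == 9, so not prime
```

The exhaustive search works case by case; it does not represent, and cannot establish, the general concept of impossibility that the text asks brains (and explanatory theories) to account for.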

In particular, recognition of mathematical necessities and impossibilities cannot be explained in terms of mechanisms for statistics-based learning, which now seem to dominate neural theories and Deep Learning AI systems.

For example, understanding both cardinal and ordinal numbers depends on understanding that one-to-one correspondence (bijection) is necessarily transitive and symmetric, and therefore reflexive. As mentioned above, there is some evidence (e.g. in studies by Piaget and others) that such understanding does not develop in humans until several years after birth, e.g. the fifth or sixth year, but I know of no explanation, or even attempt to explain, how brain mechanisms enable discovery of such non-empirical features of complex relations, or even how brains express the results of such discovery, or why the relevant genetic mechanisms are not expressed sooner.
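The instance-by-instance version of these facts is easy to mechanise. The following Python sketch (with invented helper names, offered only as an illustration) checks symmetry and transitivity of one-to-one correspondence for particular finite sets, by inverting and composing explicitly given bijections. What it cannot do is represent the necessity of those properties for all sets of any size, which is exactly the explanatory gap described above.

```python
def is_bijection(f, domain, codomain):
    """f is a dict; it is a bijection iff it maps the whole domain
    onto the whole codomain with no codomain element hit twice."""
    return (set(f) == set(domain)
            and set(f.values()) == set(codomain)
            and len(set(f.values())) == len(f))

def inverse(f):
    """Symmetry: if f is a 1-1 correspondence A -> B, so is f^-1: B -> A."""
    return {v: k for k, v in f.items()}

def compose(g, f):
    """Transitivity: if f: A -> B and g: B -> C are 1-1, so is g o f: A -> C."""
    return {k: g[v] for k, v in f.items()}

A, B, C = {1, 2, 3}, {'a', 'b', 'c'}, {'x', 'y', 'z'}
f = {1: 'a', 2: 'b', 3: 'c'}
g = {'a': 'x', 'b': 'y', 'c': 'z'}

print(is_bijection(f, A, B))              # True
print(is_bijection(inverse(f), B, A))     # symmetry, for this instance
print(is_bijection(compose(g, f), A, C))  # transitivity, for this instance
```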

My own (partial) answer, developed in collaboration with biologist Jackie Chappell, is based on a theory of "Meta-configured" epigenesis, summarised in various papers, e.g.

Logicisation changes the subject
For ancient mathematicians, Euclid's axioms were not arbitrarily chosen starting points, and were not expressed in a modern logical formalism. They were important mathematical discoveries based at least partly on mathematical intuition, especially intuitive insight into interactions between spatial structures and processes involving lines, circular arcs, planar surfaces, and alterations produced using a straight-edge and compasses to control production of new straight lines and circles or portions of circles.

Such mathematical discoveries were made by examining spatial structures and processes -- structures perceived in physical space (e.g. diagrams in sand or on a physical surface) or merely imagined. The thought processes used mechanisms that are still not understood, or replicated in computers. They were not, originally, revealed by transforming sentences or logical formulae according to fixed rules.

However, as Kant (1781) stressed, these discoveries were not discoveries about the physical world that needed testing in a wide range of physical conditions. Once you have examined a process of starting with a line L containing a point P and using a pair of compasses to construct a perpendicular to L through P, your geometric intuition enables you to understand that that construction will always work, in a perfectly flat (planar) surface: you don't need to try surfaces of different colours, at different locations on and off this planet, at different temperatures, in weak and strong magnetic fields, and using lines of different lengths, circles of different radii, etc.

I am not claiming, and I suspect Kant would not have claimed, that human mathematical/spatial intuition is infallible: every working mathematician knows that mistakes are possible, and can usually be detected and removed, as documented in detail by Lakatos(1976). But they are not mistakes in collecting empirical evidence or mistakes in deriving probabilities from statistical evidence. They are mistakes in reasoning that can be discovered by imagining counter-examples in your armchair (Sloman(1962)).

As mentioned above: you don't even need to produce physical lines and curves, since geometrical discoveries can be made by examining imagined structures and processes. Moreover if physical lines and curves are examined they do not have to be perfectly thin, perfectly straight -- etc. They merely need to be capable of being interpreted as indicating how perfectly thin, perfectly straight lines can be arranged. (It is hard to explain this to people who have never had personal experience of finding and checking such proofs, which used to be a standard part of academic education, and urgently needs to be restored in schools that don't teach geometry.)

That is one of the reasons Kant claimed that they were not empirical discoveries. But he also claimed that they were not all made by starting from definitions and using logic to derive consequences: that is why he called them synthetic not analytic.

Later, following the development of Einstein's General Theory of Relativity, the question whether the physical space we inhabit is Euclidean was answered empirically by Eddington's observations of the 1919 solar eclipse. But the original discoveries about Euclidean space as specified by Euclid remain, and apart from the parallel axiom, the other axioms of Euclidean geometry, and its local topological properties, remain unchallenged as features of the physical space we inhabit. (I have not checked implications of quantum physics for sub-microscopic features of physical space. Comments welcome.)

Even if general relativity or modern quantum physics implies that on large scales, or at sub-sub...microscopic levels, space is not Euclidean, we can still use ancient forms of spatial reasoning to discover what would be true of those spatial structures if they were Euclidean -- a concept many humans understand intuitively, as did ancient mathematicians.

Using logic, we can also prove from a logicised version of Euclid's axioms that if the logicised Euclidean axioms are true of space at arbitrarily small levels then the theorems must be true also. Likewise, even if physical space happens to be finite we can argue that if it were everywhere locally Euclidean then it would be infinite, e.g. because Euclidean straight lines and planar surfaces are indefinitely extendable. But we do not need to use logic to reach these conclusions: we can use our spatial (mathematical) intuition, in the same way as ancient mathematicians did, even though nobody knows which brain mechanisms make such reasoning possible.

I suspect other species with high spatial intelligence have related abilities, but not the meta-cognitive abilities to reflect on their reasoning and explain it to others, nor abilities to "chain" such discoveries to derive increasingly complex results. (Not all humans can do this to the same extent. There seem to be some genetic differences in mathematical abilities, as is clearest in the case of individuals with extraordinarily powerful mathematical reasoning abilities that can't be traced to unusually good teachers, for example.)

Euclidean spaces also have topological properties
An example that as far as I know was not recorded by Euclid but can be seen by many humans (although not immediately after birth) to be obviously true, is the fact that the relation of spatial containment of simple (i.e. non-self-crossing and non-self-touching) closed curves on a flat (Euclidean) surface is transitive and asymmetric: If curves C1, C2, and C3 are all distinct, then if C1 contains C2 and C2 contains C3, then C1 contains C3. Moreover C2 does not contain C1, and C3 does not contain C2. As I think Kant noted, these are necessary truths. Their negations are impossible.

The relation of 3D volumetric containment can also be seen to be transitive. If volume V1 contains V2, and V2 contains V3, then V1 contains V3. What brain mechanisms allow humans to see that that must be true, even when V2 and V3 are invisible because container V1 is opaque? This could not be simply a generalisation from sensory experiences.

A more fundamental question is: What brain mechanisms allow humans to think about, reason about, or recognize instances of volumetric containment involving an unlimited variety of 2D or 3D shapes of container and contained object? How is that ability extended to allow mathematical discoveries of the sorts mentioned above, discoveries concerning possibility, impossibility and necessity? Such competences cannot be based solely on empirical abstraction and generalisation from sensory data: generalisation from instances cannot prove an impossibility, or a universal claim. Yet mathematicians have been doing such things for centuries (even if that sort of geometrical investigation became unfashionable during the 20th century -- for bad reasons).

Most physical examples of the geometric and topological relationships between nested volumes cannot be perceived because most physical containers are not transparent! However, glass was discovered by about 3000 BCE. Could that have made a difference to the development of geometry? I suspect the (still unknown) required cognitive mechanisms evolved much earlier and not merely in humans: understanding 3D containment is a requirement for understanding that a banana needs to be peeled, or the shell of a nut cracked open, or a carcass ripped open, in order to get at food. It may also be involved in selecting and creating shelters, and in choosing places to hide offspring out of sight of potential predators. I don't know how many species that can think about such 3D containment, can also discover the fact, or even represent the hypothesis, that 3D containment is (necessarily) transitive.

What brain mechanisms are required?
As far as I can tell, current neuroscience, psychology, and AI all fail to provide explanations of how such topological and geometric concepts are used, or how the discoveries of necessities and impossibilities are made or the results represented. I don't think anyone now knows what the brain mechanisms are, or how a discovery that something is impossible or is necessarily true is represented in a human brain in such a way as to provide what Turing might describe as the intuitive knowledge that it is true. (Reminder: necessity and impossibility are not extremes of probability.)

That gap in our knowledge exists partly because most researchers (i.e. psychologists and neuroscientists) aiming to provide explanatory mechanisms do not understand what needs to be explained, such as the features of mathematical knowledge identified by Kant, discussed further below, and summarised very briefly in Sloman(1965). (Piaget, as mentioned above, was an exception.) So they fail to recognize that the proposed explanations referring to statistics-based mechanisms are constitutionally incapable of representing or justifying impossibility statements. How much of this Turing thought about, is not clear from the quoted text.

Intuitions with infinite power/scope
The mechanisms underlying such intuitive insights have some remarkable features. For example, they allow discoveries to be made about closed planar curves of infinitely many different shapes and sizes. If a closed curve is produced by starting at a location in a plane surface and continuously moving through the surface with arbitrary (smooth or sudden) changes of direction until the moving tip of the curve meets the starting location, without ever previously touching or crossing itself, the variety of possible shapes of the enclosing boundary is enormous (but would exclude figures like "8", "B", "L", "P" and many others).

Despite that variety, many people are able to think about two points, P1 and P2, on opposite sides of a closed planar curve (one point inside and one outside the enclosed region), and somehow understand (how?) that there cannot be a continuous route between those two points lying entirely in the plane that nowhere crosses the closed curve (or, equivalently, every continuous route between P1 and P2 crosses the curve).

In order to convince yourself you may have to consider various convex and non-convex curves. This is not a process of sampling a population of curves to find the proportion that pass a test. A mathematical discovery is much deeper than an empirical statistical record!
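For the special case of polygonal curves there is a well-known algorithmic echo of this insight, the even-odd (ray-casting) rule: whether a point lies inside is decided by the parity of boundary crossings along a ray, so any path from inside to outside must change that parity by crossing the curve. The sketch below is restricted to polygons and is not offered as a model of how brains grasp the general theorem:

```python
def inside(pt, poly):
    """Even-odd rule: cast a horizontal ray from pt to the right and
    count how many polygon edges it crosses; an odd count means pt
    is inside the region enclosed by the (non-self-crossing) polygon."""
    x, y = pt
    crossings = 0
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside((2, 2), square), inside((5, 2), square))  # True False
```

The parity argument generalises to arbitrary simple closed curves (the Jordan curve theorem), but proving that, unlike running this code, is precisely the kind of achievement whose underlying mechanisms remain unexplained.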

Another feature of mathematical intuition is the ability to discover new "spaces" in which previous intuitions are invalid. If we consider non-planar surfaces we can find situations where some of the generalisations of Euclidean geometry are false. E.g. on the surface of a torus (a thick ring, or inflated tube of a car tyre, or a doughnut shape) there are some simple (non-self-crossing and non-self-touching) closed curves that do not have a distinct inside and outside. Such a curve does not divide the surface of the torus into two disconnected regions. For any such curve C, any two points on the surface of the torus can then be joined by a smooth continuous line that does not cross C.

FIG Torus: Varieties of closed curves on a torus.
Which closed curves divide the surface of the torus into two disconnected regions?
See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html

Other curves on the torus do share that property of a closed curve on a plane, namely the curve divides the surface into two disconnected regions. Thinking about such facts is left as an exercise for readers who are able to visualise curves on the surface of a torus: a doughnut shaped 3D object. For more details see:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html also pdf.

Partly related is this discussion of abilities to decide whether a "chain" of loop-linked rubber bands can be closed, to form a "super-band":

A more familiar impossibility that seems to be obvious even to fairly young children (though I don't know the earliest age) is linking or unlinking two solid rings made of impermeable material, with no gaps. Examples are presented in:
   Reasoning About Rings and Chains (Impossible linking and unlinking)

Properties of polyhedral 3D shapes
An easier example is considering what happens if you cut a cube in two using a (planar) cut parallel to two of the faces. The cut will produce two rectangular polyhedra (3D shapes bounded only by flat surfaces), neither of which is a cube. How do you know that?

For any given Euclidean cube there are infinitely many different ways of slicing it using a plane parallel to two of the surfaces. Anyone reading this is likely to find the effects of such cuts obvious and will notice that such a cut through a cube cannot produce a new polyhedron with all faces equal, like a cube. Why not? In order to answer that question do you have to go back to Euclid's axioms and definitions and derive the answer using formal logic? Or is this an example of Turing's notion of mathematical intuition? Or Kant's observation that we can discover synthetic necessary truths in geometry?

The above examples are concerned with impossibilities (and necessities) relevant to continuously changing spatial configurations, e.g. continuously varying shapes that could be created by planar slices parallel to a face of a cube.

An example involving discrete numbers of features arises from considering what happens to a convex polyhedron, i.e. a 3D structure bounded by planar (flat) surfaces, such as a tetrahedron, a cube and many more, if it is cut in two by a planar cutter (flat knife).

If you start with an arbitrary convex polyhedron and cut it in two in such a way as to remove a portion with only one vertex, how do the numbers of faces, edges and vertices change in the remaining (larger) portion?

Many people who (perhaps with a little difficulty) can think about and answer that question correctly, would be incapable of producing a proof using a logical formulation of Euclid's axioms. I presume Turing would have regarded that as an example of mathematical intuition.
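For the simplest case, where the planar cut passes through the interior of each of the k edges meeting the removed vertex, the bookkeeping can be made explicit: the larger portion gains k-1 vertices, k edges, and 1 face, so the Euler characteristic V - E + F = 2 is preserved. A small sketch (the function name is invented for illustration; the code records the arithmetic, it does not discover or justify it):

```python
def truncate_vertex(v, e, f, k):
    """Cut off a single vertex of a convex polyhedron where k edges
    meet, with a planar cut close enough to the vertex that the
    removed piece contains no other vertex. The cut replaces 1 vertex
    by k new ones, adds the k edges of the new face, and adds 1 face."""
    return v - 1 + k, e + k, f + 1

cube = (8, 12, 6)                  # V, E, F; Euler: 8 - 12 + 6 == 2
v, e, f = truncate_vertex(*cube, k=3)
print(v, e, f, v - e + f)          # 10 15 7 2 -- Euler characteristic kept
```

Seeing that this must hold for every convex polyhedron and every vertex, not just the tabulated cases, is the step the arithmetic alone does not supply.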

I know of no one who has any idea what brain mechanisms could make possible such discoveries of necessary truths of geometry or topology. Those discoveries are not simply generalisations learnt from many examples, because the result of a mere generalisation from examples is not guaranteed to make exceptions impossible, no matter how many examples have been considered. Noticing impossibility requires additional insight/intuition. As far as I know, there is no current AI theorem prover that convincingly models human thought processes when solving such problems.

I have many more examples available online, e.g.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html (or pdf) and

Turing seems not to have believed the strong Church-Turing thesis
There is a version of the Church-Turing thesis that specifies that all the members of a certain set of inference/reasoning mechanisms are equivalent in what they can achieve (what they can compute) despite differences in efficiency, i.e. the resources required for a given task.

For example, Lambda calculus, Turing machines, general recursive functions, and Post productions can all compute the same functions. This is sometimes expressed by saying they all have Turing-equivalent powers, and they are all "Turing complete". For more details see

There is a more general claim that is often attributed to Turing (and others) namely that any form of computation (= any form of information processing) can in principle be implemented or modelled in all of those Turing-equivalent mechanisms. This is sometimes referred to as the "Strong Church-Turing thesis". This claims that, apart from efficiency concerns, a universal Turing machine can match the achievements of any other kind of information-processing mechanism.

Turing's remarks in the original quotation above imply that (in 1938) he did not believe the Strong Church-Turing thesis, insofar as he claimed that the specified mechanisms cannot implement mathematical intuition.

In a letter Turing wrote to Ross Ashby in 1946 about his proposed Automatic Computing Engine (ACE) he pointed out that it could be programmed to act in a manner that is "... entirely uncritical when anything goes wrong ... will also be necessarily devoid of anything called originality". Later he appears to have changed his mind, suggesting that his machine could "be used as a model of any other machine, by making it remember a suitable set of instructions".
(Thanks to Rodney Brooks for this link. I previously had a defunct reference.)

Other limits of Turing machines
In discussing powers of Turing machines, it is sometimes forgotten that (by definition) a Turing machine has no access to any information outside itself (i.e. not on its tape or in its machine table). Moreover, allowing external influences would undermine the proofs of the powers of Turing machines, which assume nothing outside the machine can alter the tape.

In contrast, biological information processing mechanisms, including human brains, can have very rich connections with external physical environments. Therefore any claim that a Turing machine can match the power of an animal brain is either false or incoherent (because what "match the power" means is not defined). It isn't clear that such a claim makes any sense given what we know about the variety of types of machinery in the physical universe and the ways in which their construction and interconnections can change over time (including animal brains).

Nevertheless, it is sometimes claimed that the whole physical universe can be completely modelled (very, very slowly??) on a universal Turing machine, e.g. by the physicist David Deutsch (1997, 2011). This implies that the physical universe includes no continuous changes, or continuous regions of space-time.

By suggesting that operations of human (mathematical) intuition go beyond the powers of forms of computation that support ingenuity, Turing claims that there are reasoning/inference mechanisms that are more powerful than Turing machines (and their equivalents). So Turing's remarks in 1938 about human intuition achieving something that cannot be implemented in computers, suggest that he had by then rejected the strong version of the Church-Turing thesis, and by implication had rejected, in advance, claims about the possibility (in theory) of a Turing machine modelling the whole universe.

This rejection does not amount to a claim about something intrinsically mysterious or inexplicable. It simply implies that we have not yet identified all the forms of computing/information-processing machinery. Perhaps that is one of the reasons why Turing was investigating chemical mechanisms that combine continuous and discrete changes, shortly before he died. Insofar as such processes include continuous variation (e.g. continuous motion of molecules) they are intrinsically different from processes running on a machine capable only of discrete state-changes, like Turing machines and their equivalents.

Of course, that would not rule out approximate simulations on a digital computer. See Turing's discussion of computer simulations of reaction-diffusion processes combining continuous changes and discrete changes, in Turing(1952).

"... cannot seriously be doubted"
Turing also states that in some cases, when the propositions, geometrical figures or drawings are "really well arranged", the validity of the intuitive steps "cannot seriously be doubted".

I am sure this "cannot ... be doubted" description was not a comment on the intellectual weakness of the mathematicians in question (inabilities to doubt), but a comment on the mathematical relevance and power of the (unknown) mechanisms producing and using "the propositions, geometrical figures or drawings".

This may be Turing's (obscure) way of expressing what Kant had claimed (less obscurely(?)) in his Critique of Pure Reason 1781, namely that certain kinds of knowledge do not fit either of David Hume's two types of knowledge: they express neither purely logical consequences of defining relations between ideas (analytic knowledge, in Kant's terminology) nor empirical contingent knowledge, that can be established only by observation and experiment.

Observation-based general propositions can never be conclusively established because it is impossible to repeat the experiments in all parts of the universe, or even in all possible types of situation. (Whether they can be assigned numerical probabilities is also a much debated philosophical question, ignored here.)

In my DPhil Thesis Sloman(1962), written before I had heard of artificial intelligence and before I had learnt to program, I defended Kant's claim that there are kinds of knowledge that are non-empirical, non-analytic (not based solely on definitions and logic) and non-contingent (necessary) truths. I now think the best defence would make use of a specification for the design of an implementable intelligent machine able to replicate the modes of mathematical discovery used by the ancient mathematicians.

Demonstrating conclusively that such a machine is capable of running accurate models of ancient minds (different models for different individual minds) would be more difficult than demonstrating that it can make the same geometrical and topological discoveries.

Whether such a machine can be implemented using digital computer technology or whether it requires a mixture of discrete and continuous variation, as can occur in a chemical soup, is an open question. I suspect Turing (1952) was partly motivated by this question. Perhaps also the remarkable, unexplained, assertion in Turing(1950) is also relevant:
     "In the nervous system chemical phenomena are at least as important as electrical."
Was he thinking of the different kinds of internal complexity in chemical processes (discussed in Schrödinger(1944)) and electrical processes? Electricity involves currents and fields (including magnetic fields). Chemistry involves discrete structures (with discrete, but changeable, bonds) combined with continuous changes (of location, orientation and shape).

John von Neumann seems to have raised doubts about the resources required to replicate chemically implemented brain functions in his Silliman Lectures, written while he was dying and published in 1958 (discussed in Newport(2015)). There is now a small (but growing) collection of neuroscientists who believe that far more chemistry-based information processing happens inside each synapse than is generally recognised, e.g. Trettenbrein(2016).
Note: There are several online versions of Turing(1950) that erroneously quote Turing as having written

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning".

where it is clear that he intended to refer to    .... a storage capacity of about 10^9 ....

Note that if the interrogator is right 70% of the time, then the test is passed (the machine succeeds) only 30% of the time. Turing's tortuous formulation leads many readers to think he was predicting a 70% success rate, perhaps because they missed the point of what Turing was claiming.

Mathematical results do not rest on empirical observations
Such mathematical truths, unlike most scientific generalisations, don't need to be tested in a wide range of possible physical conditions: they are not empirical. E.g. Pythagoras' theorem for planar geometry, and the unique prime factorization theorem (the "Fundamental theorem of arithmetic", mentioned by Turing in the extract above) don't need to be tested in situations of varying temperature, climate, etc.

However, some of them are not applicable to all possible spaces: e.g. there are truths about planar surfaces that don't hold for curved surfaces, such as the surface of a sphere, or the surface of a torus.

There are some examples of necessity and impossibility that depend on purely logical deductions from definitions or logical reasoning about combinations of truth values. Simple examples in propositional logic are given in this IJCAI 2017 talk

Unlike such truth-table-based reasoning, many mathematical discoveries require spatial (geometrical and/or topological) intuition, but not in the way that an empirical generalisation does. E.g. "Every human over seven foot tall can learn to play the violin" may or may not be true, but if it is true that is not a necessary mathematical truth, but a contingent proposition that could turn out to be true or false, depending on the outcomes of biological evolution, for example.

In contrast these are necessary truths:
"Every human under ten foot tall has a height in feet that is not divisible by three different prime numbers".
"In a 3D Euclidean space it is impossible for three planar surfaces to completely enclose a finite portion of the space." (Kant mentioned the simpler 2D version.) As far as I know, what goes on in your brain when you convince yourself of the truth of such a geometric impossibility statement, cannot be explained by any current theory or model in psychology, neuroscience, AI, or philosophy of mathematics.

In particular, neural mechanisms and deep learning mechanisms that use statistical information to derive probabilities cannot discover that something is impossible or necessarily true. Impossibility and necessity are not points on a scale of probabilities.

Kant recognized the problem and struggled to formulate requirements for an explanation.

I suspect that Turing had not studied Kant, but independently made the same discovery about the nature of ancient mathematical modes of reasoning as Kant, namely that "the activity of the intuition" can sometimes lead to discoveries that are non-empirical, and include knowledge of non-trivial, non-definitional, mathematical truths. Some of those discovery processes differ from the kinds of "ingenuity" of digital computers that produce formal proofs, by searching in a symbolic space.

Is that consistent with Turing's claim that "we have gone to the opposite extreme and eliminated not intuition but ingenuity"? I take that to mean that humans no longer need to use their ingenuity to find proofs in formal systems because in many (but not all?) cases computers can search for valid proofs much faster than humans do. But that still leaves unsolved the problem of modelling, replicating or explaining what Turing called intuition, illustrated by examples above.

This is illustrated by the original processes of discovery in geometry and topology by ancient mathematicians, whose results were summarised and organised in Euclid's Elements. The fact that Hilbert was able to produce a logical formalisation of Euclidean geometry, Hilbert(1899), that can be used by AI theorem provers to derive new consequences, does not suggest that the ancient mathematicians unconsciously used similar reasoning powers.

Unfortunately the use of Euclidean diagrammatic constructions to make discoveries and check hypotheses is no longer taught to bright children in all schools. As a result many people who attempt to study mathematical cognition, or to model intelligent spatial reasoning in computers, lack any first-hand experience of some of the oldest and deepest forms of mathematical cognition, although they make regular use of the more basic forms of spatial cognition that underlie those ancient mathematical abilities.

No current philosopher of mathematics would accept Turing's claim that the axioms and derived theorems produced by those ancient mathematicians "cannot seriously be doubted" (in the quotation from Turing above), especially in the light of the discovery that physical space is not Euclidean, and in the light of the historical examples of errors made by outstanding mathematicians (including Euler) reported in Lakatos(1976). Clearly some of Euclid's axioms can be and have been doubted, and shown not to be required for the most general mathematical features of physical space. And clearly human mathematical intuition can lead to errors.

However, there are many special cases where Euclid's axioms are true -- not on the surface of a sphere or a torus, but definitely for a class of intuitively "flat" plane surfaces. These have something in common that can be expressed in different ways, including Euclid's formulations and others. E.g. see the non-standard presentation of Euclidean geometry in Scott(2014).

Euclidean geometry without the parallel axiom
If we exclude the parallel axiom (and its equivalents) what is left of Euclidean geometry is a characterisation of a broader and deeper domain of mathematical investigation which includes non-Euclidean spaces as well as Euclidean spaces.

In particular, Euclid's characterisation of geometry (at least implicitly) includes a broad and deep partial specification of what was later called topology, the study of "Properties of space that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending, but not tearing or gluing" https://en.wikipedia.org/wiki/Topology.

Those aspects of Euclidean geometry were left unchallenged by the developments in Physics that are sometimes wrongly thought to have challenged Euclid's achievements, or refuted Kant's characterisation of them.

In particular, there is something right in Kant's claim that those ancient mathematical discoveries were unlike empirical discoveries such as Boyle's law (relating pressure and volume of a gas at constant temperature), and were also not mere logical consequences of arbitrarily chosen definitions.

My DPhil thesis Sloman(1962) defended Kant in this respect, before I had heard of Artificial Intelligence or had learnt to program, and nearly sixty years before I encountered Turing's very relevant distinction between mathematical intuition and ingenuity. A more detailed defence of Kant could be a presentation of a design for a machine that can replicate those ancient discoveries, but, as far as I know, nobody has worked out how to build such a machine, although the achievements of ancient mathematical brains imply that such a machine is possible.

Neither logic-based geometric theorem provers nor neural-net-based models are candidates. The former use modes of reasoning from pre-formalised axioms (e.g. Hilbert's axioms for Euclidean geometry) that were not available to ancient mathematicians, whereas the latter use statistics-based neurally inspired reasoning mechanisms that cannot establish impossibilities or necessary truths that are part of geometry. (Impossibility and necessity are not ratio concepts as 0% and 100% probability are.)

Some of the problems in developing automated geometric reasoners are discussed in Matsuda and Vanlehn(2004).

An example: Pardoe's proof of the triangle sum theorem
It isn't always noticed that human geometrical reasoning powers extend beyond the details presented in Euclid's Elements: humans have discovered alternative sets of axioms that have the same consequences as Euclid's. An example is the construction used in Mary Pardoe's (re-)discovery of a proof of the triangle-sum theorem, which makes no mention of the parallel axiom. I suspect Turing would have regarded it as based on an intuitive grasp of the possibility of rotating a line segment about each of the internal angles of a triangle in turn, followed by an intuitive grasp of the necessity that the three rotations always sum to half a complete rotation of the line segment.
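The invariant behind Pardoe's proof, that the three rotations always compose to half a complete rotation (i.e. the interior angles of a planar triangle sum to 180 degrees), can be sampled numerically. This is only a hedged illustration: random sampling can confirm particular cases, but cannot establish the necessity that the proof, or the intuition behind it, delivers.

```python
import math
import random

def interior_angles(a, b, c):
    """Interior angles (in degrees) of the triangle with vertices a, b, c."""
    def angle_at(q, p, r):
        # Angle at vertex q between the directions q->p and q->r.
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b)

# Sampling random planar triangles: the angle sum is always (numerically) 180.
random.seed(0)
for _ in range(100):
    pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
    assert abs(sum(interior_angles(*pts)) - 180.0) < 1e-6
```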

There are also examples of important geometric discoveries that were not derivable from Euclid's axioms and postulates or Hilbert's logical formalization of Euclidean geometry, including the neusis construction mentioned above.

Origami constructions also extend Euclidean geometry:

The discovery and use of these extensions of Euclidean geometry clearly involved new intuitions/insights into spatial possibilities, rather than mere use of ingenuity in deriving new consequences from old intuitions expressed as axioms formulated in something like the notation of predicate calculus. (I have no idea whether Turing knew of the neusis construction or other extensions to Euclid.)

Contrast with logicist AI
The role of spatial intuition in making discoveries in geometry and topology was the main basis for my defence of Immanuel Kant in Sloman(1962) and my 1971 criticism of the logicist AI thesis presented in McCarthy and Hayes(1969), although I used simpler examples in 1962 and 1971.

McCarthy and Hayes presented an extension of predicate logic to include time, and representations of actions and other processes that occur at a time or endure for some time. They claimed that the resulting formalism is adequate for the purposes of an intelligent agent in three respects.
1. It is metaphysically adequate: it can express anything that is capable of being true or false.
2. It is epistemologically adequate: it can express anything that agents are capable of knowing or believing, and using in their reasoning.
3. It is heuristically adequate: it provides a mode of representation that allows efficient searches for proofs or refutations of hypotheses, in addition to being useful for reasoning about plans or decisions.

In response to McCarthy and Hayes(1969), my 1971 paper, later expanded in Chapters 7 to 9 of Sloman(1978), argued that although claims (1) and (2) might be correct, the third claim was not, since for various kinds of spatial reasoning the use of analogical representations, such as diagrams, maps and pictures, was more heuristically powerful when formulating hypotheses and searching for proofs, at least in some domains, including Euclidean geometry. At that time I was unaware of Turing's distinction between intuition and ingenuity. My paper claimed that use of non-Fregean forms of representation had greater heuristic power than Fregean forms, but did not claim that such reasoning with spatial structures could extend the scope of mathematical cognition. In retrospect I think such a claim is supported by examples like the discovery of the neusis construction, or the discovery of origami geometry: these additions extended the scope of Euclidean geometry rather than merely allowing previous results to be found or proved more easily, as implied by "greater heuristic power". (The neusis construction is discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html.)

Despite the fact that my 1971 paper received a significant amount of attention in the AI community and was soon re-published in the AI journal and a Reidel volume, Images, Perception, and Knowledge, nobody at that time noticed the connection between my claims in 1971 and Turing's distinction between intuition and ingenuity in 1938, as far as I know.

Confusions about analogical representations
In what I labelled "analogical representations", such as diagrams, maps and pictures, the semantics (i.e. the properties and relationships in whatever is depicted) are not determined by application of functions to arguments, as in Fregean representations, but by interpreting properties and relations in the representation as corresponding to properties and relations in the objects or states of affairs depicted. Typically, interpreting such a picture requires problem solving, since the parts of pictures are locally ambiguous and how each part is to be interpreted will depend on how the other parts are interpreted. I.e. what represents what is context-sensitive, and, as shown by work on so-called 'scene analysis' by David Huffman, Max Clowes, and others around that time, finding consistent interpretations was a non-trivial task in general. (For example see Barrow and Tenenbaum(1981).)

The need for such context-sensitive interpretation processes is unlike the standard interpretation of complex Fregean representations in which well-formed parts are interpreted by (recursively) applying functions to arguments (including higher order functions such as quantifiers).

In 1971 I had not encountered Turing's distinction between intuition and ingenuity, and was unaware of it until very recently. I cannot tell whether Turing would have regarded my distinction between (a) reasoning based on Fregean notations (e.g. logic and algebra) and (b) reasoning based on insight into possibilities and impossibilities in topological and geometric structures, as identical with or closely related to his own distinction between intuition and ingenuity, but the two distinctions seem to be closely related.

Many readers did not pay proper attention to what I had written and thought I was recommending use of representations that are isomorphic with what they represent, despite the fact that I explicitly denied that and gave examples of 2D pictures or diagrams representing 3D scenes, pointing out that 2D pictures cannot be isomorphic with 3D scenes. This is one of the reasons why it is often difficult to get computers to interpret 2D pictures (or movies) of 3D scenes (or processes) accurately, as demonstrated dramatically by Clowes(1973).

Do we need to investigate chemistry-based forms of computation?
For many years I hoped that I could work out how to use computers to model the use of analogical representations in making the deep discoveries of ancient mathematicians. It seems to me that nobody in AI has achieved that, and the currently fashionable deep learning mechanisms are not candidates because they cannot discover impossibilities and necessary connections, since they work with statistical evidence and derived probabilities, which are entirely different concepts. Impossibility and necessity are not extremes of probability.

So I have recently begun to wonder whether the combination of discontinuous and continuous changes in sub-neural chemical processes made possible by quantum mechanisms, as pointed out in Erwin Schrödinger (1944) (where he showed how such mechanisms might be necessary to explain aspects of biological reproduction, influencing the thinking of Watson and Crick about DNA) might one day be shown to explain how to expand the abilities of digital computers (ingenuity) with new forms of reasoning (intuition). For some tentative early thoughts about replacing the tape and machine table of a Turing machine with components capable of producing and detecting continuous spatial changes, see the (still incomplete) discussion in Sloman(2018b).

Is it possible that Turing's surprising switch to research on chemical reaction-diffusion mechanisms, with their combination of continuity and discontinuity Turing(1952) was motivated by a similar interest? Perhaps we'll never know the answer to that. But that question triggered the Meta-Morphogenesis project, proposed in 2012, Sloman(2012-...), and still on-going. (This document is an addition to that project.)

Connections with process perception
There seem to be deep connections between biological (evolved) mechanisms for intelligent reasoning about spatial (topological and geometrical) structures and relationships and requirements for perception of spatial processes, including
-- a static perceiver viewing moving (structured) objects
-- a moving perceiver viewing static (structured) objects, and
-- a moving perceiver viewing moving (structured) objects,
All with static or moving background objects.

Another class of problems involves understanding changing visible appearances caused by various kinds of motion of the viewer, motion of the object or scene viewed, or a combination of motions. The possible motions include forward and backward translations, sideways translations, rotations of view direction or orientation, and tilts of the head while something is seen.

Even in a static scene, viewer motion can produce a wide variety of changes in what is perceived, including portions of surfaces becoming visible or invisible, relative visible lengths changing (e.g. objects looming larger when approached, or shrinking when moving away), and projected angles changing, e.g. if you look at a rectangular table top from another part of the room and see two of the corner angles as obtuse and two as acute.

Although James Gibson Gibson(1979) drew attention to some aspects of motion perception that are biologically useful (have positive affordances), e.g. changing optical flow patterns as a textured surface is approached from an angle, he merely scratched the surface. As far as I know, he did not relate this to geometrical or topological discoveries.

For example, if you walk through a typical botanical garden there will be a huge variety of shapes, colours, textures, and static and changing patterns projected onto your retina, providing a vast amount of information about relative distances, sizes and shapes. I suspect that AI researchers, neuroscientists and psychologists alike understand little about what brains do with such information. As an exercise for researchers, there are a few videos demonstrating some of the phenomena, using kitchen furniture and a pot plant, here:

An important research challenge is to characterise precisely the kinds of information afforded by such structures and processes when they occur in visual perception, and to identify the brain mechanisms that extract that information and use it for reasoning about structures, processes and possible or impossible future changes (affordances) in the environment.

I suspect the mathematical discovery powers (using intuition) of ancient geometers made essential use of these older mechanisms shared with other intelligent species, combined with additional newer meta-cognitive mechanisms unique to humans, that provided abilities to reflect on and reason about the uses of the older mechanisms.

So squirrels, elephants, crows, orangutans, and others share some of the mechanisms that are central to human mathematical intuition, but they lack the additional mechanisms that allow systematic shared exploration of the results of those powers, of the sort that developed as ancient humans made, debated, and used mathematical discoveries.

Exploring and developing those ideas is one of the tasks for the Meta-Morphogenesis project. As far as I know, the ideas have so far gone unnoticed in AI research on vision and robotics, mainly because so many researchers have made restrictive assumptions about the functions of biological vision systems, leaving out the important functions that extend Gibson's ideas about perception of affordances, to include intuitive discovery of spatial impossibilities and necessities that can be used in intelligent action selection.

It may turn out that the precise mechanisms required cannot be implemented on digital computers but require the sort of Super-Turing alternative tentatively discussed in Sloman(2018b).

I suspect Turing would have made major contributions to these problems had he lived a few decades longer, perhaps showing how brains make essential use of sub-neural chemical structures and processes with a mixture of continuous and discrete changes, instead of only digital sensing and storage mechanisms and probabilistic reasoning mechanisms. Compare the outstanding and very relevant recent work by neuroscientist Seth Grant (2018).

Recurring Themes
Recurring themes in all these examples are:
-- The discoveries involve spatial (topological and/or geometrical) reasoning -- they do not merely use logic to derive consequences from a collection of axioms;
-- Some of the discoveries express non-contingent facts, i.e. things that are necessarily true or false, since counter-examples can be seen to be impossible.
-- Human mathematical abilities are not infallible, as Lakatos(1976) demonstrated, using the fact that even great mathematicians can make mistakes. Some critics of Kant use the work of Lakatos as evidence. But that would be relevant only if Kant had claimed that mathematicians were infallible.
-- The discoveries that do not involve errors are not mere empirical discoveries liable to refutation by new examples, and they express non-contingent facts about necessary features or impossibilities.
-- That implies that the reasoning cannot be produced by mechanisms based on statistical evidence and probabilistic reasoning: mathematical claims about impossibility and necessity are not claims about low or high probabilities.

Not possible-worlds semantics
In Sloman(1962) I pointed out that the modal concepts used in this context (e.g. "impossible", "possible", "necessary") are not to be understood in terms of truth or falsity in all possible worlds. The statement that the ratios of lengths of sides of a planar triangle cannot be varied without also varying the angles is a statement about possible changes of a configuration in this world. There is no reason to believe that a child discovering that linked rings cannot be unlinked simply by moving them around in space is thinking about possible complete alternative universes. I expect the same was true of ancient mathematicians discovering impossibilities and necessary connections.
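The triangle claim can be illustrated via the law of cosines, which makes each angle of a planar triangle a function of the ratios of its sides alone. The following numerical sketch merely illustrates particular cases; the necessity itself is not established by such samples:

```python
import math

def angles_from_sides(a, b, c):
    """Angles (in degrees) of a planar triangle from its side lengths,
    via the law of cosines: the angles depend only on the side ratios."""
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))
    C = math.pi - A - B
    return tuple(math.degrees(x) for x in (A, B, C))

# Scaling all sides, keeping the ratios fixed, leaves the angles unchanged...
base = angles_from_sides(3, 4, 5)
scaled = angles_from_sides(30, 40, 50)
assert all(abs(x - y) < 1e-9 for x, y in zip(base, scaled))

# ...but changing a ratio changes the angles.
changed = angles_from_sides(3, 4, 6)
assert any(abs(x - y) > 1e-6 for x, y in zip(base, changed))
```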

For these reasons "Possible world" semantics for modal concepts is irrelevant to the nature of mathematical necessities discovered by ancient mathematicians. We need "Possible configuration" semantics, or possible world-fragment semantics.

The same is likely to be true of a squirrel reasoning about possible and impossible ways to get nuts from a bird feeder, or a human toddler reasoning about possible actions on her toys. If a squirrel or toddler considers and rejects something as impossible, that achievement does not require a brain that is able to contemplate all possible worlds, or even understand the concept of a possible world.

For reasons related to Turing's remarks contrasting mathematical intuition with mathematical ingenuity, I am exploring the possibility that standard digital computers cannot implement the required forms of reasoning. Perhaps mathematical brains make use of a (chemistry-based?) mixture of discrete and continuously deformable forms of representation? I don't yet know whether Turing(1952) provides clues or red herrings regarding this problem.

What's clear is that neither deep learning based on statistical evidence and probabilistic reasoning, nor purely logical deduction from some class of axioms, can explain ancient processes of mathematical discovery -- including the original discovery of Euclid's axioms. The use of deep learning would make mathematical knowledge empirical (and relatively shallow!). The second option, logical deduction from (unproven) axioms, is accepted by many, but Kant gave plausible counter-examples, defended against still-fashionable 20th century criticisms in Sloman(1962) and in a growing collection of online papers, including examples of impossibility, Sloman(2015-18).

Less obviously, I think all(?) currently proposed neural mechanisms also fail to explain everyday mechanisms of perception, reasoning, decision making, and control of actions, which use brain mechanisms closely related to those involved in mathematical discovery -- requiring modal powers of representation and reasoning. To some extent that is true also of other intelligent species, e.g. squirrels, weaver birds, elephants, and many other object manipulators. The ability to reject tempting options because their successful fruition is impossible can make a huge difference to speed of problem solving, though how brains do it, and how they represent necessity/impossibility is, as far as I know, still a complete mystery. Moreover, the vast majority of scientists (and even philosophers nowadays) seem not to have understood Kant's remarkably accurate characterisation of the problem.

If my guess about Turing's understanding of mathematical intuition is correct, he independently discovered (a simplified, less precise version of) Kant's ideas about mathematics.

The Meta-Configured genome hypothesis, developed in collaboration with Jackie Chappell, Chappell&Sloman(2007a), Sloman(2017-8), adds more content to these ideas and also helps to highlight the roles of mathematical discovery in the mechanisms and processes of biological evolution and epigenesis. (There is some overlap with ideas of the late Annette Karmiloff-Smith.)

An alternative view regarding Turing's later thought (Hodges)
An alternative view of Turing's thinking is attributed in Christensen(2013) to Andrew Hodges, one of the authorities on Turing's life and work. Christensen reports Hodges as noting that Turing asked in the 1938 paper ("Systems of logic based on ordinals") ".... whether it is possible to formalize those actions of the mind which are not those of following a definite method: mental actions one might call creative or original in nature", and conjectures that "a change of thought" occurred in 1941. The post-war Turing claimed that Turing machines can mimic the effect of any activity of the mind, not only a mind engaged in a "definite method" (Hodges(1999), p. 35).

Hodges conjectures that in 1941, after "... a bitter struggle to break U-boat Enigma, Turing could taste triumph. Machines turned and people carried out mechanical methods unthinkingly, with amazing and unforeseen results .... I would now go further and suggest that it was at this period that he abandoned the idea that moments of intuition corresponded to uncomputable operations. Instead, he decided, the scope of the computable encompassed far more than could be captured by explicit instruction notes, and quite enough to include all that human brains did, however creative or original. Machines of sufficient complexity would have the capacity for evolving into behaviour that had never been explicitly programmed." Hodges(1999) p. 28-29

(Note: I have not yet checked the original. A.S.)

This wording is a bit strange. In a sense it must always have been obvious that computers can produce behaviours that are not explicitly programmed, insofar as those behaviours result from selections made at multiple conditional branches, or from uses of loops or recursion that produce repeated execution of portions of programs with different parameters, for numbers of cycles decided at run time, neither explicitly specified by the programmer. Without that "creative" power, Turing's Universal Turing Machine would not have had the ability to emulate every possible Turing machine, including infinitely many machines not yet designed. I.e. it would not have been universal.

But if we replace "behaviour that had never been explicitly programmed" with "behaviour that could not have been explicitly programmed" the claim becomes stronger, but very unclear, since it is hard to understand what could make some behaviour executed on a Turing machine or computer impossible to program, unless it is an infinite sequence of random operations, though even that could be "programmed" on a machine with an infinite starting tape.

Another puzzle concerns Hodges' use of "evolving" in this sentence: "Machines of sufficient complexity would have the capacity for evolving into behaviour that had never been explicitly programmed." If this evolution merely refers to what can be produced by extended execution of programs running on a Turing machine (or equivalent), then that cannot extend the class of computable operations. If this is an implicit reference to biological evolution, then anything goes, since biological evolution potentially has available all the resources of a physical universe, including the mixtures of discreteness and continuity that characterise chemical processes. That's exactly the possibility I thought might have been at least part of the motivation for Turing's work on chemistry-based morphogenesis in Turing(1952).

So perhaps there was a change in Turing's view, whether or not Hodges has described it accurately, and that might explain why there is no mention of mathematical intuition and its un-programmability in Turing's 1950 paper, insofar as in that paper he was trying to emphasise the scope of digital computation.

Anyhow, all that still leaves the challenge of explaining how the ancient mathematicians made all their discoveries long before the development of modern logic and metamathematics, and long before Hilbert's axiomatization of Euclid was available. Moreover, there is also a need to explain the many examples of powerful spatial intelligence in non-human animals, and the ancient human mathematical discoveries that are not derivable within Euclidean geometry, including the discovery of the "neusis" construction mentioned above, origami geometry, and others discussed in:

I am left with the impression that there remain many unanswered questions about Turing's later thoughts, including the questions that inspired the Meta-Morphogenesis project in 2012 Sloman(2012-...) -- whose interest is independent of Turing's actual thoughts and motives.

Related documents
My IJCAI 1971 ideas, Sloman(1971), criticising the logicist manifesto of McCarthy and Hayes(1969), were slightly expanded in Chapters 7, 8 and 9 of Sloman(1978), on varieties of representation and on vision, written when I still hoped it would be possible to implement the ancient modes of mathematical reasoning about geometry in AI systems; I am now doubtful about that. (As, apparently, was Turing in 1938.)

Examples discussed online
I have assembled a wide variety of examples of spatial (geometrical and topological) reasoning in humans that are very different from the forms of reasoning produced by AI researchers in geometrical theorem provers using logic and the Cartesian (coordinate-based) representation of geometry. These are all challenges for future AI systems and for neuroscience:
and others referred to in those documents.



Aaron Sloman, 2013--2018, Jane Austen's concept of information (Not Claude Shannon's)
Online technical report, University of Birmingham,

H.G. Barrow and J.M. Tenenbaum, 1981, Interpreting Line Drawings as Three-Dimensional Surfaces, in Artificial Intelligence, 17, pp. 75--116,

Jordana Cepelewicz, 2016 How Does a Mathematician's Brain Differ from That of a Mere Mortal? Scientific American Online April 12, 2016

Jackie Chappell, & Aaron Sloman (2007a). Natural and artificial meta-configured altricial information-processing systems. International Journal of Unconventional Computing, 3(3), 211-239. http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#717

M.B. Clowes, 1973, Man the creative machine: A perspective from Artificial Intelligence research, in The Limits of Human Nature, Ed. J. Benthall, Allen Lane, London.

Chris Christensen (2013) Review of Biographies of Alan Turing, Cryptologia, 37:4, 356-367,

Kenneth Craik, 1943, The Nature of Explanation, Cambridge University Press, London, New York
Craik drew attention to previously unnoticed problems about biological information processing in intelligent animals. For a draft incomplete discussion of his contribution, see

David Deutsch, 1997 The Fabric of Reality,
Allen Lane and Penguin Books

David Deutsch, 2011 The Beginning of Infinity: Explanations That Transform the World,
Allen Lane and Penguin Books, London.

Euclid and John Casey (2007) The First Six Books of the Elements of Euclid, Project Gutenberg, Salt Lake City, Third Edition, Revised and enlarged. Dublin: Hodges, Figgis, & Co., Grafton-St. London: Longmans, Green, & Co. 1885,

H. Gelernter, 1964, Realization of a geometry-theorem proving machine, reprinted in Computers and Thought, Eds. Edward A. Feigenbaum and Julian Feldman, McGraw-Hill, New York, pp. 134-152,

Robert Geretschlager, 1995. Euclidean Constructions and the Geometry of Origami, Mathematics Magazine, 68, 5, pp. 357--371, Mathematical Association of America, http://www.jstor.org/stable/2690924

James J. Gibson, 1979 The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA,

Ira Goldstein, 1973, Elementary Geometry Theorem Proving MIT AI Memo 280, April 1973

Seth G. N. Grant, 2018, Synapse molecular complexity and the plasticity behaviour problem, Brain and Neuroscience Advances 2, pp. 1--7,

Yacin Hamami and John Mumma, 2013, Prolegomena to a Cognitive Investigation of Euclidean Diagrammatic Reasoning, in Journal of Logic, Language and Information, 22, pp. 421--448,

David Hilbert, 1899, The Foundations of Geometry, available at Project Gutenberg, Salt Lake City, http://www.gutenberg.org/ebooks/17384 2005, Translated 1902 by E.J. Townsend, from 1899 German edition,

Andrew Hodges, 1999. Turing. New York: Routledge.

T. Ida and J. Fleuriot, Eds., Proc. 9th Int. Workshop on Automated Deduction in Geometry (ADG 2012), Edinburgh, September, 2012, University of Edinburgh, Informatics Research Report,

Immanuel Kant's Critique of Pure Reason (1781)
has relevant ideas and questions, but he lacked our present understanding of information processing (which is still too limited)

Imre Lakatos, Proofs and Refutations,
Cambridge University Press, 1976,

John McCarthy and Patrick J. Hayes, 1969, "Some philosophical problems from the standpoint of AI", Machine Intelligence 4, Eds. B. Meltzer and D. Michie, pp. 463--502, Edinburgh University Press,

Kenneth Manders (1998) The Euclidean Diagram, reprinted 2008 in The Philosophy of Mathematical Practice, Ed. Paolo Mancosu, OUP, pp. 80--133,

Kenneth Manders (2008) "Diagram-Based Geometric Practice", In Paolo Mancosu (ed.), The Philosophy of Mathematical Practice. OUP, pp.65--79

Noboru Matsuda and Kurt Vanlehn (2004), GRAMY: A Geometry Theorem Prover Capable of Construction, Journal of Automated Reasoning Vol 32 (3--33) Kluwer Academic Publishers. Netherlands.

David Mumford, 2016, Grammar isn't merely part of language, Online Blog,

Tuck Newport, Brains and Computers: Amino Acids versus Transistors,
2015, Kindle,
Discusses implications of von Neumann 1958,

Jean Piaget's last two (closely related) books, written with collaborators, are relevant, though I don't think he had good explanatory theories:

Possibility and Necessity
Vol 1. The role of possibility in cognitive development (1981)
Vol 2. The role of necessity in cognitive development (1983)
University of Minnesota Press, Tr. by Helga Feider from French in 1987

(Like Kant, Piaget had deep observations but lacked an understanding of information processing mechanisms, required for explanatory theories.)

Gualtiero Piccinini, 2003, Alan Turing and the Mathematical Objection, Minds and Machines, Feb 2003, Kluwer Academic,

L. J. Rips, A. Bloomfield and J. Asmuth, 2008, From Numerical Concepts to Concepts of Number, The Behavioral and Brain Sciences, Vol 31, no 6, pp. 623--642,

Erwin Schrödinger (1944) What is life? CUP, Cambridge,
I have an annotated version of part of this book here

Dana Scott, 2014, Geometry without points. (Video lecture, 23 June 2014, University of Edinburgh)

Jeremy Shipley, Frege on the Foundation of Geometry in Intuition, Journal for the History of Analytical Philosophy, Vol 3, No 6, pp. 1--23,

Sloman, A. (1962). Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth (DPhil Thesis), Oxford University. (Transcribed version online.)

Aaron Sloman, 1965, "Necessary", "A Priori" and "Analytic", Analysis, Vol 26, No 1, pp. 12--16.

A. Sloman, 1971, "Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence", in Proc 2nd IJCAI, pp. 209--226, London. William Kaufmann. Reprinted in Artificial Intelligence, Vol 2, 3-4, pp. 209--225, 1971.
A slightly expanded version was published as chapter 7 of Sloman 1978, available here.

A. Sloman, 1978 The Computer Revolution in Philosophy,
Harvester Press (and Humanities Press), Hassocks, Sussex.
Free, partly revised, edition online:

A. Sloman, (1978b). What About Their Internal Languages? Commentary on three articles by Premack, D., Woodruff, G., by Griffin, D.R., and by Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S. in BBS Journal 1978, 1 (4). Behavioral and Brain Sciences, 1(4), 515.

Aaron Sloman (2012-...), The Meta-Morphogenesis (Self-Informing Universe) Project (begun 2012, with several progress reports, but still work in progress).

Aaron Sloman, 2015-18, Some (possibly) new considerations regarding impossible objects, (Their significance for mathematical cognition, current serious limitations of AI vision systems, and philosophy of mind, i.e. contents of consciousness), Online research presentation,

Aaron Sloman (with help from Jackie Chappell), 2017-8, The Meta-Configured Genome, (online research paper)

A. Sloman, 2018a, A Super-Turing (Multi) Membrane Machine for Geometers Part 1
(Also for toddlers, and other intelligent animals)
PART 1: Philosophical and biological background

A. Sloman, 2018b A Super-Turing (Multi) Membrane Machine for Geometers Part 2
(Also for toddlers, and other intelligent animals)
PART 2: Towards a specification for mechanisms

A. Sloman, 2018c. Key Aspects of Immanuel Kant's Philosophy of Mathematics Ignored by most psychologists and neuroscientists studying mathematical competences. (Online discussion note, December 2018, derived from ideas in Sloman(1962).)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kant-maths.html (also pdf)

Wikipedia contributors, Tarski's axioms for geometry, Wikipedia, The Free Encyclopedia,
[Accessed 6-November-2018]

Trettenbrein, Patrick C., 2016, The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?, Frontiers in Systems Neuroscience, Vol 10, Article 88,

A. M. Turing, (1950) Computing machinery and intelligence,
Mind, 59, pp. 433--460, 1950,
(reprinted in many collections, e.g. E.A. Feigenbaum and J. Feldman (eds)
Computers and Thought McGraw-Hill, New York, 1963, 11--35),
WARNING: some of the online and published copies of this paper have errors,
including claiming that computers will have 109 rather than 10^9 bits
of memory. Anyone who blindly copies that error cannot be trusted as a commentator.

A. M. Turing, (1952), 'The Chemical Basis Of Morphogenesis', in
Phil. Trans. R. Soc. London B 237, pp. 37--72.
(Also reprinted (with commentaries) in S. B. Cooper and J. van Leeuwen, Eds (2013).)

A useful summary of Turing's 1952 paper for non-mathematicians is:
Philip Ball, 2015, Forging patterns and making waves from biology to geology: a commentary on Turing (1952) `The chemical basis of morphogenesis', Royal Society Philosophical Transactions B,

John von Neumann, 1958, The Computer and the Brain (Silliman Memorial Lectures), Yale University Press; 3rd Edition, with Foreword by Ray Kurzweil. Originally published 1958.

Wikipedia contributors, 2018, Mathematics of paper folding Wikipedia, The Free Encyclopedia,

Alastair Wilson, 2017, Metaphysical Causation, Noûs,


Updates
Originally installed: 11 Dec 2018, based on a different paper installed early October 2018.

Updated: 29 Jan 2019; 8 Nov 2020

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham