Shapiro summarises the self-evidence claim (without endorsing it):
"We know the axioms, individually, to be truths about their subject matter, and this knowledge does not rely on anything else."
From our viewpoint, the knowledge in question (e.g. knowledge about the nature and properties of spatial structures, numbers, arithmetical operations, etc.) depends on mechanisms that were originally products of biological evolution, though humans may one day be able to design and implement alternative discovery mechanisms.
I claim that some animals, including humans of various ages, and other intelligent species, are able to discover facts about necessary connections or incompatibilities between various features of structures and processes, and to use those discoveries in solving practical problems. For most such species this may happen without individual members being aware of what they are doing or why it works: they lack appropriate meta-cognitive mechanisms. The same is true of young children. It is possible that the mechanisms do not develop in the same way in all adult humans because of genetic or environmental differences.
A subset of those animals can also discover what they are doing, teach others to do it and discuss limitations and errors that sometimes occur in the process. The metacognitive resources required for the latter presumably evolved later, and develop later in humans. It is also likely that those metacognitive mechanisms have several distinct layers some of which depend on products of evolution (species learning), while others are products of individual development.
Instead of specific competences, evolution often produces meta-competences that allow individuals to develop specific competences tailored to different environments. The most obvious examples are products of evolution that make it possible for humans to develop linguistic competences: the genome does not provide knowledge of particular spoken words, grammatical constructs, and other features that vary between human languages. So evolution produced something very abstract, but very rich in generative power: a collection of mechanisms providing a developmental framework that allows an enormous variety of different languages, differing at many different levels, to be created in individuals or groups of individuals. I suggest that some of the products of evolution that make possible human mathematical discoveries are similarly abstract -- with deep generative powers that go beyond the mathematical knowledge acquired in any individual human, or any human research community.
What mechanisms make those discoveries possible is far from obvious. Describing some mathematical discoveries as "self-evident" in effect rejects the requirement to explain how they are discovered and used, and why the ability to discover and apply those truths can be very valuable (Shapiro, 2009).
Different kinds of justification
Justification 1: how we know it's true
A difference between my viewpoint and the various viewpoints discussed by Shapiro is that the mathematicians and philosophers he discusses seek some way of justifying mathematical claims by mathematical reasoning which, according to some philosophers and mathematicians, may ultimately rest on introspectable features of mathematical discovery and reasoning processes. Usually "justify" in this sort of context means something like "establish as true" or "provide reasons for accepting as true".
Justification 2: how we know it is used (and used successfully)
In contrast, the biology-informed and AI-informed viewpoint I am trying to develop seeks specifications for designs for working mathematical minds, able to replicate all the main advances in various branches of mathematics, possibly working in collaboration with other mathematicians who raise challenges and make suggestions, and also interacting with an environment that in principle can produce counter-examples to some mistaken forms of reasoning (e.g. discovering non-Euclidean geometries by examining examples of spherical and toroidal shapes).
Such a design specification can be evaluated in two different ways: (a) by checking whether it is the design that is actually in use, and (b) by explaining why it is a good design -- i.e. why it works.
One test for such a design specification of type (a) is whether evidence can be found for it in human (or non-human) developmental processes and resulting brain mechanisms, along with theoretical analysis establishing the generative potential of such mechanisms.
Another test would be to use the proposed evolved design specification in the construction of robots of various sorts able to develop in ways that mirror empirically discovered developmental trajectories in humans and other intelligent species.
It is very likely that there is not a single such design produced (discovered) by evolution at a particular stage in our evolutionary history, but a wide variety of such designs found in both humans and other species that require various forms of intelligent information processing -- e.g. the mechanisms in fly brains that make them so successful at avoiding being hit by fly-swatters -- unlike other flying insects.
I do not yet have an implementable theory of what human brains (or fly brains) do, let alone one that can be demonstrated to be actually in use, but there is one important difference between the geometric cases and the validity of logical inferences in classical propositional calculus: this sort of logical inference involves a discrete set of possibilities (combinations of P true or false with Q true or false) that can be exhaustively analysed to ensure that it is impossible for the premisses to be true while the conclusion is false, whereas geometrical/topological reasoning usually starts with a continuum of possibilities within which partitions can be discovered that split the possibilities into distinct subsets.
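The exhaustive analysis of the discrete possibilities can be sketched in a few lines of code, taking modus ponens (from P and "P implies Q", infer Q) as a representative example of such an inference; the choice of this particular inference is mine, for illustration:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Exhaustively enumerate every combination of P true/false with Q
# true/false. The inference "P, P implies Q, therefore Q" is valid
# iff no combination makes both premisses true and the conclusion false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q) and not q
]
print(counterexamples)  # -> []  (no counterexample: the inference is valid)
```

Because the space of cases is finite (four combinations here), the check terminates and settles validity conclusively; no analogous finite enumeration is directly available for the continuum of geometrical possibilities.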
For example, there are indefinitely many ways in which a vertex of a triangle can move while the opposite side remains fixed, and all those motions will change properties of the triangle, including its shape, its area, and relationships between its parts. The ability to think about such changes and to partition them into sub-cases can be related to Gibson's ideas about affordances (1979), though we need to generalise Gibson's ideas to accommodate these cases.
Most, if not all, of Gibson's examples of affordances are concerned with opportunities for action by the perceiver. In some cases detected features or processes in the environment (e.g. perceived texture expansion when moving towards a textured surface, or reflex triggers such as rapid nearby motion) automatically produce a response, such as a blinking reflex, or a change of motion, e.g. swerving to avoid an obstacle or decelerating to achieve gentle contact.
In contrast, the geometrical cases involve opportunities for spatial relationships between objects such as points, lines, circles and polygons to change. Sometimes the sets of possible changes can be usefully discretised. For example, the motion of the vertex along the specified line can be split into two cases: moving up (away from the opposite side of the triangle) and moving down (toward it). Such discretisation often allows proof of some general fact (or theorem) by analysing cases.
Another discrete transition (such as the transition that occurs when a vertex moving up or down the line through the vertex passes through the original vertex) allows a proof to consider only the finitely many alternatives separated by that transition. Replicating that on a computer would require development of virtual machinery supporting abilities to discover such transitions and recognise the invariants that they separate (Sloman, 2013a). In a separate document I investigate what I've provisionally called a "Super-Turing membrane machine" that can make such discoveries.
I am not claiming that this is the only (non-arithmetical) way to understand why the top angle of a triangle must get smaller as the top vertex moves further up the line. Another is to consider what happens if the two angles at the top remain unchanged while the vertex moves.
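The claim that the top angle must shrink as the vertex moves up can at least be illustrated numerically (though a numeric check is, of course, not a proof of the geometrical necessity). A minimal sketch, assuming a triangle with fixed base from (-1, 0) to (1, 0) and apex moving up the vertical axis (coordinates chosen by me for illustration):

```python
import math

def apex_angle(h):
    """Angle (in degrees) at the apex (0, h) of a triangle whose
    fixed base runs from (-1, 0) to (1, 0)."""
    # By symmetry, the apex angle is twice the angle between the
    # vertical axis and the line from the apex to (1, 0),
    # and tan of that half-angle is 1/h.
    return math.degrees(2 * math.atan(1 / h))

print(apex_angle(1))  # -> 90.0: apex at height 1 gives a right angle

# As the apex moves further from the base, the angle shrinks
# monotonically towards zero:
angles = [apex_angle(h) for h in (0.5, 1, 2, 4, 8)]
print(angles)
```

The monotone decrease shown by the sample heights reflects the continuum of cases a human reasoner grasps all at once when seeing that moving the vertex away from the base must narrow the angle.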
After I talked about this example at the PTAI conference in Leeds in Nov 2017, Diana Sofronieva, a philosophy student with unusually deep experience of Euclidean geometry, pointed out to me that the problem had complexities I had not noticed, and was connected with Apollonius' problem. For more on that, see the separate discussion. (Work in progress.)
It is an interesting feature of Euclidean geometry that many theorems can be proved in a variety of different ways, all, or most, of them essentially visual, or at least spatial (e.g. in an amodal form of representation of space). For example, 118 proofs of Pythagoras' theorem are presented in [Pythag].
That is, in part, evidence of the power of human visual/spatial reasoning systems to discern mathematical properties and relationships of geometrical structures and processes, including impossibilities, as illustrated in several online documents on this web site.
Many (most normal?) humans clearly have (ill-understood) mechanisms for exhaustively examining an infinite, smoothly varying collection of cases, sometimes including a few discrete transitions (like the transition from moving a vertex away from the base of a triangle to moving it in the opposite direction, discussed above).
How could evolution have produced such investigative competences? How do brains make them possible? How do they develop between birth (when they are clearly lacking) and later stages of development? Can we replicate the required mechanisms and epigenetic processes on computers, using epigenetic mechanisms with properties sketched in Sloman & Chappell (2017)?
As far as I know there is nothing known to neuroscience that explains those abilities, and nothing in AI, so far, that simulates them. I suspect that until we know how to build machines that are capable of supporting such reasoning we shall not be able to build robots with the same kinds of intelligence in dealing with spatial structures and processes as humans and many other intelligent species have.
In a separate document I have begun to specify some of the requirements for a machine that can make the sorts of discoveries that led up to Euclid's Elements. That is still very much "work in progress". The mechanisms proposed may show how an intelligent machine can discover that certain structures allow only a small set of possible features of a certain type. To describe such mechanisms as producing "self-evidence" seems to me to be misguided.
J. J. Gibson, The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA, 1979.
Stewart Shapiro, 2009, "We hold these truths to be self-evident: But what do we mean by that?", The Review of Symbolic Logic, Vol. 2, No. 1.
A. Sloman, 2013a, "Virtual Machine Functionalism (The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness)", Research note, School of Computer Science, The University of Birmingham.
Aaron Sloman (in collaboration with Jackie Chappell), 2017, "The Meta-Configured Genome".