(DRAFT: Liable to change)
Any theory of consciousness that does not include and explain
ancient forms of mathematical consciousness is seriously deficient.
29 Apr 2018 Parts of a paper on deforming triangles have been moved into this paper.
A partial index of discussion notes in this directory is in
This is part of the Turing-inspired Meta-Morphogenesis project
Which is part of the Birmingham Cogaff (Cognition and Affect) project
Being a "Blind Watchmaker" in Richard Dawkins' sense is a side effect of this.
If AI researchers wish to produce intelligent organisms they will need to understand the deep, pervasive, and multi-faceted roles of mathematics in the production of all organisms, in addition to many and varied uses of mathematical mechanisms and competences in the information processing of individual organisms.
A consequence is that many forms of consciousness involve deep mathematical (e.g. topological) competences. The ideas of theorists like Immanuel Kant, von Helmholtz, Jean Piaget, Richard Gregory, Max Clowes, James Gibson, David Marr and others, all contribute fragments towards a deep theory of consciousness. And all have errors or omissions.
Any theory of consciousness that says nothing about mathematical consciousness, e.g. the forms of consciousness involved in ancient mathematical discoveries by Archimedes, Euclid, Zeno and others (including pre-verbal human toddlers), must be an incomplete, and generally also incorrect, theory of consciousness.
Biological evolution (The blind mathematician) made many such mathematical discoveries and put them to good use, long before any individual organism was aware of them.
More sophisticated examples used abstractions with parameters, e.g. a type of organism using an epigenetic control mechanism whose parameters change as the size, strength, and speed of the organism change.
Evolution made a vast number of such mathematical discoveries used in control of physical/chemical growth and development, and in particular forms of sensing and action control.
It also made meta-mathematical discoveries leading to mechanisms that allowed individual organisms to make mathematical discoveries, e.g. about how to derive information about the environment from sensory-motor data, or how to control actions to optimise speed, or accuracy, or other features. For example, using structures and processes in the optic array to infer structures and relationships of perceived objects.
Many animals, including pre-verbal human toddlers, can do that sort of thing without knowing that they are doing it or how and why what they do works.
Only much later in our evolutionary history could individuals have begun making mathematical discoveries that they were aware of making and using, with the ability to try to understand how they worked, and in some cases (much later?) the ability to communicate them to others and debate the merits of alternative modes of reasoning.
Compare: Stewart Shapiro, 2009 We hold these truths to be self-evident: But what do we mean by that? The Review of Symbolic Logic, Vol. 2, No. 1
It may appear that I am using 'self-evidence' as a type of justification. I
don't! I am more concerned with explanatory mechanisms than with justifications.
For a short discussion of 'self-evidence' and how it differs
from the notion of non-empirical discovery of necessary truths see:
Extreme (and wrong) answers refer to social conventions, aesthetic/moral decisions, pragmatic claims about usefulness, etc. ....
"Mathematical reasoning may be regarded rather schematically as the exercise of a combination of two faculties, which we may call intuition and ingenuity. The activity of the intuition consists in making spontaneous judgments which are not the result of conscious trains of reasoning. These judgments are often but by no means invariably correct. . . . The exercise of ingenuity in mathematics consists in aiding the intuition through suitable arrangements of propositions, and perhaps geometrical figures or drawings."
This suggests he was moving toward ideas something like the ideas presented here. But that assumes a connection between his thinking in the late 1930s and his thinking around 1950.
There is a partial form of mathematical discovery, that involves noticing a regularity without understanding why that regularity exists and cannot be violated. Pat Hayes once told me he had encountered a conference receptionist who liked to keep all the unclaimed name cards in a rectangular array. However she had discovered that sometimes she could not do it, which she found frustrating and blamed it on her own lack of intelligence. She had unwittingly discovered empirically that some numbers are prime, but without understanding why some are prime and some not, or what the mathematical implications are. http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html#primes
A child with mathematical talents may discover that some collections of cards of the same size and shape can be rearranged on a flat surface to form a rectangular array of rows and columns, with 2 or more rows and 2 or more columns and later discover that some collections of cards cannot, although every collection can be arranged in a single row or column.
However, there is a difference between merely noticing a regularity or lack of regularity (like the receptionist) and understanding why the regularity does or does not exist, in this case understanding why not all numbers are prime. This follows obviously from the fact that for any two numbers N1 and N2 (possibly the same number) a regular array of objects can have N1 columns and N2 rows. Equivalently, it is possible to form a pile of N1 objects and then make copies of that pile until there are N2 piles.
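The receptionist's discovery can be restated computationally: a collection of N cards can be arranged as an array with at least 2 rows and 2 columns exactly when N is composite. The following is a minimal illustrative Python sketch (the function name is my own choice, not from any cited work); note that running it only re-confirms the empirical regularity, it does not by itself supply the understanding of *why* primes resist arrangement, which is the point being made above.

```python
def can_form_rectangle(n):
    """True if n cards can be laid out as a grid with at least
    2 rows and at least 2 columns, i.e. n = r * c with r, c >= 2."""
    return any(n % r == 0 for r in range(2, int(n ** 0.5) + 1))

# The counts that frustrated the receptionist: 1 and the primes.
awkward = [n for n in range(1, 20) if not can_form_rectangle(n)]
print(awkward)
```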
A mathematical mind can grasp not merely those facts but also that the sizes of the piles or arrays that are possible are not constrained by what size pile of cards he or she or anyone else can handle, or whether they could fit on this planet, or within this solar system.
But understanding how such mathematical minds are possible will involve understanding how various mathematical exploration and discovery processes in evolution are possible.
Whether such minds can exist in this universe depends on mathematical features of the physical universe: features that make possible both the required evolutionary processes and the physical and chemical phenomena on which the required brains depend.
I suspect that because most physicists do not attend to such questions about how physics makes biology possible (although Schroedinger did in 1944), most work on fundamental physics ignores relevant constraints on the explanatory power required of physical reality.
For example, it could turn out that the vast networks of numerical relationships between numerical values that characterise modern physics (e.g. as presented by Max Tegmark, 2014 Our mathematical universe, my quest for the ultimate nature of reality, among others) have deep structural gaps that physicists will not notice until they try to explain more of the fine details of life, including development, reproduction, evolution, and ever increasing sophistication of information processing of organisms, especially mathematically minded organisms.
There are two different views of ancient mathematical reasoning using diagrams and words. One regards the diagrams as mere useful aids (dispensable props?) supporting a kind of diagram-free thinking using logical, arithmetical and algebraic forms of representation identified and studied in great depth centuries after the original geometrical discoveries were made.
The other, older, view (found in Kant 1781, for example) developed with some new details here, is that diagrammatic forms of reasoning play a crucial role in some ancient and modern forms of reasoning and they are as much parts of mathematical reasoning as numerical, logical and algebraic reasoning developed in the last two centuries. In part that is because of the requirement for mathematical discovery to produce information about what is possible, impossible, and necessarily the case, and not merely information about what actually has or has not occurred and the relative frequencies of various occurrences (i.e. statistical information or derived probabilities).
Most philosophers of mind and theorists who write about consciousness seem to ignore mathematical consciousness. This is probably because they don't realise (as Kant did) that mathematical discoveries since ancient times are deeply connected with perception of spatial structures and processes and abilities to reason about what to do and what can and cannot be done in a physical environment. So mathematical competences are part of the intelligence of pre-verbal humans and many other species including corvids, squirrels, elephants, octopuses, orangutans, ... and many more. Any theory of consciousness that ignores all this is seriously deficient.
For an incomplete discussion of "conscious" as a polymorphous concept see the section on consciousness here:
But for domains like Euclidean geometry and its topological basis, variations in size, shape, and relationships are continuous, not discrete, e.g. variations in curvature of a spiral, or variations in shape of a polygon as one vertex moves while the rest are fixed, or variations in relationships as one geometric structure rotates relative to another, or distances between parts of structures vary. The sets of possibilities generated by those continuous variations are inherently unbounded and therefore cannot be exhaustively examined in order to determine that within some variety of cases a particular relationship will always hold (necessity) or that it cannot hold (impossibility).
That means that reasoning machinery in such domains needs to be able to find discrete subdivisions between subsets of continuously varying classes of cases. The ability to do that sort of thing seems to have been used repeatedly in the ancient diagrammatic modes of reasoning discovered and or recorded by Archimedes, Euclid, Zeno, Pythagoras and others. They were also part of my own experience learning (and enjoying) Euclidean geometry at school about 60 years ago.
My 1962 DPhil thesis was an attempt to expound and defend the ideas about such ancient mathematical modes of reasoning that I encountered in Kant's Critique of Pure Reason (1781) as a graduate student, ideas which I hoped could be given a deeper, more precise justification using AI techniques after Max Clowes introduced me to AI in 1969. (Compare Sloman 1971, and 1978 Chapter 7.)
For many years I suspected that the required forms of reasoning, and the forms of spatial perception on which they are based, could be implemented in digital computers using suitably designed virtual machinery.
See this discussion of Virtual machine functionalism (not understood by most philosophers who discuss computational models of mind):

In Sloman 1978 I argued that digital computers must suffice for implementing human-like minds, since any continuous process can be simulated as accurately as required by a discrete process. At that stage I think I was unaware of chaotic systems, in which arbitrarily small differences can have arbitrarily large consequences very rapidly.
Another important fact is that there are ways of thinking about continuous structures and processes that yield deep insights. For example, one of the assumptions made implicitly by Euclid, but not formulated as an axiom, was an assumption of "completeness": if C is a closed continuous curve in a plane and L a straight line segment in the same plane, and part of L is in the interior of C and part not, then there must be a point P that is common to C and L. This might not be true in all possible spaces. For example, if straight lines are approximated in a rectangular grid, there could be a pair of lines that cross over each other without sharing a point of intersection: if one line occupies a diagonal collection of points, the other line could cross it without the two lines sharing a common point. But in Euclidean geometry that is not possible: if two lines in the same plane cross over each other then there must be a point common to both lines, in that plane.
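The grid counter-example can be made concrete. In this toy sketch (my own illustration, not from the cited literature) two "straight lines" are represented as sets of cells on a grid with an even number of cells per side; their continuous counterparts cross at the centre of the grid, yet the two discrete lines share no cell, which could not happen with genuine Euclidean lines:

```python
n = 6  # an even grid size, so the two diagonals have no common cell
line_a = {(i, i) for i in range(n)}          # one "line" of grid cells
line_b = {(i, n - 1 - i) for i in range(n)}  # a "line" crossing the first
print(line_a & line_b)  # empty: they cross over yet share no point
```

With an odd grid size (e.g. n = 5) the two diagonals do share the central cell, so the failure of "completeness" depends on the fine structure of the discrete space.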
All of this lends support to the conjecture that there are forms of information processing, and especially mathematical discovery, that have not yet been fully understood and may not be easy (and perhaps not even possible) to implement on digital computers.
This paper is an early attempt to specify a (speculative, and still incompletely defined) alternative to digital computers (e.g. Turing machines), that could provide a basis for many hard to explain aspects of animal perception, reasoning, and problem solving, and which could also explain how the deep discoveries made by ancient mathematicians were possible.
I have been calling the new type of machine (provisionally) the Super-Turing Membrane machine, or Super-Turing Geometry machine. There have been previous proposals for Super-Turing computing machines, but not, as far as I know, in the context of producing a robot mathematician able to make discoveries in Euclidean geometry.
It is sometimes forgotten that the axioms of Euclidean geometry were not arbitrary assumptions assembled to specify a formal system that can be explored using logical inference methods.
Those axioms were all important discoveries, which seem to require mechanisms of reasoning that we don't yet understand. I don't think current neuroscience can explain them and they are not yet included in AI reasoning systems. So a suggested role for the new type of machine is as part of an explanation of the early forms of reasoning and discovery that led to Euclidean geometry, long before there was a logic-based specification using cartesian coordinate representations of geometrical structures and processes.
The examples below are merely illustrative of the possible roles for the previously unnoticed type of machine, for which I cannot yet give a precise and detailed specification. These are still early, tentative, explorations.
Dana Scott, 2014, Geometry without points. (Video lecture, 23 June 2014,University of Edinburgh)
He assembles and develops some old (pre-20th century) ideas (e.g. proposed by Whitehead and others) concerning the possibility of basing Euclidean geometry on a form of topology that does not assume the existence of points, lines, and surfaces, but constructs them from more basic notions: regions in a point-free topology.
Although it may be possible to produce a formal presentation of the ideas using standard logical inferences from axioms, his presentation clearly depends on his ability, and the ability of his audience, to take in non-logical, spatial forms of reasoning, supported by diagrams, hand motions, and verbal descriptions of spatial transformations.
Perhaps the ancient geometers could have discovered that mode of presenting geometry, but they did not. It seems to be a fundamental feature of geometry (the study of spatial structures and processes) that there is no single correct basis for it: once the domain is understood we can find different "starting subsets" from which everything else is derived.
And sometimes surprises turn up, like the discovery of the Neusis construction that extends Euclidean geometry in such a way that trisection of an arbitrary angle becomes easy, whereas in pure Euclidean geometry it is provably impossible. For more on that see
The non-logical (but not illogical!) forms of representation, that seem to be involved in the original discovery processes and much current human reasoning about spatial structures and processes, seem to have used what I called "analogical" rather than "Fregean" forms of representation in Sloman 1971 and in chapter 7 of Sloman 1978 (not to be confused with representations that make use only of isomorphism).
Further examples of this essentially spatial rather than logical kind of
reasoning and discovery
are presented in a collection of web pages on this site,
including, most recently (Nov 2017, onwards):
and these older explorations (some of which are available in both html and pdf formats):
Sloman (2007-14) (explaining why AI vision and action control mechanisms could benefit from such mechanisms),
and this video presentation (for a workshop at IJCAI 2017, August 2017):
Shapes in this space can change their location, their size, their orientation, and relative speeds of motion. Moreover, two non-overlapping shapes can move so that they overlap, for example causing new shapes to be formed through intersections of structures. Two initially separate lines can move in such a way as to form a structure with a point of intersection and three or four branches from that point. If the point of intersection moves, then relative lengths of parts of the structure will change: e.g. one getting smaller and the other larger.
The structures and processes are not restricted to points, lines, circles and polygons: arbitrary blobs, including blobs that move and change their shape while they move can occur in the space. If two of them move they can pass through each other, producing continuously changing boundary points and regions of overlap and non-overlap.
Groups of items, e.g. regular or irregular arrays of blobs and other shapes can exist and move producing new shapes and processes when they interact. How they move should correspond to various physical situations: e.g. the visible silhouette of a complex 3-D object may go through complex changes as the orientation of the object to the line of sight changes. Compare hand shadow art, in which interpretations of shadows of hands vary enormously: https://www.youtube.com/watch?v=4drz7pTt0gw.
The visual space in which percepts move is NOT required to have a metric -- partial orderings suffice for many biological functions, and many other purposes, as illustrated in Sloman (2007-14), although very precise metrics are required for some activities, e.g. playing darts, trapeze displays, and death-defying leaps between tree branches used by spider monkeys, squirrels and others. That paper shows why the explanatory relevance of the ideas presented here extends far beyond mathematical competences.
The mechanisms seem to be required for many aspects of human and non-human intelligence, including, for example, the ability of a crawling baby to work out how to move to shut a door with his legs after passing through the doorway, as depicted in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html#door-closing and also in much ancient mathematical reasoning, along with many applications in painting, architecture, engineering design, etc.
The (still unknown) conjectured implementation machine is used both for many practical activities in which physical actions are considered, selected, intended, performed and controlled, and also used (at a much later stage in biological evolution, followed by cultural evolution) in ancient mathematical discoveries.
The underlying machine seems to have some features partly analogous to data-structures used in some computer-based sketch-pads, either using materials supporting arbitrary resolution (most unlikely!), or else put to uses for which the resolution of images is irrelevant, with additional meta-cognitive apparatus as suggested below.
For example, if you look at a scene, e.g. a long straight road, and imagine how its appearance will change as your viewpoint changes, e.g. if you move vertically upwards while still looking along the road, the bounding edges of the appearance of the road may increase in length and the perceived angle at which they meet in the distance will vary in ways that can be discovered by experimenting with examples. However, there is no need to experiment with actual examples, since this is a type of correspondence between two spatial changes that can be discovered merely by thinking about the process.

E.g. suppose you are standing on a straight road with parallel edges, looking along the road at some fixed location on the road. If the road lies in a planar surface and your viewpoint starts moving upward perpendicular to the surface, while you continue to focus on the same distant part of the road, you may, with a little practice, be able to work out how the appearance of the left and right edges of the road will change as you move up. This imaginative ability is probably not innate, but neither is it based entirely on experience of moving upward from a position on a road. I suggest that most people can use their understanding of space to work out how the appearance will change, just as you can use your understanding of arithmetic to work out the product of two numbers that you have never previously multiplied.
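The road example can be checked against elementary perspective geometry, though (as argued below) brains almost certainly do not derive the answer this way. In a simple pinhole model (my own sketch, with parameter names of my own choosing), the images of the two road edges converge at the vanishing point with an apex angle of 2*arctan(w/2h), for road width w and eye height h, so the apex angle shrinks steadily as the viewpoint rises:

```python
import math

def apex_angle_deg(road_width, eye_height):
    """Apex angle (degrees) at the vanishing point between the images of
    the two road edges, for a pinhole eye at the given height above a
    straight road on a planar surface."""
    return 2 * math.degrees(math.atan(road_width / (2 * eye_height)))

# A 7 m wide road seen from increasing heights: the angle narrows.
for h in (1.6, 5.0, 20.0):
    print(h, round(apex_angle_deg(7.0, h), 1))
```

The trigonometric formula captures the correspondence between the two spatial changes, but the claim in the text is precisely that humans can discover the qualitative direction of the change without any such numerical machinery.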
At some stage this project should include a survey of a wide range of special cases of such relationships between view location, view direction, and orientations of surfaces that we all take for granted in everyday life, and which current robots might have to learn through collecting observations as they move around in space, whereas humans can discover such correlations merely by thinking about them (although abilities to do this may vary across individuals, and with age and development within individuals).
That ability to make such discoveries merely by doing spatial reasoning (possibly with your eyes shut) requires use of sophisticated mathematical mechanisms in brains whose operation could be mimicked on computers by using cartesian coordinate representations of space and performing algebraic and trigonometric transformations of spatial information, although it is very unlikely that that is how brains derive the information about effects of change. On the other hand, as far as I know such abilities cannot yet be explained by known physical features of brains.
In particular, these competences use neither statistical correlations between discrete categories (which would not provide the required kind of mathematical necessity), nor exact functional relationships between metric spaces which neural mechanisms don't seem capable of computing.
A machine used to implement these capabilities will need fairly rich non-metrical topological structures and reasoning powers. E.g. if we think about a plane containing a straight (i.e. symmetrical) line passing through the interior of a closed convex curve with no straight portions (e.g. a circle or an ellipse), we can see that there must be exactly two points where the line crosses the curve, i.e. connects the interior and the exterior of the curve. (If there were more than two, the curve could not be convex. Why?) If the curve is fixed, the line can be moved continuously "sideways" in the shared plane, relative to the curve, until it no longer passes through the interior of the curve, though it will remain co-planar with the curve. As the line moves, the points of intersection with the curve will move, eventually merge, and then no longer exist.
What can you infer about how the distance between the points of intersection changes while the line moves, given that the curve is closed and convex? How do you do it without knowing the exact shape of the curve?
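One answer for a special case can be checked numerically. Because the curve is convex, the chord cut off by the moving line varies unimodally with the line's offset (it can grow and then shrink, but never alternate); the sketch below (my own toy illustration, using an ellipse with assumed dimensions and starting from the widest chord) shows the distance between the two intersection points only decreasing until the points merge and vanish:

```python
import math

def chord_length(d, a=3.0, b=1.5):
    """Length of the intersection of the ellipse x^2/a^2 + y^2/b^2 = 1
    with the vertical line x = d (zero once the line leaves the interior)."""
    if abs(d) >= a:
        return 0.0
    return 2 * b * math.sqrt(1 - (d / a) ** 2)

# Sweep the line sideways from the widest chord outwards.
offsets = [i * 0.1 for i in range(35)]
lengths = [chord_length(d) for d in offsets]
assert all(l1 >= l2 for l1, l2 in zip(lengths, lengths[1:]))
```

Of course this numerical check presupposes an exact equation for the curve; the human reasoner reaches the qualitative conclusion for *any* closed convex curve, which is the harder phenomenon to explain.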
That's a very special case, but I think ordinary visual (and tactile?) perception involves a vast collection of cases with all sorts of partial orderings that can be used for intelligent control of behaviour using abilities to reason about how as one thing increases another must also increase, or must decrease, or may alternate between increasing and decreasing, etc. (E.g. think of a planar trajectory curving towards obstacle A and away from obstacle B then changing curvature to avoid a collision with A). (See also the "changing affordances" document, mentioned above.)
These mechanisms will be connected to spatial perception mechanisms, e.g. visual, tactile, haptic, auditory, and vestibular mechanisms, but the connections will be complex and indirect, often directly linked to uses of perception in controlling action, as opposed to merely contemplating spatial structures and processes.
Some of my ideas about this are based on ideas in Arnold Trehub's book Trehub(1991), which postulates a central, structured, multi-scale, multi-level dynamically changing store of different aspects of spatial structures and processes in the environment. Visual perception will be one of the sources. (As far as I know he did not discuss examples like mine, nor attempt to explain mathematical cognition.) On this view the primary visual cortex (V1) is best viewed as part of a rapid device for sampling information from what Gibson (1979) called "The optic array". Among other things Trehub's mechanism can explain why the blind spot is normally invisible, even though that was not Trehub's intention!
The tendency in Robotics, and AI generally, to use metrical spatial information, rather than multiple, changing, partial ordering relationships with inexact spatial measures, leads to excessive reliance on complex, but unnecessary, probabilistic reasoning, where online control using partial orderings could suffice, as suggested in Sloman(2007-14). But there are many details still to be filled in.
Further progress on requirements will require consideration of the following
Instead of the TM's discrete, linear tape, the STM has some still unknown number of overlapping stretchable movable transparent membranes on which 2-D structures can (somehow?) be projected and then slid around, stretched, and rotated and compared.
E.g. this could combine visually perceived surface structures
and the same structures perceived using haptic and tactile sensing.
The TM tape reader that can only move one step left or right and recognize one of a fixed set of discrete symbols would have to be replaced in the STM by something much more sophisticated that can discover structures and processes that result from those superimposed translation and deformation operations, and relative motions between contents of different membrane layers. (E.g. 'watching' a triangle move across a circle, with and without stretching and rotation.)
In ways that are still unknown, the machine needs to be able to detect that whereas some consequences of those transformations are contingently related to previous states, in other cases the structural relationships make the consequences inevitable: especially topological consequences such as possibility or impossibility of routes between two locations that don't cross a particular curve in the same surface.
Such a machine should also be able to use still unknown kinds of exhaustive analysis to reach conclusions about the *impossibility* of some process producing a certain kind of result.
In some organisms there would be Meta-Cognitive Layers inspecting those processes and, among other things, noting the differences between possible, impossible, contingent, and inevitable consequences of structural relationships (the "alethic" modalities that are central to mathematical discovery).
In TMs and digital computers, that sort of detection can be done by exhaustive analysis of possible modifications of certain symbols, e.g. truth tables, or chains of logical formulae matched against deduction rules; but it's very hard to see exactly how to generalise such meta-cognitive abilities to detect impossibility or necessity in the STM, where things can vary continuously in two dimensions.
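For contrast, the discrete case really is this easy: necessity of a propositional formula can be established by finite exhaustion of truth-value combinations, the very strategy that has no obvious analogue for continuously varying membrane contents. A minimal Python sketch (names are my own):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Exhaustively check a propositional formula over every one of the
    finitely many truth assignments -- a discrete proof of necessity."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# ((p -> q) and p) -> q holds in all four cases: modus ponens is necessary.
modus_ponens = lambda p, q: (not ((not p or q) and p)) or q
print(is_tautology(modus_ponens, 2))  # True
```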
Finally, the machine-table of a TM would have to be replaced by something much more complex that reacts to combinations of detected structures and processes in the STM by selecting what to do next. In addition, the biological STM would have to be linked to sensors and effectors, where some of the sensors can project new structures onto the membranes.
The general design would have to accommodate different architectures. For example, I am pretty certain that vertebrate vision would have started without stereo overlap, as in many birds, and many non-carnivorous mammals.
There would need to be something like left and right STMs and mechanisms for transferring information between them as an organism changes direction and what's visible only in one eye becomes visible in the other eye. The same mechanism could be used with greater precision as evolution pushed eyes in some organisms towards the front, producing partly overlapping projections of scenes, making new kinds of stereo vision possible.
(Unfortunately, Julesz' random dot stereograms have fooled some people into thinking that *only* that low-level "pixel based" mechanism is used for biological stereo vision, whereas humans (and I suspect many other animals) can obviously make good use of larger image structures, e.g. perceived vertical edges of large objects, in achieving stereo vision.)
I suspect evolution also eventually discovered the benefits of meta-cognitive layers, without which certain forms of intelligent reasoning (e.g. debugging failed reasoning processes) would be impossible.
The new kinds of machine table corresponding to a TM's machine table would have to be able to manipulate information about continuously varying structures, e.g. comparing two such processes and discovering how their consequences differ. Later meta-meta-...cognitive mechanisms, might add a collection of new forms of intelligence.
The central machine inspecting and changing the Super-Turing membrane cannot be discrete, insofar as it has to encode recognition both of non-discrete static differences (e.g. a narrowing gap between two lines as measured increasingly close to an intersection point), and of continuous changes in time, including in some cases comparisons of rates of change.
For example, the deforming triangle example in
involves consideration of what happens as a vertex of a triangle moves along a straight line, steadily increasing its distance from the opposite side of the triangle. The machine needs to be able to detect steady increase or decrease in length or size of angle, but does not require use of numerical measures or probabilities, or changing probabilities.
So computation of derivatives, i.e. numerical rates of change, etc., need not be relevant except in very special cases, including cases of online control of actions, discussed by Gibson. Many examples of mathematical discovery seem to arise from offline uses of these mechanisms, to reason about possible actions and consequent changes without actually performing the actions. And many such cases will make use of categorisations and partial orderings of structures and processes rather than measures.
So the machine's "brain" mechanisms, need to be able to make use of
categorisations like 'static', 'increasing', 'decreasing', 'a is closer to b
than to c' etc., including making use of partial orderings of structures and
processes (i.e. spatial and temporal orderings). Illustrations of these ideas
can be found in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html (or pdf)
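The kind of non-metrical categorisation described above can be illustrated, purely as a toy sketch with names of my own choosing, by a classifier that uses only pairwise order comparisons between successive states, never numerical rates of change:

```python
def qualitative_trend(samples):
    """Classify a sequence of comparable states using only pairwise
    order comparisons: no metrics, no rates, no probabilities."""
    ups = any(a < b for a, b in zip(samples, samples[1:]))
    downs = any(a > b for a, b in zip(samples, samples[1:]))
    if ups and downs:
        return "mixed"
    return "increasing" if ups else ("decreasing" if downs else "static")

print(qualitative_trend([3, 3, 3]))     # static
print(qualitative_trend([1, 2, 2, 5]))  # increasing
```

The point of the sketch is only that categories like 'static', 'increasing', 'decreasing' need nothing more than an ordering relation; how brains obtain and use such orderings over spatial structures is the open question.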
What kind of brain machinery makes it possible to reason that there must be such
a circle? This is a problem from ancient geometry, and there is a construction
for finding the circle that passes through A and B and meets L as a
tangent, known as Apollonius' construction (also mentioned in the next section).
As shown there, if the line L is not parallel to AB there will be two circles passing through points A and B that meet L as a tangent. One of the circles has a centre above the line AB and one has a centre below.
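The two-circle claim can be checked with coordinates, although such a coordinate computation is of course not the ancient compass-and-straightedge construction, and is not being proposed as the brain's method. In this sketch (my own, taking L as the x-axis with A and B strictly above it and AB not parallel to L) the centre must lie on the perpendicular bisector of AB, and tangency forces the radius to equal the centre's height, giving a quadratic with two solutions:

```python
import math

def tangent_circles(A, B):
    """Centres and radii of the circles through A and B tangent to the
    x-axis (A, B strictly above the axis, AB not parallel to it)."""
    (ax, ay), (bx, by) = A, B
    mx, my = (ax + bx) / 2, (ay + by) / 2      # midpoint of AB
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    nx, ny = -dy / norm, dx / norm             # unit normal to AB
    # Centre = M + t*n must satisfy dist(centre, A) = centre_y (tangency),
    # which reduces to a quadratic in t.
    u = mx - ax
    qa, qb, qc = nx * nx, 2 * (u * nx - ay * ny), u * u - 2 * ay * my + ay * ay
    disc = math.sqrt(qb * qb - 4 * qa * qc)
    sols = []
    for t in ((-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa)):
        cx, cy = mx + t * nx, my + t * ny
        sols.append(((cx, cy), cy))            # radius = height above axis
    return sols

for centre, radius in tangent_circles((0.0, 1.0), (2.0, 2.0)):
    print(centre, radius)
```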
I shall later add some thoughts here about how the need to be able to perform these spatial reasoning tasks suggests requirements for our Super-Turing machinery.
When a maze program actually finds a continuous route between its start location and a specified target location then that shows that the maze has at least one solution. But if it fails to find a continuous route that may simply be due to limitations of the search strategy used, including the possibility that it has missed a very narrow gap.
This illustrates a general point: proving that a certain type of entity is possible is, in many cases, much easier than discovering impossibility, i.e. that there cannot be any instances of that type. That's because finding any instance of the type proves possibility, whereas proving impossibility (or necessity) requires more powerful cognitive resources: i.e. some way of exhaustively specifying locations in a possibility space so that they can all be shown to satisfy or not to satisfy some condition.
This is sometimes easy for simple, discrete, possibility spaces, e.g. the space of combinations of truth-values for an expression in propositional calculus with a fixed set of boolean variables. Although the number of combinations expands exponentially with the number of variables, it is always finite, whereas the set of continuous paths between two points in a 2D or 3D space is typically infinite, unless some special constraint is specified (e.g. being straight, or being a circle with centre at a third specified point).
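A minimal illustration of such a finite exhaustive check (the function name is mine): a propositional formula over n boolean variables can be tested against all 2**n assignments, so necessity, and not merely possibility, is decidable by brute enumeration.

```python
from itertools import product

def tautology(expr, nvars):
    """Exhaustively test a boolean function over all 2**nvars truth-value
    assignments.  Because the possibility space is finite, impossibility
    of a counterexample (i.e. necessity of the formula) is established by
    complete enumeration."""
    return all(expr(*values)
               for values in product((False, True), repeat=nvars))
```

For example, contraposition, with (p -> q) written as (not p or q) and (not q -> not p) written as (q or not p), holds in all four cases, whereas (p or q) fails when both variables are false. No analogous enumeration is available for the infinite space of continuous paths.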
Note that although finding an instance conclusively proves possibility, there are branches of mathematics, engineering and science where finding an instance may be very difficult. E.g. although every mathematical proof using standard logic, algebra and arithmetic is finite, the space of possible proofs is unbounded, so finding a proof that actually exists may require a very long search. If there is no such proof the search will continue forever.
This is also true in geometry, since some simply described spatial configurations may require complex constructions, e.g. the problem stated above: given two points A and B and a line L distinct from the line AB, find a circle C such that C passes through points A and B and has L as a tangent.
For details see
A different example involves answering a question about these two figures that can be thought of as requiring consideration of an infinite collection of possibilities, without going through infinitely many steps:
If the routes are thought of as arbitrarily thin paths linking the two points, the ability to detect when the answer is "No" is much harder to explain, as it requires an ability to survey completely a potentially infinite space of possible routes. What sort of brain mechanism, or simulated brain mechanism, can provide that ability, or an equivalent ability that avoids explicit consideration of an infinite set of possibilities?
All of this is crucial to some of the uses of visual sensing (or other spatial sensing) in more or less fine-grained online control (emphasised by James Gibson), as opposed to the use of vision to categorise, predict, explain, etc. Some examples involving affordance detection going beyond Gibsonian online control are discussed in Sloman (2007-14).
For now I wish to focus mainly on the role of impossibility detection in mathematical reasoning. The difference between existence and non-existence of a route linking two blue dots without ever entering a red area is a mathematical difference. I expect most readers will not have much difficulty deciding whether such a route exists in Figure A or Figure B.
What sort of brain mechanisms can perform an exhaustive search of all possible routes from A that do not enter any red space, and discover that no such route reaches location B in one of the pictures? Does it really involve checking infinitely many possible routes starting from one of the blue dots?
How can brain mechanisms implement such an exhaustive checking process, covering an enormous, possibly infinite, variety of cases? For now I'll leave that question for readers to think about.
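One partial, discretised answer can be sketched (in Python, with my own naming; a computational surrogate for, not a model of, whatever brains do): if the figure is approximated by a finite grid, a flood fill from one blue dot provably visits every reachable cell, so failure to reach the other dot does establish impossibility at that resolution, though not for the underlying continuous figure, where a narrower gap might exist.

```python
from collections import deque

def reachable(grid, start):
    """Exhaustive breadth-first flood fill over a discretised maze.
    grid[r][c] is True for free cells, False for forbidden ('red') cells.
    The grid is finite, so the search visits every reachable free cell:
    if the target is absent from the returned set, no route exists AT
    THIS RESOLUTION.  (A finer grid could still reveal a narrow gap, so
    impossibility in the continuous figure is not thereby proved.)"""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

With a solid red column separating the two dots the target cell never appears in the result; open a single gap in the column and it does. The contrast between this finite check and whatever humans do when surveying the continuous figure is exactly the open question posed above.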
It is also not a statistical correlation found by analysing collections of data. It's a perceived feature of a process in which two things necessarily change together.
Moreover mathematical consciousness involves seeing why such relationships must hold.
I have a large, and growing, collection of examples. Many more, related to perception of possibilities and impossibilities, are collected in
and in documents linked from there.
It is not too hard to think of mechanisms that can observe such correspondences in perceived processes, e.g. using techniques from current AI vision systems. These are relatively minor extensions of mechanisms that can compare length, area, orientation, or shape differences in static structures without using numerical measurements.
What is much harder is explaining how such a Super-Turing mechanism can detect a necessary connection between two structures or processes.
The machine needs to be able to build bridges between the two detected processes that "reveal" an invariant structural relationship.
In Euclidean geometry studied and taught by human mathematicians, construction-lines often build such bridges, for example in standard proofs of the triangle sum theorem, or Pythagoras' theorem. (A taxonomy of cases is needed.)
But it is important to stress that these mechanisms are not infallible. For example, this document explains how I was at first misled by the stretched-triangle example, because I did not consider enough possible lines of motion for the triangle's vertex. http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html
The proposed machine will almost certainly generate the sorts of mistakes that Lakatos documented in his Proofs and Refutations, and much simpler mistakes that can occur in everyday mathematical reasoning.
But it must also have the ability to detect and correct such errors in at least some cases -- perhaps sometimes with the help of social interactions that expand the ideas under consideration, e.g. by merging two or more lines of thought, or sets of examples explored by different individuals.
I expect some computer scientists/AI theorists would not be happy with such imperfections: they would want a mathematical reasoning/discovery machine to be infallible.
But that's clearly not necessary for modelling/replicating human mathematical minds. Even the greatest mathematicians can make mistakes: they are not infallible.
(Incidentally this dispenses with much philosophical effort on attempting to account for infallibility, e.g. via 'self-evidence'.)
All this needs to be put into a variety of larger (scaffolding) contexts:
But there are many unknown details, including which (human and non-human) brain mechanisms can do such things and (a) how and when they evolved and (b) how and when they develop within individuals.
The conjectured virtual membrane will have various ways of acquiring "painted" structures, including the following (a partial, provisional, list):
NB: regarding the last point: I am not suggesting that evolution actually produced perfectly thin, perfectly straight, structures, or mechanisms that could create such things. Rather the mechanisms would have the ability to (implicitly or explicitly) postulate such limiting case features, represent them, and reason about their relationships, by extrapolating from the non-limiting cases. (Is this what Kant was saying in 1781?)
For example, the assumption that when two perfectly thin lines cross the intersection is a point with zero diameter would depend on an ability to extrapolate from what happens if two lines with non-zero thickness cross and then both become narrower and narrower, so that the region of overlap becomes smaller in all directions. (Yet another "theorem" lurks there!)
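The extrapolation can be made concrete with a little elementary geometry (a sketch of mine, not part of the original argument): two straight strips of widths w1 and w2 crossing at angle theta overlap in a parallelogram of area w1*w2/sin(theta), which shrinks toward zero, in every direction, as both widths do.

```python
import math

def overlap_area(w1, w2, theta):
    """Two straight strips ('thick lines') of widths w1 and w2 crossing
    at angle theta overlap in a parallelogram whose area is
    w1*w2/sin(theta).  As both widths shrink, the overlap shrinks in all
    directions, approaching the idealised zero-diameter intersection
    point of the limiting case."""
    return w1 * w2 / math.sin(theta)
```

Shrinking both widths by a factor of ten shrinks the overlap area by a factor of a hundred, for any fixed crossing angle: the quantitative counterpart of the "theorem" lurking in the previous paragraph.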
It should not be assumed that any of this produces high precision projections, though a conjectured learning process (possibly produced by evolution across many stages of increasing sophistication) may generate mechanisms that can "invent" limiting cases, e.g. perfectly thin lines, perfectly straight lines, etc. perhaps starting with simpler versions of the constructions presented by Dana Scott in Scott(2014).
I have not attempted to answer the question whether the proposed membrane mechanisms (still under-specified) require new kinds of information processing (i.e. computation in a general sense) that use physical brain mechanisms that are not implementable on digital computers, perhaps because they rely on a kind of mixture of continuity and discreteness found in many chemical processes in living organisms.
It could turn out that everything required is implementable in a suitable virtual machine implemented on a digital computer. For example, humans looking at a digital display may perceive the lines, image boundaries and motions as continuous even though they are in fact discrete. This can happen when a clearly digital moving display is viewed through out of focus lenses, or at a distance, or in dim light, etc. In that case the blurring or smoothing is produced by physical mechanisms before photons hit the retina.
But it is also possible to treat a digital display as if it were continuous, for example assigning sub-pixel coordinates to parts of visible lines, or motion trajectories. That sort of blurring or interpolation loses some information, but may sometimes make the remaining information more useful, or more tractable. It could be useful to build visual systems for robots with the ability to implement various kinds of virtual de-focusing mechanisms for internal use when reasoning about perceived structures, or controlling actions on perceived structures, e.g. moving a hand to pick up a brick.
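As a toy illustration of such a virtual de-focusing mechanism (a crude box blur of my own devising; a real robot visual system would presumably use something richer, e.g. Gaussian smoothing at multiple scales):

```python
def defocus(image, radius=1):
    """Virtual de-focusing: a box blur applied internally to a discrete
    image (a list of rows of numbers), smoothing pixel-level
    discontinuities.  Each output value is the mean of the input values
    in a square neighbourhood, clipped at the image borders.  The blur
    deliberately discards fine detail, which can make larger-scale
    structure easier to extract."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [image[nr][nc]
                    for nr in range(max(0, r - radius), min(rows, r + radius + 1))
                    for nc in range(max(0, c - radius), min(cols, c + radius + 1))]
            out[r][c] = sum(vals) / len(vals)
    return out
```

Applied to a checkerboard of 0s and 1s with radius 1, the output is a uniform field of intermediate values: the pixel-level discontinuities have been smoothed away, mimicking the out-of-focus viewing described above, but inside the machine rather than in front of the retina.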
Insofar as human retinas have concentric rings of feature detectors, with higher-resolution detectors near a central location (the fovea) and lower-resolution detectors further from the centre, the retina can be viewed as a mechanism that gives perceivers the ability to change the precision with which certain features in the optic array are registered. Such a mechanism may have other applications.
Compare the challenges to conventional thinking about brains implicit in Schrödinger(1944) and explicit in Grant(2010), Gallistel & Matzel(2012) and Trettenbrein(2016), suggesting that important aspects of natural information processing are chemical, i.e. sub-neural.
It is possible that such molecular-level forms of information processing could be important for the sorts of information processing brain functions postulated in Trehub(1991), though molecular level implementation would require significant changes to Trehub's proposed implementation of his ideas.
Turing's 1952 paper on chemistry-based morphogenesis Turing (1952) at first sight appears to be totally unconnected with his work on computation (except that he mentioned using computers to simulate some of the morphogenesis processes). But if Turing had been thinking about requirements for replicating geometrical and topological reasoning used by ancient mathematicians and learners today, then perhaps he thought, or hoped, the chemical morphogenesis ideas would be directly relevant to important forms of computation, in the general sense of information processing. In that case his ideas might link up with the ideas about sub-neural computation referenced above, which might perhaps play a role in reasoning mechanisms conjectured in Sloman(2007-14), which draws attention to kinds of perception of topology/geometry based possibilities and impossibilities that were not, as far as I know, included in the kinds of affordance that Gibson considered.
The membrane (or multi-membrane) machine needs several, perhaps a very large number, of writeable-readable-sketchable surfaces that can be used for various purposes, including perceiving motion, controlling actions, and especially considering new possibilities and impossibilities (proto-affordances).
The idea also needs to be generalised to accommodate inspectable 3D structures and processes, like nuts rotating on bolts, as discussed in another document. (Something about this may be added here later.)
The brain mechanisms to be explained are also likely to have been used by ancient mathematicians who made the amazing discoveries leading up to publication of Euclid's Elements, and later. http://www.gutenberg.org/ebooks/21076
I think there are deep connections between the abilities that made those ancient mathematical discoveries possible, and processes of perception, action control, and reasoning in many intelligent organisms, as suggested in the workshop web page mentioned above.
One consequence of the proposal is that Euclid's axioms, postulates and constructions are not arbitrarily adopted logical formulae in a system that implicitly defines the domain of Euclidean geometry. Neither are they mere empirical/statistical generalisations capable of being refuted by new observations.
Rather, as Kant suggested, they were all mathematical discoveries, using still unknown mechanisms in animal brains originally produced (discovered) by evolution with functions related to reasoning about perceived or imagined spatial structures and processes, in a space supporting smoothly varying sets of possibilities, including continuously changing shapes, sizes, orientations, curvature and relationships between structures, especially partial orderings (e.g. of size, containment, angle, curvature, etc.)
Despite all the smooth changes, the space also supports many interesting emergent discontinuities and invariants, that the ancients discovered and discussed, many of which seem to be used unwittingly(?) by other intelligent species and pre-verbal children. (Adult humans normally have additional meta-cognitive mechanisms.)
Examples used in much every-day visual perception, action control and planning are discussed in Sloman(2007-14). The same general principles are involved in learning to add and subtract numbers, though the specific types of information and information change are different. See Seely Brown et al. (1977)
The location at which the size of the angle at a vertex of a triangle peaks, when the vertex is moved along a straight line that does not pass through the base of the triangle, is another, surprisingly complicated, example, discussed in another document.
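The example can be explored numerically (a brute-force sweep, with my own function names, standing in for whatever reasoning brains actually use): as the vertex C moves along a line, the angle ACB rises to a single peak, and the peak occurs where a circle through A and B touches the line, which links this example back to the Apollonius construction mentioned earlier.

```python
import math

def angle_at(C, A, B):
    """Size of the angle ACB, computed from the vectors CA and CB."""
    ax, ay = A[0] - C[0], A[1] - C[1]
    bx, by = B[0] - C[0], B[1] - C[1]
    return math.atan2(abs(ax * by - ay * bx), ax * bx + ay * by)

def peak_location(A, B, P, d, n=4001, span=10.0):
    """Sweep vertex C along the line through P with direction vector d,
    for parameter t in [-span, span], returning the t (and the angle)
    that maximises angle ACB -- a brute-force numerical stand-in for the
    geometric reasoning discussed in the text."""
    best_t, best_angle = 0.0, -1.0
    for i in range(n):
        t = -span + 2 * span * i / (n - 1)
        C = (P[0] + t * d[0], P[1] + t * d[1])
        a = angle_at(C, A, B)
        if a > best_angle:
            best_t, best_angle = t, a
    return best_t, best_angle
```

For instance, with base A=(-1,0), B=(1,0) and the vertex moving along the horizontal line y=1, the sweep finds the peak directly above the midpoint of AB, at the point where the unit circle through A and B touches that line, and the peak angle is a right angle.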
I conjecture that Turing may have been thinking about these issues when he wrote his paper on morphogenesis, published two years before he died: Turing (1952). For a useful summary for non-mathematicians, see Ball (2015)