INCOMPLETE DRAFT
(Work in progress)

A Super-Turing (Multi) Membrane Machine for Geometers
(Also for toddlers, and other intelligent animals)

(DRAFT: Liable to change)

Aaron Sloman
http://www.cs.bham.ac.uk/~axs/
School of Computer Science, University of Birmingham

Any theory of consciousness that does not include and explain
ancient forms of mathematical consciousness is seriously deficient.

29 Apr 2018

Parts of a paper on deforming triangles have been moved into this paper.

Installed: 30 Oct 2017
Last updated: 2 Nov 2017; 10 Nov 2017; 21 Nov 2017; 29 Dec 2017;
11 Jan 2018; 6 Apr 2018; 29 Apr 2018; 3 May 2018
This paper is: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
A PDF version may be added later.

A partial index of discussion notes in this directory is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
This is part of the Turing-inspired Meta-Morphogenesis project
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Which is part of the Birmingham Cogaff (Cognition and Affect) project
http://www.cs.bham.ac.uk/research/projects/cogaff/


CONTENTS


Different philosophical and scientific goals

Life is riddled through and through with mathematical structures, mechanisms, competences, and achievements, without which evolution could not have produced the riches it has produced on this planet. That's why I regard evolutionary mechanisms as constituting a Blind Mathematician.

Being a "Blind Watchmaker" in Richard Dawkins' sense is a side effect of this.

If AI researchers wish to produce intelligent organisms they will need to understand the deep, pervasive, and multi-faceted roles of mathematics in the production of all organisms, in addition to many and varied uses of mathematical mechanisms and competences in the information processing of individual organisms.

A consequence is that many forms of consciousness involve deep mathematical (e.g. topological) competences. The ideas of theorists like Immanuel Kant, von Helmholtz, Jean Piaget, Richard Gregory, Max Clowes, James Gibson, David Marr and others, all contribute fragments towards a deep theory of consciousness. And all have errors or omissions.

Any theory of consciousness that says nothing about mathematical consciousness, e.g. the forms of consciousness involved in ancient mathematical discoveries by Archimedes, Euclid, Zeno and others (including pre-verbal human toddlers), must be an incomplete, and generally also incorrect, theory of consciousness.

Types of (meta-) theory about mathematics

In discussing the nature of mathematics and the mechanisms that make mathematical discoveries and their use possible, we need to distinguish several different philosophical and scientific goals (order not significant yet). Those goals are interconnected in various ways. For now, my primary focus is on trying to understand the information processing mechanisms (the forms of computation, in a generalised sense of "computation") that make it possible for some individual organisms (and perhaps future human-made machines) to make such mathematical discoveries and apply them in achieving increasingly complex practical (e.g. engineering and scientific) goals.

There is a partial form of mathematical discovery that involves noticing a regularity without understanding why that regularity exists and cannot be violated. Pat Hayes once told me he had encountered a conference receptionist who liked to keep all the unclaimed name cards in a rectangular array. However, she had discovered that sometimes she could not do it, which she found frustrating and blamed on her own lack of intelligence. She had unwittingly discovered empirically that some numbers are prime, but without understanding why some are prime and some not, or what the mathematical implications are. http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html#primes

A child with mathematical talents may discover that some collections of cards of the same size and shape can be rearranged on a flat surface to form a rectangular array of rows and columns, with 2 or more rows and 2 or more columns and later discover that some collections of cards cannot, although every collection can be arranged in a single row or column.

However, there is a difference between merely noticing a regularity or lack of regularity (like the receptionist) and understanding why the regularity does or does not exist, in this case understanding why not all numbers are prime. This follows obviously from the fact that for any two numbers N1 and N2, each greater than 1 (possibly the same number), a regular array of objects can have N1 columns and N2 rows, and then contains N1 times N2 objects. Equivalently, it is possible to form a pile of N1 objects and then make copies of that pile until there are N2 piles.
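The connection between rectangular layouts and primality can be made concrete in a small expository sketch (Python, purely for illustration; nothing here is part of the proposed machinery): a collection of n cards can be arranged as a rectangle with at least two rows and at least two columns exactly when n is composite.

```python
def rectangle_layouts(n):
    """All (rows, cols) with rows, cols >= 2 and rows * cols == n."""
    return [(r, n // r) for r in range(2, n) if n % r == 0 and n // r >= 2]

def is_prime(n):
    # The receptionist's frustration: a count is prime exactly when it
    # admits no rectangular layout with 2 or more rows and columns.
    return n > 1 and not rectangle_layouts(n)

assert rectangle_layouts(12) == [(2, 6), (3, 4), (4, 3), (6, 2)]
assert rectangle_layouts(13) == []          # 13 cards: no such layout exists
assert [n for n in range(2, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]
```

Of course the sketch only reproduces the receptionist's empirical discovery; it does not capture the understanding of *why* the layouts must fail, which is the topic of this paper.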

A mathematical mind can grasp not merely those facts but also that the sizes of the piles or arrays that are possible are not constrained by what size pile of cards he or she or anyone else can handle, or whether they could fit on this planet, or within this solar system.

But understanding how such mathematical minds are possible will involve understanding how various mathematical exploration and discovery processes in evolution are possible.

Whether such minds can exist in this universe will depend on mathematical features of the physical universe that make possible both the required evolutionary processes and the physical and chemical phenomena on which the required brains depend.

I suspect that because most physicists do not attend to such questions about how physics makes biology possible (although Schrödinger did in 1944), most work on fundamental physics ignores relevant constraints on the required explanatory power of theories of physical reality.

For example, it could turn out that the vast networks of numerical relationships between numerical values that characterise modern physics (e.g. as presented in Max Tegmark, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, 2014, among others) have deep structural gaps that physicists will not notice until they try to explain more of the fine details of life, including development, reproduction, evolution, and the ever increasing sophistication of information processing in organisms, especially mathematically minded organisms.

Varied forms of representation and reasoning in mathematics

Over many centuries (or perhaps millennia) humans have found means by which they can derive new mathematical (e.g. numerical) truths that they can explicitly teach to students, and some of which they can use to design machines that reach the same results more quickly and more reliably than humans do.

There are two different views of ancient mathematical reasoning using diagrams and words. One regards the diagrams as mere useful aids (dispensable props?) supporting a kind of diagram-free thinking using logical, arithmetical and algebraic forms of representation identified and studied in great depth centuries after the original geometrical discoveries were made.

The other, older, view (found in Kant 1781, for example) developed with some new details here, is that diagrammatic forms of reasoning play a crucial role in some ancient and modern forms of reasoning and they are as much parts of mathematical reasoning as numerical, logical and algebraic reasoning developed in the last two centuries. In part that is because of the requirement for mathematical discovery to produce information about what is possible, impossible, and necessarily the case, and not merely information about what actually has or has not occurred and the relative frequencies of various occurrences (i.e. statistical information or derived probabilities).

NOTE:
Most philosophers of mind and theorists who write about consciousness seem to ignore mathematical consciousness. This is probably because they don't realise (as Kant did) that mathematical discoveries since ancient times are deeply connected with perception of spatial structures and processes and abilities to reason about what to do and what can and cannot be done in a physical environment. So mathematical competences are part of the intelligence of pre-verbal humans and many other species including corvids, squirrels, elephants, octopuses, orangutans, ... and many more. Any theory of consciousness that ignores all this is seriously deficient.
For an incomplete discussion of "conscious" as a polymorphous concept see the section on consciousness here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/family-resemblance-vs-polymorphism.html

Discrete and continuous domains

For discrete domains such as propositional calculus and the domain of proofs derivable from some finite set of axioms using a finite set of derivation rules, there are mechanisms that can determine necessity and impossibility by exhaustive analysis of discrete sets of cases (sometimes supplemented by inductive reasoning to deal with unbounded collections of cases).
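For illustration only, here is a minimal Python sketch of such exhaustive analysis in a discrete domain: a formula of propositional calculus is shown to be necessarily true (a tautology) by enumerating every assignment of truth values.

```python
from itertools import product

def implies(a, b):
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

def is_tautology(f, nvars):
    """Decide necessity in a discrete domain by exhausting all cases."""
    return all(f(*vals) for vals in product([False, True], repeat=nvars))

# Modus ponens as a formula: necessity established by finite exhaustion.
assert is_tautology(lambda p, q: implies(p and implies(p, q), q), 2)
# A contingent formula is neither necessary nor impossible:
assert not is_tautology(lambda p, q: implies(q, p), 2)
```

The crucial point is that the set of cases here is finite and discrete, so necessity and impossibility can be settled by inspecting every case, which is exactly what fails in the continuous domains discussed next.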

But for domains like Euclidean geometry and its topological basis, variations in size, shape, and relationships are continuous, not discrete, e.g. variations in curvature of a spiral, or variations in shape of a polygon as one vertex moves while the rest are fixed, or variations in relationships as one geometric structure rotates relative to another, or distances between parts of structures vary. The sets of possibilities generated by those continuous variations are inherently unbounded and therefore cannot be exhaustively examined in order to determine that within some variety of cases a particular relationship will always hold (necessity) or that it cannot hold (impossibility).

That means that reasoning machinery in such domains needs to be able to find discrete subdivisions between subsets of continuously varying classes of cases. The ability to do that sort of thing seems to have been used repeatedly in the ancient diagrammatic modes of reasoning discovered and/or recorded by Archimedes, Euclid, Zeno, Pythagoras and others. They were also part of my own experience of learning (and enjoying) Euclidean geometry at school about 60 years ago.

My 1962 DPhil thesis was an attempt to expound and defend the ideas about such ancient mathematical modes of reasoning that I encountered in Kant's Critique of Pure Reason (1781) as a graduate student, which I hoped could be given a deeper, more precise, justification using AI techniques after Max Clowes introduced me to AI in 1969.
(Compare Sloman 1971, and 1978 Chapter 7.)

For many years I suspected that the required forms of reasoning, and the forms of spatial perception on which they are based, could be implemented in digital computers using suitably designed virtual machinery.

See this discussion of Virtual machine functionalism (not understood by most philosophers who discuss computational models of mind):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
In Sloman 1978 I argued that digital computers must suffice for implementing human-like minds, since any continuous process can be simulated as accurately as required by a discrete process. At that stage I think I was unaware of chaotic systems, in which arbitrarily small differences can have arbitrarily large consequences very rapidly.
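The point about chaotic systems can be illustrated in a few lines of Python (purely expository): iterating the logistic map at r = 4 from two starting values differing by 10^-12, the two discrete simulations of the "same" continuous process soon disagree badly, so no fixed discrete accuracy suffices for all continuous processes.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12      # two almost indistinguishable starting states
max_gap = 0.0
for step in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The tiny initial difference is roughly doubled at each step, so within
# a few dozen steps the trajectories differ by a large fraction of the
# whole interval [0, 1].
assert max_gap > 0.1
```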

Another important fact is that there are ways of thinking about continuous structures and processes that yield deep insights. For example, one of the assumptions made implicitly by Euclid, but not formulated as an axiom, was an assumption of "completeness": if C is a closed continuous curve in a plane and L a straight line segment in the same plane, and part of L is in the interior of C and part not, then there must be a point P that is common to C and L. This might not be true in all possible spaces. For example, if straight lines are approximated in a rectangular grid then there could be a pair of lines that cross over each other without sharing a point of intersection: if one line occupies a diagonal collection of points, the other line could cross it without the two lines sharing a common point. But in Euclidean geometry that's not possible: if two lines in the same plane cross over each other then there must be a point common to both lines, in that plane.
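The grid counterexample can be made explicit in a short illustrative sketch (Python, for exposition only): two discrete approximations to crossing straight lines that share no point.

```python
# Two discrete "lines" on an integer grid. Their continuous counterparts,
# y = x and y = 3 - x, cross at the shared point (1.5, 1.5); but the grid
# approximations cross over each other while sharing no point at all.
line_a = {(i, i) for i in range(4)}        # diagonal:      (0,0) .. (3,3)
line_b = {(i, 3 - i) for i in range(4)}    # anti-diagonal: (0,3) .. (3,0)

assert line_a & line_b == set()            # crossing without intersection
```

In the Euclidean plane the "completeness" assumption rules this out; in the discrete grid it plainly fails.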

Non-discrete computation

However, since working on the Turing-inspired Meta-Morphogenesis project (begun around the year of Turing's centenary, 2012) I have started taking seriously the suggestion that we need to explore alternatives to digitally implemented forms of computation, especially as chemistry-based mechanisms are so important for all forms of life and especially in brains. Chemical mechanisms provide a deep blend of discreteness and continuity, demonstrated for example in Turing (1952).

All of this lends support to the conjecture that there are forms of information processing, and especially mathematical discovery, that have not yet been fully understood and may not be easy (and perhaps not even possible) to implement on digital computers.

This paper is an early attempt to specify a (speculative, and still incompletely defined) alternative to digital computers (e.g. Turing machines), that could provide a basis for many hard to explain aspects of animal perception, reasoning, and problem solving, and which could also explain how the deep discoveries made by ancient mathematicians were possible.

I have been calling the new type of machine (provisionally) the Super-Turing Membrane machine, or Super-Turing Geometry machine. There have been previous proposals for Super-Turing computing machines, but not, as far as I know, in the context of producing a robot mathematician able to make discoveries in Euclidean geometry.

It is sometimes forgotten that the axioms of Euclidean geometry were not arbitrary assumptions assembled to specify a formal system that can be explored using logical inference methods.

Those axioms were all important discoveries, which seem to require mechanisms of reasoning that we don't yet understand. I don't think current neuroscience can explain them and they are not yet included in AI reasoning systems. So a suggested role for the new type of machine is as part of an explanation of the early forms of reasoning and discovery that led to Euclidean geometry, long before there was a logic-based specification using cartesian coordinate representations of geometrical structures and processes.

The examples below are merely illustrative of the possible roles for the previously unnoticed type of machine, for which I cannot yet give a precise and detailed specification. These are still early, tentative, explorations.

Dana Scott on Geometry without points

Deep and challenging examples of the kind of reasoning I am trying to explain are in this superb (but not always easy to follow) lecture by Dana Scott at Edinburgh University in 2014
Dana Scott, 2014, Geometry without points. (Video lecture, 23 June 2014, University of Edinburgh)
https://www.youtube.com/watch?v=sDGnE8eja5o

He assembles and develops some old (pre-20th century) ideas (e.g. proposed by Whitehead and others) concerning the possibility of basing Euclidean geometry on a form of topology that does not assume the existence of points, lines, and surfaces, but constructs them from more basic notions: regions in a point-free topology.

Although it may be possible to produce a formal presentation of the ideas using standard logical inferences from axioms, his presentation clearly depends on his ability, and the ability of his audience, to take in non-logical, spatial forms of reasoning, supported by diagrams, hand motions, and verbal descriptions of spatial transformations.

Perhaps the ancient geometers could have discovered that mode of presenting geometry, but they did not. It seems to be a fundamental feature of geometry (the study of spatial structures and processes) that there is no single correct basis for it: once the domain is understood we can find different "starting subsets" from which everything else is derived.

And sometimes surprises turn up, like the discovery of the Neusis construction that extends Euclidean geometry in such a way that trisection of an arbitrary angle becomes easy, whereas in pure Euclidean geometry it is provably impossible. For more on that see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html

The non-logical (but not illogical!) forms of representation that seem to be involved in the original discovery processes, and in much current human reasoning about spatial structures and processes, are what I called "analogical" rather than "Fregean" forms of representation in Sloman 1971 and in chapter 7 of Sloman 1978 (not to be confused with representations that make use only of isomorphism).

Further examples of this essentially spatial rather than logical kind of reasoning and discovery are presented in a collection of web pages on this site, including, most recently (Nov 2017, onwards):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html
and these older explorations (some of which are available in both html and pdf formats):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/p-geometry.html

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/rubber-bands.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/shirt.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html
Sloman (2007-14) (explaining why AI vision and action control mechanisms could benefit from such mechanisms),
and this video presentation (for a workshop at IJCAI 2017, August 2017):
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/ijcai-17/ai-cogsci-bio-sloman.webm


First draft requirements for such a Super-Turing spatial reasoning (STSR) engine

A Turing machine uses a linear tape of discrete locations, each of which can contain at most one symbol from a fixed set of possible symbols. The STSR machine does not have separate spaces occupied by discrete "atomic" symbols, but an (initially) undifferentiated 2D "perceptual space" in which structured "symbols" (information contents) of many kinds can be co-located, with possibilities of smoothly varying interaction.

Shapes in this space can change their location, their size, their orientation, and relative speeds of motion. Moreover, two non-overlapping shapes can move so that they overlap, for example causing new shapes to be formed through intersections of structures. Two initially separate lines can move in such a way as to form a structure with a point of intersection and three or four branches from that point. If the point of intersection moves, then relative lengths of parts of the structure will change: e.g. one getting smaller and the other larger.
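That last observation can be made concrete with a trivial expository sketch (Python, illustration only, not a proposal about mechanism): as the point of intersection P slides along a fixed segment, the two parts vary in opposite senses while their sum is invariant.

```python
# A point P slides along the fixed segment from A = 0 to B = 10 on a line.
A, B = 0.0, 10.0
positions = [1.0, 3.0, 6.0, 9.0]           # successive locations of P

lengths = [(p - A, B - p) for p in positions]    # (length AP, length PB)
for (ap1, pb1), (ap2, pb2) in zip(lengths, lengths[1:]):
    assert ap2 > ap1 and pb2 < pb1               # one grows, the other shrinks
    assert abs((ap1 + pb1) - (ap2 + pb2)) < 1e-12  # total length invariant
```

The interesting question for this paper is how such necessary covariation can be grasped without any numbers at all.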

The structures and processes are not restricted to points, lines, circles and polygons: arbitrary blobs, including blobs that move and change their shape while they move can occur in the space. If two of them move they can pass through each other, producing continuously changing boundary points and regions of overlap and non-overlap.

Groups of items, e.g. regular or irregular arrays of blobs and other shapes can exist and move producing new shapes and processes when they interact. How they move should correspond to various physical situations: e.g. the visible silhouette of a complex 3-D object may go through complex changes as the orientation of the object to the line of sight changes. Compare hand shadow art, in which interpretations of shadows of hands vary enormously: https://www.youtube.com/watch?v=4drz7pTt0gw.

The visual space in which percepts move is NOT required to have a metric -- partial orderings suffice for many biological functions, and many other purposes, as illustrated in Sloman (2007-14), although very precise metrics are required for some activities, e.g. playing darts, trapeze displays, and death-defying leaps between tree branches used by spider monkeys, squirrels and others. That paper shows why the explanatory relevance of the ideas presented here extends far beyond mathematical competences.

The mechanisms seem to be required for many aspects of human and non-human intelligence, including, for example, the ability of a crawling baby to work out how to move to shut a door with his legs after passing through the doorway, as depicted in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html#door-closing and also in much ancient mathematical reasoning, along with many applications in painting, architecture, engineering design, etc.

The (still unknown) conjectured implementation machine is used both for many practical activities in which physical actions are considered, selected, intended, performed and controlled, and also used (at a much later stage in biological evolution, followed by cultural evolution) in ancient mathematical discoveries.

The underlying machine seems to have some features partly analogous to data-structures used in some computer-based sketch-pads, either using materials supporting arbitrary resolution (most unlikely!), or else put to uses for which the resolution of images is irrelevant, with additional meta-cognitive apparatus as suggested below.

For example, if you look at a scene, e.g. a long straight road, and imagine how its appearance will change as your viewpoint changes, e.g. if you move vertically upwards while still looking along the road, the bounding edges of the appearance of the road may increase in length and the perceived angle at which they meet in the distance will vary in ways that can be discovered by experimenting with examples. However, there is no need to experiment with actual examples, since this is a type of correspondence between two spatial changes that can be discovered merely by thinking about the process. Suppose you are standing on a straight road with parallel edges, looking at some fixed distant location on the road. If the road lies in a planar surface and your viewpoint starts moving upward, perpendicular to the surface, while you continue to focus on the same distant part of the road, you may, with a little practice, be able to work out how the appearance of the left and right edges of the road will change as you move up. This imaginative ability is probably not innate, but neither is it based entirely on experience of moving upward from a position on a road. I suggest that most people can use their understanding of space to work out how the appearance will change, just as you can use your understanding of arithmetic to work out the product of two numbers that you have never previously multiplied.
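For readers who want to check the geometry, here is an expository Python sketch using an assumed idealised pinhole projection (the model, the road width and the heights are illustrative assumptions, not claims about biological vision): each road edge projects to a ray making atan((w/2)/h) with the vertical, so the angle at which the edges meet at the vanishing point is 2*atan(w/(2h)), which narrows steadily as the viewpoint rises.

```python
from math import atan, degrees

def apex_angle_deg(road_width, eye_height):
    """Angle (degrees) between the projected images of the two road edges
    at the vanishing point, for an idealised pinhole camera at the given
    height looking horizontally along the road."""
    return degrees(2 * atan((road_width / 2) / eye_height))

# Illustrative width and heights (metres) -- assumptions, not data:
angles = [apex_angle_deg(7.0, h) for h in (1.6, 5.0, 20.0, 100.0)]

# Rising vertically while still looking along the road: the angle at which
# the edges meet narrows, i.e. they appear to straighten towards vertical.
assert all(later < earlier for earlier, later in zip(angles, angles[1:]))
```

The claim in the text is, of course, that human spatial imagination reaches this qualitative conclusion without any such trigonometric machinery.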

At some stage this project should include a survey of a wide range of special cases of such relationships between view location, view direction, and orientations of surfaces that we all take for granted in everyday life, and which current robots might have to learn by collecting observations as they move around in space, whereas humans can discover such correlations merely by thinking about them (although abilities to do this may vary across individuals, and with age and development within individuals).

That ability to make such discoveries merely by doing spatial reasoning (possibly with your eyes shut) requires use of sophisticated mathematical mechanisms in brains whose operation could be mimicked on computers by using cartesian coordinate representations of space and performing algebraic and trigonometric transformations of spatial information, although it is very unlikely that that is how brains derive the information about effects of change. On the other hand, as far as I know such abilities cannot yet be explained by known physical features of brains.

In particular, these competences use neither statistical correlations between discrete categories (which would not provide the required kind of mathematical necessity), nor exact functional relationships between metric spaces, which neural mechanisms don't seem capable of computing.

A machine used to implement these capabilities will need fairly rich non-metrical topological structures and reasoning powers. E.g. thinking about a plane containing a straight (i.e. symmetrical) line passing through the interior of a convex closed curve in the plane, with no straight portions, e.g. a circle or ellipse, we can see that there must be exactly two points where the line crosses the curve, i.e. connects the interior and the exterior of the curve. If there are more than two, the curve cannot be convex. (Why?) If the curve is fixed, the line can be continuously moved "sideways" in the shared plane, relative to the curve, until it no longer passes through the interior of the curve, though it will remain co-planar with the curve. As the line moves, the points of intersection with the curve will move, eventually merge, and then no longer exist.

What can you infer about how the distance between the points of intersection changes while the line moves, given that the curve is closed and convex? How do you do it without knowing the exact shape of the curve?
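For the special case of a circle the answer can be checked numerically (an expository Python sketch; the general convex case requires the qualitative argument, and for a line starting away from the widest chord the distance may first grow before it shrinks):

```python
from math import sqrt

def chord_length(radius, offset):
    """Length of the chord cut from a circle (centre at the origin) by the
    line y = offset; zero once the line no longer meets the interior."""
    if abs(offset) >= radius:
        return 0.0
    return 2 * sqrt(radius**2 - offset**2)

offsets = [0.0, 0.3, 0.6, 0.9, 1.0]
lengths = [chord_length(1.0, c) for c in offsets]

# As the line moves "sideways" away from the centre, the two intersection
# points approach each other, merge, and then cease to exist.
assert all(later < earlier for earlier, later in zip(lengths, lengths[1:]))
assert lengths[-1] == 0.0
```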

That's a very special case, but I think ordinary visual (and tactile?) perception involves a vast collection of cases with all sorts of partial orderings that can be used for intelligent control of behaviour using abilities to reason about how as one thing increases another must also increase, or must decrease, or may alternate between increasing and decreasing, etc. (E.g. think of a planar trajectory curving towards obstacle A and away from obstacle B then changing curvature to avoid a collision with A). (See also the "changing affordances" document, mentioned above.)

These mechanisms will be connected to spatial perception mechanisms, e.g. visual, tactile, haptic, auditory, and vestibular mechanisms, but the connections will be complex and indirect, often directly linked to uses of perception in controlling action, as opposed to merely contemplating spatial structures and processes.

Some of my ideas about this are based on ideas in Arnold Trehub's book, Trehub (1991), which postulates a central, structured, multi-scale, multi-level, dynamically changing store of different aspects of spatial structures and processes in the environment. Visual perception will be one of the sources. (As far as I know he did not discuss examples like mine, nor attempt to explain mathematical cognition.) On this view the primary visual cortex (V1) is best viewed as part of a rapid device for sampling information from what Gibson (1979) called "The optic array". Among other things Trehub's mechanism can explain why the blind spot is normally invisible, even though that was not Trehub's intention!

The tendency in Robotics, and AI generally, to use metrical spatial information, rather than multiple, changing, partial ordering relationships with inexact spatial measures, leads to excessive reliance on complex, but unnecessary, probabilistic reasoning, where online control using partial orderings could suffice, as suggested in Sloman (2007-14). But there are many details still to be filled in.

Implications for the Super-Turing machine (STM)

NOTE: The label "STM" is often used to refer to a "Short term memory" mechanism (or, more appropriately, a variety of types of short term memory mechanism, since there are clearly several kinds). I use it here for "Super Turing Machine" with only minor qualms, because if the conjectures presented here are substantiated that will transform our ideas about functions and mechanisms of short term memory in humans and other intelligent animals.

Further progress on requirements will require consideration of the following points:
Instead of the TM's discrete, linear tape, the STM has some still unknown number of overlapping stretchable movable transparent membranes on which 2-D structures can (somehow?) be projected and then slid around, stretched, and rotated and compared.
     E.g. this could combine visually perceived surface structures
     and the same structures perceived using haptic and tactile sensing.

The TM tape reader that can only move one step left or right and recognize one of a fixed set of discrete symbols would have to be replaced in the STM by something much more sophisticated that can discover structures and processes that result from those superimposed translation and deformation operations, and relative motions between contents of different membrane layers. (E.g. 'watching' a triangle move across a circle, with and without stretching and rotation.)

In ways that are still unknown, the machine needs to be able to detect that whereas some consequences of those transformations are contingently related to previous states, in other cases the structural relationships make the consequences inevitable: especially topological consequences such as possibility or impossibility of routes between two locations that don't cross a particular curve in the same surface.

Such a machine should also be able to use still unknown kinds of exhaustive analysis to reach conclusions about the *impossibility* of some process producing a certain kind of result.

In some organisms there would be Meta-Cognitive Layers inspecting those processes and, among other things, noting the differences between possible, impossible, contingent, and inevitable consequences of structural relationships (the "alethic" modalities that are central to mathematical discovery).

In TMs and digital computers, that sort of detection can be done by exhaustive analysis of possible modifications of certain symbols, e.g. truth tables, or chains of logical formulae matched against deduction rules; but it's very hard to see exactly how to generalise such meta-cognitive abilities to detect impossibility or necessity in the STM, where things can vary continuously in two dimensions.

Finally, the machine-table of a TM would have to be replaced by something much more complex that reacts to combinations of detected structures and processes in the STM by selecting what to do next. In addition, the biological STM would have to be linked to sensors and effectors, where some of the sensors can project new structures onto the membranes.

The general design would have to accommodate different architectures. For example, I am pretty certain that vertebrate vision would have started without stereo overlap, as in many birds, and many non-carnivorous mammals.

There would need to be something like left and right STMs and mechanisms for transferring information between them as an organism changes direction and what's visible only in one eye becomes visible in the other eye. The same mechanism could be used with greater precision as evolution pushed eyes in some organisms towards the front, producing partly overlapping projections of scenes, making new kinds of stereo vision possible.

(Unfortunately, Julesz' random dot stereograms have fooled some people into thinking that *only* that low-level "pixel based" mechanism is used for biological stereo vision, whereas humans (and I suspect many other animals) can obviously make good use of larger image structures, e.g. perceived vertical edges of large objects, in achieving stereo vision.
https://en.wikipedia.org/wiki/Random_dot_stereogram)

I suspect evolution also eventually discovered the benefits of meta-cognitive layers, without which certain forms of intelligent reasoning (e.g. debugging failed reasoning processes) would be impossible.

The new kinds of machine table corresponding to a TM's machine table would have to be able to manipulate information about continuously varying structures, e.g. comparing two such processes and discovering how their consequences differ. Later meta-meta-...cognitive mechanisms, might add a collection of new forms of intelligence.

What replaces the Turing Machine table?

A Turing machine has a central mechanism that can be specified by a finite state transition graph with labelled links specifying actions to be performed on the tape. The proposed Super-Turing machine will need something a lot more complex, not restricted to discrete states, allowing some intrinsic (i.e. not simulated) parallelism so that some processes of change can observe other processes as they occur.

The central machine inspecting and changing the Super-Turing membrane cannot be discrete, insofar as it has to encode recognition of both non-discrete static differences (e.g. a narrowing gap between two lines as measured increasingly close to an intersection point) and recognition of continuous changes in time, including in some cases comparisons of rates of change.

For example, the deforming triangle example in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html
involves consideration of what happens as a vertex of a triangle moves along a straight line, steadily increasing its distance from the opposite side of the triangle. The machine needs to be able to detect steady increase or decrease in length or size of angle, but does not require use of numerical measures or probabilities, or changing probabilities.

So computation of derivatives, i.e. numerical rates of change, need not be relevant except in very special cases, including cases of online control of actions, discussed by Gibson. Many examples of mathematical discovery seem to arise from offline uses of these mechanisms, to reason about possible actions and consequent changes without actually performing the actions. And many such cases will make use of categorisations and partial orderings of structures and processes rather than measures.

So the machine's "brain" mechanisms need to be able to make use of categorisations like 'static', 'increasing', 'decreasing', 'a is closer to b than to c', etc., including making use of partial orderings of structures and processes (i.e. spatial and temporal orderings). Illustrations of these ideas can be found in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html (or pdf)
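As a crude illustration of the kind of non-numerical machinery intended (a sketch of mine, not a model of any brain mechanism; the function names are invented), categorisations like 'static', 'increasing', 'decreasing' and comparative relations can be computed from ordinal comparisons alone, with no use of rates or measured magnitudes:

```python
# A minimal sketch (mine, not a proposal about brains): qualitative
# categorisation of change using only ordinal comparisons, never rates.
def classify_change(prev, curr):
    """Categorise the step from one observation to the next."""
    if curr > prev:
        return "increasing"
    if curr < prev:
        return "decreasing"
    return "static"

def qualitative_history(observations):
    """Summarise a sequence of comparable observations qualitatively."""
    return [classify_change(a, b) for a, b in zip(observations, observations[1:])]

def closer_to(a, b, c):
    """Partial-ordering relation 'a is closer to b than to c':
    distances are compared, but their magnitudes are never reported."""
    return abs(a - b) < abs(a - c)

print(qualitative_history([3, 5, 5, 2]))  # ['increasing', 'static', 'decreasing']
print(closer_to(4, 5, 9))                 # True
```

The point of the sketch is that every answer is a category or a comparison: nothing downstream of the comparisons ever sees a number.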

Possible changes to a spatial configuration: The Apollonius problem

Figure: Points Lines and Circles
XX
Consider the two points indicated by A and B and the line L. Two circles are shown that pass through points A and B. It is clear that there are many more such circles -- infinitely many of them if space is infinitely divisible. Some circles through A and B do not intersect L, e.g. the blue circle. Other circles through A and B intersect L in two places, e.g. the red circle. Is there a circle passing through points A and B that touches L at only one point, so that L is a tangent to the circle? How do you know?

What kind of brain machinery makes it possible to reason that there must be such a circle? This is a problem from ancient geometry, and there is a construction for finding the circle that passes through A and B and meets L as a tangent, known as Apollonius' construction (also mentioned in the next section).
See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/apollonius.html

As shown there, if the line L is not parallel to AB there will be two circles passing through points A and B that meet L as a tangent. One of the circles has its centre above the line AB and the other has its centre below.
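These claims can be checked numerically. The following sketch (my illustration, not the classical compass-and-straightedge construction in the Apollonius document linked above) places L along the x-axis, so a tangent circle has centre (x, r) with radius r; requiring it to pass through A and B yields a quadratic in x with two roots, or a single root when AB is parallel to L:

```python
import math

def tangent_circles(A, B):
    """Circles through A and B (distinct points, both strictly above the
    x-axis) that are tangent to the x-axis L.  Returns (centre, radius)
    pairs.  A coordinate sketch, not the classical construction."""
    (ax, ay), (bx, by) = A, B
    # A centre (x, y) with radius y satisfies (x-px)^2 + (y-py)^2 = y^2
    # for each point p = (px, py), i.e. y = ((x-px)^2 + py^2) / (2*py).
    # Equating the expressions obtained from A and B gives a quadratic in x.
    qa = by - ay
    qb = 2 * (bx * ay - ax * by)
    qc = by * (ax**2 + ay**2) - ay * (bx**2 + by**2)
    if qa == 0:                       # AB parallel to L: a single circle
        xs = [-qc / qb]
    else:                             # AB not parallel to L: two circles
        root = math.sqrt(qb**2 - 4 * qa * qc)
        xs = [(-qb + root) / (2 * qa), (-qb - root) / (2 * qa)]
    return [((x, ((x - ax)**2 + ay**2) / (2 * ay)),
             ((x - ax)**2 + ay**2) / (2 * ay)) for x in xs]

# Two tangent circles for a non-parallel AB: centres (1,1) and (-3,5)
print(tangent_circles((0, 1), (1, 2)))
```

Of course, running such a solver is quite unlike the geometrical insight discussed here: the code computes the circles without "seeing" why there must be exactly two of them.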

I shall later add some thoughts here about how the need to be able to perform these spatial reasoning tasks suggests requirements for our Super-Turing machinery.

Impossible routes/trajectories

It is not very difficult to write a program that solves mazes, e.g. by always turning left at a junction (provided the maze is fully connected). However, if the maze terrain is implemented as a very high-resolution digitised image, then such a maze-searching program might miss very small gaps in walls, or very narrow channels, if its motion uses discrete steps larger than the smallest gaps.

When a maze program actually finds a continuous route between its start location and a specified target location then that shows that the maze has at least one solution. But if it fails to find a continuous route that may simply be due to limitations of the search strategy used, including the possibility that it has missed a very narrow gap.
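The gap-missing problem can be demonstrated directly. In this toy sketch of mine (the function names are invented), a breadth-first grid searcher that moves two cells per step fails to find a route through a one-cell gap that a one-cell-step searcher finds:

```python
from collections import deque

def reachable(grid, start, goal, stride=1):
    """Breadth-first search over open cells ('.'), moving `stride` cells
    per step.  Every cell along a step must be open, but landing positions
    are restricted to multiples of the stride, so a stride larger than the
    narrowest gap can make a route undiscoverable."""
    rows, cols = len(grid), len(grid[0])

    def open_path(r, c, dr, dc):
        for k in range(1, stride + 1):
            rr, cc = r + k * (dr // stride), c + k * (dc // stride)
            if not (0 <= rr < rows and 0 <= cc < cols) or grid[rr][cc] != '.':
                return False
        return True

    seen, frontier = {start}, deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((stride, 0), (-stride, 0), (0, stride), (0, -stride)):
            nr, nc = r + dr, c + dc
            if (nr, nc) not in seen and open_path(r, c, dr, dc):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False

# A vertical wall in column 5 with a single one-cell gap in row 1:
maze = [".....#.....",
        "...........",
        ".....#.....",
        ".....#.....",
        ".....#....."]
print(reachable(maze, (0, 0), (0, 10), stride=1))  # True: gap found
print(reachable(maze, (0, 0), (0, 10), stride=2))  # False: gap never landed on
```

The stride-2 searcher only ever lands on even rows and columns, so it can never stand in (or step through) the gap in row 1, even though a continuous route plainly exists.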

This illustrates a general point: proving that a certain type of entity is possible is, in many cases, much easier than discovering impossibility, i.e. that there cannot be any instances of that type. That's because finding any instance of the type proves possibility, whereas proving impossibility (or necessity) requires more powerful cognitive resources: i.e. some way of exhaustively specifying locations in a possibility space so that they can all be shown to satisfy or not to satisfy some condition.

This is sometimes easy for simple, discrete, possibility spaces, e.g. the space of combinations of truth-values for an expression in propositional calculus with a fixed set of boolean variables. Although the number of combinations expands exponentially with the number of variables, it is always finite, whereas the set of continuous paths between two points in a 2D or 3D space is typically infinite, unless some special constraint is specified (e.g. being straight, or being a circle with centre at a third specified point).
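For example, the exhaustive check for a propositional formula can be sketched in a few lines (my illustration): enumerate all 2^n truth-value combinations and classify the formula as necessary (true in every case), impossible (true in none), or merely contingent:

```python
from itertools import product

def classify(formula, variables):
    """Exhaustively evaluate a boolean formula over all 2^n assignments.
    `formula` is a function taking its variables as keyword arguments."""
    results = [formula(**dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables))]
    if all(results):
        return "necessary"        # tautology: true in every case
    if not any(results):
        return "impossible"       # contradiction: true in no case
    return "contingent"           # possible, but not necessary

print(classify(lambda p, q: p or not p, ["p", "q"]))   # necessary
print(classify(lambda p, q: p and not p, ["p", "q"]))  # impossible
print(classify(lambda p, q: p or q, ["p", "q"]))       # contingent
```

The finiteness of the possibility space is what makes this work; no analogous enumeration is available for the space of continuous paths between two points.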

Note that although finding an instance conclusively proves possibility, there are branches of mathematics, engineering and science where finding an instance may be very difficult. E.g. although every mathematical proof using standard logic, algebra and arithmetic is finite, the space of possible proofs is unbounded, so finding a proof that actually exists may require a very long search. If there is no such proof the search will continue forever.

This is also true in geometry, since some simply described spatial configurations may require complex constructions, e.g. the problem stated above: given two points A and B and a line L distinct from the line AB, find a circle C such that C passes through points A and B and has L as a tangent. For details see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/apollonius.html

A different example involves answering a question about these two figures that can be thought of as requiring consideration of an infinite collection of possibilities, without going through infinitely many steps:

-----------------------------------------------------------------------------------------------------------------------------------
Figure: Spirals
XX
Is there a continuous route through the white space in A joining blue dots at a1 and a2?
Is there a continuous route through the white space in B joining blue dots at b1 and b2?
-----------------------------------------------------------------------------------------------------------------------------------
Consider the questions about the existence of possible routes in Fig: Spirals A and B. In both cases the route should not include any of the red space. The answer "yes" is easy to defend if a route has been found.

If the routes are thought of as arbitrarily thin paths linking the two points, the ability to detect when the answer is "No" is much harder to explain, as it requires an ability to survey completely a potentially infinite space of possible routes. What sort of brain mechanism, or simulated brain mechanism, can provide that ability, or an equivalent ability avoiding explicit consideration of an infinite set of possibilities?

All of this is crucial to some of the uses of visual sensing (or other spatial sensing) in more or less fine-grained online control (emphasised by James Gibson), as opposed to the use of vision to categorise, predict, explain, etc. Some examples involving affordance detection going beyond Gibsonian online control are discussed in Sloman (2007-14).

For now I wish to focus mainly on the role of impossibility detection in mathematical reasoning. The difference between existence and non-existence of a route linking two blue dots without ever entering a red area is a mathematical difference. I expect most readers will not have much difficulty deciding whether such a route exists in Figure A or Figure B.

What sort of brain mechanisms can perform an exhaustive search of all possible routes from one blue dot that do not enter any red space, and discover that in one of the pictures no such route reaches the other blue dot? Does it really involve checking infinitely many possible routes starting from one of the blue dots?

How can brain mechanisms implement such an exhaustive checking process, covering an enormous, possibly infinite, variety of cases? For now I'll leave that question for readers to think about.
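For a discretised stand-in for such a figure, the exhaustive check is finite: flood-filling the white region from one dot and testing whether the other dot is reached surveys every pixel route at once. This sketch (mine, using tiny stand-in grids rather than the actual spiral figures) decides only the discretised version, subject to the resolution caveats of the maze example above; it does not answer the question for arbitrarily thin continuous paths:

```python
def same_white_region(image, p, q):
    """Flood-fill the white ('.') cells from p; report whether q is
    reached.  'R' marks red cells.  Decides reachability only for
    this particular discretisation of the figure."""
    rows, cols = len(image), len(image[0])
    stack, seen = [p], {p}
    while stack:
        r, c = stack.pop()
        if (r, c) == q:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and \
               image[nr][nc] == '.' and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

# Tiny stand-ins: a red wall with an open end, and one spanning the grid.
fig_a = ["..R..",
         "..R..",
         "....."]
fig_b = ["..R..",
         "..R..",
         "..R.."]
print(same_white_region(fig_a, (0, 0), (0, 4)))  # True: a route exists
print(same_white_region(fig_b, (0, 0), (0, 4)))  # False: regions separated
```

Note that the "No" answer here is reached by exhausting finitely many pixels, not by surveying infinitely many thin paths, which is precisely why it leaves the question about continuous routes untouched.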

Further requirements for intelligent, mathematical perception

A deep, and difficult, requirement for the proposed machine is that it needs to be able to detect that the direction of change of one feature, e.g. the increasing height of a fixed-base triangle, seems to be correlated with the direction of change of another, e.g. the decreasing size of the apex angle. This requires use of partial ordering relations (getting bigger, getting smaller) and does not require use of numerical measurements.

Nor is it a statistical correlation found by analysing collections of data: it is a perceived feature of a process in which two things necessarily change together.

Moreover, mathematical consciousness involves seeing why such relationships must hold.
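The height/angle covariation can be illustrated with purely ordinal machinery. In this toy example of mine, the apex of an isosceles triangle with a fixed base rises above the base's midpoint (a special case of the vertex motions discussed here), and only directions of change are compared; the trigonometry is used merely to generate the process, and a brain mechanism of the conjectured kind would not need it:

```python
import math

def apex_angle(height, half_base=1.0):
    """Apex angle of an isosceles triangle with fixed base, apex
    directly above the base's midpoint at the given height."""
    return 2 * math.atan(half_base / height)

def direction(xs):
    """Purely ordinal summary of a sequence: compare, never measure."""
    pairs = list(zip(xs, xs[1:]))
    if all(b > a for a, b in pairs):
        return "increasing"
    if all(b < a for a, b in pairs):
        return "decreasing"
    return "mixed"

heights = [0.5, 1.0, 2.0, 4.0, 8.0]
angles = [apex_angle(h) for h in heights]
print(direction(heights), direction(angles))  # increasing decreasing
```

A statistical learner would report the same covariation only as a regularity in sampled data; nothing in this sketch, or in such a learner, captures seeing that the angle must shrink as the apex recedes.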

I have a large, and growing, collection of examples. Many more, related to perception of possibilities and impossibilities, are collected in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html
and in documents linked from there.

It is not too hard to think of mechanisms that can observe such correspondences in perceived processes, e.g. using techniques from current AI vision systems. These are relatively minor extensions of mechanisms that can compare length, area, orientation, or shape differences in static structures without using numerical measurements.

What is much harder is explaining how such a Super-Turing mechanism can detect a necessary connection between two structures or processes.

The machine needs to be able to build bridges between the two detected processes that "reveal" an invariant structural relationship.

In Euclidean geometry studied and taught by human mathematicians, construction-lines often build such bridges, for example in standard proofs of the triangle sum theorem, or Pythagoras' theorem. (A taxonomy of cases is needed.)

But, it is important to stress that these mechanisms are not infallible. For example, the document below explains how I was at first misled by the stretched triangle example, because I did not consider enough possible lines of motion for the triangle's vertex: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html

The proposed machine will almost certainly generate the sorts of mistakes that Lakatos documented in his Proofs and Refutations, and much simpler mistakes that can occur in everyday mathematical reasoning.

But it must also have the ability to detect and correct such errors in at least some cases -- perhaps sometimes with help of social interactions that help to expand the ideas under consideration, e.g. by merging two or more lines of thought, or sets of examples explored by different individuals.

I expect some computer scientists/AI theorists would not be happy with such imperfections: they would want a mathematical reasoning/discovery machine to be infallible.

But that's clearly not necessary for modelling/replicating human mathematical minds. Even the greatest mathematicians can make mistakes: they are not infallible.

(Incidentally this dispenses with much philosophical effort on attempting to account for infallibility, e.g. via 'self-evidence'.)

All this needs to be put into a variety of larger (scaffolding) contexts.

But there are many unknown details, including which (human and non-human) brain mechanisms can do such things and (a) how and when they evolved and (b) how and when they develop within individuals.


Properties of the Super Turing membrane machine
21 Nov 2017: moved here from another document.

The conjectured virtual membrane will have various ways of acquiring "painted" structures, including the following (a partial, provisional, list):

NB: regarding the last point: I am not suggesting that evolution actually produced perfectly thin, perfectly straight, structures, or mechanisms that could create such things. Rather the mechanisms would have the ability to (implicitly or explicitly) postulate such limiting case features, represent them, and reason about their relationships, by extrapolating from the non-limiting cases. (Is this what Kant was saying in 1781?)

For example, the assumption that when two perfectly thin lines cross the intersection is a point with zero diameter would depend on an ability to extrapolate from what happens if two lines with non-zero thickness cross, and then they both become narrower and narrower, so that the overlap location becomes smaller in all directions. (Yet another "theorem" lurks there!)
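That lurking theorem has a simple quantitative shadow, sketched below (my illustration; the claim in the text is qualitative and needs no formulas): two straight strips of widths w1 and w2 crossing at angle theta overlap in a parallelogram with sides w1/sin(theta) and w2/sin(theta), so the overlap's diameter is at most (w1 + w2)/sin(theta), which shrinks to zero as the widths do:

```python
import math

def overlap_diameter_bound(w1, w2, theta):
    """Upper bound on the diameter of the overlap of two straight strips
    of widths w1 and w2 crossing at angle theta (0 < theta <= pi/2).
    The overlap is a parallelogram with sides w2/sin(theta) and
    w1/sin(theta); its longer diagonal is at most the sum of the sides."""
    return (w1 + w2) / math.sin(theta)

# As the two "lines" get thinner, their overlap shrinks in all directions:
theta = math.pi / 4
for w in [1.0, 0.1, 0.01, 0.001]:
    print(w, overlap_diameter_bound(w, w, theta))
```

The extrapolation described in the text corresponds to the limit of this bound: zero-width lines meet in a region of zero diameter, i.e. a point.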

It should not be assumed that any of this produces high precision projections, though a conjectured learning process (possibly produced by evolution across many stages of increasing sophistication) may generate mechanisms that can "invent" limiting cases, e.g. perfectly thin lines, perfectly straight lines, etc. perhaps starting with simpler versions of the constructions presented by Dana Scott in Scott(2014).

Monitoring mechanisms

In addition, a repertoire of "meta-membrane" monitoring mechanisms needs to be available that can detect potentially useful (or even merely interesting!) changes when membrane manipulation processes occur: items coming into contact or moving apart, orderings being changed, new sub-structures being created or disassembled, new relations arising, or old relations being destroyed (contact, ordering, etc.).

I have not attempted to answer the question whether the proposed membrane mechanisms (still under-specified) require new kinds of information processing (i.e. computation in a general sense) that use physical brain mechanisms that are not implementable on digital computers, perhaps because they rely on a kind of mixture of continuity and discreteness found in many chemical processes in living organisms.

It could turn out that everything required is implementable in a suitable virtual machine implemented on a digital computer. For example, humans looking at a digital display may perceive the lines, image boundaries and motions as continuous even though they are in fact discrete. This can happen when a clearly digital moving display is viewed through out-of-focus lenses, or at a distance, or in dim light, etc. In that case the blurring or smoothing is produced by physical mechanisms before photons hit the retina.

But it is also possible to treat a digital display as if it were continuous, for example by assigning sub-pixel coordinates to parts of visible lines, or motion trajectories. That sort of blurring loses information, but may sometimes make information more useful, or more tractable. It could be useful to build visual systems for robots with the ability to implement various kinds of virtual de-focusing mechanisms for internal use when reasoning about perceived structures, or controlling actions on perceived structures, e.g. moving a hand to pick up a brick.

Insofar as human retinas have concentric rings of feature detectors, with higher resolution detectors near a central location (the fovea) and lower resolution detectors further from the centre, this arrangement can be viewed as a mechanism that gives perceivers the ability to change the precision with which certain features in the optic array are sampled. It may have other applications.

Sub-neural chemical computations?

In theory, no new kinds of physical machine would be needed if the membrane mechanisms can use new kinds of digitally implementable virtual machinery, using virtual membranes and membrane operations. However, even if that is theoretically possible, it may be intractable if the number of new virtual machine components needs to match not neural but sub-neural molecular/chemical computational resources in brains. The number of transistors required to model such a mechanism digitally might not fit on our planet.

Compare the challenges to conventional thinking about brains implicit in Schrödinger (1944) and explicit in Grant (2010), Gallistel & Matzel (2012) and Trettenbrein (2016), suggesting that important aspects of natural information processing are chemical, i.e. sub-neural.

It is possible that such molecular-level forms of information processing could be important for the sorts of information processing brain functions postulated in Trehub(1991), though molecular level implementation would require significant changes to Trehub's proposed implementation of his ideas.

Turing's 1952 paper on chemistry-based morphogenesis, Turing (1952), at first sight appears totally unconnected with his work on computation (except that he mentioned using computers to simulate some of the morphogenesis processes). But if Turing had been thinking about requirements for replicating the geometrical and topological reasoning used by ancient mathematicians, and by learners today, then perhaps he thought, or hoped, that the chemical morphogenesis ideas would be directly relevant to important forms of computation, in the general sense of information processing. In that case his ideas might link up with the ideas about sub-neural computation referenced above, which might in turn play a role in the reasoning mechanisms conjectured in Sloman (2007-14). That paper draws attention to kinds of perception of topology/geometry-based possibilities and impossibilities that were not, as far as I know, included in the kinds of affordance that Gibson considered.

The membrane (or multi-membrane) machine needs several, perhaps a very large number, of writeable-readable-sketchable surfaces that can be used for various purposes, including perceiving motion, controlling actions, and especially considering new possibilities and impossibilities (proto-affordances).

The idea also needs to be generalised to accommodate inspectable 3D structures and processes, like nuts rotating on bolts, as discussed in another document. (Something about this may be added here later.)

The brain mechanisms to be explained are also likely to have been used by the ancient mathematicians who made the amazing discoveries leading up to publication of Euclid's Elements, and later: http://www.gutenberg.org/ebooks/21076

I think there are deep connections between the abilities that made those ancient mathematical discoveries possible, and processes of perception, action control, and reasoning in many intelligent organisms, as suggested in the workshop web page mentioned above.

One consequence of the proposal is that Euclid's axioms, postulates and constructions are not arbitrarily adopted logical formulae in a system that implicitly defines the domain of Euclidean geometry. Neither are they mere empirical/statistical generalisations capable of being refuted by new observations.

Rather, as Kant suggested, they were all mathematical discoveries, made using still unknown mechanisms in animal brains, originally produced (discovered) by evolution, with functions related to reasoning about perceived or imagined spatial structures and processes, in a space supporting smoothly varying sets of possibilities, including continuously changing shapes, sizes, orientations, curvatures and relationships between structures, especially partial orderings (e.g. of size, containment, angle, curvature, etc.).

Despite all the smooth changes, the space also supports many interesting emergent discontinuities and invariants, that the ancients discovered and discussed, many of which seem to be used unwittingly(?) by other intelligent species and pre-verbal children. (Adult humans normally have additional meta-cognitive mechanisms.)

Examples arising in much everyday visual perception, action control and planning are discussed in Sloman (2007-14). The same general principles are involved in learning to add and subtract numbers, though the specific types of information and information change are different. See Seely Brown et al. (1977)

The location at which the size of the angle at a vertex of a triangle peaks, as the vertex moves along a straight line that does not pass through the base of the triangle, is another, surprisingly complicated, example, discussed in another document.


Is chemistry essential for some animal competences?

I suspect chemistry-based reasoning mechanisms are important in brains, as Turing suggested in his Mind 1950 paper, though the comment usually goes unnoticed. I think Kenneth Craik had related (under-developed) ideas in 1943, e.g. wondering how a tangle of neurons could represent straightness or triangularity... before digital computers and digitised image arrays had been invented.

I conjecture that Turing may have been thinking about these issues when he wrote his paper on morphogenesis, published two years before he died: Turing (1952). For a useful summary for non-mathematicians, see Ball (2015)


TO BE CONTINUED

This needs to be related to the theory of evolved construction kits:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html
     Construction kits for evolving life
     (Including evolving minds and mathematical abilities.)
An older version of the construction kits paper (frozen mid 2016) was published in Cooper and Soskova (Eds) 2017.

REFERENCES AND LINKS


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham