Why can't (current) machines reason like Euclid
or even human toddlers?

(And many other intelligent animals)

Aaron Sloman
http://www.cs.bham.ac.uk/~axs/
School of Computer Science, University of Birmingham

Notes for IJCAI Workshop Aug 2017
Architectures for Generality and Autonomy
http://cadia.ru.is/workshops/aga2017/
--------------------------------
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html
Short URL: goo.gl/dB9WYz
PDF http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.pdf
Recorded 42 min video presentation partly based on this document available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/ijcai-17
Last updated 17 Aug 2017



Some deep, largely unnoticed, gaps in current AI,
(e.g. gaps between human and artificial mathematics)
and what Alan Turing might have done about them if he
had lived several more decades after publishing his 1952
paper on The Chemical Basis of Morphogenesis (Turing 1952)
(Now his most cited paper.)


There are deep gaps in Current AI, Psychology, Neuroscience, Biology and Philosophy

These gaps are concerned with things we don't understand about:

-- use of structured internal languages in many species, and in pre-verbal humans
-- grasp of mathematical features of spatial structures and processes, in humans
   and many other species
   (though only humans can reflect on what they know and talk about it)
-- gaps in the explanatory power of current neuroscience, psychology and AI
-- gaps in philosophical thinking about types and functions of consciousness
-- gaps in evolutionary theory regarding how current competences
   and mechanisms evolved
-- gaps in our thinking about the nature of mathematics and human/non-human
   mathematical competences

Some common errors lead some people to excessive optimism and others
to excessive pessimism about current state and future prospects of AI.

-- My work on these (and related problems) began over half a century ago
   and progress has been real but very slow.
-- It was accelerated when discussions of Alan Turing's (1952) paper on
   morphogenesis during his centenary in 2012 led me to start the
   Meta-Morphogenesis (M-M) project (https://goo.gl/9eN8Ks),
   mentioned again below.

I won't have time to go into related problems concerning the ability of fundamental physics to support all the known forms of life and the products of biological evolution that made them possible. (Including fundamental and derived construction kits mentioned below.)


14 Aug 2017: I have just learnt from Alexei Sharov about deep, closely related work by the Biosemiotics research community. E.g.
http://www.biosemiotics.org/biosemiotics-introduction/
https://en.wikipedia.org/wiki/Biosemiotics
https://en.wikipedia.org/wiki/Zoosemiotics

     Trying to understand intelligence by studying only human intelligence
     is as misguided as trying to understand life by studying only human life.

     So I'll squeeze in comments on evolution of human and non-human minds.
     (The M-M project says much more.)

I'll start with some video extracts from a BBC video showing weaver birds at various levels of competence building nests. The builders (mostly male) have to develop rich topological competences, e.g. concerning knots. The video is
https://www.youtube.com/watch?v=6svAIgEnFvw
Widespread but seriously mistaken beliefs about human linguistic competences, how they evolved, and how individuals acquire them will be challenged using the famous case of deaf children in Nicaragua, referenced below.

I'll also show a pre-verbal human child apparently able to think about 3-D topology, and do topological experiments, long before she is able to talk about such matters. The video is accessible from this discussion of "Toddler theorems" illustrating mathematical competences used unwittingly by young humans [26a] Sloman (2013c).
Compare: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/rings.html

Both of these, and many more types of competence in pre-verbal humans and many other animal species give clues regarding evolution of mathematical competences, and how they relate to the mathematical structures in the environments in which animals behave.


Background: What is mathematical discovery? (Euclid, Kant and Einstein)
AI researchers, like many psychologists, neuroscientists and even philosophers tend to ignore the problem of explaining human mathematical intelligence, including features noted by Immanuel Kant in 1781.

Any adequate theory/model of intelligence must not ignore the kinds of discoveries made by Euclid, Zeno, Archimedes, Pythagoras, etc. over 2000 years ago.

These discoveries

(a) are not empirical -- though they may be triggered/awakened by experience
(but you don't need to have your eyes open when reasoning about geometry!);

(b) are not derivable from definitions using logic (i.e. they are not analytic, in Kant's sense -- and not necessarily made using modern logical/algebraic mechanisms, or starting from modern "foundations", discussed below, and listed in Sakharov (2003ff));

(c) they are discoveries of facts that are non-contingent (necessarily true)

[Analysing 'necessarily' here is non-trivial. It has nothing to do with possible world semantics.
This was a main theme of my 1962 DPhil thesis.
It also has nothing to do with probability. Necessary truth and impossibility refer to structural constraints of various kinds: these are different concepts from 100% and 0% probability, which refer to ratios of measures of some sort.]

(d) These ancient discoveries are still in constant use by engineers, scientists and mathematicians all over the planet.
     (Though in the UK very few students now learn to make such discoveries
     by solving geometric problems and finding proofs, which I think is disgraceful.)

BUT none of this implies that mathematicians are infallible
(as Imre Lakatos illustrated in his Proofs and Refutations (1976))

NOTE

I have many examples related to reasoning in geometry, topology, one-to-one correspondences, and consequences for arithmetic, here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html
and various linked files, e.g. these
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/rubber-bands.html
and the discussion of "toddler theorems" in [26a] A. Sloman (2013c).

It is well known (though not easy to prove) that although bisecting an arbitrary angle is easy in Euclidean geometry, trisecting an arbitrary angle is impossible. However, there is a simple extension to Euclidean geometry, known to Archimedes, the "neusis" construction, that makes trisecting an arbitrary angle easy, as explained here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html

The possibility of this extension was proved without modern logic, algebra, set theory, proof theory etc. As far as I know, there is no current AI reasoning system capable of discovering such a construct, considering whether it is an acceptable extension to Euclid's straight-edge and compasses constructs, and checking whether it does provide a way to trisect any angle.
One of the requirements for an adequate AI model or replica or theory of human intelligence is that it should be able to model those ancient discovery processes. Perhaps future AI will overcome current limitations.

Likewise any adequate future neuroscience should be able to explain which features of brains (neural/sub-neural ...) make those discovery processes possible, including how those mechanisms enable/support understanding the mathematical proofs that are involved.

That kind of understanding should go beyond blind reconstruction or "parroting" of the proofs.

E.g. that mode of "discovery" could be provided (eventually) by a systematic generator of all possible sequences of characters, or words in a human language (up to some maximum length). But that would not provide the ability to understand the proof, or to use it to solve a novel problem.

Can AI reasoning systems replicate the ancient discoveries?

For over 50 years there has been work on automated geometrical theorem proving, apparently followed only by a small subset of the AI research community.

I believe it started with H. Gelernter, 1964 (inspired by some "pencil-and-paper" simulations done by Marvin Minsky).

Very impressive more recent work, which I have not yet studied in detail, can be found in Chou, et al., 1994, and other publications by the same authors. However, my understanding is that this work uses reasoning in a logical framework, starting from a variant of Euclid's axioms, supplemented with heuristic use of arithmetical models to block some searches and increase efficiency. As far as I can tell, there is no attempt to model or replicate human uses of vision or visual imagination in reasoning that demonstrates impossibility or necessity.

It is easy to create a set of axioms describing some portion of the world, e.g. the layout of a building. It may then be possible to use those axioms to prove that the only route from room A to room B goes past room C. But the fact that that is a "theorem" in that system does not make it true, let alone necessarily true. For example the map may be incorrect, missing out a door. Moreover there may be a wall that blocks an alternative route, but walls can be demolished. So the fact that the map includes the wall does not make it a necessary truth that there is no route through the location of the wall. What can easily be made false is not a necessary truth.
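The point can be illustrated with a toy computational model. A "theorem" derived from a map-like description holds only as long as the description matches the world: demolish a wall (add an edge) and the theorem fails. The room layout below is a hypothetical example of my own, not taken from any real building.

```python
from collections import deque

def reachable_avoiding(adj, start, goal, avoid):
    """Breadth-first search: is goal reachable from start without
    ever passing through the room named 'avoid'?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        room = frontier.popleft()
        if room == goal:
            return True
        for nxt in adj[room]:
            if nxt != avoid and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Hypothetical layout: A connects to C and D; C connects to B;
# a wall blocks any direct passage between D and B.
layout = {'A': {'C', 'D'}, 'C': {'A', 'B'}, 'B': {'C'}, 'D': {'A'}}
print(reachable_avoiding(layout, 'A', 'B', avoid='C'))   # False: every route passes C

# Demolish the wall between D and B: the "theorem" no longer holds.
layout['D'].add('B')
layout['B'].add('D')
print(reachable_avoiding(layout, 'A', 'B', avoid='C'))   # True
```

The proof from the first graph was perfectly valid, but its conclusion was a contingent fact about the building, not a necessary truth.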

What makes something impossible is not that it cannot be depicted, as illustrated in pictures of impossible objects. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

There have also been more philosophical and/or mathematical publications analysing, and in some cases defending, Euclid's use of diagrams, but without any attempt to automate reasoning with diagrams as a contribution to the cognitive science of mathematics. E.g. Miller (2007) and works by Ken Manders.

Replicating human understanding

What goes on when a human understands one of Euclid's axioms, e.g. one of the "axioms of congruence" specifying conditions for two triangles to be congruent?

These axioms were discoveries, not arbitrarily adopted stipulations. I don't know of any theory of learning/perception that can explain or model those ancient discoveries.

The discovery processes and the epistemological and semantic differences between mathematical and empirical discoveries pointed out by Kant are usually ignored by developmental psychologists, neuroscientists, and (most) AI theorists.

(Piaget was an exception, though it seems to me that he lacked the required conceptual tools, i.e. computational tools, required to say anything deep.)

In order to give my audience first-hand experience of what I am talking about as missing from AI, here are some examples that, as far as I know, are not included in Euclid's Elements or in standard textbooks on Euclidean geometry.

I think they build on deep competences shared with other intelligent species (e.g. nest building birds, squirrels, elephants, orangutans, hunting mammals).

Some parts of these competences are already present in pre-verbal human toddlers though they are very hard to investigate because failure of a child to perform as an experimenter hopes says nothing about what that child can or cannot do. The problem may be a failure on the part of the experimenter to motivate the child or to trigger the right competences in the child, which might be triggered in a totally different social or non-social interaction.

(Propositional) Logical reasoning: easy for computers

A lot of work has been done on automated theorem proving using logic, arithmetic and algebra. A toy example is reasoning in propositional calculus, illustrated here:

Figure Logic

Here the symbol ":-" can be read as "therefore", to indicate the conclusion of an inference to be evaluated.

In the first (upper) example, where only two propositions P and Q are involved there are only four possible combinations of truth values to be considered. And that makes it easy to discover that no combination makes both premises true and the conclusion false.

In the second case, for each of those four combinations R may be true or false, so the total number of possibilities is doubled. But it is still a finite discrete set and can be examined exhaustively to see whether it is possible for both premises to be true and the conclusion false. I assume the answer is obvious for anyone looking at this who understands "or" and "not". Checking for validity in propositional calculus involves exhaustive search through finite sets of discrete possibilities, so it is easy to program computers to do this.

Things get more complex if the propositional variables, e.g. P, Q, etc., instead of being restricted to two possible truth values (T and F), can take other intermediate values, or even vary continuously. The simple discrete reasoning, based on truth-tables, using or and not etc., will have to be replaced by something mathematically much more complex.
     (a topic investigated by Tarski, Zadeh and others -- "Fuzzy Logic"
     https://en.wikipedia.org/wiki/Fuzzy_logic)
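For illustration, here is a minimal Python sketch of that exhaustive checking procedure. Since the figure is not reproduced here, the two inferences below are illustrative assumptions of my own, matching the description in the text: a valid inference using only P and Q, and an invalid one involving R, whose first premise can be true when R is true even though P is false.

```python
from itertools import product

def valid(premises, conclusion, variables):
    """Truth-table test of validity: the inference is valid iff no
    assignment of truth values makes every premise true while the
    conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False   # found a counterexample
    return True

# Illustrative valid inference:   P or Q, not P :- Q
print(valid([lambda e: e['P'] or e['Q'], lambda e: not e['P']],
            lambda e: e['Q'], ['P', 'Q']))            # True

# Illustrative invalid inference: P or R, not P :- Q
# (counterexample: R true, P false, Q false)
print(valid([lambda e: e['P'] or e['R'], lambda e: not e['P']],
            lambda e: e['Q'], ['P', 'Q', 'R']))       # False
```

The search is over 2^n discrete cases for n propositional variables, which is why a digital computer handles it trivially.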

Geometrical/topological reasoning: much harder for computers

It is very much harder for computers to replicate the discoveries made by ancient mathematicians, over 2000 years ago, long before the discoveries in modern logic, formal reasoning, set theory, and Descartes' "translation" of geometry into arithmetic using coordinates.

Ancient geometrical discoveries were not concerned only with collections of discrete possibilities. Euclidean geometry is concerned with smoothly varying sets of possibilities, including continuously changing shapes, sizes, orientations, curvature and relationships between structures.

Contrast the simple truth-table analysis with reasoning about what will happen if you start with a triangle, like the blue triangle in the figure, and move the top vertex away from the opposite side along a line going through the opposite side. Here the red triangle illustrates one of the possible new triangles that could result from moving the top vertex along the thin line.

Figure Stretch-internal

One way to reason about the above figure is to consider what happens if you move the new red vertex downwards while keeping the two red lines with the same orientations, so that the angle between them remains fixed, and the two sides retain their lengths. In that case, as the new vertex moves down towards the old position, the bottom ends of the two new red sides will pass through the base of the old blue triangle between the two bottom corners of the blue triangle. Getting them to pass through the two bottom corners will require widening of the angle.

So the angle made at the top of the new triangle must be smaller than the original angle at the top of the blue triangle.

This makes it obvious that as the top vertex is moved further and further up the thin line, while the two sides continue to pass through the two bottom corners, the angle between the two red sides will continually get smaller.

(I am not claiming that this is the only way to understand why the angle must get smaller as the top vertex moves further up the line.)

As far as I know this is not one of the standard theorems in Euclid's Elements. I don't even know whether anyone else has ever formulated this fact about Euclidean triangles, but I hope all readers will find this inference as obviously valid as the first logical example using "or" and "not" above, despite the fact that unlike the logical example, the triangle example involves an infinite (smoothly varying) set of possible shapes rather than a discrete set of cases that can be examined exhaustively.
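For contrast, here is the Cartesian/arithmetical route the ancient mathematicians did not need: coordinates and trigonometry can sample instances of the claim numerically. Note that such sampling only checks finitely many cases; it cannot establish the necessity that the visual argument delivers. The fixed base and the vertical line through its interior are arbitrary illustrative choices of mine.

```python
from math import acos, degrees, sqrt

def apex_angle(base_a, base_b, apex):
    """Angle (in degrees) at the apex of a triangle, computed from the
    dot product of the two side vectors running from the apex to the
    base corners."""
    (ax, ay), (bx, by), (cx, cy) = base_a, base_b, apex
    ux, uy = ax - cx, ay - cy
    vx, vy = bx - cx, by - cy
    cosine = (ux * vx + uy * vy) / (sqrt(ux * ux + uy * uy) * sqrt(vx * vx + vy * vy))
    return degrees(acos(cosine))

# Fixed base corners; move the apex up a line crossing the base interior.
A, B = (0.0, 0.0), (4.0, 0.0)
angles = [apex_angle(A, B, (1.0, h)) for h in (1.0, 2.0, 4.0, 8.0, 16.0)]

# The sampled apex angles shrink monotonically as the apex moves up:
assert all(a > b for a, b in zip(angles, angles[1:]))
print([round(a, 1) for a in angles])
```

Each run confirms finitely many instances of the theorem, whereas the reasoning above covers the whole smoothly varying continuum of cases at once.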

Humans clearly have (ill-understood) mechanisms for exhaustively examining an infinite, smoothly varying collection of cases. How could evolution have achieved that? Can we replicate the required mechanisms on computers?

I am not claiming that this is the only (non-arithmetical) way to prove that the top angle must decrease in size as the vertex moves further up. It is a feature of Euclidean geometry that many theorems (e.g. Pythagoras' theorem) can be proved in a very wide variety of different ways, all, or most, of them essentially visual. 118 proofs are presented in [Pythag]. That is, in part, evidence of the power of the human visual system to discern mathematical properties and relationships of geometrical structures and processes.

As far as I know there is nothing known to neuroscience that explains those abilities, and nothing in AI, so far, that simulates them. I suspect that until we know how to build machines that are capable of supporting such reasoning we shall not be able to build robots with the same kinds of intelligence in dealing with spatial structures and processes as humans and many other intelligent species have.

Humans, in addition, have meta-cognitive processes able to reflect on those processes of reasoning and the relationships between the discoveries they lead to, whereas I suspect other species can merely use the abilities in considering, choosing and executing spatial actions, often in novel situations.

What will happen if you move the vertex along a line that crosses the opposite side outside the triangle, as in the Figure Stretch-external, below?

This is harder to reason about: why? It is left as an exercise for the reader, and as stimulation to investigate the differences between the two cases.

Figure Stretch-external

Geometrical vs logical reasoning

Unlike the geometrical reasoning discussed here, the logical example involves consideration only of discrete alternatives (the possible truth-values for P, Q, and R, the propositions, and the resulting truth values of premises and conclusion). So it is easy to program a digital computer to examine all the possible cases and discover that the first inference, involving only P and Q, is valid and the second is not, because the first premiss would be true if R is true, even if P is false.

On the other hand the reasoning presented here concerns continuous deformation through infinitely many locations (if the lines are infinitely thin, as assumed by Euclid) and it is therefore impossible to make a digital computer exhaustively explore all the possible configurations to ensure that none of them refutes the theorem. Yet it is obvious to us.

This suggests that if some brains have evolved an ability to do the kinds of reasoning I have just summarised in order to confirm the truth of a theorem in Euclidean geometry, this cannot be implemented on a discrete computer, unless there is a way to implement a new kind of virtual machine with the required kinds of continuity and abilities somehow to examine an infinite collection of possibilities.

What are foundations for mathematics?

I have tried to give a brief, oversimplified, introduction to Kant's ideas about mathematical knowledge, and I have tried to illustrate some of the ways in which the discoveries of ancient mathematicians (and related discoveries made by very young children and non-human animals), that seem to fit Kant's ideas, don't naturally fit in the space of mathematical forms of reasoning and discoveries so far made by AI systems running on computers.

In the last two centuries there has been a lot of research on foundations for mathematics, most of it focused on mathematical foundations for mathematics, i.e. trying to find some subset of mathematics from which all of the rest can be derived. For examples, see the links in Sakharov (2003ff).

In a separate document, namely,
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/maths-multiple-foundations.html
I have recently been trying to distinguish additional types of foundation for mathematics.

For future AI systems that replicate human mathematical capabilities (and possibly more), all the above topics will have to be addressed.

The Meta-Morphogenesis (Self-informing universe) project

The Turing-inspired Meta-Morphogenesis project was proposed in the final commentary in Alan Turing - His Work and Impact, a collection of papers by and about Turing published on the occasion of his centenary [6].

The project defines a way of trying to fill gaps in our knowledge concerning evolution of biological information processing that may give clues regarding forms of computation in animal brains that have not yet been re-invented by AI researchers.

This may account for some of the enormous gaps between current AI and animal intelligence, including gaps between mathematical abilities of current AI systems and the abilities of ancient mathematicians whose discoveries are still being used all over the world, e.g. Archimedes, Euclid, Pythagoras and Zeno.

Evolution of information processing capabilities and mechanisms is much harder to study than evolution of physical forms and physical behaviours, e.g. because fossil records can provide only very indirect evidence regarding information processing in ancient organisms. Moreover it is very hard to study all the internal details of information processing in current organisms. Some of the reasons will be familiar to programmers who have struggled to develop debugging aids for very complex multi-component AI virtual machines.

Because we cannot expect to find fossil records of information processing, or the mechanisms used, the work has to be highly speculative. But conjectures should be constrained where possible by things that are known. Ideally these conjectures will provoke new research on evolutionary evidence and evidence in living species. However, as often happens in science, the evidence may not be accessible with current tools. Compare research in fundamental physics (e.g. Tegmark (2014)).

The project presents challenges both for the theory of biological evolution by natural selection, and for AI researchers aiming to replicate natural intelligence, including mathematical intelligence. This is a partial progress report on a long term attempt to meet the challenges. A major portion of the investigation at this stage involves (informed) speculation about evolution of biological information processing, and the mechanisms required for such evolution, including evolved construction-kits, the need for which has not been widely acknowledged by evolutionary theorists.

A lot of work has been done on the project since then, some of it summarised below, especially the developing theory of evolved construction kits of various sorts (Sloman[2017]), but there are still many unsolved problems, both about the processes of evolution and the products in brains of intelligent animals.

I am not primarily interested in AI as engineering: making useful new machines. Rather I want to understand how animal brains work, especially animals able to make mathematical discoveries like the amazing discoveries reported in Euclid's Elements over 2000 years ago.

My interest in AI (which started around 1969) and my work on the M-M project (since late 2011) arose partly out of my interest in defending Immanuel Kant's philosophy of mathematics in his (1781), and partly from my conjectured answer to the question: 'What would Alan Turing have worked on if he had not died two years after publication of his 1952 paper on Chemistry and Morphogenesis (Turing 1952)?' According to Google Scholar, this is now the most cited of his publications, though largely ignored by philosophers, cognitive scientists and AI researchers.

I suspect that if Turing had lived several decades longer, he would have tried to understand forms of information processing needed to control behaviour of increasingly complex organisms produced by evolution, starting from the very simplest forms produced somehow on a lifeless planet formed from condensed gaseous matter and dust particles, later followed, over many millions of years, by increasingly complex organisms, with increasingly complex forms of information processing, including the kinds that led to the ancient mathematical discoveries reported by Euclid, presumably building on earlier discoveries concerning good ways to solve practical problems.

That is the M-M project.

Protoplanetary disk

[NASA artist's impression of a protoplanetary disk, from WikiMedia]

How could this come about?

I have nothing to add to conjectures by others about the initial, minimal forms of life, e.g. see Ganti (2003).

However, controlled production of complex behaving structures needs increasingly sophisticated information processing:
-- in processes of reproduction, growth and development (Schrödinger (1944) had some profound observations regarding mechanisms for storing and using information required for reproduction);
-- for control of behaviour of complex organisms reacting to their environment, including other organisms.

In simple organisms, control mainly uses presence or absence of sensed matter to turn things on or off or sensed scalar values to specify and modify other values (e.g. homeostasis and chemotaxis).
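That kind of scalar, on/off control can be sketched in a few lines. The thermostat-like homeostat below is an illustrative toy of my own, not a model of any particular organism: a sensed scalar value switches an effector on or off around a set point.

```python
def homeostat_step(temperature, heater_on, set_point=37.0, band=0.5):
    """One step of bang-bang (on/off) homeostatic control: switch the
    effector on below the set point, off above it, with a dead band
    to avoid rapid switching."""
    if temperature < set_point - band:
        return True
    if temperature > set_point + band:
        return False
    return heater_on   # inside the dead band: keep the current state

# Simulate: heat leaks away at every step; the heater adds heat when on.
temp, heater = 30.0, False
for _ in range(100):
    heater = homeostat_step(temp, heater)
    temp += (1.0 if heater else 0.0) - 0.3   # heater gain minus leak
print(round(temp, 1))   # hovers near the 37.0 set point
```

All the information used here is a single scalar compared with thresholds, which is the point of the contrast drawn below with structural specifications.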

As organisms and their internal structures become more complex, the need for structural rather than metrical specifications increases.

Many artificial control systems are specified using collections of differential equations relating such measures. One of several influential attempts to generalise these ideas is the 'Perceptual Control Theory (PCT)' of William T Powers.

But use of numerical/scalar information is not general enough: It doesn't suffice for linguistic (e.g. grammatical or semantic) structures or for reasoning about topological relationships, or processes of structural change e.g. in building complex nests, in chemical reactions, in programming, or in engineering assembly processes -- or 'toy' engineering, such as playing with meccano sets, tinker toys, Lego, etc. It also cannot describe growth of organisms, such as plants and animals, in which new materials, new substructures, new relationships and new capabilities form -- including new information processing capabilities.

For example, the biologically important changes between an egg and a chicken cannot be described by changes in a state-vector. Why not?
(Left as an exercise for the reader: there are several reasons.)

Turing's Morphogenesis paper [31] also focused on mechanisms (e.g. diffusion of chemicals) representable by scalar (numerical) changes, but the results included changes of structure described in words and pictures. As a mathematician, a logician and a pioneer of modern computer science he was well aware that the space of information-using control mechanisms is not restricted to numerical control systems.

For example a Turing machine's operation involves changing linear sequences of distinct structures, not numerical measures.
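A minimal Turing machine interpreter makes that point concrete: each step rewrites one symbol in a linear sequence of symbols, with no arithmetic on measured quantities anywhere. The machine and rule format below are my own toy example.

```python
def run_turing_machine(rules, tape, state='start', head=0, limit=1000):
    """Run a Turing machine. 'rules' maps (state, symbol) to
    (new_symbol, move, new_state); blank cells hold '_'. Every step
    merely rewrites one symbol in a linear sequence of distinct
    structures -- no numerical measures are manipulated."""
    cells = dict(enumerate(tape))
    for _ in range(limit):
        if state == 'halt':
            break
        symbol = cells.get(head, '_')
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells))

# A machine that replaces every 'a' with 'b' until it reaches a blank.
rules = {('start', 'a'): ('b', 'R', 'start'),
         ('start', 'b'): ('b', 'R', 'start'),
         ('start', '_'): ('_', 'R', 'halt')}
print(run_turing_machine(rules, 'aabba'))   # prints "bbbbb_"
```

The control is entirely structural: symbols, states and moves, not magnitudes.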

In the last half century human engineers have discovered, designed and built additional increasingly complex and varied forms of control in interacting physical and virtual machines.

That includes control based on

grammars, parsers, planners, reasoners, rule interpreters, problem solvers and many forms of automated discovery and learning.
(Note: it is widely believed that these aspects of symbolic AI have been proved irrelevant to real intelligence. But that belief is an educationally harmful myth.)

Much progress has been made replicating aspects of those competences in computers, especially in the last 50 years, with enormous acceleration during the 21st Century, though the education of AI researchers does not provide much insight into what is still missing from AI: teachers, funding agencies, future employees, and their employers like to focus on successes, i.e. on techniques and theories that can be used successfully, however limited they may turn out to be in future decades.

Long before that, biological evolution produced and used increasingly complex and varied forms of information in construction, modification and control of increasingly complex and varied behaving mechanisms.

CONJECTURE:

If Turing had lived several decades longer, he might have produced new theories about many intermediate forms of information in living systems and intermediate mechanisms for information-processing: intermediate between the very simplest forms and the most sophisticated current forms of life.

This would fill gaps in standard versions of the theory of natural selection. E.g. the theory does not explain what makes possible the many forms of life on this planet, and all the mechanisms they use, including the forms that might have evolved in the past or may evolve in the future.

It merely assumes such possibilities, and explains how a subset of realised possibilities persist, and the consequences that follow.

For example, the noted biologist Graham Bell wrote: 'Living complexity cannot be explained except through selection and does not require any other category of explanation whatsoever' (Bell 2008).

Only a few defenders of Darwinian evolution seem to have noticed the need to explain

(a) what mechanisms make possible all the options between which choices are made, and

(b) how what is possible changes, and depends on previously realised possibilities.

CONJECTURE: USES OF EVOLVED CONSTRUCTION KITS

A possible defence of Darwinian evolution would enrich it to include investigation of
(a) the Fundamental Construction Kit (FCK) provided by physics and chemistry before life existed,

(b) the many and varied 'Derived construction kits' (DCKs) produced by combinations of natural selection and other processes, including asteroid impacts, tides, changing seasons, volcanic eruptions and plate tectonics.

Figure FCK: Fundamental Construction Kit (FCK)

Figure DCK: Derived Construction Kits (DCKs)

As new, more complicated, life forms evolved, with increasingly complex bodies, increasingly complex changing needs, increasingly broad behavioural repertoires, and richer branching possible actions and futures to consider, their information processing needs and opportunities also became more complex.

Somehow the available construction kits also diversified, producing new, more complex derived construction kits, that allowed

construction not only of new biological materials and body mechanisms, supporting new more complex and varied behaviours

but also

construction of new more sophisticated information-processing mechanisms, enabling organisms, either alone or in collaboration, to deal with increasingly complex challenges and opportunities;

including both concrete and abstract construction kits.

For more on evolution of and use of construction-kits see Sloman[2017] (work in progress).

DEEP DESIGN DISCOVERIES

Many deep discoveries were made by evolution, including designs for DCKs that make possible new forms of information processing.

These have important roles in animal intelligence, including perception, conceptual development, motivation, planning, and problem solving, including

-- topological reasoning about properties of geometrical shapes and shape-changes.
-- reasoning about possible continuous rearrangements of material objects
     (much harder than planning moves in a discrete space).

Different species, with different needs, habitats and behaviours, use information about different topological and geometrical relationships, including

-- birds that build different sorts of nests,
-- carnivores that tear open their prey in order to feed,
-- human toddlers playing with (or sucking) body-parts, toys, etc.

Later on, in a smaller subset of species (perhaps only one species?) new meta-cognitive abilities gradually allowed previous discoveries to be noticed, reflected on, communicated, challenged, defended and deployed in new contexts.

Such 'argumentative' interactions may have been important precursors for chains of reasoning, including the proofs in Euclid's Elements.

WHY IS THIS IMPORTANT?

This is part of an attempt to explain how it became possible for evolution to produce mathematical reasoners.

New deep theories, explanations, and working models should emerge from investigation of preconditions, biological and technological consequences, limitations, variations, and supporting mechanisms for biological construction kits of many kinds.

For example, biologists have pointed out that specialised construction kits, sometimes called 'toolkits', supporting plant development were produced by evolution, making upright plants possible on land (some of which were later found useful for many purposes by humans, e.g. ship-builders).

Specialised construction kits were also needed by vertebrates, and others by various classes of invertebrate forms of life.

INFORMATION PROCESSING

Construction kits for biological information processing have received less attention.

One of the early exceptions was Schrödinger's little 1944 book
What is life? (read by James Watson, before he worked with Crick on DNA).

More general construction kits that are tailorable with extra information for new applications can arise from discoveries of parametrisable sub-spaces in the space of possible mechanisms

e.g. common forms with different sizes, or different ratios of sizes, of body parts, different rates of growth of certain body parts, different shapes or sizes of feeding apparatus, different body coverings, etc.

Using a previously evolved construction kit with new parameters (specified either in the genome, or by some aspect of the environment during development) can produce new variants of organisms in a fraction of the time it would take to evolve that type from the earliest life forms.

Similar advantages have been claimed for the use of so-called Genetic Programming (GP) using evolved, structured, parametrised abstractions that can be re-deployed in different contexts, in contrast with Genetic Algorithms (GAs) that use randomly varied flat strings of bits or other basic units.

Evolution sometimes produces specifications for two or more different designs for different stages of the same organism, e.g. one that feeds for a while, and then produces a cocoon in which materials are transformed into a chemical soup from which a new very different adult form (e.g. butterfly, moth, or dragonfly) emerges, able to travel much greater distances than the larval form to find a mate or lay eggs.

Such species share mathematical commonality at a much lower level (common molecular structures) than the structural and functional designs of larva and adult. In contrast, the majority of organisms retain a fixed, or gradually changing, structure as they grow after hatching or birth, though not fixed sizes, size-ratios of parts, required forces, etc.

Mathematical discoveries were implicit in evolved designs that support parametrisable variable functionalities, such as evolution's discovery of homeostatic control mechanisms that use negative feedback control, billions of years before the Watt centrifugal governor was used to control speed of steam engines.

Of course, most instances of such designs would no more require awareness of the mathematical principles being used than a Watt-governor, or a fan-tail windmill (with a small wind-driven wheel turning the big wheel to face the wind) does.

In both cases, one part of the mechanism acquires information about something (e.g. whether speed is too high or too low, or the direction of maximum wind strength) while another part does most of the work, e.g. transporting energy obtained from heat or wind power to a new point of application.
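The sensing/working division just described can be sketched in a few lines of purely illustrative Python: one function acquires information about whether speed is too high or too low, while another adjusts the energy supply by negative feedback, as in a crude software analogue of the Watt governor. The dynamics and gain values below are invented for illustration only.

```python
# Purely illustrative sketch, not a model of Watt's actual mechanism.
# One part acquires information (is speed too high or too low?);
# another part does the work (adjusting the energy supply).

def sense_error(speed, target):
    """The information-acquiring part: sign and size of the discrepancy."""
    return target - speed

def simulate_governor(target=100.0, steps=200, gain=0.1):
    """The working part: a throttle adjusted by negative feedback."""
    speed, throttle = 0.0, 0.0
    for _ in range(steps):
        throttle += gain * sense_error(speed, target)  # negative feedback
        speed = 0.9 * speed + throttle                 # invented plant dynamics
    return speed

print(round(simulate_governor(), 1))  # settles close to the target speed
```

No awareness of control theory is needed by the mechanism: the mathematical principle (error-driven negative feedback) is implicit in the design, exactly as in the governor and the fan-tail windmill.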

Such transitions and decompositions in designs could lead to distinct portions of genetic material concerned with separate control functions, e.g. controlling individual development and controlling adult use of products of development, both encoded in genetic material shared across individuals.

METACOGNITION EVOLVES

Very much later, some meta-cognitive products of evolution allowed individuals (humans, or precursors) to attend to their own information-processing (essential for debugging), thereby 'rediscovering' the structures and processes, allowing them to be organised and communicated -- in what we now call mathematical theories, going back to Euclid and his predecessors (about whose achievements there are still many unanswered questions).

If all of this is correct, then the physical universe, especially the quantum mechanical aspects of chemistry discussed by Schrödinger, provided not only

a construction kit for genetic material implicitly specifying design features of individual organisms,

but also

a 'Fundamental' construction kit (FCK) that can produce a wide variety of 'derived' construction kits (DCKs)

some used in construction of individual organisms, others in construction of new, more complex DCKs, making new types of organism possible.

Moreover, as Schrödinger and others pointed out, construction kits that are essential for micro-organisms developing in one part of the planet can indirectly contribute to construction and maintenance processes in totally different organisms in other locations, via food chains, e.g. because most species cannot synthesise the complex chemicals they need directly from freely available atoms or subatomic materials. So effects of DCKs can be very indirect.

Functional relationships between the smallest life forms and the largest will be composed of many sub-relations.

Such dependency relations apply not only to mechanisms for construction and empowerment of major physical parts of organisms, but also to mechanisms for building information-processors, including brains, nervous systems, and chemical information processors of many sorts.

(E.g. digestion uses informed disassembly of complex structures to find valuable parts to be transported and used or stored elsewhere.)

So far, in answer to Bell (quoted above), I have tried to describe the need for evolutionary selection mechanisms to be supported by enabling mechanisms.

Others have noticed the problem denied by Bell, e.g. Kirschner and Gerhart added some important biological details to the theory of evolved construction-kits, though not (as far as I can tell) the ideas (e.g. about abstraction and parametrisation) presented in this paper.

Work by Ganti and Kauffman is also relevant.

-- and probably others unknown to me!

BIOLOGICAL USES OF ABSTRACTION

As organisms grow in size, weight and strength, the forces and torques required at joints and at contact points with other objects change.

So the genome needs to use the same design with changing forces depending on tasks. Special cases include forces needed to move and manipulate the torso, limbs, gaze direction, chewed objects, etc. 'Hard-wiring' of useful evolved control functions with mathematical properties can be avoided by using designs that allow changeable parameters -- a strategy frequently used by human programmers.

Such parametrisation can both allow for changes in size and shape of the organism as it develops, and for many accidentally discovered biologically useful abstractions that can be parametrised in such designs -- e.g. allowing the same mechanism to be used for control of muscular forces at different stages of development, with changing weights, sizes, moments of inertia, etc.
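A minimal sketch of such a parametrised design, with invented numbers: the control function below is fixed, and only its parameters (mass, limb length) change as the 'organism' grows, so the same design serves different developmental stages.

```python
# Hypothetical illustration of a parametrised design: one reusable
# function; only the parameters change as the organism grows,
# not the design itself. All numbers are invented.

def required_torque(mass_kg, limb_length_m, angular_accel=2.0):
    """Torque needed to swing a limb, modelled crudely as a uniform rod:
    I = (1/3) * m * L**2, torque = I * angular acceleration."""
    inertia = (1.0 / 3.0) * mass_kg * limb_length_m ** 2
    return inertia * angular_accel

# Same design, different developmental parameters (illustrative values):
for stage, (m, l) in {"infant": (0.5, 0.2), "adult": (3.0, 0.6)}.items():
    print(stage, round(required_torque(m, l), 3))
```

Nothing about the function needs rewriting when the limb doubles in length; the 'hard-wiring' is confined to the abstract form, with the changing quantities supplied as parameters.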

Even more spectacular generalisation is achievable by re-use of evolved construction-kits

-- not only across developmental stages of individuals within a species,

-- but also across different species that share underlying physical parametrised design patterns,

-- with details that vary between species sharing the patterns

(as in vertebrates, or the more specialised variations among primates, or among birds, or fish species).

Such shared design patterns across species can result either from species having common ancestry or from convergent evolution 'driven' by common features of the environment,

e.g. re-invention of visual processing mechanisms might be driven by aspects of spatial structures and processes common to all locations on the planet, despite the huge diversity of contents.

Such use of abstraction to achieve powerful re-usable design features across different application domains is familiar to engineers, including computer systems engineers.

'Design sharing' explains why the tree of evolution has many branch points, instead of everything having to evolve from one common root node.

Symbiosis also allows combination of separately evolved features.

Similar 'structure-sharing' often produces enormous reductions in search-spaces in AI systems.

It is also common in mathematics: most proofs build on a previously agreed framework of concepts, formalisms, axioms, rules, and previously proved theorems. They don't all start from some fundamental shared axioms.

If re-usable abstractions can be encoded in suitable formalisms (with different application-specific parameters provided in different design contexts), they can enormously speed up evolution of diverse designs for functioning organisms.

This is partly analogous to the use of memo-functions in software design (i.e. functions that store computed values so that they don't have to be re-computed whenever required, speeding up computations enormously, e.g. in the Fibonacci function).
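The memo-function idea can be shown with the Fibonacci example mentioned above (a standard Python sketch, using the standard library's cache decorator):

```python
from functools import lru_cache

# The memo-function idea: cache previously computed values so they
# need not be re-computed, turning the exponential-time naive
# Fibonacci recursion into a linear-time one.

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))  # -> 102334155, computed almost instantly thanks to the cache
```

Without the cache the same recursion would take billions of calls; with it, each value is computed once and re-used, a software counterpart of evolution re-using earlier 'results'.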

Another type of re-use occurs in (unfortunately named) 'object-oriented' programming paradigms that use hierarchies of powerful re-usable design abstractions, that can be instantiated differently in different combinations, to meet different sets of constraints in different environments, without requiring each such solution to be coded from scratch: 'parametric polymorphism' with multiple inheritance.
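A toy Python sketch of this paradigm (all class names and parameters are invented for illustration): abstract design components are combined by multiple inheritance and instantiated with different parameters, rather than each combination being coded from scratch.

```python
# Toy illustration (all names hypothetical) of re-usable design
# abstractions combined by multiple inheritance, instantiated with
# different parameters per 'species'.

class Locomotion:
    def __init__(self, speed, **kw):
        super().__init__(**kw)   # cooperative initialisation
        self.speed = speed

class Perception:
    def __init__(self, sensor, **kw):
        super().__init__(**kw)
        self.sensor = sensor

class Bird(Locomotion, Perception):   # one combination of abstractions
    pass

class Fish(Locomotion, Perception):   # same abstractions, new parameters
    pass

robin = Bird(speed=12, sensor="binocular vision")
trout = Fish(speed=3, sensor="lateral line")
print(robin.speed, trout.sensor)
```

The hierarchy, not the instances, carries the design knowledge: each 'species' is a cheap instantiation of shared abstractions, which is the point of the analogy with evolved design re-use.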

This is an important aspect of many biological mechanisms. For example, there is enormous variation in what information perceptual mechanisms acquire and how the information is processed, encoded, stored, used, and in some cases communicated. But abstract commonalities of function and mechanism (e.g. use of wings) can be combined with species specific constraints (parameters).

Parametric polymorphism makes the concept of consciousness difficult to analyse: there are many variants depending on what sort of thing is conscious, what it is conscious of, what information is acquired, what mechanisms are used, how the information contents are encoded, how they are accessed, how they are used, etc.

MATHEMATICAL CONSCIOUSNESS

Mathematical consciousness, still missing from AI, requires awareness of possibilities and impossibilities not restricted to particular objects, places or times -- as Kant pointed out.

Mechanisms and functions with mathematical aspects are also shared across groups of species, such as phototropism in plants, use of two eyes with lenses focused on a retina in many vertebrates, a subset of which evolved mechanisms using binocular disparity for 3-D perception.

That's one of many implicit mathematical discoveries in evolved designs for spatio-temporal perceptual, control and reasoning mechanisms. Many forms of animal perception and action occur in 3-D space plus time, a fact that must have helped to drive evolution of mechanisms for representing and reasoning about 2-D and 3-D structures and processes, as in Euclidean geometry.

In a search for effective designs, enormous advantages come from (explicit or implicit) discovery and use of mathematical abstractions that are applicable across different designs or different instances of one design.

For example, a common type of grammar (e.g. a phrase structure grammar) allows many different languages to be implemented, with sentence generators and sentence analysers re-using the same program code with different grammatical rules.
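This re-use of one program with different rules can be made concrete with a toy sketch (both grammars below are invented miniature examples; the generator code is identical for both):

```python
import random

# One generator program, re-used with different phrase structure
# grammars. Both grammars are toy examples invented for illustration.

def generate(grammar, symbol="S"):
    """Expand a symbol using the given rules; terminals are returned as-is."""
    if symbol not in grammar:
        return symbol                      # terminal symbol
    production = random.choice(grammar[symbol])
    return " ".join(generate(grammar, s) for s in production)

english = {"S": [["NP", "VP"]],
           "NP": [["the", "dog"], ["a", "bird"]],
           "VP": [["runs"], ["sings"]]}

toy_latinish = {"S": [["NP", "V"]],
                "NP": [["canis"], ["avis"]],
                "V": [["currit"], ["cantat"]]}

print(generate(english))       # e.g. "the dog runs"
print(generate(toy_latinish))  # e.g. "avis cantat"
```

Only the rule tables differ; the mechanism is shared, which is the sense in which one abstraction supports many languages.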

Evolution seems to have discovered something like this.

Likewise, a common design framework for flying animals may allow tradeoffs between stability and manoeuvrability to be used to adapt to different environmental opportunities and challenges.

These are mathematical discoveries implicitly used by evolution.

Evolution's ability to use these discoveries depends in part on the continual evolution of new DCKs providing materials, tools, and principles that can be used in solving many design and manufacture problems.

In recently evolved species, individuals, e.g. humans and other intelligent animals, are able to replicate some of evolution's mathematical discoveries and make practical use of them in their own intentions, plans and design decisions, far more quickly than natural selection could.

Only (adult) humans seem to be aware of doing this.

Re-usable inherited abstractions allow different collections of members of one species, (e.g. humans living in deserts, in jungles, on mountain ranges, in arctic regions, etc.) to acquire expertise suited to their particular environments in a much shorter time than evolution would have required to produce the same variety of packaged competences 'bottom up'.

This flexibility also allows particular groups to adapt to major changes in a much shorter time than adaptation by natural selection would have required. This requires some later developments in individuals to be delayed until uses of earlier developments have provided enough information about environmental features to influence the ways in which later developments occur, as explained later.

This process is substantially enhanced by evolution of metacognitive information processing mechanisms that allow individuals to reflect on their own processes of perception, learning, reasoning, problem-solving, etc. and (to some extent) modify them to meet new conditions.

Later, more sophisticated products of evolution develop meta-meta-cognitive information processing sub-architectures that enable them to notice their own adaptive processes, and to reflect on and discuss what was going on, and in some cases collaboratively improve the processes,

-- e.g. through explicit teaching

-- at first in a limited social/cultural context, after which the activity was able to spread

-- using previously evolved learning mechanisms.

As far as I know only humans have achieved that, though some other species apparently have simpler variants.

These conjectures need far more research!

The designs for intelligent machines that human AI researchers have created so far seem to have far fewer layers of abstraction, and are far more primitive, than the re-usable designs produced by evolution. Studying the differences is a major sub-task facing the M-M project (and AI).

This requires a deep understanding of what needs to be explained.

DESIGNING DESIGNS

Just as the designer of a programming language cannot know about, and does not need to know about, all the applications for which the programming language will be used, so also can the more abstract products of evolution be instantiated (e.g. by setting parameters) for use in contexts in which they did not evolve.


Many discontinuities in physical forms, behavioural capabilities, environments, types of information acquired, types of use of information and mechanisms for information-processing are still waiting to be discovered.

EVOLUTION OF HUMAN LANGUAGE CAPABILITIES

One of the most spectacular cases is reuse of a common collection of language-creation competences in a huge variety of geographical and social contexts, allowing any individual human to acquire any of several thousand enormously varied human languages, including both spoken and signed languages.

A striking example was the cooperative creation by deaf children in Nicaragua of a new sign language because their teachers had not learned sign languages early enough to develop full adult competences. This suggests that what is normally regarded as language learning is really cooperative language creation, demonstrated in this video:

https://www.youtube.com/watch?v=pjtioIFuNf8

Re-use can take different forms, including

-- re-use of a general design across different species by instantiating a common pattern,

-- re-use based on powerful mechanisms for acquiring and using information about the available resources, opportunities and challenges during the development of each individual.

-- "recursive"(?) reuse: using a general mechanism to get information at one stage in development to provide "parameters" or other structured information required during expression of more abstract genetic information later on -- e.g. providing parameters that influence gene expression (not to be confused with acquisition of data in an already developed learning mechanism). The most striking evidence for this comes from language development, but there are other cases, some documented in Karmiloff-Smith (1992). I think mathematical development is full of unrecognized examples, including "toddler theorems" (Sloman (2013c) [26a]).

One of the implications of this is that since individuals cannot have any conception of the "rewards" that will come much later on from what they do, they need to have forms of motivation that are not reward-based. I call this "architecture-based motivation" (ABM), as opposed to reward-based motivation (RBM); see Sloman (2009).

The first process happens across evolutionary lineages.

The second happens within individual organisms during their lifetimes.

Social/cultural evolution requires intermediate timescales.

Evolution seems to have produced multi-level design patterns, whose details are filled in incrementally, during creation of instances of the patterns in individual members of a species.

If all the members live in similar environments that will tend to produce uniform end results.

However, if the genome is sufficiently abstract, then environments and genomic structures may interact in more complex ways, allowing small variations during development of individuals to cascade into significant differences in the adult organism, as if natural selection had been sped up enormously.

A special case is evolution of an immune system with the ability to develop different immune responses depending on the antigens encountered. Another special case is the recent dramatic cascade of social, economic, and educational changes supported jointly by the human genome and the internet!
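A schematic sketch of such cascaded development (purely illustrative, not a model of any real mechanism): an early data-collection stage parametrises a later abstraction-forming stage, so the same two-stage 'genome' yields different 'adults' in different environments.

```python
# Purely illustrative sketch of cascaded development: an abstract
# pattern is instantiated in stages, with information gathered at an
# earlier stage shaping what the later stage builds.

def stage1_collect(environment):
    """Early loop: gather usable items from the environment
    (here, a toy filter standing in for early learning)."""
    return [item for item in environment if len(item) > 2]

def stage2_abstract(samples):
    """Later gene-triggered process, parametrised by stage-1 results:
    build a crude 'pattern' (here, the set of initial letters)."""
    return {s[0] for s in samples}

def develop(environment):
    """Same 'genome' (the two stages); the environment supplies parameters."""
    return stage2_abstract(stage1_collect(environment))

# Different environments, different 'adults', one shared design:
print(sorted(develop(["aqua", "ice", "pond"])))
print(sorted(develop(["sand", "dune", "ox"])))
```

The point is structural: stage 2 never sees the environment directly, only the products of stage 1, so small early differences can cascade into large later ones.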

CHANGES IN DEVELOPMENTAL TRAJECTORIES

As living things become more complex, increasingly varied types of information are required for increasingly varied uses.

The processes of reproduction normally produce new individuals that have seriously under-developed physical structures and behavioural competences.

Self-development requires physical materials, but it also requires information about what to do with the materials, including disassembling and reassembling chemical structures at a sub-microscopic level and using the products to assemble larger body parts, while constantly providing new materials, removing waste products and consuming energy.

Some energy is stored and some is used in assembly and other processes.

The earliest (simplest?) organisms can acquire and use information about (i.e. sense) only internal states and processes and the immediate external environment, e.g. pressure, temperature, direction of gravity, presence of chemicals in the surrounding soup, and perhaps arrival of photons, with all uses of information taking the form of immediate local reactions, e.g. allowing a molecule through a membrane.

Changes in types of information, types of use of information and types of biological mechanism for processing information have repeatedly altered the processes of evolutionary morphogenesis that produce such changes: a positive feedback process.

An example is the influence of mate selection on evolution in intelligent organisms: mate selection is itself dependent on previous evolution of cognitive mechanisms. Hence the prefix 'Meta-' in 'Meta-Morphogenesis'.

This is a process with multiple feedback loops between new designs and new requirements (niches), as suggested in

ONLINE VS OFFLINE INTELLIGENCE

As the previous figure suggests, evolution constantly produces new organisms that may or may not be larger than predecessors, but are more complex both in the types of physical action they can produce and also the types of information and types of information processing required for selection and control of such actions.

Some of that information is used immediately and discarded (online perceptual intelligence) while other kinds are stored, possibly in transformed formats, and used later, possibly on many occasions (offline perceptual intelligence) -- a distinction often mislabelled as 'where' vs 'what' perception.

This generalises Gibson's theory that perception mainly provides information about 'affordances' rather than information about visible surfaces of perceived objects.

These ideas, like Karmiloff-Smith's Beyond Modularity, suggest that one of the effects of biological evolution was fairly recent production of more or less abstract construction kits that come into play at different stages in development, producing new, more rapid changes in variety and complexity of information processing across generations, as explained below (see Fig 2).

It's not clear how much longer this can continue: perhaps limitations of human brains constrain this process. But humans working with intelligent machines may be able to stretch the limits.

At some much later date, probably in another century, we may be able to make machines that do it all themselves -- unless it turns out that the fundamental information processing mechanisms in brains cannot be modelled in computer technology developed by humans.

Species can differ in the variety of types of sensory information they can acquire, in the variety of uses to which they put that information, in the variety of types of physical actions they can produce, in the extent to which they can combine perceptual and action processes to achieve novel purposes or solve novel problems, and the extent to which they can educate, reason about, collaborate with, compete against conspecifics, and prey or competitor species.

As competences become more varied and complex, information processing must become increasingly disembodied, i.e. disconnected from current sensory and motor signals (while preserving low level reflexes and sensory-motor control loops for special cases).

This may have been a precursor to mathematical abilities to think about transfinite set theory and high dimensional vector spaces or complex modern scientific theories.

E.g. Darwin's own thinking about ancient evolutionary processes was detached from his particular sensory-motor processes at the time! This applies also to affective states, e.g. compare being startled and being obsessed with ambition.
The fashionable emphasis on "embodied cognition" may be appropriate to the study of organisms such as plants and microbes, and perhaps insects, but evolved intelligence increasingly used disembodied cognition, most strikingly in the production of ancient mathematical minds. This led to new complexities in processes of epigenesis (gene-influenced development).


Epigenesis in organisms and in species

Epigenetics is the study of processes and mechanisms by which genes influence the development of individual members of a species.
Figure WAD: Waddington's view of epigenesis -- a ball rolling (passively) down a fixed landscape. [Figure not reproduced here.]


Continued ...

Figure EPI: A more recent picture of epigenesis (beyond Waddington). [Figure not reproduced here.]
Cascaded, staggered, developmental trajectories, with later processes influenced by results of earlier processes in increasingly complex ways. Proposed by Chappell and Sloman 2007[3].

Early genome-driven learning from the environment occurs in loops on the left.
Downward arrows further right represent later gene-triggered processes during
individual development modulated by results of earlier learning via feedback on left.

(Chris Miall suggested the structure of the original diagram.)

Later I'll try to show that a similar diagram represents physical/chemical processes in biological evolution.


VARIATIONS IN EPIGENETIC TRAJECTORIES

The description given so far is very abstract and allows significantly different instantiations in different species, addressing different sorts of functionality and different types of design, e.g. of physical forms, behaviours, control mechanisms, reproductive mechanisms, etc.

At one extreme the reproductive process produces individuals whose genome exercises a fixed pattern of control during development, leading to 'adults' with only minor variations.

At another extreme, instead of the process of development from one stage to another being fixed in the genome, it could be created during development through the use of more than one level of design in the genome.

E.g. if there are two levels then results of environmental interaction at the first level could transform what happens at the second level. If there are multiple levels then what happens at each new level may be influenced by results of earlier developments.

In a species with such multi-stage development, at intermediate stages not only are there different developmental trajectories due to different environmental influences, there are also selections among the intermediate level patterns to be instantiated, so that in one environment development may include much learning concerned with protection from freezing, whereas in other environments individuals may vary more in the ways they seek water during dry seasons.

Then differences in adults come partly from the influence of the environment in selecting patterns to instantiate. E.g. one group may learn and pass on information about where the main water holes are, and in another group individuals may learn and pass on information about which plants are good sources of water.

If these conjectures are correct, patterns of development will automatically be varied because of patterns and meta-patterns picked up by earlier generations and instantiated in cascades during individual development.

So different cultures produced jointly by a genome and previous environments can produce very different expressions of the same genome, even though individuals share similar physical forms.

The main differences are in the kinds of information acquired and used, and the information processing mechanisms developed. Not all cultures use advanced mathematics in designing buildings, but all build on previously evolved understanding of space, time and motion.

Evolution seems to have found how to provide rich developmental variation by allowing information gathered by young individuals not merely to select and use pre-stored design patterns, but to create new patterns by assembling fragments of information during earlier development, then using more abstract processes to construct new abstract patterns, partly shaped by the current environment, but with the power to be used in new environments.

Developments in culture (including language, science, engineering, mathematics, music, literature, etc.) all show such combinations of data collection and enormous creativity, including creative ontology extension (e.g. the Nicaraguan children mentioned above).

Unless I have misunderstood her, this is the type of process Karmiloff-Smith called 'Representational Re-description' (RR).

Genome-encoded previously acquired abstractions 'wait' to be instantiated at different stages of development, using cascading alternations between data-collection and abstraction formation (RR) by instantiating higher level generative abstractions (e.g. meta-grammars), not by forming statistical generalisations.

This could account for both the great diversity of human languages and cultures, and the power of each one, all supported by a common genome operating in very different environments.

Jackie Chappell noticed the implication that instead of the genome specifying a fixed 'epigenetic landscape' (proposed by Waddington) it provides a schematic landscape and mechanisms that allow each individual (or in some cases groups of individuals) to modify the landscape while moving down it (e.g. adding new hills, valleys, channels and barriers).

Though most visible in language development, the process is not unique to language development, but occurs throughout childhood (and beyond) in connection with many aspects of development of information processing abilities, construction of new ontologies, theory formation, etc.

This differs from forms of learning or development that use uniform statistics-based methods for repeatedly finding patterns at different levels of abstraction.

Instead, Figure 2 indicates that the genome encodes increasingly abstract and powerful creative mechanisms developed at different stages of evolution, that are 'awakened' (a notion used by Kant) in individuals only when appropriate, so that they can build on what has already been learned or created in a manner that is tailored to the current environment.

For example, in young (non-deaf) humans, processes giving sound sequences a syntactic interpretation develop after the child has learnt to produce and to distinguish some of the actual speech sounds used in that location.

In social species, the later stages of Figure 2 include mechanisms for discovering non-linguistic ontologies and facts that older members of the community have acquired, and incorporating relevant subsets in combination with new individually acquired information.

Instead of merely absorbing the details of what older members have learnt, the young can absorb forms of creative learning, reasoning and representation that older members have found useful and apply them in new environments to produce new results.

In humans, this has produced spectacular effects, especially in the last few decades.

The evolved mechanisms for representing and reasoning about possibilities, impossibilities and necessities were essential for both perception and use of affordances and for making mathematical discoveries, something statistical learning cannot achieve.

SPACE-TIME

An invariant for all species in this universe is space-time embedding, and changing spatial relationships between body parts and things in the environment.

The relationships vary between water-dwellers, cave-dwellers, tree-dwellers, flying animals, and modern city-dwellers.

Representational requirements depend on body parts and their controllable relationships to one another and other objects.

So aeons of evolution will produce neither a tabula rasa nor geographically specific spatial information, but a collection of generic mechanisms for finding out what sorts of spatial structures have been bequeathed by ancestors as well as physics and geography, and learning to make use of whatever is available (McCarthy[17]): that's why embodiment is relevant to evolved cognition.

Kant's ideas about geometric knowledge are relevant though he assumed that the innate apparatus was geared only to structures in Euclidean space, whereas our space is only approximately Euclidean.

Somehow the mechanisms conjectured in Figure 2 eventually (after many generations) made it possible for humans to make the amazing discoveries recorded in Euclid's Elements, still used world-wide by scientists and engineers.

If we remove the parallel axiom we are left with a very rich collection of facts about space and time, especially topological facts about varieties of structural change, e.g. formation of networks of relationships, deformations of surfaces, and possible trajectories constrained by fixed obstacles.

If we can identify a type of construction-kit that produces young robot minds able to develop or evaluate those ideas in varied spatial environments, we may find important clues about what is missing in current AI.

Long before logical and algebraic notations were used in mathematical proofs, evolution had produced abilities to represent and reason about what Gibson called 'affordances', including possible and impossible alterations to spatial configurations.

Example:
The (topological) impossibility of solid linked rings becoming unlinked, or vice versa.
See also this rubber-band example:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/rubber-bands.html

I suspect brains of many intelligent animals make use of topological reasoning mechanisms that have so far not been discovered by brain scientists or AI researchers.

Addition of meta-cognitive mechanisms able to inspect and experiment with reasoning processes may have led both to enhanced spatial intelligence and meta-cognition, and also to meta-metacognitive reasoning about other intelligent individuals.

OTHER SPECIES

I conjecture that further investigation will reveal varieties of information processing (computation) that have so far escaped the attention of researchers, but which play important roles in many intelligent species, including not only humans and apes but also elephants, corvids, squirrels, cetaceans and others.

In particular, some intelligent non-human animals and pre-verbal human toddlers seem to be able to use mathematical structures and relationships (e.g. partial orderings and topological relationships) unwittingly. Mathematical meta-meta...-cognition seems to be restricted to humans, but develops in stages, as Piaget found, partially confirming the ideas about mathematical knowledge in Kant (1781).

However, I suspect that (as Kant seems to have realised) the genetically provided mathematical powers of intelligent animals make more use of topological and geometric reasoning, using analogical, non-Fregean, representations, as suggested in Sloman (1971), than of the logical, algebraic, and statistical capabilities that have so far dominated AI and robotics.

For example, even the concepts of cardinal and ordinal number are crucially related to concepts of one-one correspondence between components of structures, most naturally understood as a topological relationship rather than a logically definable relationship. See
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#chap8.html
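The role of one-one correspondence can be illustrated with a minimal sketch (mine, not from the text above; the function name is invented): two collections are shown to have the same cardinal purely by pairing items off, without ever counting or using numerals.

```python
# Illustrative sketch: cardinal equality established by one-one
# correspondence (pairing off), with no counting and no numerals.

def same_cardinality(xs, ys):
    """Pair items off one at a time; the cardinals are equal
    iff both collections are exhausted together."""
    xs, ys = list(xs), list(ys)
    while xs and ys:
        xs.pop()   # one step of the pairing-off process:
        ys.pop()   # remove one item from each collection
    return not xs and not ys   # equal iff nothing is left over on either side

print(same_cardinality("abc", [10, 20, 30]))   # True
print(same_cardinality("abcd", [1, 2]))        # False
```

The point of the sketch is that the procedure never uses a number: it uses only the structural relationship of matching, which is what makes the correspondence more like a topological than a logical notion.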

(NB: 'analogical' does not imply 'isomorphic', as often supposed. A typical 2D picture (an analogical representation) of a 3D scene cannot be isomorphic with the scene depicted: a projection that removes some of the relationships is not an isomorphism. There is a deeper distinction between Fregean and analogical forms of representation, concerned with the relationships between a representation and what it represents: see Sloman (1971).)

DISEMBODIMENT OF COGNITION EVOLVES
(Epigenesis of evolutionary mechanisms)
Update: 15 Aug 2017

All this shows why increasing complexity of physical structures and capabilities, providing richer collections of alternatives and more complex internal and external action-selection criteria, requires increasing disembodiment of information processing.

Such transitions occur both in individual development in intelligent species and in evolution of complex organisms.

The fact that evolution is not stuck with the Fundamental Construction Kit (FCK) provided by physics and chemistry, but also produces and uses new 'derived' construction-kits (DCKs), including abstract construction kits needed for intelligent organisms (e.g. grammar construction kits in humans), enhances both the mathematical and the ontological creativity of evolution, which is indirectly responsible for all the other known types of creativity.

Although I have not developed the idea in this paper, the work on construction kits and their essential role in evolution on this planet, suggests that there are weak but important analogies between epigenetic processes in individual humans illustrated in Figure EPI, and some evolutionary processes. In both cases, the development depends on discovery of powerful abstractions ("moving upwards") that can be instantiated in different ways in different species or the same species at different times ("moving downwards"), instead of all evolution being simply "sideways" movement at a fixed level of abstraction in design.

(This distinction is ignored by theories of mathematical discovery by humans that emphasise use of metaphor and analogy instead of use of abstraction and multiple re-instantiation.)

The fact that many evolved construction kits, and their products, depended on natural selection "discovering" enormously powerful re-usable mathematical abstractions, whose re-use involved not just copying, but instantiation of a generic schema with new parameters, illustrates a partial analogy between epigenesis in intelligent organisms and epigenesis in evolution. To that extent I am proposing that evolution needs intelligent design, but all the intelligence used in the design processes was previously produced by evolution.

This counters both the view that mathematics is a product of human minds, and a view of metaphysics as being concerned with something unchangeable.

The notion of 'Descriptive Metaphysics' presented by Strawson (1959) needs to be revised, to include 'Meta-Descriptive Metaphysics'.

DO WE NEED NON-TURING FORMS OF COMPUTATION?

I also conjecture that filling in some of the missing details in this theory (a huge challenge) will help us understand both the evolutionary changes that introduced unique features of human minds and why it is not obvious that Turing-equivalent digital computers, or even asynchronous networks of such computers running sophisticated interacting virtual machines, will suffice to replicate the human mathematical capabilities that preceded modern logic, algebra, set-theory, and theory of computation.

It will all depend on the precise forms of virtual information processing machinery that evolution has managed to produce, about which I suspect current methods of neuroscientific investigation cannot yield deep information.

Current AI mechanisms (including deep learning mechanisms) cannot produce reasoners like Euclid, Zeno, Archimedes, or even reasoners like pre-verbal toddlers, weaver birds and squirrels. Some of the reasons have been indicated above.

This indicates serious gaps in current AI, despite many impressive achievements. I see no reason to believe that uniform, statistics based learning mechanisms will have the power to bridge those gaps: in particular it is impossible for statistical reasoning to establish necessary truths.

WHAT ABOUT LOGIC?

Whether increased sophistication of logic-based reasoners will suffice (as suggested by McCarthy and Hayes (1969)) is not clear.

The discoveries made by ancient mathematicians preceded the discoveries of modern algebra and logic, and the arithmetisation of geometry by Descartes. So they were definitely not consciously using only logical/algebraic reasoning.

Evolved mechanisms that use previously acquired abstract forms of meta-learning with genetically orchestrated instantiation triggered by developmental changes (as in the above diagram), may do much better.

Those mechanisms depend on rich internal languages that evolved for use in perception, reasoning, learning, intention formation, plan formation and control of actions before communicative languages.

This generalises claims made in Chomsky (1965) and his later works, which focused only on the development of human spoken languages, ignoring how much language and non-linguistic cognition develop with mutual support.

THE IMPORTANCE OF VIRTUAL MACHINERY

Building a new computer for every task was made unnecessary by allowing computers to have changeable programs.

Initially each program, specifying instructions to be run, had to be loaded (via modified wiring, switch settings, punched cards, or punched tape), but later developments provided more and more flexibility and generality, with higher level programming languages providing reusable domain specific languages and tools, some translated to machine code, others run on a task specific virtual computer provided by an interpreter.
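The idea of a "task specific virtual computer provided by an interpreter" can be sketched in a few lines (an illustrative toy of my own, not any historical system): the host machine knows nothing of the instructions below; they exist only as virtual machinery implemented by the interpreter.

```python
# Minimal sketch of an interpreter providing a task-specific virtual
# computer: a tiny stack machine whose instruction set ("push", "add",
# "mul") is not part of the physical hardware at all.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])        # place a value on the stack
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # replace top two values by their sum
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)          # replace top two values by their product
        else:
            raise ValueError("unknown instruction: " + op)
    return stack.pop()

# (2 + 3) * 4, executed by the virtual machine, not directly by hardware:
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # 20
```

Nothing in the physical computer corresponds directly to "push" or "mul"; they are states and transitions of a virtual machine, which is the point of the analogy developed below.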

Later developments provided time-sharing operating systems supporting multiple interacting programs running effectively in parallel performing different, interacting, tasks on a single processor.

As networks developed, these collaborating virtual machines became more numerous, more varied, more geographically distributed, and more sophisticated in their functionality, often extended with sensors of different kinds and attached devices for manipulation, carrying, moving, and communicating.

These developments suggest the possibility that each biological mind is also implemented as a collection of concurrently active nonphysical, but physically implemented, virtual machines interacting with one another and with the physical environment through sensor and motor interfaces.

Such 'virtual machine functionalism' could accommodate a large variety of coexisting, interacting, cognitive, motivational and emotional states, including essentially private qualia as explained by Sloman and Chrisley (2003).

Long before human engineers produced such designs, biological evolution had already encountered the need and produced virtual machinery of even greater complexity and sophistication, serving information processing requirements for organisms, whose virtual machinery included interacting sensory qualia, motivations, intentions, plans, emotions, attitudes, preferences, learning processes, and various aspects of self-consciousness.

THE FUTURE OF AI

We still don't know how to make machines able to replicate the mathematical insights of ancient mathematicians like Euclid e.g. with 'triangle qualia' that include awareness of mathematical possibilities and constraints, or minds that can discover the possibility of extending Euclidean geometry with the neusis construction. For discussion of roles of 'triangle qualia' in discoveries made by ancient mathematicians see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
The use of the "neusis" construction to trisect an arbitrary angle is explained in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/neusis.html

NOTE
It is not clear whether we simply have not been clever enough at understanding the problems and developing the programs, or whether we need to extend the class of virtual machines that can be run on computers, or whether the problem is that animal brains use kinds of virtual machinery that cannot be implemented using the construction kits known to modern computer science and software engineering. As Turing hinted in his 1950 paper: aspects of chemical computation may be essential.

Biological organisms also cannot build such minds directly from atoms and molecules. They need many intermediate DCKs, some of them concrete and some abstract, insofar as some construction kits, like some animal minds, use virtual machines.

Evolutionary processes must have produced construction kits for abstract information processing machinery supporting increasingly complex multi-functional virtual machines, long before human engineers discovered the need for such things and began to implement them in the 20th Century.

Studying such processes is very difficult because virtual machines don't leave fossils (though some of their products do). Moreover details of recently evolved virtual machinery may be at least as hard to inspect as running software systems without built-in run-time debugging 'hooks'. This could, in principle, defeat all known brain scanners.

'Information' here is not used in Shannon's sense (concerned with mechanisms and vehicles for storage, encoding, transmission, decoding, etc.), but in the much older sense familiar to Jane Austen and used in her novels e.g. Pride and Prejudice, in which how information content is used is important, not how information bearers are encoded, stored, transmitted, received, etc. The primary use of information is for control.

Communication, storage, reorganisation, compression, encryption, translation, and many other ways of dealing with information are all secondary to the use for control. Long before humans used structured languages for communication, intelligent animals must have used rich languages with structural variability and compositional semantics internally, e.g. in perception, reasoning, intention formation, wondering whether, planning and execution of actions, and learning.

We can search for previously unnoticed evolutionary transitions going beyond the examples here (e.g. Figure 1), e.g. transitions between organisms that merely react to immediate chemical environments in a primaeval soup, and organisms that use temporal information about changing concentrations in deciding whether to move or not.
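That transition can be caricatured in code (an illustrative sketch with invented names, not a biological model): the purely reactive rule uses only the immediate concentration, while the second rule uses temporal information, i.e. the change in concentration over time, loosely analogous to bacterial "run and tumble" behaviour.

```python
# Illustrative sketch of the evolutionary transition described above:
# from reacting only to the immediate chemical environment, to using
# temporal information about changing concentrations.

def reactive(concentration):
    """Reacts only to the immediate value: move if food is scarce here."""
    return "move" if concentration < 0.5 else "stay"

def temporal(previous, current):
    """Uses change over time: keep going while the concentration
    is rising, otherwise change direction (cf. run-and-tumble)."""
    return "run" if current > previous else "tumble"

print(reactive(0.3))       # move: local concentration is low
print(temporal(0.3, 0.6))  # run: concentration is rising
print(temporal(0.6, 0.3))  # tumble: concentration is falling
```

The second rule requires memory of at least one earlier state, i.e. a richer information-processing architecture, which is exactly the kind of unnoticed transition the project aims to catalogue.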

Another class of examples seems to be the new mechanisms required after the transition from a liquid based life form to life on a surface with more stable structures (e.g. different static resources and obstacles in different places), or a later transition to hunting down and eating mobile land-based prey, or transitions to reproductive mechanisms requiring young to be cared for, etc. Perhaps we'll then understand how to significantly extend AI.

Compare Schrödinger's discussion in [19] of the relevance of quantum mechanisms and chemistry to the storage, copying, and processing of genetic information. I am suggesting that questions about evolved intermediate forms of information processing are linked to philosophical questions about the nature of mind, the nature of mathematical discovery, and deep gaps in current AI.

NOTES:
Boden [2] distinguishes H-Creativity, which involves being historically original, and P-Creativity, which requires only personal originality. The distinction is echoed in the phenomenon of convergent evolution, illustrated in
https://en.wikipedia.org/wiki/List%20of%20examples%20of%20convergent%20evolution
The first species with some design solution exhibits H-creativity of evolution. Species in which that solution evolves independently later exhibit a form of P-creativity.

Why did Turing write in his 1950 paper that chemistry may turn out to be as important as electricity in brains?

NOTES ON MATHEMATICAL DISCOVERY

(Skipped in recorded presentation.)
This work started before I heard about Artificial Intelligence or learnt to program. After a degree in mathematics and physics at Cape Town, I came to Oxford in October 1957, intending to do research in mathematics (after further general study). Because I did not like some of the compulsory mathematics courses (e.g. fluid dynamics) I transferred from mathematics to Logic with Hao Wang as my supervisor, and became friendly with philosophy graduate students, with whom I used to argue. This eventually caused me to transfer to Philosophy. I am still trying to answer the questions about mathematical knowledge that drove me at that time.

The philosophers I met (mostly philosophy research students) were mistaken about the nature of mathematical discovery as I had experienced it while doing mathematics. E.g. some of them accepted David Hume's categorisation of claims to knowledge, which seemed to me to ignore important aspects of mathematical discovery.


  1. Hume's first category was "abstract reasoning concerning quantity or number", also expressed as knowledge "discoverable by the mere operation of thought". This was sometimes thought to include all "trivial knowledge" consisting only of relations between our ideas, for example, "All bachelors are unmarried". Kant labelled this category of knowledge "Analytic".

    It is sometimes specified as knowledge that can be obtained by starting from definitions of words and then using only pure logical reasoning, e.g.
    "No bachelor uncle is an only child".

  2. Hume's second category was empirical knowledge gained, and tested, by making observations and measurements i.e. "experimental reasoning concerning matter of fact and existence". This would include much common sense knowledge, scientific knowledge, historical knowledge, etc.

  3. His third category was everything that could not fit into either the first or second. He described the residue as "nothing but sophistry and illusion", urging that all documents claiming such knowledge should be "committed to the flames". I assume he was thinking mainly of metaphysics and theology.
Warning: I am not a Hume scholar. For more accurate and more detailed summaries of his ideas, search online, e.g.
     https://en.wikipedia.org/wiki/David_Hume
     https://plato.stanford.edu/entries/hume/
The philosophers I met seemed to believe that all mathematical knowledge was in Hume's first category and was therefore essentially trivial. (My memory is a bit vague about 60 year old details.)

But I knew from my own experience of doing mathematics that mathematical knowledge did not fit into any of these categories: it was closest to the first category, but was not trivial, and did not come only from logical deductions from definitions.

I then discovered that Immanuel Kant had criticised Hume for not allowing a category of knowledge that more accurately characterised mathematical knowledge, in his 1781 book, "Critique of Pure Reason".

But the philosophers thought Kant's ideas about mathematical knowledge being non-trivial and non-empirical were mistaken because he took knowledge of Euclidean geometry as an example. They thought Kant had been proved wrong when Einstein and Eddington showed that space was not Euclidean, by demonstrating the curvature of light rays passing close to the sun:
https://en.wikipedia.org/wiki/Euclidean_geometry#20th_century_and_general_relativity

This argument against Kant was misguided for several reasons. In particular it merely showed that human mathematicians could make mistakes, e.g. by thinking that 2D and 3D spaces were necessarily Euclidean.

In a Euclidean plane, if P is any point and L any straight line that does not pass through P, there will be exactly one straight line through P in the plane that never intersects L. I.e. there is a unique line through P parallel to L.

However, before Einstein's work, mathematicians had already discovered that not all spaces are Euclidean: there are kinds of space in which the parallel axiom is false (elliptical and hyperbolic spaces). If Kant had known this, I am sure he would have changed the examples that assumed the parallel axiom. Removing it leaves enough rich and deep mathematical content to illustrate Kant's claims, including the mathematical discovery that Euclidean geometry without the parallel axiom is consistent with both Euclidean and non-Euclidean spaces: as good an example of a non-analytic necessary truth as any Kant presented.

He could have used the discovery that Euclidean geometry without the parallel axiom could be extended in three different ways with very different consequences as one of his examples of a mathematical discovery that is not derivable from definitions by logic, and is a necessary truth, and can be discovered by mathematical thinking, and does not need empirical tests at different locations, altitudes, or on different planets, etc.
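The three ways of extending the common core can be summarised in modern terminology, using Playfair's form of the parallel axiom: given a line L and a point P not on L, the number of coplanar lines through P that never meet L distinguishes the three geometries.

```latex
% Playfair's form of the parallel axiom and its alternatives:
\[
  \#\{\, M : P \in M,\ M \cap L = \emptyset \,\} \;=\;
  \begin{cases}
    1      & \text{Euclidean geometry}\\
    0      & \text{elliptic geometry}\\
    \infty & \text{hyperbolic geometry}
  \end{cases}
\]
```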

In 1962 I completed my DPhil thesis defending Kant, now online: Sloman (1962).

I went on to become a lecturer in philosophy, but I was left feeling that my thesis did not answer all the questions, and something more needed to be done. So when Max Clowes, a pioneering AI vision researcher, came to Sussex University and introduced me to AI and programming, I was eventually persuaded to try to show how AI could support Kant, by demonstrating how to build a "baby robot" that "grows up" to make new mathematical discoveries in roughly the manner that Kant had described, including replicating some of the discoveries of ancient mathematicians like Archimedes, Euclid and Pythagoras.
---------------------------------------
     Max Clowes died in 1981. A tribute to him with annotated bibliography is here.
     http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#61
---------------------------------------

This would require a form of learning totally different from both logical derivation from explicit definitions and axioms, and statistics-based learning of probabilities.

The latter methods are logically incapable of demonstrating truths of mathematics, which are concerned with necessities and impossibilities, not mere probabilities.

(Including some that human toddlers and intelligent non-human species seem able to discover, even if unwittingly, as I have tried to demonstrate, e.g. in a partial survey of what I now call "toddler theorems": Sloman (2013c).)

Part of my argument in the thesis, inspired by Kant, was that intelligent robots, like intelligent humans, needed forms of mathematical reasoning that were not restricted to use of logical derivations from definitions, and were also different from empirical reasoning based on experiment and observation.

Encouraged by Max Clowes I published a paper (at IJCAI 1971) that challenged the "logicist" approach to AI proposed by John McCarthy, one of the founders of AI, as presented in McCarthy and Hayes (1969). My critique of logicism, emphasising the heuristic benefits of "analogical" representations, is Sloman (1971).

As a result I was invited to spend a year (1972-3) doing research in AI at Edinburgh University. I hoped it would be possible to use AI to defend Kant's philosophical position by showing how to build a "baby robot" without mathematical knowledge, that could grow up to be a mathematician in the same way as human mathematicians did, including, presumably, the great ancient mathematicians, who knew nothing about modern logic or formal axiom-based systems of reasoning (like Peano's axioms for arithmetic), and did not know that geometry could be modelled in arithmetic, as Descartes later showed.

I published a sort of "manifesto" about this in 1978 (The Computer Revolution in Philosophy, freely available online, with additional notes and comments.)

The task turned out to be much more difficult than I had expected and now nearly 40 years later, after doing a lot of work in AI, including a lot of work on architectures for intelligent agents,
     http://www.cs.bham.ac.uk/research/projects/cogaff/
a toolkit for exploring alternative agent architectures,
     http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
work on requirements for human-like vision systems, and many related topics, I am still puzzled about exactly what is missing from AI.

Since 2012, as explained later, I have been trying to fill the gaps by means of the Turing-inspired Meta-Morphogenesis project, a very difficult long term project, which I suspect Alan Turing was thinking about in the years before he died, in 1954.

In parallel with this I am trying to analyse the forms of reasoning required for the ancient mathematical discoveries in geometry and topology (illustrated below), with the aim eventually of specifying detailed requirements for a machine to make such discoveries. That may give new clues regarding how animal brains work.

RELATED

An extended abstract for a closely related invited talk at the AISB Symposium on computational modelling of emotions is also available online at:
http://www.cs.bham.ac.uk/research/projects/cogaff/aisb17-emotions-sloman.pdf


REFERENCES
To be re-formatted ... one day.

[1] Graham Bell, Selection: The Mechanism of Evolution, Second Edition, OUP, 2008.

[2] M. A. Boden, The Creative Mind: Myths and Mechanisms, Weidenfeld & Nicolson, London, 1990. (Second edition, Routledge, 2004).

[Pythag] Alexander Bogomolny, (2017) Pythagorean Theorem and its many proofs from Interactive Mathematics Miscellany and Puzzles
http://www.cut-the-knot.org/pythagoras/index.shtml
Accessed 15 August 2017

[3] Jackie Chappell and Aaron Sloman, "Natural and artificial metaconfigured altricial information-processing systems", International Journal of Unconventional Computing, 3(3), 221-239, (2007). http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#717

[4] N. Chomsky, 1965, Aspects of the theory of syntax, MIT Press, Cambridge, MA.

Shang-Ching Chou, Xiao-Shan Gao and Jing-Zhong Zhang, 1994, Machine Proofs In Geometry: Automated Production of Readable Proofs for Geometry Theorems, World Scientific, Singapore,
http://www.mmrc.iss.ac.cn/~xgao/paper/book-area.pdf

[5] Juliet C. Coates, Laura A. Moody, and Younousse Saidi, "Plants and the Earth system - past events and future challenges", New Phytologist, 189, 370-373, (2011).

[6] Alan Turing - His Work and Impact, eds., S. B. Cooper and J. van Leeuwen, Elsevier, Amsterdam, 2013. (contents list).

D. C. Dennett, 1978 Brainstorms: Philosophical Essays on Mind and Psychology. MIT Press, Cambridge, MA.

D. C. Dennett, 1995, Darwin's Dangerous Idea: Evolution and the Meanings of Life, Penguin Press, London and New York,

D.C. Dennett, 1996
Kinds of minds: towards an understanding of consciousness,
Weidenfeld and Nicholson, London, 1996,
http://www.amazon.com/Kinds-Minds-Understanding-Consciousness-Science/dp/0465073514

[7] T. Froese, N. Virgo, and T. Ikegami, "Motility at the origin of life: Its characterization and a model", Artificial Life, 20(1), 55-76, (2014).

[8] Tibor Ganti, 2003 The Principles of Life, OUP, New York, Eds. Eors Szathmary & James Griesemer, Translation of the 1971 Hungarian edition.

H. Gelernter, 1964, Realization of a geometry-theorem proving machine, in Computers and Thought, Eds. Feigenbaum, Edward A. and Feldman, Julian, pp. 134-152, McGraw-Hill, New York, Re-published 1995 (ISBN 0-262-56092-5),
http://dl.acm.org/citation.cfm?id=216408.216418
Also at
https://pdfs.semanticscholar.org/2edc/8083073837564306943aab77d6dcc19d0cdc.pdf

[9] J. J. Gibson, The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA, 1979.

[10] M. M. Hanczyc and T. Ikegami, 'Chemical basis for minimal cognition', Artificial Life, 16, 233-243, (2010).

[11] John Heslop-Harrison, New concepts in flowering-plant taxonomy, Heinemann, London, 1953.

[12] Immanuel Kant, Critique of Pure Reason, Macmillan, London, 1781. Translated (1929) by Norman Kemp Smith.
Various online versions are also available now.

[13] A. Karmiloff-Smith, Beyond Modularity: A Developmental Perspective on Cognitive Science, MIT Press, Cambridge, MA, 1992.

[14] S. Kauffman, At home in the universe: The search for laws of complexity, Penguin Books, London, 1995.

[15] M.W. Kirschner and J.C. Gerhart, The Plausibility of Life: Resolving Darwin's Dilemma, Yale University Press, Princeton, 2005.

[16] D. Kirsh, "Today the earwig, tomorrow man?", Artificial Intelligence, 47(1), 161-184, (1991).

I. Lakatos, 1976, Proofs and Refutations, Cambridge University Press, Cambridge, UK,

[17a] John McCarthy and Patrick J. Hayes, 1969, "Some philosophical problems from the standpoint of AI", Machine Intelligence 4, Eds. B. Meltzer and D. Michie, pp. 463--502, Edinburgh University Press,
http://www-formal.stanford.edu/jmc/mcchay69/mcchay69.html

[17] J. McCarthy, "The well-designed child", Artificial Intelligence, 172(18), 2003-2014, (2008).

[17b] Nathaniel Miller, 2007, Euclid and His Twentieth Century Rivals: Diagrams in the Logic of Euclidean Geometry, Center for the Study of Language and Information, Stanford Studies in the Theory and Applications of Diagrams,
https://web.stanford.edu/group/cslipublications/cslipublications/site/9781575865072.shtml

[18] W. T. Powers, Behavior, the Control of Perception, Aldine de Gruyter, New York, 1973.

[18a] Sakharov (2003ff)
Foundations of Mathematics (Online References)
Alexander Sakharov, with contributions by Bhupinder Anand, Harvey Friedman, Haim Gaifman, Vladik Kreinovich, Victor Makarov, Grigori Mints, Karlis Pdnieks, Panu Raatikainen, Stephen Simpson,
"This is an online resource center for materials that relate to foundations of mathematics (FOM). It is intended to be a textbook for studying the subject and a comprehensive reference. As a result of this encyclopedic focus, materials devoted to advanced research topics are not included. The author has made his best effort to select quality materials on www."
http://sakharov.net/foundation.html
NOTE: some of the links to other researchers' web pages are out of date, but in most cases a search engine should take you to the new location.

[19] Erwin Schrödinger, What is life?, CUP, Cambridge, 1944.
Commented extracts available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life.html

[20] A. Sloman, 1962, Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth (DPhil Thesis), PhD. dissertation, Oxford University, (now online)
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-1962

[21] A. Sloman, 1971, "Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence", in Proc 2nd IJCAI, pp. 209--226, London. William Kaufmann. Reprinted in Artificial Intelligence, vol 2, 3-4, pp 209-225, 1971.
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#1971-02
An expanded version was published as chapter 7 of Sloman 1978, available here.

[22] A. Sloman, 1978 The Computer Revolution in Philosophy, Harvester Press (and Humanities Press), Hassocks, Sussex.
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#crp

[23] A. Sloman, (2000) "Interacting trajectories in design space and niche space: A philosopher speculates about evolution", in Parallel Problem Solving from Nature (PPSN VI), eds. M. Schoenauer, et al. Lecture Notes in Computer Science, No 1917, pp. 3-16, Berlin, (2000). Springer-Verlag.

[24] A. Sloman and R.L. Chrisley, (2003) "Virtual machines and consciousness", Journal of Consciousness Studies, 10(4-5), 113-172.

[24a] A. Sloman, 2009, "Architecture-Based Motivation vs Reward-Based Motivation", Newsletter on Philosophy and Computers, American Philosophical Association, 09(1), pp. 10-13, Newark, DE, USA
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/architecture-based-motivation.html

[25] A. Sloman, 2013a, "Virtual Machine Functionalism (The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness)", Research note, School of Computer Science, The University of Birmingham.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-func.html

[26] A. Sloman, 2013b, "Virtual machinery and evolution of mind (part 3) Meta-morphogenesis: Evolution of information-processing machinery", in Alan Turing - His Work and Impact, eds., S. B. Cooper and J. van Leeuwen, 849-856, Elsevier, Amsterdam.
http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1106d

[26a] A. Sloman, (2013c), Meta-Morphogenesis and Toddler Theorems: Case Studies, Online discussion note, School of Computer Science, The University of Birmingham, http://goo.gl/QgZU1g

[27] A. Sloman (2015). What are the functions of vision? How did human language evolve? Online research presentation.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk111

[27a] A. Sloman 2017, "Construction kits for evolving life (Including evolving minds and mathematical abilities.)" Technical report (work in progress)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html

An earlier version, frozen during 2016, was published in a Springer Collection in 2017:
https://link.springer.com/chapter/10.1007%2F978-3-319-43669-2_14
in The Incomputable: Journeys Beyond the Turing Barrier
Eds: S. Barry Cooper and Mariya I. Soskova
https://link.springer.com/book/10.1007/978-3-319-43669-2

[28] A. Sloman and David Vernon. A First Draft Analysis of some Meta-Requirements for Cognitive Systems in Robots, 2007. Contribution to euCognition wiki.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-requirements.html

[29] P. F. Strawson, Individuals: An essay in descriptive metaphysics, Methuen, London, 1959.

[29a] Max Tegmark, 2014, Our mathematical universe, my quest for the ultimate nature of reality, Knopf (USA) Allen Lane (UK), (ISBN 978-0307599803/978-1846144769)

[30] A. M. Turing, "Computing machinery and intelligence", Mind, 59, 433-460, (1950). (Reprinted in E.A. Feigenbaum and J. Feldman (eds) Computers and Thought, McGraw-Hill, New York, 1963, 11-35.)

[31] A. M. Turing, (1952) "The Chemical Basis Of Morphogenesis", Phil. Trans. Royal Soc. London B, 237, 37-72.

Note: A presentation of Turing's main ideas for non-mathematicians can be found in
Philip Ball, 2015, "Forging patterns and making waves from biology to geology: a commentary on Turing (1952) `The chemical basis of morphogenesis'",
http://dx.doi.org/10.1098/rstb.2014.0218

[32] C. H. Waddington, The Strategy of the Genes. A Discussion of Some Aspects of Theoretical Biology, George Allen & Unwin, 1957.

[33] R. A. Watson and E. Szathmary, "How can evolution learn?", Trends in Ecology and Evolution, 31(2), 147-157, (2016).


Revised versions will be available here, and possibly also pdf later
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html

A partial index of discussion notes is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html


Installed: 10 Aug 2017
Last updated:

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham