
An introduction to the Meta-Morphogenesis project
How can a cloud of dust give rise to a planet full of life and mental activity -- including mathematics?
Abstract for talk to Leeds logic seminar, Wed 31st October 2012
(DRAFT: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham.
(Officially retired philosopher/cognitive scientist in a Computer Science department)

Installed: 14 Oct 2012
Last updated: 16 Oct 2012


I'll introduce a discussion of a subset of the following ideas, depending on the interests of the audience.
How can a cloud of dust, or a planet formed by collapse of a cloud of dust, produce
so much life, including the mental activities of microbes, mice, monkeys, musicians,
megalomaniacs and mathematicians, in as little as 4.54 billion years?

In his 1952 paper on morphogenesis Turing tried to explain how sub-microscopic
molecular patterns could produce visible patterns such as stripes and spots on a
developed organism. We can generalise his question by re-formulating the old
question: how can current life forms, and their activities, including human mental
processes, be produced by initially lifeless matter? Many have tried to address this
by seeking evidence for changes in physical structures and physical/chemical
behaviours produced by natural selection acting on a vast amount of chemical matter.
If Turing had lived longer he might have asked: What collection of changes in
information processing mechanisms would have been required and how could they
have come to exist? I suspect he would not have claimed that the processes could be
replicated on a single Turing machine. Moreover, it is clear that the mechanisms for
producing new forms of information processing have themselves been changed --
including new forms of reproduction, learning, development, cultural change, and
"unnatural selection" mechanisms such as mate-selection and animal and plant
breeding. The meta-morphogenesis project seeks to identify such changes and the
processes and mechanisms that drove them, as explained in:
I suspect some of the transitions require mechanisms that can "hallucinate" discrete
structures onto continuous structures and processes in order to reason about
infinitely many cases in a finite way, e.g. using partial ordering information.
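The core idea of Turing's 1952 paper can be illustrated with a short simulation sketch. The code below is my illustration, not Turing's own equations: it uses an invented linearised reaction pair (u' = u - v, v' = 3u - 2v) chosen so that a uniform chemical mixture is stable when the chemicals only react in place, yet spontaneously develops a spatial pattern once they diffuse at sufficiently different rates -- the "diffusion-driven instability" behind stripes and spots.

```python
# Sketch of Turing's diffusion-driven instability on a 1-D ring of "tissue".
# The reaction terms are illustrative, not from the 1952 paper.
import random

N = 64        # cells on the ring
DT = 0.1      # time step
STEPS = 500   # total simulated time = 50 units

def simulate(Du, Dv):
    """Explicit Euler integration of a linearised reaction-diffusion pair."""
    random.seed(1)
    u = [random.uniform(-1e-3, 1e-3) for _ in range(N)]  # small random noise
    v = [random.uniform(-1e-3, 1e-3) for _ in range(N)]
    for _ in range(STEPS):
        un, vn = u[:], v[:]
        for i in range(N):
            lap_u = u[i - 1] + u[(i + 1) % N] - 2 * u[i]  # diffusion term
            lap_v = v[i - 1] + v[(i + 1) % N] - 2 * v[i]
            un[i] = u[i] + DT * (Du * lap_u + u[i] - v[i])
            vn[i] = v[i] + DT * (Dv * lap_v + 3 * u[i] - 2 * v[i])
        u, v = un, vn
    return u

flat = simulate(0.0, 0.0)    # no diffusion: the noise dies away
spots = simulate(0.05, 1.0)  # unequal diffusion rates: a pattern grows

print(max(abs(x) for x in flat))   # tiny: the uniform state is stable
print(max(abs(x) for x in spots))  # large: a pattern has emerged
```

The point of the contrast is Turing's surprise: diffusion, usually a smoothing process, is here precisely what destabilises the uniform state. (A full model would add nonlinear terms to stop the pattern growing without bound.)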

Some of the major transitions in biological information processing seem to be closely
connected with deep philosophical problems raised by Immanuel Kant (Critique of Pure
Reason, 1781), including problems about how it is possible to acquire and use
information about:

 -- extended spatial structures on various scales (compare SLAM techniques in robotics)
 -- environmental contents not accessible by sensory mechanisms
    (and not definable in terms of sensory-motor statistics, e.g. properties of
    matter like rigidity, elasticity, liquidity, chemical composition)
 -- information about possibilities and impossibilities (necessities)
 -- meta-semantic information about information and information-users
 -- various kinds of self-knowledge, including meta-meta-knowledge
 -- the contents of and causal interactions within virtual machinery
    (apparently produced by evolution long before human engineers, from the mid 20th
    century onwards, began to understand the need for and uses of virtual machinery
    with causal powers)
 -- the particular forms of meta-cognition involved in mathematical discovery

(More examples will be added in:

I'll introduce the general project and focus on some conjectures about overlaps
between mechanisms originally used (pre-historically) to produce the mathematical
knowledge accumulated in Euclid's Elements and mechanisms involved in non-human
animal intelligence and types of discovery pre-verbal children can make ("toddler
theorems"), which I think have unnoticed connections with J. J. Gibson's claim that a
major function of perception is discovery of affordances.

Some examples of actual or potential toddler theorems (and a bit beyond toddlers)
are presented here:
 -- discoveries about triangles, based on playing with diagrams
 -- discoveries about prime numbers, based on playing with blocks
I suspect the mechanisms involved in discovering "toddler theorems" are closely
related to what the neuro-developmental psychologist Annette Karmiloff-Smith refers
to as "representational redescription" in "Beyond Modularity".
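The primes-and-blocks discovery mentioned above can be made concrete in a short sketch (my illustration, not code from the page cited): a number of blocks is composite exactly when the blocks can be arranged into a rectangle with both sides at least two, while a prime number of blocks can only ever form a single row.

```python
# "Primes with blocks": n blocks form a proper rectangle iff n is composite.
def rectangles(n):
    """All ways to arrange n blocks into an r-by-c rectangle with 2 <= r <= c."""
    return [(r, n // r) for r in range(2, int(n ** 0.5) + 1) if n % r == 0]

def is_prime_by_blocks(n):
    """A child's criterion: prime numbers of blocks only make a single row."""
    return n >= 2 and not rectangles(n)

for n in range(2, 13):
    shapes = rectangles(n)
    print(n, "prime (only a row)" if not shapes else f"rectangles: {shapes}")
```

Of course the child's insight goes beyond the program: seeing *why* no rearrangement of, say, 7 blocks can succeed is a grasp of impossibility, not the result of an exhaustive search.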

Some researchers mistakenly think that developments in physical simulation
mechanisms, e.g. used in game engines, give computers the required powers of
geometrical reasoning. It is easy to show that this is a mistake. Less obviously,
developments in qualitative spatial reasoning (e.g. work in Leeds by Tony Cohn) seem
to be relevant, but I am not sure the required forms of reasoning about possibilities
and constraints have been modelled, especially exact (non-probabilistic)
reasoning about invariants of classes of processes, often requiring "controlled
hallucination", e.g. use of construction lines and trajectories in Euclidean proofs
(e.g. in the primes and triangle examples above). I see no reason to believe that
the forms of mathematical metacognition required need support from quantum
computation, as suggested by Roger Penrose here.

Many toddler theorems are discovered before children have developed the meta-semantic
and meta-cognitive capabilities required to be aware of what they have learnt or to
be able to communicate it to others. So investigating such learning is a task with
severe methodological problems, especially as the processes seem to be highly
idiosyncratic and unpredictable, ruling out standard experimental and statistical
methods.

[*] Some very sketchy theoretical ideas about the nature-nurture issues related to toddler
theorems are presented in this paper published in IJUC in 2007:
    Jackie Chappell and Aaron Sloman
    Natural and artificial meta-configured altricial information-processing systems


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham