School of Computer Science, The University of Birmingham, CoSy project

For an invited talk at the Symposium on
Computational Approaches to Representation Change During Learning and Development


At AAAI 2007 Fall Symposia
November 9-11 2007

Diversity of Developmental Trajectories in Natural and Artificial Intelligence
Aaron Sloman
(In collaboration with Jackie Chappell and the CoSy project team.)
Last updated: 18 Oct 2007

The paper for the proceedings is available as a PDF.


Tabula Rasa or Something Else?
It may be of interest to see what a robot can learn in various
environments when it is given no innate knowledge about its environment,
its sensors, or its effectors, and only a totally general learning
mechanism, such as reinforcement learning or some information-reduction
algorithm. However, it is clear that that is not how biological
evolution designs animals, as McCarthy states:

    Evolution solved a different problem than that of starting a baby
    with no a priori assumptions.


    Instead of building babies as Cartesian philosophers taking nothing
    but their sensations for granted, evolution produced babies with innate
    prejudices that correspond to facts about the world and babies'
    positions in it. Learning starts from these prejudices. What is the
    world like, and what are these instinctive prejudices?

    John McCarthy, 'The Well Designed Child' (1999)
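To make the 'tabula rasa' baseline above concrete: a learner with no innate model of its world, sensors, or effectors, equipped only with a general mechanism such as reinforcement learning, can be sketched as follows. The two-state toy world and all parameter values are invented purely for illustration:

```python
import random

# A toy two-state world the agent knows nothing about in advance:
# from state 0, action 1 reaches the goal (reward 1); anything else
# leaves the agent at the start with reward 0.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

N_STATES, N_ACTIONS = 2, 2
# no innate knowledge: all value estimates start at zero
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)
for episode in range(200):
    state = 0
    for _ in range(10):
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q[state][a])
        nxt, reward = step(state, action)
        # the totally general update rule: no domain-specific assumptions
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print(max(range(N_ACTIONS), key=lambda a: q[0][a]))  # learned best action in state 0
```

The point of the sketch is that nothing in the update rule refers to the structure of the environment; that is exactly the design McCarthy argues evolution did not use.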

Members of most species are born or hatched with all the competences
they will need (though they may be able to adjust minor parameters as a
result of feedback). Some grazing mammals can walk to the mother's
nipple and run with the herd very soon after birth; chicks find
their way out of the egg unaided, and can peck for food and follow a
hen. This raises the question of why other species, such as primates,
hunting mammals and nest-building birds, seem to start so helpless and
incompetent. This is especially puzzling in the case of species which,
as adults, seem to perform more cognitively sophisticated and varied
tasks, such as: hunting down, catching, tearing open, and eating another
animal; building stable nests made of fairly rigid twigs (as opposed to
lumps of mud) high in trees (a task you would find difficult if you
could only bring one twig at a time); leaping through treetops; using
hands to pick fruit in many different 3-D configurations -- and, in the
case of humans, far more.

Perhaps in those cases the appearance of starting totally incompetent
and ignorant is very deceptive. Perhaps the prior knowledge of the
environment provided by evolution in those cases is more subtle than
in the case of foals and chicks.

There is not just one problem with one solution
We conjecture that evolution discovered more design problems and more
design solutions than most learning researchers have so far considered.
In [1] and [2] we proposed shifting the precocial/altricial distinction
from species to competences, arguing that within a species some
competences may be 'precocial' (i.e. preconfigured in the genome, for
instance, sucking in newborn humans and other mammals) while others are
'altricial' (i.e. meta-configured, namely produced epigenetically by
preconfigured meta-competences interacting with the environment).
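The competence/meta-competence distinction can be sketched in code. Here a hypothetical 'genome' supplies one preconfigured competence directly, and one meta-competence: a function that produces a new competence only by interacting with a particular environment. All names, environments, and data structures are illustrative, not a model of any real genome:

```python
def suck():
    # 'precocial' competence: fully formed at birth
    return "nutrient obtained"

def make_foraging_competence(environment):
    # meta-competence: inspects the environment encountered and builds a
    # new, environment-specific competence, instead of encoding it innately
    edible = {item for item, kind in environment.items() if kind == "edible"}
    def forage(item):
        return item in edible
    return forage

genome = {
    "precocial": [suck],
    "meta_competences": [make_foraging_competence],
}

savannah = {"grass": "edible", "rock": "inedible"}
forest = {"berry": "edible", "bark": "inedible", "grass": "inedible"}

# the same genome yields different 'altricial' competences in different worlds
forage_sav = genome["meta_competences"][0](savannah)
forage_for = genome["meta_competences"][0](forest)
print(forage_sav("grass"), forage_for("grass"))  # True False
```

The sketch makes the epigenetic point explicit: the resulting competence is neither purely innate nor learned from scratch, but jointly determined by the genome-supplied meta-competence and the environment it happens to meet.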

Innate knowledge can be knowledge about how to acquire more knowledge
Those meta-competences, far from being totally general learning
algorithms, are specifically tailored to finding things out about
environments containing 3-D configurations of objects and processes
involving them, where objects have 3-D spatial structures (i.e. shapes,
changeable shapes in the case of non-rigid objects) and can be made of
different kinds of material stuff with different properties, where not
all properties are detectable using available sensors. What is learnt
through the application of meta-competences includes what sort of
ontology is useful in the environment, as well as which laws using that
ontology work well for making predictions in the environment.

In addition, there are meta-competences which build on the early
acquired competences to produce new meta-competences that extend the
individual's learning ability. A university student studying theoretical
physics could not have learnt the same material soon after birth. The
ability to learn to learn can iterate.

Cognitive Epigenesis
So it is not just the individual's knowledge about the environment that
is continually extended, but also its ability to learn new, more
sophisticated things.

Which layers of competence develop will depend not only on the learner's
innate meta-competences but also on the particular features of the
environment in which learning takes place. So, for example, a
three-year-old child in our culture will learn many things about
computers and electronic devices that were not learnt by most of its
ancestors. They probably started with the same sort of learning
potential but developed it in different ways: bootstrapping can be
highly context dependent.

Forms of representation for 'inner languages'
In [3], we suggested that in some species the kinds of perceptual,
planning, problem-solving, and plan execution competences that
develop require the use of internal forms of representation that
support structural variability and some form of compositional
semantics, features normally assumed to occur only in human
languages used for communication. But if animals that do not
use human languages, and pre-linguistic human children, use these
generalised languages (g-languages) for perceiving, thinking,
planning, formulating questions to be answered, etc., then those
representations must have evolved prior to the evolution of human
language.

If semantically rich information structures are available prior to the
learning of human languages, that transforms the nature of the language
learning task: for the learner already has rich semantic contents
available to communicate, possibly including questions and goals,
depending on what the internal language is used for. This contrasts with
more conventional theories of language learning, according to which the
child has to learn how to mean and what to mean at the same time as
learning how to communicate meanings.
(Cf. M.A.K. Halliday, Learning How to Mean, 1975.)

There is no implication that all g-languages are restricted to linear
strings of symbols or to Fregean languages using a syntactic form
composed entirely of applications of functions to arguments. On the
contrary, in [4] it was suggested that analogical representations are
sometimes useful for representing and reasoning about spatial
configurations. Analogical representations, including diagrams and maps,
are capable of supporting structural variability and (context-sensitive)
compositional semantics since parts of diagrams can be interchanged, new
components added, etc. with clear changes in what is represented.
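As an illustration (the scene, relation, and grid layout are invented), a map can be modelled as a structure whose own spatial relations mirror those of the scene, so that interchanging parts systematically changes what is represented:

```python
# A map as an analogical representation: grid positions in the
# representation mirror spatial relations in the represented scene.
scene_map = {
    (0, 0): "house", (1, 0): "tree",
    (0, 1): "pond",  (1, 1): "road",
}

def left_of(map_, a, b):
    # read the relation directly off the representation's own structure,
    # rather than off a separate list of stored propositions
    (xa, _) = next(pos for pos, obj in map_.items() if obj == a)
    (xb, _) = next(pos for pos, obj in map_.items() if obj == b)
    return xa < xb

print(left_of(scene_map, "house", "tree"))  # True: house is left of tree

# structural variability: interchange two components and what is
# represented changes in a corresponding, compositional way
scene_map[(0, 0)], scene_map[(1, 0)] = scene_map[(1, 0)], scene_map[(0, 0)]
print(left_of(scene_map, "house", "tree"))  # now False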

In [5] it is further claimed that the ability to manipulate
representations of spatial structures can be the basis of a kind of
causal competence that enables a reasoner to understand why a certain
event or process must have certain effects. This is the kind of
understanding of causation discussed by Kant, in opposition to the view
of Hume that the notion of 'cause' refers only to observed correlations,
which remains the predominant analysis of causation among contemporary
philosophers. This Humean conception has recently been generalised to
include conditional probabilities, as represented in Bayesian nets.
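A minimal sketch of this generalised Humean view, with invented probabilities: a two-node Bayesian network in which 'causation' is captured purely as conditional probabilities between observed event types:

```python
# Two-node net: Rain -> WetGrass. All numbers are made up for illustration.
p_rain = 0.3
p_wet_given = {True: 0.9, False: 0.1}   # P(wet | rain), P(wet | no rain)

# prediction by marginalisation: P(wet) = sum over r of P(wet | r) * P(r)
p_wet = p_wet_given[True] * p_rain + p_wet_given[False] * (1 - p_rain)

# 'explanation' by Bayes' rule: P(rain | wet)
p_rain_given_wet = p_wet_given[True] * p_rain / p_wet

print(round(p_wet, 3), round(p_rain_given_wet, 3))  # 0.34 0.794
```

Note what the net cannot do: every quantity in it concerns previously observed event types, so it supports no structural reasoning about why rain must wet the grass, or about novel mechanisms never yet observed; that is the Kantian gap discussed below.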

We suggest that progress in science typically starts with Humean
notions of causation in each new domain, and then, as deep theories
regarding underlying mechanisms are developed, the understanding of
causation in that domain becomes more Kantian, allowing reasoning about
structural interactions to be used, for example, to predict the effects
of new events in new situations. In contrast, Humean causation supports
only predictions concerning instances of previously observed types of
events.

This Kantian understanding of causation is closely related, in humans,
to the ability to learn and do mathematics and to reason mathematically,
especially the ability to acquire and use competence in proving theorems
in Euclidean geometry. We don't know to what extent other animals are
capable of Kantian reasoning, but the creativity shown by some of them
suggests that they do have a Kantian understanding of causation in at
least some contexts. Moreover, it is clear that for robots to have the
same abilities as humans (or even nest-building birds, perhaps) they too
will need to be able to acquire the kinds of ontologies, forms of
representation, and theories that allow them to use Kantian causal
understanding in solving novel problems.

For more on the problems of investigating causal understanding in
non-human animals, see the presentation by Jackie Chappell at WONAC
(the International Workshop on Natural and Artificial Cognition,
Oxford, 2007).

Ontology extension
Learning about the existence of new kinds of stuff, new properties, new
relationships, new events, and new processes that require the use of
concepts that are not definable in terms of the ontologies that are
genetically provided, requires learning mechanisms that support
substantive as opposed to mere definitional ontology extension.
(Which would be impossible if 'symbol-grounding' theory were true!)
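The contrast can be sketched as follows (all concept names are illustrative): a definitional extension composes existing concepts, while a substantive extension adds a new primitive that no combination of the old concepts can express:

```python
# Genetically provided primitive concepts (illustrative).
ontology = {"hard", "heavy"}

# Definitional extension: a new concept built entirely from old ones.
definitions = {"rock_like": lambda obj: obj["hard"] and obj["heavy"]}

# Substantive extension: a new primitive ('magnetic') that no boolean
# combination of the old concepts can express. It must be added as a
# fresh undefined term, its meaning fixed by its role in a theory.
ontology.add("magnetic")

iron = {"hard": True, "heavy": True, "magnetic": True}
granite = {"hard": True, "heavy": True, "magnetic": False}

# both objects are indistinguishable in the old vocabulary...
print(definitions["rock_like"](iron), definitions["rock_like"](granite))  # True True
# ...but differ once the ontology has been substantively extended
print(iron["magnetic"], granite["magnetic"])  # True False
```

The two objects agree on every old predicate and every definition built from them, so the new distinction could not have been reached by definition alone; this is the sense in which substantive extension goes beyond what 'symbol-grounding' from an initial concept set could provide.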


[To be continued, possibly]

References
[1] A. Sloman and J. Chappell, 2005, 'The Altricial-Precocial Spectrum for Robots', Proceedings IJCAI'05, pp. 1187-1192.
[2] J. Chappell and A. Sloman, 2007, 'Natural and artificial meta-configured altricial information-processing systems', International Journal of Unconventional Computing, 3(3), pp. 211-239.
[3] A. Sloman and J. Chappell, 2007, 'Computational Cognitive Epigenetics' (Commentary on Jablonka and Lamb: Evolution in Four Dimensions), Behavioral and Brain Sciences.
[4] A. Sloman, 1971, 'Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence', Proc 2nd IJCAI, pp. 209-226.
[5] J. Chappell and A. Sloman, 2007, 'Presentations on causal competences, Kantian and Humean, in animals and robots', International Workshop on Natural and Artificial Cognition (WONAC), Pembroke College, Oxford.

See also Abstract for invited talk on consciousness at AAAI Fall Symposium, 2007

Maintained by Aaron Sloman