These videos are used in several of the presentations listed
here, and also in my talk given at this workshop in Paris, Sept 2007:
COSY Meeting-Of-Minds Workshop
Paris 15-18 Sept 2007
The videos show different levels of competence in young children, and in some cases,
surprising incompetence, which was later overcome, though I don't think anyone knows how
the changes occur.
There are some conjectures in
the presentations, especially those concerned
with mathematical development in young children, and its evolution.
The idea of architecture-based motivation (contrasted with reward-based motivation) is
important for understanding what's going on in some of these videos:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/architecture-based-motivation.html
My ideas have recently been reorganised and extended by reading Piaget's last two books,
and an important book by Karmiloff-Smith.
These exploration processes must be partly genetically driven in some of their abstract
properties, while details of both the exploration processes and the discoveries they lead
to are largely driven by details of the learner's environment.
Compare the notion of "parametric polymorphism" in Object Oriented Programming,
where a generic method performs actions and builds structures in ways that
depend on the types of entities provided as parameters when the method runs.
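As a rough illustration of this analogy (my own sketch, not part of the talks; the function and names are invented for this note), here is a minimal Python example of a generic schema whose abstract form is fixed in advance, while the structures it builds depend entirely on the type of the entities supplied when it runs:

```python
from typing import List, Tuple, TypeVar

T = TypeVar("T")  # placeholder for whatever kind of entity is being explored

def all_pairings(items: List[T]) -> List[Tuple[T, T]]:
    """A generic schema: enumerate the ways of combining two entities.
    The schema itself is fixed; what the resulting pairs are depends
    entirely on the items supplied at run time."""
    return [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]

# The same schema applied to quite different kinds of entity:
print(all_pairings([1, 2, 3]))                 # pairs of numbers
print(all_pairings(["hook", "ring", "link"]))  # pairs of object names
```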
The notion of an exploration domain is closely related to the old AI notion of a
"micro-world", to Karmiloff-Smith's notion of a "microdomain", and to the notion of a
class (or related set of classes linked by methods) in Object Oriented Programming.
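To make the OOP comparison concrete, here is a toy sketch (again my own, purely illustrative; the class and its rules are invented) in which a "micro-domain" is represented as a class whose data are the objects in the domain and whose methods delimit the operations that are possible within it:

```python
from typing import FrozenSet, List, Set

class LinkingDomain:
    """A toy 'micro-domain': the objects are train components, and the
    methods define which linking operations are possible, so the class
    as a whole delimits a small space of possibilities."""

    def __init__(self, components: List[str]) -> None:
        self.components = components
        self.links: Set[FrozenSet[str]] = set()

    def can_link(self, a: str, b: str) -> bool:
        # In this toy domain a hook can be joined to a ring,
        # but two rings cannot be joined to each other.
        kinds = {a.split(":")[0], b.split(":")[0]}
        return kinds == {"hook", "ring"}

    def link(self, a: str, b: str) -> bool:
        if self.can_link(a, b):
            self.links.add(frozenset((a, b)))
            return True
        return False

domain = LinkingDomain(["hook:1", "ring:1", "ring:2"])
print(domain.link("hook:1", "ring:1"))  # True: a possibility within the domain
print(domain.link("ring:1", "ring:2"))  # False: compare the failed attempt described below
```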
N.B.
It is often assumed that what an intelligent animal or robot can learn must be closely
constrained by its sensory-motor morphology and functionality.

This viewpoint ignores the role of the environment in presenting opportunities to learn
and providing examples and evidence regarding what can happen in the environment,
including processes initiated not by the learner but by wind, rain, gravity, other
animals, etc. Learning about your environment is not the same thing as learning about
your sensory-motor signal patterns.

The sensory-motor analysis also ignores the similarities between types of process that can
be performed using different morphologies: e.g. carnivores use jaws to carry, manoeuvre
and disassemble objects whereas primates (and some birds) use hands and feet.

The animals that can produce a-modal information structures using exosomatic ontologies
(referring to things in the environment, not to sensory-motor signals) can learn things
that have nothing to do with their own sensory or motor mechanisms, e.g. about effects of
forces applied by one object to another, and effects of objects in constraining motions of
other objects.
They should also be far more flexible about transferring what they have learnt to new
configurations where the sensory motor signals are very different, but what happens in the
environment is very similar (e.g. a door is opened).
The comments on the videos relate to the notion of an "exploration domain" discussed in
some of my talks, e.g. in
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk90
Talk 90: Piaget (and collaborators) on Possibility and Necessity
And the relevance of/to AI/Robotics
(A related talk presented at Schloss Dagstuhl on 28th March 2011 is
here.)
The child in this video achieves all his goals apart from joining two train components -- trying to join two
rings together instead of a hook and a ring. His understanding of possible ways of linking
two objects is clearly still underdeveloped. (I was told a few weeks later that he had
mastered the problems, though nobody had observed steps in the transition.)
Compare this with the intelligence of Betty the hook-making and hook-using crow.
What the researchers did not point out, and can easily be confirmed by looking at the
video recordings, is that Betty made her hooks in several different ways on different trials.
I have argued elsewhere that such competences (and others) imply that many non-human
species, and also pre-verbal children, must be able to use internal languages
(forms of representation) that support structural variability, variations in complexity,
compositional semantics, and use in making inferences.
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43
The primacy of non-communicative language

http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
Evolution of minds and languages.
What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)?
A different sort of question that I am interested in concerns competences that must be
prior to those motivations: what sort of cognitive mechanisms make it possible for a
pre-verbal child to tell what a person is trying and failing to do, and to work out what
could be done to provide assistance?
The child in the video seems to be able to work out that the man carrying books needs to
get the door of the cabinet open, and he then adopts that motive for himself (the
altruistic step), works out what to do to achieve the open door, then executes the plan.
Here the child seems to make use of a domain of possibilities that is not concerned with
what he, the observer, can do, but with possibilities available to another agent.
(I have called those "vicarious affordances" elsewhere: very important both for parents
and for young learners.)
What the child can do becomes relevant only after he has worked out what the adult wants
to do, or needs to do.
What motivates the child to use those cognitive abilities to provide help is an additional
question. Being able to think about motivation requires a domain of meta-semantic
competence: being able to represent and reason about things that can represent and reason.
That clearly starts to develop before a child can talk.
For both questions the answers are likely to involve a mixture of genetic mechanisms and
competences learnt partly under the influence of the environment and partly under the
influence of the genetic mechanisms.

The learning mechanisms probably change during learning and development, since what is
learnable changes. But this is not merely a matter of what prior content is available for
each type of learning. Rather new ontologies, new forms of representation, new algorithms
and new information-processing architectures seem to be involved.
Compare videos in which a parrot perches on one leg, holding a walnut in the other foot,
and, alternating between holding the nut with foot and with beak, rotates it until it is
in a good position for the beak to crack it open.
(Compare: you are holding a cold saucepan by its handle using one hand. You wish to get
into a position in which you are holding it by the rim of the pan using two hands, and
with the handle pointing away from you. What actions do you need to perform to achieve
that end state -- with or without being able to rest the saucepan on a flat table top
during the process?
What enables you, or the parrot, to work out a sequence of actions to
achieve a new state?)
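One way to make that question concrete (purely my own sketch, not a claim about how children, parrots, or brains actually do it; the states and actions are invented for illustration) is to treat regrasping as search through a small space of possible states, for example by breadth-first search over the actions available in each state:

```python
from collections import deque
from typing import Dict, List, Tuple

# A toy encoding of the saucepan problem. Each state records how the pan
# is held and which way the handle points; each action maps one state to another.
ACTIONS: Dict[str, List[Tuple[str, str]]] = {
    "one hand on handle, handle towards you": [
        ("grasp the rim with the free hand", "handle and rim held, handle towards you"),
    ],
    "handle and rim held, handle towards you": [
        ("release the handle and grasp the rim with that hand",
         "two hands on rim, handle towards you"),
    ],
    "two hands on rim, handle towards you": [
        ("rotate the pan, moving the hands around the rim",
         "two hands on rim, handle away from you"),
    ],
}

def plan(start: str, goal: str) -> List[str]:
    """Breadth-first search for a shortest sequence of actions from start to goal."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in ACTIONS.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, actions + [action]))
    return []  # no plan found in this toy state space

print(plan("one hand on handle, handle towards you",
           "two hands on rim, handle away from you"))
```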
For further discussion of the issues raised by these
videos see:
My slides for the 'Meeting of Minds'
workshop in 2007, and
the post-workshop notes on model-based semantics.