Some Illustrative Videos Used in Several of my Presentations
Aaron Sloman
School of Computer Science,
University of Birmingham

This web site is:
It is part of the Birmingham CogAff (Cognition and Affect) web site.

These videos are used in several of the presentations listed here:
Some were also used in my talk at this "Meeting of Minds" workshop in Paris 15-18 Sept 2007,
as part of the EU CoSy project:

The videos below include demonstrations of different levels of competence in young children, and in some cases, surprising incompetence, which was later overcome, though I don't think anyone knows how the changes occur. There are also some non-human performers.

There are some conjectures in the presentations, especially the presentations concerned with mathematical development in young children, and its evolution.

See also the web pages on 'Toddler Theorems' here:

NOTE: Updated 7 Oct 2020:
This video of kitten-gymnastics on a drying-frame will later be located in this file with a more detailed caption.
The link to the video will not be changed.

Can any current robot do this sort of thing?

NOTE: Added 6 Jul 2012:
The idea of architecture-based motivation (contrasted with reward-based motivation) is important for understanding what's going on in some of these videos: (also pdf).

NOTE: Added 2 Apr 2011

My ideas have recently been reorganised and extended by reading Piaget's last two books, and an important book by Annette Karmiloff-Smith.

The videos listed here:

Notes added below on the individual videos point out how they illustrate interleaved exploration of different "exploration domains" that can be "extracted" from a continuous world by using different sorts of actions on items in the environment: a rug, a piano, a tub of yogurt and spoon, a toy train, etc.

These exploration processes must be partly genetically driven in some of their abstract properties, while details of both the exploration processes and the discoveries they lead to are largely driven by details of the learner's environment.

    Compare the notion of "parametric polymorphism" in Object Oriented Programming,
    where a generic method performs actions and builds structures in ways that
    depend on the types of entities provided as parameters when the method runs.
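As a rough illustration of the analogy (a sketch only, with invented class and method names, not anything taken from the presentations), here is one generic routine whose concrete effect depends on the type of object passed to it at run time, loosely like one exploration competence being specialised by the domain it is applied to:

```python
# Hypothetical sketch: the classes and the "affordances" method are
# invented for illustration. One generic method, explore(), behaves
# differently depending on the type of entity it is given, just as a
# single exploratory competence yields different actions in different
# domains (a rug, a piano key, ...).

class Rug:
    def affordances(self):
        return ["grasp edge", "pull corner", "roll towards it"]

class PianoKey:
    def affordances(self):
        return ["press gently", "press at far end", "strike repeatedly"]

def explore(item):
    """Generic method: the actions attempted are determined by the
    type of the parameter, not fixed in advance in the method."""
    return [f"try: {action}" for action in item.affordances()]
```

For example, `explore(Rug())` yields rug-specific attempts while `explore(PianoKey())` yields key-specific ones, even though the exploring routine itself is shared.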

The notion of an exploration domain is closely related to the old AI notion of a "micro-world", to Karmiloff-Smith's notion of a "microdomain", and to the notion of a class (or related set of classes linked by methods) in Object Oriented Programming.


It is often assumed that what an intelligent animal or robot can learn must be closely constrained by its sensory-motor morphology and functionality.

This viewpoint ignores the role of the environment in presenting opportunities to learn and providing examples and evidence regarding what can happen in the environment, including processes initiated not by the learner but by wind, rain, gravity, other animals, etc. Learning about your environment is not the same thing as learning about your sensory-motor signal patterns.

The sensory-motor analysis also ignores the similarities between types of process that can be performed using different morphologies: e.g. many carnivores use jaws to carry, manoeuvre and disassemble objects whereas primates (and some birds) use hands and/or feet.

The animals that can produce a-modal information structures using exosomatic ontologies (referring to things in the environment, not to sensory-motor signals) can learn things that have nothing to do with their own sensory or motor mechanisms, e.g. about effects of forces applied by one object to another, and effects of objects in constraining motions of other objects. They should also be far more flexible about transferring what they have learnt to new configurations where the sensory-motor signals are very different, but what happens in the environment is very similar (e.g. a door is opened).

The comments on the videos relate to the notion of an "exploration domain" discussed in
    Talk 90: Piaget (and collaborators) on Possibility and Necessity
    And the relevance of/to AI/Robotics

    A related presentation at Schloss Dagstuhl on 28th March 2011

    And the discussions of domains in:

  1. Ghost walking in a garden in sunshine? Try to work out what's going on.
    Added 23 Jun 2020

  2. Noticing and grasping edge of rug (Age about 6 months)
    Noticing the edge of the rug seems to trigger a very deliberate and controlled, but inexpert, attempt to grasp the edge, with clear success eventually, after rolling over onto his side. It looks as if this baby is exploring a number of different domains, initially:
    • Making different visual selections by moving head and eyes.
    • Looking for/detecting graspable visible edges.
    • Changing the position and orientation of a finger or hand,
    • Changing the hand's location and orientation in order to be able to do the required grasping.
    • then iterate ...

    Why did he use his right hand, not his left hand to grasp the rug?

  3. Playing piano and piano parts (Age about 9 months)
    No sound in this video but it is clear that the child alternates between fairly random thumping and controlled exploration of effects of pressing individual notes. What are the mechanisms that produce the motivation to do that? What cognitive processes are involved in the selection between possible actions, and their execution? What exploration domains are being extracted from the interaction with the environment? Compare the later piano video below.

  4. Feeding brain and stomach with yogurt
    (Age about 10 months)
    The child manipulates a spoon in various ways, including getting yogurt out of a tub held in front of him, transferring some to his mouth, and some to carpet and his thigh. He seems to do a number of experiments of different kinds with spoon and yogurt, including at least two failed attempts to transfer yogurt to a flat surface, on carpet and on his thigh.
    The video ends with a successful complex process of transferring the end of the spoon from right hand to left hand, which requires coping with the fact that one hand is an obstacle to grasping by the other.
    (This is well beyond the capabilities of any robot I know about.)

  5. Playing piano and piano parts, recorded with sound. (Age about 11 months)
    A sequence of experiments concerned with piano keys, music holder, balancing on chair.
    The child discovers and explores separate domains of exploration, involving
    • One hand interacting with piano keys to make a sound
    • Two hands interacting
    • One or two fingers interacting with the keys -- and repeating a motion pattern that repeats a sound pattern.
    • Moving fingers to different parts of the same key and pressing, e.g. at far end.
    • Pressing keys with wrist lifted, or resting on piano.
    • Lifting and releasing (or banging) the music holder.
    • Standing up and sitting down on the stool.
    • Some slight social interaction with cameraman.
    • Right at the end apparently noticing that the same musical pattern can be repeated on different parts of the keyboard (same tune, and rhythm, but different pitch).

    The video ends just as he seems to be discovering a repeatable musical theme, or possibly a motor theme for his fingers, or perhaps a mapping between the two?

  6. Exploring materials: cloth (sweater) and mouse cable
    (Age about 11 months -- Doesn't quite strangle himself.)
    Compare this web site on blankets, string, and other things, and this web site on seeing and understanding knots.

  7. Pushing a broom (Age: about 15 months)
    A just about stable (well, upright anyway) dynamical system driven by an opportunistic cognitive explorer? Notice the various ways in which an action prepares for what is going to come later. This shows an understanding of constraints in the environment detected before the constraints operate. This cannot be achieved by purely reactive sensory-motor mechanisms where all internal processes are concerned with relating current inputs to current outputs.

  8. Failing to understand hook-and-eye mechanism (Age about 18 months)
    Failing to understand hook-and-eye mechanism (smaller version)
    This child clearly has an impressive collection of motor, perceptual, and cognitive skills, and has mastery of a variety of exploration domains including switching between crouching and sitting positions, stacking objects, poking objects into holes, noticing a gap in the distance and walking to it in order to insert an object.

    He achieves all his goals apart from joining two train components -- trying to join two rings together instead of a hook and a ring. His understanding of possible ways of linking two objects is clearly still underdeveloped. (I was told a few weeks later that he had mastered the problems, though nobody had observed steps in the transition.)

    Compare this with the intelligence of Betty the hook-making and hook-using crow, below.

  9. Pre-verbal toddler (age around 17.5 months?) apparently using a pencil to explore 3-D topology and perhaps to test topological theories?
    (OGG format)
    (MP4 format)

  10. Toddler accumulating a collection of grasped pencils, using one hand:
    (MP4 format)

  11. Toddler playing with head-band and cool-bag (MP4)
    What does she know about what's going on out of sight above/around her head? How is the information represented? How did the ability to deal with such information develop: was it there at birth? In the genome somehow?

  12. Toddler moving backwards into seat (WEBM)
    Toddler moving backwards into seat (MP4)
    What information does she have about what's going on out of sight behind her? What information does she have about the benefit of pushing her chair up against the settee before she tries sitting in it (after the unsuccessful attempt)?

  13. Older videos showing development of sitting, crawling, etc.


  1. Several videos available at the Behavioural Ecology Research Group, Oxford
    These include the very famous 'Trial 7' video, which shows Betty, the New Caledonian Crow, very expertly making a hook from a straight piece of wire, and then very expertly using the hook to lift a bucket of food out of a vertical tube.

    She had no opportunity to learn anything about this from other crows or humans. Betty made headlines world-wide in June 2002. See the video link to trial 7 here, highlighted on the left under "Movie":
    To learn more, search Google for:

        betty crow hook 2002
    What the researchers did not point out in the original publications, but can easily be confirmed by looking at the other videos on the web site, is that Betty spontaneously made hooks from straight pieces of wire in at least five different ways. Even after successfully making a hook one way she notices and uses other possible strategies with no flailing about using trial and error. In each case she seems to know as soon as the action begins what she is going to do.

    There are many idiotic comments online about Betty, including this one, which hints that acting naturally for one's species is incompatible with creative intelligence (e.g. using new materials to "act naturally").

    "Crow that bent wire to retrieve food was acting naturally, scientists discover.
    Oxford researchers initially thought Betty the New Caledonian crow had come up with idea on the spot but further study finds her breed adept at making tools"

    I have argued elsewhere that such competences (and others) imply that many non-human species, and also pre-verbal children, must be able to use internal languages (forms of representation) that support structural variability, variations in complexity, compositional semantics, and use in making inferences.
    The primacy of non-communicative language
    Evolution of minds and languages. What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)?

  2. Warneken and Tomasello's videos of children and chimps being helpful:
    More recent videos (accessed 28 Jun 2019):
    These old videos may no longer be available:
    Warneken and Tomasello's videos of children and chimps being helpful
    See especially the cabinet opening task, included in new and old videos.

    Those researchers are mostly interested in the fact that very young children and some other animals spontaneously try to help someone else.

    A different sort of question that I am interested in concerns competences that must be prior to those motivations: what sort of cognitive mechanisms make it possible for a pre-verbal child to tell what a person is trying and failing to do, and to work out what could be done to provide assistance?

    A child in one of the videos seems to be able to work out that the man carrying books needs to get the door of the cabinet open. How does he or she infer that from the observed behaviour, probably never seen before? The child then adopts the adult's motive for himself (the altruistic step) and works out what to do to achieve the open door, then executes the plan.

    Here the child seems to make use of a domain of possibilities that is not concerned with what he, the observer, can do, but with possibilities available to another agent. (I have called those "vicarious affordances" elsewhere: very important both for parents and for young learners.) What the child can do becomes relevant only after he has worked out what the adult wants to do, or needs to do.

    What motivates the child to use those cognitive abilities to provide help is an additional question. Being able to think about motivation requires a domain of meta-semantic competence: being able to represent and reason about things that can represent and reason. That clearly starts to develop before a child can talk.

    For both questions the answers are likely to involve a mixture of genetic mechanisms and competences learnt partly under the influence of the environment and partly under the influence of the genetic mechanisms, as explained in this introduction to the Meta-configured genome theory: (also pdf)

    The learning mechanisms probably change dramatically during learning and development, since what is learnable changes. But this is not merely a matter of what prior content is available for each type of learning. Rather new ontologies, new forms of representation, new algorithms and new information-processing architectures seem to be involved.

    The researchers, as far as I can tell, were not interested in the problem of explaining how the child is able to work out what he or she needs to do in order to provide useful assistance to the adult. That requires non-trivial reasoning abilities concerning spatial structures and processes and how they could relate to someone else's goal.

  3. Video of parrot scratching neck with feather.
    What sort of ontology does the parrot need in order to be able to select and control this action, including combining action with beak and foot in order to change grip location? Do you think it knows what it is doing? What does that mean?

    Compare videos in which a parrot perches on one leg holding a walnut in the other foot and, alternating between holding it with foot and with beak, rotates the walnut until it is in a good position for the beak to crack it open.

    (Compare: you are holding a cold saucepan by its handle using one hand. You wish to get into a position in which you are holding it by the rim of the pan using two hands, and with the handle pointing away from you. What actions do you need to perform to achieve that end state -- with or without being able to rest the saucepan on a flat table top during the process? What enables you, or the parrot, to work out a sequence of actions to achieve a new state?)

  4. Boston Dynamics Big Dog (March 2008)
    Go to the Boston Dynamics web site to get more information.
    Do you think this robot knows what it is doing?
    Does it know anything about what it could do in the future?
    Does it know what it has done in the past?
    Does it know why it did that and not something else?
    Do the baby on the rug, or the baby playing with yogurt, or the toddler pushing a broom know what they are doing or not doing, or what they can do, before they actually do it?
    What sorts of questions should biologists ask about robots?
    What sorts of questions should roboticists ask about animals?
    (Distinguishing online and offline cognition.)
See also

For further discussion of the issues raised by these videos see:
My slides for the 'Meeting of Minds' workshop in 2007, and the post-workshop notes on model-based semantics

Originally installed: 10 Nov 2010

23 Jun 2020: Added "garden ghost" video.
20 Sep 2015 Extended and re-formatted.
Last Updated: 28 Jun 2019 (new Tomasello/Warneken videos, and additional minor changes)
10 Nov 2010; 7 Jul 2012; 9 Jan 2013;
Maintained by: Aaron Sloman