CoSy project

COSY MEETING-OF-MINDS WORKSHOP
PARIS 16-18 SEPT 2007

Aaron Sloman
School of Computer Science, University of Birmingham, UK
http://www.cs.bham.ac.uk/~axs/

Slide presentation (PDF) (about 3MB)
Title:
    Understanding causation in robots, animals and children: Hume's way
    and Kant's way.

    (Expanded version of presentation at the conference, ending with a
    proposal for a new form of research collaboration between people in
    psychology and AI).

Videos mentioned in the talk, and a few more.

    Showing infant-behaviour, toddler-behaviour and crow-behaviour
    with some comments.

Abstract:

    For Hume, causation was just a matter of correlation, and current
    Bayesian theories of causation fit that general idea, adding
    conditional probabilities. For Kant, causation implied a kind of
    necessity, analogous to the necessity in mathematical reasoning.
    There have been disputes as to who was right; I shall argue
    that both notions of causation are needed and are used by
    intelligent systems. Roughly, Humean causation is all you have when
    you merely have strong evidence regarding what causes what, whereas
    in some cases you know *why* something causes something, e.g. why
    going round a house in one direction produces one series of
    experiences and going round in the opposite direction produces
    another, and why, when two centrally pivoted gear wheels made of
    rigid impenetrable material are meshed, if one turns clockwise
    the other *must* turn counter-clockwise. The history of
    science is full of cases where Humean causation is replaced by
    Kantian causation as a result of deeper understanding. Kantian
    causal understanding, when available, is more powerful, e.g. because
    it can be used to deal with novel situations. Currently no robots
    that I know of have any Kantian understanding, and this is a very
    serious deficiency. Human children acquire Kantian understanding in
    a piecemeal and idiosyncratic way. It is not clear whether any other
    animals have this ability, but there is prima-facie evidence that
    some do. I shall base my presentation on a subset of the topics
    presented by Jackie Chappell and myself at a recent workshop on
    natural and artificial cognition.
    Our slides are available here:
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac/
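
Illustrative note (not part of the original abstract or slides): the contrast
between Humean and Kantian causal knowledge drawn above can be sketched in a
few lines of Python. Everything below is invented for this page, including the
observation data, the function names conditional_probability and
direction_of_gear, and the gear-chain example; it is a minimal sketch of the
distinction, not an implementation of anything proposed in the talk.

    # Humean style: causal knowledge as a summary of observed correlations.
    from collections import Counter

    # Logged (noisy) observations of the rotation directions of two meshed gears.
    observations = [("CW", "CCW")] * 98 + [("CW", "CW")] * 2

    def conditional_probability(obs, left_dir, right_dir):
        """Estimate P(right gear turns right_dir | left gear turns left_dir)."""
        counts = Counter(o for o in obs if o[0] == left_dir)
        total = sum(counts.values())
        return counts[(left_dir, right_dir)] / total if total else 0.0

    # Strong evidence about what accompanies what, but no account of *why*:
    print(conditional_probability(observations, "CW", "CCW"))   # ~0.98

    # Kantian style: structural constraints make the outcome necessary.
    # Rigid, impenetrable, meshed gears *must* counter-rotate, so a chain of
    # meshed gears alternates direction, including in novel, never-observed
    # cases such as a chain of seven gears.
    def direction_of_gear(n, first_direction="CW"):
        """Direction of the n-th gear (1-indexed) in a chain of meshed rigid gears."""
        other = "CCW" if first_direction == "CW" else "CW"
        return first_direction if n % 2 == 1 else other

    print(direction_of_gear(7))   # "CW", derived from the constraint, not from data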



Added later: Background notes on model-based semantics and symbol tethering
    A tutorial presentation on why symbol-grounding theory, like its ancestor
    concept-empiricism, is mistaken. These slides were produced after the
    workshop because I found that I had been making some assumptions about
    models of axiom systems that some people at the workshop had never
    encountered.

Added: 16 Oct 2007
Background notes on virtual machines and causation
    Presentation at the opening of COR-Lab Bielefeld, 10th October 2007,
    making explicit some of what I had been taking for granted about
    varieties of virtual machine, including virtual machines that monitor
    themselves.

Added: 20 Oct 2007
What evolved first: Languages for communicating, or languages for thinking
(Generalised Languages: GLs)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
    Talk at Birmingham Language and Cognition seminar, 19 Oct 2007.
    (Slides likely to be expanded and clarified.)

-------------------------------------------------------------------

Updated: 21 Jan 2009
Maintained: Aaron Sloman