
CONFERENCES WORKSHOPS SYMPOSIA TUTORIALS
ON NATURAL AND ARTIFICIAL INTELLIGENCE
(Whole systems, not just mechanisms.)
Since July 2005

Last updated: 24 Mar 2008

DRAFT
(Assembled hastily by A.Sloman: to be revised and extended)

CONTENTS

    INTRODUCTION
    RELEVANT EVENTS (A SUBSET)

INTRODUCTION

Until recently most of the work on combining natural and artificial
computation (or intelligence) has focused first on trying to
identify the mechanisms used in animals and groups of animals, and
second on implementing artificial versions of those mechanisms in
order to solve engineering problems. Examples of such mechanisms
include artificial neural nets, evolutionary algorithms, swarming
and flocking mechanisms, artificial immune systems, and various
mechanisms thought to be used in natural vision, language, learning,
problem-solving, cooperation, and so on.

In July 2005 members of the CoSy robot project organised a
two-day tutorial on Representation and Learning in Robots and Animals,

    http://www.cs.bham.ac.uk/research/projects/cosy/conferences/

which had a different aim, namely getting researchers on various
kinds of animal (including human) behaviour to talk to designers
of AI systems, not about biological mechanisms, but about
some of the interesting kinds of competences that have been
found in animals. This tutorial was partly inspired by the UKCRC
Grand Challenge No 5: 'Architecture of Brain and Mind', summarised
in
    http://www.cs.bham.ac.uk/research/projects/cogaff/gc/

Since then, partly inspired by that meeting and partly
independently, other workshops and conferences have been organised,
bringing designers of robots and other machines together with
researchers on animal cognition and, in some cases, brain
researchers.

I have the impression that this is a growing phenomenon and that it
will have the very important function of contributing to our
understanding of the requirements for human-like or animal-like
robots, countering the widespread tendency to assume that we already
know what is required and merely lack knowledge of how to meet the
requirements.

A paper written with Ron Chrisley and published in 2005 enlarged on
this, and referred to the difficulty of identifying requirements as
'ontological blindness':

    http://www.cs.bham.ac.uk/research/projects/cogaff/04.html#cogsys
    More things than are dreamt of in your biology:
    Information processing in biologically-inspired robots.

The need to identify unobvious requirements, and the tendency not to
notice them, can sometimes lead to grossly inadequate proposals for
designs of working systems.

It is an interesting fact that when AI researchers hear about an
animal able to perform some task that would be commonplace for
humans, they are sometimes inspired to ask 'How could it do that?',
without noticing that the same question arises for humans.

So the growing number of meetings in which researchers on animal and
robotic intelligence confront each other may have the important
effect of drawing attention to detailed requirements for intelligent
robots. See also the euCognition research roadmap activity:
    http://www.eucognition.org/wiki/index.php?title=Research_Roadmap

What follows is an incomplete list of events of the sort described
above. If you know of any that should be added to this list, please
send me details.

-------------------------------------------------------------------

RELEVANT EVENTS (A SUBSET)
(In chronological order -- recent ones at the end)

================
2005
================

o IJCAI-05 GC5-tutorial in Edinburgh

  REPRESENTATION AND LEARNING IN ROBOTS AND ANIMALS

  (funded by BT, IBM, Infermed, SSAISB, organised by CoSy)

http://www.cs.bham.ac.uk/research/projects/cosy/conferences/

================
2006
================

o AISB'06 GC-5 symposium, Bristol, April 2006
  (funded by euCognition)

http://www.cs.bham.ac.uk/research/projects/cogaff/gc/aisb06/


================
o AAAI Cognitive Robotics Workshop, Boston July 2006
    http://www.aaai.org/Library/Workshops/ws06-03.php

(also other things at AAAI'06, including the Fellows' symposium.)

================
o COGRIC workshop, funded by NSF and the EU, August 2006

    http://www.cogric.reading.ac.uk/

================
2007
================

o The euCognition research roadmap meeting in Munich in January 2007
was also directly relevant:

    http://www.eucognition.org/wiki/index.php?title=Roadmap_Kick-off_Meeting

Other euCognition events were also relevant.

================

o International Symposium on Creating Brain-Like Intelligence
at Honda Research Institute Frankfurt, Feb 2007

    http://www.hri-europe.de/

Unfortunately, the presentations and workgroup summaries are
password-protected.

================

o Intelligent and cognitive systems working group at the
EU-funded Interlink Opening Workshop, Nice, 10-12 May 2007

    http://www.ercim.org/interlinkworkshops
    (wrongly says 2006)

    http://interlink.ics.forth.gr/central.aspx?sId=84I240I746I323I344319
    (Cognition Working Group, led by Rüdiger Dillmann.)

The web site is rather thin: no talk titles, abstracts or
presentations.

There was an interesting contrast between people who were committed
to the key role of embodiment in cognition but could demonstrate only
fairly trivial systems, and Jim Crowley, working on an 'intelligent
room'. His system, composed of a TV camera and projector (plus
computer), could treat a sheet of paper on the table as a screen and
track it as it moved around, ensuring that what was projected
remained suitable for normal interactions, and interpreting hand
movements/gestures as equivalent to mouse/keyboard actions.

It was crucial to Crowley's system that the location and orientation
of the paper, and the requirements for projecting onto it, were
represented in terms of locations in the room, not in terms of
sensory and motor relationships (which could be learnt, or derived,
and used).
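
The point can be made concrete with a small sketch. This is purely
illustrative, not Crowley's code: the numbers, names and the simple
pinhole projector model are all invented for the example. The paper's
pose lives in room coordinates; the mapping to projector pixels is
derived from that representation via calibration data.

    # Illustrative sketch only (not Crowley's actual system): the sheet of
    # paper is represented by its pose in ROOM coordinates; the mapping to
    # projector pixels is derived from calibration data, not primitive.
    import numpy as np

    # Hypothetical calibration: projector 2.5 m above the table, looking down,
    # with a simple pinhole-style intrinsic matrix (all values invented).
    C_proj = np.array([0.0, 0.0, 2.5])          # projector centre in room frame
    R_proj = np.diag([1.0, -1.0, -1.0])         # optical axis pointing downwards
    K_proj = np.array([[800.0,   0.0, 512.0],
                       [  0.0, 800.0, 384.0],
                       [  0.0,   0.0,   1.0]])

    def paper_corners_room(centre, yaw, size=(0.297, 0.210)):
        """Corners of an A4 sheet, given its pose in room coordinates."""
        w, h = size
        local = np.array([[-w/2, -h/2, 0], [ w/2, -h/2, 0],
                          [ w/2,  h/2, 0], [-w/2,  h/2, 0]])
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        return local @ Rz.T + centre

    def room_to_projector_pixels(points_room):
        """Derived sensorimotor relation: room coordinates -> projector pixels."""
        cam = (points_room - C_proj) @ R_proj.T   # into the projector's frame
        pix = cam @ K_proj.T
        return pix[:, :2] / pix[:, 2:3]           # perspective division

    # As the paper slides and rotates on the table, only its room-frame pose
    # changes; the pixel coordinates for drawing on it are recomputed from it.
    corners = paper_corners_room(centre=np.array([0.3, 0.1, 0.0]), yaw=np.pi / 6)
    print(room_to_projector_pixels(corners))

If the projector were moved or recalibrated, only C_proj, R_proj and
K_proj would change; the room-frame representation of the paper, and
everything built on it, would be unaffected.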

================
o BBSRC Workshop organised in Birmingham by Dietmar Heinke

    Closing the gap between neurophysiology and behaviour:
    A computational modelling approach
    31st May-2nd June 2007
    http://comp-psych.bham.ac.uk/workshop.htm


================

o In mid-June 2007 the final workshop of the EU COSPAL project was
held in Aalborg:
    http://www.cospal.org/
    Cognitive Systems: Perception, Action, Learning


================

o On 24th-26th June 2007 there was an NSF/EU-funded workshop on
Natural and Artificial Cognition (WONAC) in Oxford. Details are
here:

    http://tecolote.isi.edu/~wkerr/wonac/

It was explicitly a sequel to both the GC-5 tutorial at IJCAI in
2005 and the symposium at AISB in 2006. (The 2005 tutorial had been
attended by managers from both funding agencies, and the 2006
symposium by the EU funders.)

WONAC was organised entirely by Paul Cohen and Alex Kacelnik, both of
whom had been speakers at the IJCAI'05 tutorial.
Most of the presentations were excellent (though too short).

(Abstracts, and some of the presentations, are online: select the
Programme link on the 'agenda' page. Material still under review is
unfortunately password-protected, and not all the presentations are
there yet.)

(We learnt that the largest living organism on earth is a fungus.)

Jackie Chappell and Aaron Sloman were asked to give the first two
talks, on understanding causation (Kantian and Humean) and on the
altricial/precocial distinction (better: nature/nurture trade-offs).

Our contributions, expanded after the workshop, including an
unfinished post-workshop analysis of varieties of causal competence,
are here (all PDF):

    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac/

[Quite a lot of people not at that meeting are doing related work on
learning affordances, e.g. Peter Gorniak (MIT), the Stanford
'grasping' group, Yiannis Demiris, Manuela Viezzer, .... But
all such systems are still far behind parrots, or human toddlers.]

================
o The euCognition meeting on architectures, 29th June 2007, is also
relevant:

    http://www.eucognition.org/six_monthly_meeting_3.htm
    Cognitive Architectures

Six of the presenters had been involved in the AISB'06 meeting we
organised.

================

o A conference with a strong philosophical component was held in
Bristol, 1-3 July 2007 (mixed funding, I think):

    http://www.bris.ac.uk/philosophy/department/events/PAC_conference/index.html/Conference.htm
    Perception, Action and Consciousness:
    Sensorimotor dynamics and dual vision

The list of speakers and talks spanning philosophy, psychology,
neuroscience and AI is here:

    http://www.bris.ac.uk/philosophy/department/events/PAC_conference/index.html/Conference.htm/Conference_Programme.pdf

This conference attempted to bring together

 A: people (e.g. Milner and Goodale) who thought there were two
    main visual streams (ventral and dorsal) only one of which is
    associated with visual consciousness, though the other is
    involved in control of action (a distinction that has been very
    confusingly/inaccurately described in terms of 'what vs where',
    and 'what vs how', misleading hordes of people who don't read
    the originals and/or don't think hard about what could actually
    work.)

 B: people (e.g. O'Regan and Noë) who argue that perceptual
    consciousness is very closely related to sensorimotor
    contingencies, and therefore the action subsystems (dorsal) must
    be involved.

Plus a bunch of others commenting on the disagreement and related
issues.

A poster presented by A.Sloman, modified after the meeting, is here:

    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#pac07

Nobody addressed the special problems of perceiving 3-D structures
and of understanding the causal relations between shape, spatial
relations, materials, the forces that could be applied, and the
resulting physical changes. Consequently their theories did not have
the generality they were aiming for.

Kevin O'Regan presented his latest theory of colour, developed with
Philipona, framed as a theory about sensorimotor contingencies, i.e.
how the responses of the colour receptors change for a given
illumination profile according to how the illuminated surface is
rotated or moved in space.

It soon became clear that he needed to distinguish two things:

 a. colours of surfaces as objective (exosomatic) properties defined
    not by sensory/motor relationships, but by illumination/
    reflectance relationships (i.e. relations between inputs and
    outputs of the *surface*, not the organism)

 b. the ability to learn how to identify the properties of such
    surfaces by manipulating them in various ways in various
    illumination conditions. I.e. what we perceive is something out
    there, and the sensorimotor relationships are just part of a
    measurement device.

    (Likewise electrical resistance is (roughly) a relation between
    voltage and current. How you measure or apply voltages or
    currents is a separate issue, and one that can change over time.)

That is a very important general distinction, which has been in the
background of many of our discussions. It can now be made clear and
explicit.
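
The resistance analogy can be spelt out in a few lines of code. This
is only a toy illustration, with invented class and function names
and made-up numbers; nothing like it was presented at the meeting.
The point is simply that the objective property (type (a)) and the
procedures used to estimate it (type (b)) are distinct, and the
latter can change without changing the former.

    # Toy illustration of the (a)/(b) distinction, using the resistance analogy.
    # (a) An objective property of the thing itself: resistance is a relation
    #     between the voltage across a device and the current through it,
    #     R = V / I, regardless of who measures it or how.
    # (b) Measurement procedures an agent can use to estimate that property;
    #     these can change over time while the property stays the same.

    class IdealResistor:
        """Stand-in for the physical object; 1 kilo-ohm in this toy example."""
        def __init__(self, ohms=1000.0):
            self.ohms = ohms
        def current_when(self, volts):    # the device's own input/output relation
            return volts / self.ohms
        def voltage_when(self, amps):
            return amps * self.ohms

    def resistance(volts, amps):
        """Type (a): relation between the device's own inputs and outputs."""
        return volts / amps

    # Two different type (b) procedures for estimating the same type (a) property.
    def measure_by_applying_voltage(device, v_applied=5.0):
        i_observed = device.current_when(v_applied)
        return resistance(v_applied, i_observed)

    def measure_by_driving_current(device, i_applied=0.01):
        v_observed = device.voltage_when(i_applied)
        return resistance(v_observed, i_applied)

    r = IdealResistor()
    # Both procedures converge on the same objective value (1000.0 ohms):
    print(measure_by_applying_voltage(r), measure_by_driving_current(r))

The analogous move for colour is to treat the surface's illumination/
reflectance relation as the type (a) property, and the organism's
sensorimotor routines for probing it (rotating the surface, changing
the lighting) as type (b) measurement procedures.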

Many of the properties of shape, material, and spatial relations
will be of type (a), whereas the robot or animal has to use
information of type (b) to drive the process of generating and
testing explanatory theories and extending the ontology.

The sensorimotor model probably fits parrot competences only for
fine-grained posture control and low-level visual control of flying,
but all their manipulation of 3-D objects uses properties,
relationships and causal interactions between objects independently
of how they are sensed and altered. The sensorimotor contingencies
relevant to particular configurations may need to be learnt at first
but may later be derived from an understanding of geometry, motion,
etc.

================

o The CoSy project organised a MeetingOfMinds workshop in
Paris, 16-18 September 2007 (unfortunately a closed meeting).

Details of the invited speakers (from psychology and biology), the
CoSy speakers, and other information can be found here:
    http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/

Papers, presentations and other materials provided after the event
can be found here:

    http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/latest.html

================
2008
================

(To be added.)

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham