Peterhouse College, Trumpington Street, Cambridge, CB2 1RD
Workshop Programme for SGAI 2015
Registration and charges
SGAI-Workshop Stream 1 : Tuesday 15th December
Afternoon (15.15-16.45 and 17.00-18.30 Upper Hall)
School of Computer Science, University of Birmingham
[NASA artist's impression of a protoplanetary disk, from WikiMedia]
Despite all the successes of AI, there remain deep gaps between what AI systems can do and the competences of humans and other animals, for example nest-building birds, such as weaver birds, and squirrels that defeat "squirrel-proof" bird-feeders. Current AI language-learning systems are nothing like the young deaf children who created a new sign language, as reported in https://www.youtube.com/watch?v=pjtioIFuNf8 . Humans don't merely learn languages: they create languages, collaboratively. Without that ability there could be no human languages, since initially there were none to learn.
There are many human competences that current AI systems are not even close to replicating, for example the processes that led to the mathematical (geometrical, topological, arithmetical) discoveries known to Euclid over two thousand years ago (long before modern logical notations and theories had been thought of), and the processes by which a young human who is unable to understand any such mathematical content can develop into a mathematical student who not only understands but who can also discover theorems and proofs without being told about them. Profoundly important discoveries in geometry, topology and arithmetic leading up to Euclid's Elements must have started before there were any mathematics teachers. How? How did the first engineers manage without teachers?
Modern AI theorem provers based on developments in logic since Boole, Peano,
Cantor, Frege, Russell, etc. can certainly outperform most humans but only in
finding theorems and proofs naturally expressible using logic. Deriving a
theorem in a logical axiomatisation of geometry is completely different from
making the original discoveries, including finding a way to extend Euclidean
geometry so that arbitrary angles can be trisected -- impossible with the
standard Euclidean straight-edge and compasses, but already known to
Archimedes and others:
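To illustrate the sort of extension meant here, the following is a sketch (my reconstruction, not part of the original announcement) of the well-known neusis ("marked ruler") trisection attributed to Archimedes, which becomes possible once a ruler with two marks a distance $r$ apart is added to the standard tools:

```latex
Let $\theta = \angle AOB$ be the angle at the centre $O$ of a circle of
radius $r$, with $A$ and $B$ on the circle, and extend the line $AO$
beyond $O$. Slide a marked ruler through $B$ so that it meets the circle
at $C$ and the extension of $AO$ at $D$ with $|DC| = r$ (the neusis
step). Writing $\varphi = \angle CDO$:
\begin{align*}
  |DC| = |CO| = r \;&\Rightarrow\; \angle COD = \varphi
      && \text{(isosceles $\triangle DCO$)}\\
  \angle OCB &= 2\varphi
      && \text{(exterior angle of $\triangle DCO$ at $C$)}\\
  |CO| = |OB| = r \;&\Rightarrow\; \angle OBC = 2\varphi
      && \text{(isosceles $\triangle COB$)}\\
  \theta = \angle ODB + \angle OBD &= \varphi + 2\varphi = 3\varphi
      && \text{(exterior angle of $\triangle DOB$ at $O$)}
\end{align*}
so $\angle BDA = \theta/3$: the marked ruler trisects the arbitrary
angle $\theta$.
```

The point of the example is the one made in the text: discovering and justifying such a construction is a very different activity from deriving a theorem inside a fixed logical axiomatisation.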
I am not saying that replicating these biological achievements is beyond the
scope of AI. The problem is to identify the biological competences (e.g.
visual competences, mathematical competences, bootstrapping competences) in more
detail, to help us work out what's missing in AI/Robotics, so that we can
attempt to bridge the gaps. I don't think the gaps are easy to describe.
For example, what would convince you that a robot sees these garden movies (or the original scenes) in something like the same way as you do:
Moreover, it may also be the case that our understanding of forms of computation has serious gaps.
A key hypothesis is that a major theme throughout biological evolution is production of new derived construction-kits (DCKs) all ultimately derived from the fundamental construction kit (FCK) provided by physics and chemistry.
In addition to production of new physical materials, new physical designs, and new physical behaviours, derived construction kits also provide ever more complex and varied forms of information processing.
An outline theory will be presented: concrete, abstract and hybrid
(concrete+abstract) construction kits produced by evolution and development can
help to explain the variety of types of information processing in living things,
and help to draw attention to forms of information processing (computation) that
have not yet been studied or replicated but which may play important roles in
animal intelligence. Some preliminary ideas about the main features of the
Fundamental Construction Kit provided by (Quantum) Physics and Chemistry and the
Derived/Evolved construction kits are assembled in this draft book chapter
(work in progress):
The tutorial will give an introduction to the ideas in this project and some preliminary results. One of the topics will be the inadequacy of current theories and models of visual perception, which cannot explain the role of vision in mathematical discoveries (especially topological and geometric discoveries) leading up to the monumental work by Euclid around 2500 years ago. Those discoveries seem to require abilities to use visual perception (and other forms of perception) to acquire information about possibilities for change in the environment and constraints on those possibilities (impossibilities). This generalises J.J. Gibson's theory of vision as primarily concerned with information about affordances, rather than information about structures in the environment of the sorts acquired by most current AI vision systems (Gibson, 1979).
Development and extension of Gibson's ideas seems to require forms of biological
information processing and types of information-processing architectures (with
layers of meta-cognition) that have so far not been developed in AI or Robotics.
Perception of possibilities and impossibilities is totally different from
acquisition of information about probabilities: the focus of much current
research. Moreover, perception of possibilities and impossibilities is
intimately connected with abilities to make the sorts of discoveries in geometry
and topology that allowed proofs to be constructed and communicated long before
modern logic-based ideas about proof had been developed. I suspect that those
ancient mathematical discovery mechanisms are still part of the learning about
spatial structures and processes that goes on in pre-verbal children (including
many unrecorded discoveries of "toddler-theorems") concerning things that are
possible and impossible in a child's environment:
Some examples are presented in these draft notes on perception of
possibilities and impossibilities, extending Gibson's ideas about perception:
It is hoped that some of those attending will develop an interest in this large and complex (but currently unfunded) project and help to speed up progress.
The presentation will be highly interactive, with opportunities for participants to contribute ideas as well as questions.
Anyone interested in this project, whether planning to attend the tutorial or
not, is welcome to contribute comments and links to available online documents
or presentations. Please send them to:
a.sloman @ cs.bham.ac.uk
Anyone who requires more information is welcome to write to the same address. If you plan to attend the tutorial, please feel free to send me details about yourself. Formal registration details are on the conference web site: http://www.bcs-sgai.org/ai2015/?section=registration
This work is, in part, a sequel to my 1978 book, now available in a slightly revised free re-packaged electronic edition, with a link to a draft "Afterthoughts" paper:
Aaron Sloman interviewed by Adam Ford at AGI 2013, St Anne's College Oxford.