
PAST, RECENT AND PENDING PRESENTATIONS
By Aaron Sloman
School of Computer Science
The University of Birmingham, UK.

This is http://www.cs.bham.ac.uk/research/projects/cogaff/talks/

These are presentations on topics arising in the Birmingham Cognition and Affect Project and in the Birmingham CoSy Project, including: consciousness, emotions and other affective states and processes, reasoning, evolution (trajectories in design space and niche space), information-processing, artificial intelligence, cognitive science, biology, physics, philosophy of mind, supervenience, philosophy of mathematics, epistemology, virtual machines, implementation, vision and other forms of perception (especially visual perception of affordances), architectures for intelligent systems, forms of representation, software tools for exploring architectures and designing intelligent agents, and, to some extent, neuroscience and psychology.

I found some interesting comments by Orestes Chouchoulas on my presentation style, including some speculation about my reasons for not using Microsoft software, chiding me for obstinately not using PowerPoint even though he thought it would serve my needs better than LaTeX. So I wrote a response to his comments here, explaining why I do what I do, and a few other things.

CONTENTS

Note added 25 Sep 2010: I have decided to try to add some more structure to this list. The main list is in roughly reverse chronology, but I have started to build a list of pointers to talks on particular topics. This will take some time, so some of the pointers are just stubs, for now.

There is more information, organised by topic, in my "DOINGS" list.


CONTENTS: MAJOR TOPICS (a sort of index, to be extended).

Some of these sub-headings will be revised.


CONTENTS: ROUGHLY REVERSE CHRONOLOGY

Below is a summary list of presentations in (roughly) reverse chronological order, followed by more details on each presentation, in (roughly) chronological order. The summary has links to the details.

The order is only "roughly" chronological since many of the older talks have been revised recently, and some have also been presented recently.

WARNING:
Any of my pdf slides found at any other location are likely to be out of date.
I try to keep the versions on slideshare.net up to date, but sometimes forget to
upload a new version.

Google Scholar publications list.


DOWNLOADABLE VIEWERS

The slide presentations listed below are all available in PostScript or PDF format,
or both. (Only the older versions are available in PostScript.) Viewers for both
formats are freely available on the internet. See the information in this file:
http://www.cs.bham.ac.uk/~axs/browsers.html

The diagrams in the slides were all produced using the excellent (small, fast, versatile,
portable, reliable, and free) tgif package, available for Linux and Unix systems from here:

http://bourbon.cs.umd.edu:8001/tgif/

The slides are all composed in LaTeX, using home-grown macros, importing
EPS files produced by tgif. Older versions were converted to PostScript and
then to PDF using dvips and ps2pdf. More recent versions were created directly
by pdflatex.
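For concreteness, here is a minimal sketch of the two build routes just described, written in Python since the actual build scripts are not part of this page. It assumes the standard TeX tools (latex, dvips, ps2pdf, pdflatex) are installed and on your PATH, and uses a hypothetical file stem "talk"; it illustrates the workflow, it is not one of the scripts actually used.

```python
# Sketch of the two slide-building routes described above.
# Assumptions: latex, dvips, ps2pdf and pdflatex are installed and on
# PATH; "talk" is a hypothetical file stem, not an actual slide source.
import subprocess

def build_via_postscript(stem: str = "talk") -> None:
    """Older route: .tex -> .dvi -> .ps -> .pdf"""
    subprocess.run(["latex", f"{stem}.tex"], check=True)   # makes talk.dvi
    subprocess.run(["dvips", f"{stem}.dvi"], check=True)   # makes talk.ps
    subprocess.run(["ps2pdf", f"{stem}.ps"], check=True)   # makes talk.pdf

def build_directly(stem: str = "talk") -> None:
    """Newer route: pdflatex goes straight from .tex to .pdf."""
    subprocess.run(["pdflatex", f"{stem}.tex"], check=True)

if __name__ == "__main__":
    build_directly()
```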

From about talk 5 (May 2001) I started preparing the slides in a format
better suited to a typical computer screen, which is wider than it is
tall. These need to be viewed in "Landscape" or "Seascape" mode (rotated
90 degrees to the left). Your PDF/PostScript viewer
should provide such an option.


Talk 109: ARTIFICIAL INTELLIGENCE AND PHILOSOPHY
How AI (including robotics) relates to philosophy, and in some ways improves on philosophy
Available HERE (PDF).
(Subject to change: please keep links not copies.)

Presented at:
CNCR Journal Club Meeting, Monday 7th October 2013, University of Birmingham.
Installed here: 25 Nov 2013

Abstract (To be added).


Talk 108: Why is it so hard to make human-like AI (robot) mathematicians?
Especially Euclidean geometers.
DRAFT Available HERE (PDF).
(To be revised.)

Presented at:
http://www.pt-ai.org/2013
Philosophy and Theory of Artificial Intelligence
21 Sep 2013
Installed: DRAFT PDF will be installed 21 or 22 Sep 2013
(To be revised later.)

Abstract (As originally submitted).

I originally got involved in AI many years ago, not to build new useful machines, nor to build working models to test theories in psychology or neuroscience, but with the aim of addressing philosophical disagreements between Hume and Kant about mathematical knowledge, in particular Kant's claim that mathematical knowledge is both non-empirical (a priori, but not innate) and non-trivial (synthetic, not analytic), and also concerns necessary (non-contingent) truths.

I thought a "baby robot" with innate but extendable competences could explore and learn about its environment in a manner similar to many animals, and learn the sorts of things that might have led ancient humans to discover Euclidean geometry.

The details of the mechanisms and how they relate to claims by Hume, Kant, and other philosophers of mathematics, could help us expand the space of philosophical theories in a deep new way.

Decades later, despite staggering advances in automated theorem proving concerned with logic, algebra, arithmetic, properties of computer programs, and other topics, computers still lack human abilities to think geometrically, notwithstanding advances in graphical systems used in game engines and scientific and engineering simulations. (What those do can't be done by human brains.)

I'll offer a diagnosis of the problem and suggest a way to make progress, illuminating some unobvious achievements of biological evolution.


Three closely related talks on Meta-Morphogenesis:

Expanded and reorganised versions of slides originally prepared for a tutorial presentation at the
2013 conference on AGI at St Anne's College, Oxford.
Video recording of the tutorial, made by Adam Ford:
http://www.youtube.com/watch?v=BNul52kFI74
(About 2 hrs 30 mins; audio problem fixed on 14 June 2013.)
Medium resolution version also available on the CogAff web site:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies#m-m-tut

Adam Ford also made available two related interviews recorded at the conference:

These slides are still under construction. Please do not save or send anyone copies - instead keep a link
to this location and send that if necessary. http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk107

The PDF files grew too long and may later be split into smaller pieces.

For more on the Meta-Morphogenesis project see: Abstract for the tutorial


Talk 106: Thinking Architecturally
Talk at the Architectural Thinking Workshop, Cambridge, 28-29 Nov 2012
Available HERE (PDF).

Still under construction. Please do not save copies - instead keep a link to this location.
My answers to the questionnaire circulated before the meeting: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/archthink-sloman.html


Talk 105: What is computational thinking? Who needs it?
Why? How can it be learnt? (Can it be taught?)
Available HERE (PDF).
Also on Slideshare.net (Flash)
Note: Like all my online presentations this is likely to be updated in response to comments, criticisms and afterthoughts.
So please store links rather than copies.
Alternative link to this presentation: http://tinyurl.com/BhamCoTh
Because of a projector problem I gave the talk without slides; it was recorded on video, and a link to
the YouTube version is below. The video is also linked on the Slideshare site (above).

Added: 1 Dec 2012: Video of presentation at the conference (without slides: projector not working!)

Invited talk at: ALT 2012 Conference Manchester 11 Sept 2012
Also presented at Computing at School (CAS) Conference, July 2013, University of Birmingham

Installed: 20 Sep 2012; Updated: 21 Sep 2012; 24 Jan 2013

Abstract

As an example, the presentation attempts to show that current debates about whether to use phonics or look-and-say methods for teaching reading cannot be resolved sensibly without thinking computationally about the nature of reading, learning, thinking, speaking, understanding, and how all of these depend on multi-layered information-processing architectures that are still growing in different ways while children are learning to read.
In 2012 Michael Morpurgo mounted a campaign criticising the rigid use and testing of phonics, e.g. in BBC talks http://www.bbc.co.uk/programmes/b01hxh6w
Compare: Andrew Davis
A Monstrous Regimen of Synthetic Phonics: Fantasies of Research-Based Teaching 'Methods' Versus Real Teaching
in Journal of Philosophy of Education, Vol. 46, No. 4, 2012, pp 560--573.
https://www.dur.ac.uk/education/staff/profile/?mode=pdetail&id=617&sid=617&pdetail=82425
Those criticisms would be strengthened by the use of computational thinking about processes of education and the multiple functions and mechanisms that need to be integrated in advanced reading (e.g. fast silent reading).

Note added 21 Sep 2012
Perhaps the slides should have referred to a 2007 ACM paper by Peter J. Denning (much better than his earlier work on "Great Principles"):

Computing is a Natural Science
Information processes and computation continue to be found abundantly in the deep structures of many fields. Computing is not--in fact, never was--a science only of the artificial.

COMMUNICATIONS OF THE ACM July 2007/Vol. 50, No. 7, pp 13--18.
http://cs.gmu.edu/cne/pjd/PUBS/CACMcols/cacmJul07.pdf


Talk 104: Biological, computational and robotic connections with Kant's theory
of mathematical knowledge
Available here (PDF).

Invited talk at: ECAI 2012 Turing Anniversary session, August 2012
Installed: 4 Dec 2012

Abstract

In my research I meander through various disciplines, using fragments of AI that I
regard as relevant to understanding natural and artificial intelligence, willing to
learn from anyone.

As a result, all my knowledge of work in particular sub-fields of AI is very patchy,
and rarely up to date. This makes me unfit to write the history of European
collaboration on some area of AI research as originally intended for this panel session.

However, by interpreting the topic rather loosely, I can (with permission from the
event organisers) regard some European philosophers who were interested in philosophy
of mathematics, such as Kant and Frege, as early AI researchers from whom I learnt much.
Hume's work is also relevant.

Moreover, more recent work by the neuro-developmental psychologist Annette Karmiloff-Smith,
begun in Geneva with Piaget and then developed independently, helps to identify important
challenges for AI (and theoretical neuroscience) that also connect with philosophy
of mathematics and with the future of AI and robotics, rather than the history.

I'll present an idiosyncratic, personal, survey of a subset of AI stretching back in
time, and deep into other disciplines, including philosophy, psychology and biology,
and possibly also deep into the future, linked by problems of explaining human
mathematical competences. The unavoidable risk is that someone in AI has done very
relevant work on mathematical discovery and reasoning, of which I am unaware.

I'll be happy to be informed, and will extend these slides if appropriate.

See online paper http://tinyurl.com/BhamCog/12.html#1205

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
Theorems About Triangles, and Implications for Biological Evolution and AI
The Median Stretch, Side Stretch, Triangle Sum, and Triangle Area Theorems


Talk 103: Meta-morphogenesis and the Creativity of Evolution
Mechanisms for bootstrapping biological minds
Available HERE (PDF).

Presented at:
ECAI 2012 Workshop on Computational Creativity, Concept Invention, and General Intelligence
Montpellier, 27th August 2012
http://www.cogsci.uni-osnabrueck.de/~c3gi
Workshop proceedings: http://www2.lirmm.fr/ecai2012/images/stories/ecai_doc/pdf/workshop/W40_c3gi_pre-proceedings_20120803.pdf
Installed: 28 Aug 2012

Abstract

Whether the mechanisms proposed by Darwin and others suffice to explain all details of the achievements of biological evolution remains open. Variation in heritable features can occur spontaneously, and Darwinian natural selection can explain why some new variants survive longer than others. But that does not satisfy Darwin's critics and also worries supporters who understand combinatorial search spaces.

One problem is the difficulty of knowing exactly what needs to be explained: most research has focused on evolution of physical form, and physical competences and behaviours, in part because those are observable features of organisms. What is much harder to observe is evolution of information-processing capabilities and supporting mechanisms (architectures, forms of representation, algorithms, etc.). Information-processing in organisms is mostly invisible, in part because it goes on inside the organism, and in part because it often has abstract forms whose physical manifestations do not enable us to identify the abstractions easily. Compare the difficulty of inferring thoughts, percepts or motives from brain measurements, or decompiling computer instruction traces.

Moreover, we may not yet have the concepts required for looking at or thinking about the right things: we may need more than the vast expansion of our conceptual tools for thinking about information processing capabilities and mechanisms in the last half century.

However, while continually learning what to look for, we can collaborate in attempting to identify the many important transitions in information processing capabilities, ontologies, forms of representation, mechanisms and architectures that have occurred on various time-scales in biological evolution, in individual development (epigenesis) and in social/cultural evolution -- including processes that can modify later forms of evolution and development: meta-morphogenesis.

Conjecture: The cumulative effects of successive phases of meta-morphogenesis produce enormous diversity among living information processors, explaining how evolution came to be the most creative process on the planet. Progress in AI depends on understanding the products of this process.

Latest version of the workshop paper: http://www.cs.bham.ac.uk/research/projects/cogaff/12.html#1203


Talk 102: Meta-Morphogenesis: of virtual machinery with "physically indefinable" functions
Draft available here (PDF)
Overlaps with Talk 101.

Installed: 16 Jun 2012
Updated: 1 Jul 2012

Abstract

Online Abstract
This is the latest version of the presentation given at the Workshop "The Incomputable":
http://www.mathcomp.leeds.ac.uk/turing2012/inc/
Royal Society Kavli Centre, Chicheley: 11-15 June 2012
Abstract for talk.

Talk 101: Meta-Morphogenesis: Evolution of mechanisms for producing minds
OR
Evolution, development and learning, producing new mechanisms of evolution, development and learning.

Available HERE (PDF).

Invited talk at:
Cambridge University Computing and Technology Society (www.cucats.org), Tuesday 8th May 2012
NB: The pdf slides are still being expanded/updated. Criticisms and suggestions welcome.
Installed: 14 May 2012

Abstract

See the abstract posted at:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cucats-abstract.html

See also


Talk 100: Architectures for more or less intelligent life
How to turn philosophers of mind into engineers -- to help them solve old philosophical problems
Available HERE (PDF).

Guest lecture:
for Philosophy of Cognitive Science Students, Birmingham Feb 2012.
Installed/revised: 21 Apr 2012 (NB: Still being revised, likely to change)

Abstract

We can integrate philosophy of mind with other fields, and turn vague insoluble problems into problems about what sorts of information processing architectures make different sorts of minds possible, including minds that grow and change their architectures. By considering different evolutionary and developmental trajectories in different species and in different sorts of future machines and robots we can understand each case much better, including understanding what human minds are, and how they grow and change. It's important not only to consider different sorts of minds, but also whole architectures with different sorts of components performing different functions, since the nature of each function depends on the others it interacts with.


Talk 99: Meta-Morphogenesis - An introductory overview
Two talks in Birmingham October 2011, and elsewhere in 2012
Preliminary notes available HERE (HTML).

Talks at:
University of Birmingham, Language and Cognition, 21st October 2011
and School of Computer Science 31st October 2011.
Also: variants presented at: University of Aberystwyth; Royal Society meeting on Animal Minds, Chicheley Hall; EuCognition Meeting, Oxford; University of Nottingham.

Abstract

All the presentations were informal and based on (different) portions of three web sites. Slides will be added here later. See also the slides on toddler theorems below.


Talk 98: The deep, barely noticed, consequences of embodiment.
(Ignored by most embodiment theorists)
Extended Abstract HERE (HTML)

Invited talk for the PT-AI Conference (Philosophy and Theory of Artificial Intelligence), Thessaloniki, 3-4 October 2011
http://www.pt-ai.org/


Talk 97: How to combine science and engineering to solve philosophical problems
Based on Notes for AAAI Tutorial.

Invited talk at:
"Barcelona Cognition, Brain and Technology summer school - BCBT" http://bcbt.upf.edu/bcbt11 September 7, 2011

Abstract

I first learnt about AI in 1969 when I was a lecturer in philosophy, and soon became convinced that the best way to make progress in solving a range of philosophical problems (e.g. in philosophy of mathematics, philosophy of mind, philosophy of language, philosophy of science, epistemology, philosophy of emotions, and some parts of metaphysics) was to produce and analyse designs for successively larger working fragments of minds. I think that project can be enhanced by using it to pose new questions about transitions in the evolution of biological information-processing systems. I shall try to explain these relationships between AI, biology and philosophy and show how they can yield major new insights, while also inspiring important (and difficult) new research. I hope to make the presentation interactive.

I shall post relevant reading matter on the web site being prepared for a closely related tutorial in August at AAAI, here: http://www.cs.bham.ac.uk/research/projects/cogaff/aaaitutorial/


Talk 96: Philosophy as AI and AI as philosophy [STILL BEING EXTENDED]
PDF file

Slides for tutorial presented at AAAI-2011, 8 Aug 2011
(Still being reorganised and expanded - please do not store copies anywhere else.).
The tutorial overview is available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/aaaitutorial/

A subset of these slides will be used for a talk at
The Barcelona Cognition, Brain and Technology summer school BCBT2011
"How to combine science and engineering to solve philosophical problems"


Talk 95: Evolution of mind as a feat of computer systems engineering
Lessons from decades of development of virtual machinery, including self-monitoring virtual machinery.
Available here (PDF).

Invited talk at SPS Workshop on Philosophy of Artificial Intelligence
Nancy, 19 July 2011
DRAFT -- Liable to change. Comments welcome.
Installed 19 Jul 2011: Part of this version was presented at the workshop. Further revisions are likely.
The conference paper is here
See also


Talk 94: Varieties of Self-Awareness and Their Uses in Natural and Artificial Systems
Metacognition and natural cognition
Towards a conceptual framework
Available here (PDF).

Talk at EPICS workshop, Birmingham, 2011.
DRAFT -- Liable to change. Comments welcome.
Update 19 Jul 2011: Part of this version was presented at the workshop. Further revisions are likely.
Also posted on slideshare
by the AWARE EU project http://www.aware-project.eu/
See also


Talk 93: What's vision for, and how does it work?
From Marr (and earlier)
to Gibson and Beyond
(With some potted, rearranged, history)
Available here (PDF)
Also on Slideshare.net here (FLASH)

Presented at Birmingham Vision Club (School of Psychology), 17th June 2011, and a Vision/Robotics workshop (Sheffield University Psychology dept.) 23rd June 2011.
DRAFT -- Liable to change. Comments welcome.

Abstract

Very many researchers assume that it is obvious what vision (e.g. in humans) is for, i.e. what functions it has, leaving only the problem of explaining how those functions are fulfilled.

So they postulate mechanisms and try to show how those mechanisms can produce the required effects, and also, in some cases, try to show that those postulated mechanisms exist in humans and other animals and perform the postulated functions.

The main point of this presentation is that it is far from obvious what vision is for - and J.J. Gibson's main achievement was drawing attention to some of the functions that other researchers had ignored.

I'll present some of the other work, show how Gibson extends and improves on it, and then point out how much more there is to the functions of vision and other forms of perception than even Gibson had noticed.

In particular, much vision research, unlike Gibson's, ignores vision's function in on-line control and perception of continuous processes; and nearly all, including Gibson's work, ignores meta-cognitive perception, perception of possibilities and constraints on possibilities, and the associated role of vision in reasoning.

If we don't understand that, we cannot understand how biological mechanisms, arising from the requirements of being embodied in a rich, complex and changing 3-D environment, underpin human mathematical capabilities, including the ability to reason about topology and Euclidean geometry.
See discussions of "Toddler theorems" below.


Talk 93a: What's vision for?
Modified version of Talk 93.
Available here (PDF)

Guest Lecture for Birmingham UG students. 31 Jan 2012.
Part of Philosophy of Cognitive Science Lectures.

Abstract

To Be Added


Talk 92: Computing: The Science of Nearly Everything. Including Biology!
Talk for CAS "TeachShare" presentation, 8 Jun 2011
Available here (PDF).
In Flash format, on slideshare.net.

Prepared for online presentation for Computing At School (CAS)
(Using the Elluminate conferencing tool.)

Background notes: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/teach-share.html

Recording of the presentation: http://bit.ly/iVhp0i
Requires JavaWebStart (javaws). Set up and test your system here first: http://www.elluminate.com/support


Talk 91: LIFE and INFORMATION
Self-modifying information-processing architectures
Available here (PDF).

Talk for 2nd Year CS students, Birmingham, 28 Jan 2011

Abstract

I contrast the evolution of physical forms and observable behaviours with the evolution of types of information processing in organisms of various kinds.
This could be described as "invisible evolution": hard to identify but essential to understand if we want to understand the achievements of biological evolution and the nature of what it produced -- including ourselves.


Talk 90: Piaget (and collaborators) on Possibility and Necessity
And the relevance of/to AI/Robotics/mathematics (in biological evolution and development)
Presented 21 Feb 2011, Birmingham, 28th March Dagstuhl, 6th April Oxford.
Version presented at Dagstuhl workshop 28th March 2011, and Oxford CIAO/Automatheo workshop 6th April (after revision).
Available HERE, (PDF).
Available HERE, in 2-UP PDF Format.

Previous version presented Feb 2011 available HERE, (PDF).
Available HERE, in 2-UP PDF Format.

Presented at University of Birmingham 21 Feb 2011 (Computer Science and Developmental Psychology).

Videos relevant to the talk: http://www.cs.bham.ac.uk/research/projects/cogaff/movies/vid/

Note added 7 Mar 2011

Since the talk I have been looking at (among other things) Annette Karmiloff-Smith's work on Representational Redescription, in her 1992 book Beyond Modularity.
There is much overlap in our ideas, which I am attempting to document here.

I have also expanded the slides to include a rational reconstruction of what I think Piaget was studying, expressed in terms of the concept of an "Exploration Domain" (close to "micro-worlds" in AI and "microdomains" in Karmiloff-Smith).

Humans and some other animals seem to have the ability first to learn patterns of phenomena in an exploration domain, then, in some cases, to reorganise (unwittingly) the empirical information into something like a deductive system in which previous patterns (sometimes corrected) become either "examples" or "theorems". (Hence Possibility and Necessity.)

This process will depend both on features of the environment and on mechanisms produced by evolution to help animals cope with various sorts of environment. Some of the features of this process:
  • different exploration domains are explored and learnt about in parallel;
  • sometimes domains can be combined to form more complex domains;
  • many, though not all, domains are closely related to the structure of space, time and matter;
  • most animals do not have the metacognitive ability to learn to make their own learning an exploration domain, though humans do.
Well known transitions in human language learning seem to be based on late evolutionary developments of the above mechanisms in humans. The processes of reorganisation depend on architectural growth, sometimes combined with use of new special-purpose forms of representation.

The processes of construction of deductive revisions, and the processes of deployment of the new systems, are sometimes buggy (as is mathematical theorem proving; see Lakatos, Proofs and Refutations). Also, for each learner the trajectory (development + learning) may be unique, depending on genetic, social and physical environmental opportunities.

There seem to be deep implications for biology, developmental psychology, neuroscience, comparative cognitive science, education, AI/Robotics and philosophy (e.g. epistemology, philosophy of language and philosophy of mathematics).

Results of the human genome project cannot be understood until much more is known about what the genome (or genomes) contributes to these processes and how.

Original Abstract
It is not widely known that, shortly before he died, Jean Piaget and his collaborators produced a pair of books on Possibility and Necessity, exploring questions about how two linked sets of abilities develop:
(a) The ability to think about how things might be, or might have been, different from the way they are.
(b) The ability to notice limitations on possibilities, i.e. what is necessary or impossible.

I believe Piaget had deep insights into important problems for cognitive science that have largely gone unnoticed, and are also important for research on intelligent robotics, or more generally Artificial Intelligence (AI), as well as for studies of animal cognition and how various animal competences evolved and develop.
The topics are also relevant to understanding biological precursors to human mathematical competences and to resolving debates in philosophy of mathematics, e.g. between those who regard mathematical knowledge as purely analytic, or logical, and those who, like Immanuel Kant, regard it as being synthetic, i.e. saying something about reality, despite expressing necessary truths that cannot be established purely empirically, even though they may be initially discovered empirically (as happens in children).

It is not possible in one seminar to summarise either book, but I shall try to present an overview of some of the key themes and will discuss some of the experiments intended to probe concepts and competences relevant to understanding necessary connections.

In particular, I hope to explain:
(a) The relevance of Piaget's work to the problems of designing intelligent machines that learn the things humans learn. (Most researchers in both Developmental Psychology and AI/Robotics have failed to notice, or have ignored, most of the problems Piaget identified.)
(b) How a deep understanding of AI, and especially the variety of problems and techniques involved in producing machines that can learn and think about the problems Piaget explored, could have helped Piaget describe and study those problems with more clarity and depth, especially regarding the forms of representation required, the ontologies required, the information processing mechanisms required, and the information processing architectures that can combine those mechanisms in a working system -- especially architectures that grow themselves.

That kind of computational or "design-based" understanding of the problems can lead to deeper clearer specifications of what it is that children are failing to grasp at various stages in the first decade of life, and what sorts of transitions can occur during the learning. I believe the problems, and the explanations, are far more complex than even Piaget thought. The potential connection between his work and AI was appreciated by Piaget himself only very shortly before he died.

One of the key ideas implicit in Piaget's work (and perhaps explicit in something I have not read) is that the learnable environment can be decomposed into explorable domains of competence that are first investigated by finding useful, reusable patterns, describing various fragments.

Then eventually a large scale reorganisation is triggered (per domain) which turns the information about the domain into a more economical and more powerful generative system that subsumes most of the learnt patterns and, through use of compositional semantics in the internal representation, allows coping with much novelty -- going far beyond what was learnt.

(I think this is the original source of human mathematical competences.)

Language learning seems to use a modified, specialised version of this more general (but not totally general) mechanism, but the linguistic mechanisms are both a later product of evolution and turned on later in young humans than the more general domain-learning mechanisms. The linguistic mechanisms also require (at a later stage) specialised mechanisms for learning, storing and using lots of exceptions to the induced general rules (the syntactic and semantic rules).

The language learning builds on prior learning of a variety of explorable domains, providing semantic content to be expressed in language. Without that prior development, language learning must be very shallow and fragmentary -- almost useless.

When two or more domains of exploration have been learnt they may be combinable, if their contents both refer to things and processes in space-time. Space-time is the great bed in which many things can lie together and produce novelty.

I think Piaget was trying to say something like this but did not have the right concepts, though his experiments remain instructive.

Producing working demonstrations of these ideas in a functional robot able to manipulate things as a child does will require major advances in AI, though there may already be more work of this type than I am aware of.

See also http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1101
Evolved Cognition and Artificial Cognition: Some Genetic/Epigenetic Trade-offs for Organisms and Robots


Talk 89: Genomes for self-constructing, self-modifying information-processing architectures
Available HERE (PDF)
(NB: Work in progress. Liable to change.)

Slides for invited talk at SGAI 2010 Workshop on Bio-inspired and Bio-Plausible Cognitive Robotics

Pre-Workshop Abstract


Talk 88: A Multi-picture Challenge for Theories of Vision
Including a sketch of a specification for dynamical systems required.
Available HERE (PDF).
Also on Slideshare.net here, in Flash format:
http://www.slideshare.net/asloman/a-multipicture

Originally presented at a BBSRC workshop and 'vision club' meetings in Birmingham, 2007.
Moved here 2008.

Abstract

One of the amazing facts about human vision is how fast a normal adult visual system can respond to a complex optic array with rich 2-D structure representing complex 3-D structures and processes, e.g. turning a corner in a large and unfamiliar town.

This has implications for the mechanisms required, which I try to spell out. See also:

Aaron Sloman,
Architectural and Representational Requirements for Seeing Processes, Proto-affordances and Affordances,
In Logic and Probability for Scene Interpretation, Eds. Anthony G. Cohn, David C. Hogg, Ralf Moeller and Bernd Neumann,
Dagstuhl Seminar 08091 Proceedings, 2008, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany,
http://drops.dagstuhl.de/opus/volltexte/2008/1656


Talk 87: What does AI have to do with Biology?
Talk for first year Introduction to AI students, 9th Nov 2010
School of Computer Science, University of Birmingham

Available HERE (PDF)

Added: 9 Nov 2010 --- Modified: 11 Nov 2010; 19 Nov 2010

The talk was presented on 9th Nov, showing these slides and some videos. I may later extend the slides. Suggestions welcome.

Abstract:

A task for AI is to work with biologists, not just learning from them, but also providing them with new AI-informed concepts, formalisms, questions, suggestions for experiments, theories, and working explanatory models.

The videos shown in the lecture, and a few more, are available here.

See also:


Talk 86: Supervenience and Causation in Virtual Machinery
sloman-virtuality-causation.pdf

Added: 30 Sep 2010; Modified 15 Oct 2010; 22 Nov 2010 29 Nov 2010; 5 Dec 2010 (Ongoing)

Related presentations:

Abstract

This unfinished, still somewhat disorganised, draft attempts to explain what running virtual machines are, in terms of kinds of dynamical system whose behaviours and competences are not best described in terms of physics and chemistry, even though they have to be fully implemented in physical mechanisms in order to exist and operate. It attempts to explain, in more detail than my earlier papers, how "sideways causation" and "downward causation" can occur in running virtual machines, i.e. how non-physical things can causally influence one another and also influence physical events and processes -- without any magic, mysticism, quantum mechanics, etc.: just sufficiently tangled webs of true counterfactual conditionals supported by sophisticated machinery designed or evolved for that purpose.

Two notions of real existence are proposed (a) being able to cause or be caused by other things (existence in our world) and (b) being an abstraction that is part of a system of constraints and implications (mathematical existence). Some truths about causal connections between things with the first kind of existence can be closely related to mathematical connections between things of the second kind. (I think that's roughly Immanuel Kant's view of causation, in opposition to Hume.)

Some of the problems are concerned with concurrent interacting subsystems within a virtual machine, including co-operation, conflict, self-monitoring, and self-modulation. The patterns of causation involving interacting information are not well understood. Existing computer models seem to be far too simple to model things like conflicting tastes, principles, hopes, fears, ...
In particular, opposing physical forces and other well-understood interacting physical mechanisms are very different from these interactions in mental machinery, even though the latter are fully implemented in physical machinery. This is likely to be "work in progress for some time to come."

This presentation is intended to provide background supporting material for other presentations and papers on virtual machinery, consciousness, qualia, introspection and the evolution of mind, including Talk 84 below explaining how Darwin could have answered some of his critics regarding evolution of mind and consciousness.

I don't believe the ideas are clear enough yet.


Talk 85: Daniel Dennett on Virtual Machines
Available HERE (PDF).

Installed: 20 Sep 2010; Revised 20 Nov 2010; 29 Nov 2010;

Related presentations:

Abstract

This is one of a collection of presentations regarding virtual machines and their causal powers. See also
Talk 86, Talk 84, Talk 71, and some older talks on supervenience and virtual machinery.

This presentation provides a few notes on Dennett's views on virtual machines, extracted from Talk73 with some revisions, including a criticism of what he says about "centres of narrative gravity" and "centres of gravity" and "point masses", e.g. in his paper "Real Patterns".

His occasional reluctance to be a realist about virtual machinery, and his reluctance to be a realist about mental states and processes (as opposed to being willing to adopt "the intentional stance"), were both attributed in an early version of this presentation to a failure to understand the significance of the explanatory power of virtual machines and their causal powers in computing systems, as discussed in various presentations listed here. However, his recent publication doi:10.1101/sqb.2009.74.008 (The Cultural Evolution of Words and Other Thinking Tools) is unequivocal about the importance and reality of VMs.


Talk 84: Using virtual machinery to bridge the "explanatory gap"
Or: Helping Darwin: How to Think About Evolution of Consciousness
Or: How could evolution (or anything else) get ghosts into machines?
Revised slides available HERE (PDF).

Slides prepared for the SAB2010 presentation HERE (PDF)
(Out of date but may be useful with the video.)
Videos of talks at SAB2010

Installed: 23 Nov 2010.
Last Updated: 23 Nov 2010; 10 Dec 2010

Invited talk at:

SAB2010
11th International Conference on Simulation of Adaptive Behaviour
Paris, 29 August 2010 (Presented at Clos-Lucé, Amboise)
The published paper.

Also presented 10th Sept at Conference on "Nature and Human Nature" (Consciousness and Experiential Psychology) Oxford:
http://www.bps.org.uk/conex/events/cep_2010.cfm

Related presentations:

Abstract

Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes -- an old, and still surviving, philosophical problem.

We can now show in principle how evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by producing systems with increasingly abstract, but effective, mechanisms, including self-observation capabilities, implemented in virtual machinery.

It is suggested that these capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machines which, in turn, are implemented in lower level physical mechanisms. For this, evolution would have had to produce far more complex virtual machines than human engineers have so far managed, but the key idea of switching information processing to a higher level of abstraction, might be the same.

However, it is not yet clear whether the biological virtual machines could have been implemented in the kind of discrete technology used in computers as we know them. These ideas were not available to Darwin and his contemporaries because most of the concepts, and the technology, involved in the creation and use of sophisticated virtual machines have only been developed in the last half century, as a by-product of a large number of design decisions by hardware and software engineers.

Note: Some of the ideas about evolutionary pressures from the environment are summarised briefly in a commentary on a 'target article' by Margaret Boden Can computer models help us to understand human creativity?
My commentary is at the end of the above web page, and also copied here.

Note: This is related to hard unsolved philosophical problems about the concept of causation.


Talk 83: Routes from Genome to Architecture (provisional title)
(PDF presentation on how to do research in this area in preparation)

Some ideas about this are presented here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/gentoa
And in Talk 89.


Talk 82: Steps Towards a 21st Century University:
Planting Seeds ... for a unified science of information
Available here (PDF).

Presented at:
Research Awayday, 21st July 2010. Winterbourne, University of Birmingham.

Followed up by a meeting (or series of meetings) to discuss the question

How can a genome specify an information-processing architecture that grows itself guided by interaction with the environment?
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/genome-architecture-project.html

Talk 81: The Design-Based Approach to the Study of Mind
(in humans, other animals, and machines) Including the Study of Behaviour involving Mental Processes
Available HERE (PDF).

Presented at:
Symposium on AI-Inspired Biology at AISB'2010 convention, 31st March--1st April 2010.

The proceedings paper is here.

Abstract (from paper in proceedings):

There is much work in AI that is inspired by natural intelligence, whether in humans, other animals or evolutionary processes. In most of that work the main aim is to solve some practical problem, whether the design of useful robots, planning/scheduling systems, natural language interfaces, medical diagnosis systems or others.

Since the beginning of AI there has also been an interest in the scientific study of intelligence, including general principles relevant to the design of machines with various sorts of intelligence, whether biologically inspired or not. The first explicit champion of that approach to AI was John McCarthy, though many others have contributed, explicitly or implicitly, including Alan Turing, Herbert Simon, Marvin Minsky, Ada Lovelace a century earlier, and others.

A third kind of interest in AI, which is at least as old, and arguably older, is concerned with attempting to search for explanations of how biological systems work, including humans, where the explanations are sufficiently deep and detailed to be capable of inspiring working designs. That design-based attempt to understand natural intelligence, in part by analysing requirements for replicating it, is partly like and partly unlike the older mathematics-based attempt to understand physical phenomena, insofar as there is no requirement for an adequate mathematical model to be capable of replicating the phenomena to be explained: Newton's equations did not produce a new solar system, though they helped to explain and predict observed behaviours in the old one.

This paper attempts to explain some of the main features of the design-based approach to understanding natural intelligence, many of them already well known, though not all.

The design based approach makes heavy use of what we have learnt about computation since Ada Lovelace. But it should not be restricted to forms of computation that we already understand and which can be implemented on modern computers. We need an open mind as to what sorts of information-processing systems can exist and which varieties were produced by biological evolution.


(Provisional version of Talk for SAB2010)

Talk 80: Helping Darwin:
How to Think About Evolution of Consciousness

Or "How could evolution get ghosts into machines?"

Related presentations:

Available HERE (PDF). (Superseded by related talks)

Also on my 'slideshare.net' web site

Preview of invited talk to be presented at Le Clos Lucé, Amboise, France at SAB2010 in August 2010.
(Still being revised. Final version of presentation to go in Talk 84.)

Conference paper is here.

Presented at School of BioSciences Seminar, UoB, 11th May 2010

Abstract

Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes -- an old, and still surviving, philosophical problem.

We can now show in principle how evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by using systems with increasingly abstract mechanisms based on virtual machines.

It is suggested that these capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machines which, in turn, are implemented in lower level physical mechanisms. For this, evolution would have had to produce far more complex virtual machines than human engineers have so far managed, but the key idea might be the same.

However, it's not yet clear whether the biological virtual machines could have been implemented in the kind of discrete technology used in computers as we know them. These ideas were not available to Darwin and his contemporaries because most of the concepts, and the technology, involved in the creation and use of sophisticated virtual machines have only been developed in the last half century, as a by-product of a large number of design decisions by hardware and software engineers.

Note: Some of the ideas about evolutionary pressures from the environment are summarised briefly in a commentary on a 'target article' by Margaret Boden Can computer models help us to understand human creativity?
My commentary is at the end of the above web page, and also copied here.

Note: This is related to hard unsolved philosophical problems about the concept of causation.

See also:


Talk 79: If learning maths requires a teacher, where did the first teachers come from?
Why (and how) did biological evolution produce mathematicians?
Available HERE (PDF).
Also on Slideshare.net (Flash)

Presented at Symposium on Mathematical Practice and Cognition, AISB 2010 Convention, Leicester, March 29-30 2010.
http://homepages.inf.ed.ac.uk/apease/aisb10/programme.html

Proceedings paper available here.

Abstract

This is the latest progress report on a long term quest to defend Kant's philosophy of mathematics. In humans, and other species with competences that evolved to support interactions with a complex, varied and changing 3-D world, some competences go beyond discovered correlations linking sensory and motor signals. Dealing with novel situations or problems requires abilities to work out what can, cannot, or must happen in the environment, under certain conditions. I conjecture that in humans these products of evolution form the basis of mathematical competences. Mathematics grows out of the ability to use, reflect on, characterise, and systematise both the discoveries that arise from such competences and the competences themselves. So a "baby" human-like robot, with similar initial competences and meta-competences, could also develop mathematical knowledge and understanding, acquiring what Kant called synthetic, non-empirical knowledge. I attempt to characterise the design task and some ways of making progress, in part by analysing transitions in child or animal intelligence from empirical learning to being able to "work things out". This may turn out to include a very general phenomenon involved in so-called "U-shaped" learning, including the language learning that evolved later. Current techniques in AI/Robotics are nowhere near this. A long term collaborative project investigating the evolution and development of such competences may contribute to robot design, to developmental psychology, to mathematics education and to philosophy of mathematics. There is still much to do.

Slightly revised version of parts of previous presentations on closely related topics. See below.


Talk 78: Computing: The Science of Nearly Everything. (PDF)
I.e. not just:
  • useful skills of various kinds,
  • useful and/or entertaining applications,
  • formal properties of computations
  • hardware/software engineering.

But also:

  • Powerful and deep new concepts and models
  • able to illuminate many other disciplines,
  • including studies of mind and life
    (partly by raising questions never asked before).

Early draft presented at Computing At School (CAS) meeting, Microsoft Research, Cambridge, 27-28 April 2010.
Revised for the 2010 CAS Teacher Conference, Birmingham, 9th July 2010

Poster: Computing The Science of Nearly Everything (PDF).    (PPT - using OpenOffice).

NOTE:
Some examples of relatively unconventional kinds of programming (including "thinky" programming) that could be explored by young learners are presented here (a small illustrative sketch follows below):
http://www.cs.bham.ac.uk/research/projects/poplog/examples
See also Talk 87 above: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk87
Talk 87: What does AI have to do with Biology?
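The linked examples are written in Pop-11. Purely as an illustration of what "thinky" (non-numerical) programming means, here is a tiny Python sketch in the spirit of the Eliza-style chatbot demos often used with beginners; the rules are invented for this sketch, not taken from the Poplog teaching materials.

```python
# A tiny taste of "thinky" programming: manipulating words and
# patterns rather than numbers. The rules below are invented for
# this sketch, loosely in the spirit of classic Eliza-style demos.
import re

RULES = [
    (r"i want (.*)", "Why do you want {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"(.*) because (.*)", "Is that the only reason?"),
]

def respond(sentence: str) -> str:
    s = sentence.lower().strip(" .!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, s)
        if m:
            return template.format(*m.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("I want a robot tutor"))  # Why do you want a robot tutor?
    print(respond("I feel puzzled"))        # What makes you feel puzzled?
```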

NOTE: Alan Bundy organised a seminar series at the University of Edinburgh, on a related theme, between 2006 and 2009. Details:
http://www.inf.ed.ac.uk/research/programmes/comp-think/previous.html

Abstract
Topics for possible discussion within CAS
1. Do we want to broaden the scope of CAS to include: teaching about the ways in which computing ideas and programming experience can illuminate other disciplines, especially understanding natural intelligence, in humans and other species?
[Including the nature of mind and consciousness.]

2. How do those goals affect the choice of computing/programming concepts, techniques and principles that are relevant?

3. What are good ways to do that? E.g. what sorts of languages and tools help and what sorts of learning/teaching activities?

4. Which children should learn about this? Contrast
-- Offering specialised versions for learners interested in biology, psychology, economics, linguistics, philosophy, mathematics.
-- Offering a study of computation as part of a general science syllabus.

5. Is there any scope for that within current syllabus structures, and if not, what can be done about making space?

Why nearly everything?

RELATED MATERIAL


Talk 77: How to do AI-inspired biology, as a change from biology-inspired AI.
Some thoughts about the past and future of AI as science and philosophy
Available HERE (PDF).

Invited talk at:
Workshop on AI Heritage, MIT, June 11-12, 2009

Abstract

To be added

(Includes some personal reflections 1969-onwards.)


Talk 76: The history, nature, and significance of virtual machinery
Available HERE (PDF).

Talk at:
Theory lab lunch, School of Computer Science, Tuesday 23rd March 2010.

Abstract

Trying to get even computer scientists to take the ideas seriously.

Related (more recent) presentations:


Talk 75: Possibilities between form and function
Or between shape and affordances.
Available HERE (PDF).

Also available at 'slideshare.net'.

Invited talk at:

Dagstuhl Seminar: "From Form to Function" Oct 18-23, 2009 http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=09431
A precursor talk is here.

Abstract

I discuss the need for an intelligent system, whether it is a robot, or some sort of digital companion equipped with a vision system, to include in its ontology a range of concepts that appear not to have been noticed by most researchers in robotics, vision, and human psychology. These are concepts that lie between (a) concepts of "form", concerned with spatially located objects, object parts, features, and relationships and (b) concepts of affordances and functions, concerned with how things in the environment make possible or constrain actions that are possible for a perceiver and which can support or hinder the goals of the perceiver.

Those intermediate concepts are concerned with processes that *are* occurring and processes that *can* occur, and the causal relationships between physical structures/forms/configurations and the possibilities for and constraints on such processes, independently of whether they are processes involving anyone's actions or goals.

These intermediate concepts relate motions and constraints on motion to both geometric and topological structures in the environment and the kinds of 'stuff' of which things are composed, since, for example, rigid, flexible, and fluid stuffs support and constrain different sorts of motions.

They underlie affordance concepts. Attempts to study affordances without taking account of the intermediate concepts are bound to prove shallow and inadequate.

A longer abstract is here http://www.cs.bham.ac.uk/research/projects/cogaff/misc/between-form-and-function.html


Talk 74a: For Workshop: Inside and Outside of Computers and Minds
Why robots interacting intelligently with a complex 3-D environment will need qualia and how they can have them.

Filename: inside-outside.pdf (PDF)
Date installed: 12 Mar 2010

Presented Wed 10th March 2010 at the Senate House in the Inside Outside workshop
http://graham.web-stu.dcs.qmul.ac.uk/insideOutside.xhtml

This is a modified version of the talk below.


Talk 74: Why the "hard" problem of consciousness is easy and the "easy" problem hard.
(And how to make progress)
(Slides subject to revision)
Available HERE (PDF).

(Last changed 9 Jan 2010: added mechanisms for change detection.)
Also available on slideshare http://www.slideshare.net/asloman/why-the-hard-problem-of-consciousness-is-easy-and-the-easy-problem-hard-and-how-to-make-progress
With other presentations http://www.slideshare.net/asloman/.

Talk at:

Language and Cognition Seminar, School of Psychology, 6 Nov 2009

(This is a sequel to Talk 73 below, presented at Metaphysics of Science 2009 on "Virtual Machines and the Metaphysics of Science".)

I have a closely related tutorial paper on this topic destined for Int. Journal of Machine Consciousness
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#906
Phenomenal and Access Consciousness and the "Hard" Problem: A View from the Designer Stance

Abstract

The "hard" problem of consciousness can be shown to be a non-problem because it is formulated using a seriously defective concept (the concept of "phenomenal consciousness" defined so as to rule out cognitive functionality and causal powers).

So the hard problem is an example of a well known type of philosophical problem that needs to be dissolved (fairly easily) rather than solved. For other examples, and a brief introduction to conceptual analysis, see http://www.cs.bham.ac.uk/research/projects/cogaff/misc/varieties-of-atheism.html

In contrast, the so-called "easy" problem requires detailed analysis of very complex and subtle features of perceptual processes, introspective processes and other mental processes, sometimes labelled "access consciousness": these have cognitive functions, but their complexity (especially the way details change as the environment changes or the perceiver moves) is considerable and very hard to characterise.

"Access consciousness" is complex also because it takes many different forms, since what individuals are conscious of and what uses being conscious of things can be put to, can vary hugely, from simple life forms, through many other animals and human infants, to sophisticated adult humans,

Finding ways of modelling these aspects of consciousness, and explaining how they arise out of physical mechanisms, requires major advances in the science of information processing systems -- including computer science and neuroscience.

There are empirical facts about introspection that have generated theories of consciousness but some of the empirical facts go unnoticed by philosophers.

The notion of a virtual machine is introduced briefly and illustrated using Conway's "Game of life" and other examples of virtual machinery that explain how contents of consciousness can have causal powers and can have intentionality (be able to refer to other things).
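For readers unfamiliar with the example, here is a minimal sketch of Conway's "Game of Life" (standard rules; the code is mine, not from the slides). The glider pattern below reappears shifted diagonally every four steps: the "moving object" is a virtual entity, fully implemented in, but not usefully described in terms of, the low-level cell-update rule.

```python
# Minimal Conway's "Game of Life" (standard rules), illustrating a
# virtual entity: the glider's "motion" is real at the virtual level,
# although the low-level update rule says nothing about motion.
from collections import Counter

def step(live):
    """One generation: a cell is alive next step iff it has exactly 3
    live neighbours, or has 2 live neighbours and is currently alive."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the glider has "moved" one cell diagonally:
assert state == {(x + 1, y + 1) for (x, y) in glider}
```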

The beginnings of a research program are presented, showing how more examples can be collected and how notions of virtual machinery may need to be developed to cope with all the phenomena.


Talk 73: Virtual Machines and the Metaphysics of Science
(Expanded version of presentation at: Metaphysics of Science'09)
Available here (PDF)

Related presentations:

Abstract

Philosophers regularly use complex (running) virtual machines (not virtual realities) composed of enduring interacting non-physical subsystems (e.g. operating systems, word-processors, email systems, web browsers, and many more). These VMs can be subdivided into different kinds with different types of functions, e.g. "specific-function VMs" and "platform VMs" (including language VMs, and operating system VMs) that provide support for a variety of different (possibly concurrent) "higher level" VMs, with different functions.

Yet almost all philosophers ignore (or misdescribe) these VMs when discussing functionalism, supervenience, multiple realisation, reductionism, emergence, and causation.

Such VMs depend on many hardware and software designs that interact in very complex ways to maintain a network of causal relationships between physical and virtual entities and processes.

I'll try to explain this, and show how VMs are important for philosophy, in part because evolution long ago developed far more sophisticated systems of virtual machinery (e.g. running on brains and their surroundings) than human engineers have produced so far. Most are still not understood.

This partly accounts for the apparent intractability of several philosophical problems.

E.g. running VM subsystems can be disconnected from input-output interactions for extended periods, and some can have more complexity than the available input/output bandwidth can reveal.

Moreover, despite the advantages of VMs for self-monitoring and self control, they can also lead to self-deception.
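
As a concrete (if toy) illustration of such layering, here is a sketch of a "specific-function VM" implemented on a platform VM; the example is mine, not from the talk. The Python runtime plays the role of a platform VM, itself running on an operating-system VM on physical hardware.

    # A minimal stack-machine VM. Its stack is a virtual entity: its state
    # changes cause later events in the VM, yet no physical component of
    # the computer *is* the stack.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
        return stack.pop()

    # (2 + 3) * 4 == 20
    print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))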

SEE ALSO:
A longer abstract (and a workshop paper) here
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#vms

For an application of these ideas to old philosophical problems of consciousness see Talk 74: Why the "hard" problem of consciousness is easy and the "easy" problem hard. (And how to make progress)

For an attempt to show how Darwin could have used these ideas to provide answers to critics who claimed that evolution by natural selection could not produce consciousness see:
Talk 80: Helping Darwin: How to Think About Evolution of Consciousness -- Or "How could evolution get ghosts into machines?"

For an attempt to specify a (very large and ambitious) multi-disciplinary research project related to this see

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/genome-architecture-project.html
A Possible Genome To Architecture Project (GenToA)
[The Meta-Genome Project?]
How can a genome specify an information processing architecture that grows itself guided by interaction with the environment?

Some early ideas about this were in Chapter 6 of The Computer Revolution in Philosophy: Philosophy Science and Models of Mind (1978)
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/

For a lot of related material see Steve Burbeck's web site http://evolutionofcomputing.org/Multicellular/Emergence.html


Talk 72: Some thoughts and demos on ways of using computing for deep education on many topics.
As a change from teaching:
  • "useful" skills (of various kinds),
  • uses of computing,
  • computer science,
  • computer/software engineering.
Incomplete draft available HERE (PDF).

(Still being revised and extended.)
Also on Slideshare in flash format.

Invited talk at:

Opensource Schools Unconference: NCSL Nottingham 20th July 2009
http://opensourceschools.org.uk/unconference09

The theoretical ideas, using Vygotsky's notion of a "Zone of Proximal Development" (ZPD), among other ideas, are illustrated using teaching methods based on Pop-11 and the Poplog AI programming environment, some illustrated here: http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html#teaching
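
The flavour of such exploratory teaching can be suggested by a tiny micro-world; the sketch below is an illustrative Python analogue of my own devising (the actual teaching materials use Pop-11). A learner who can already predict a square can be nudged, within the ZPD, to ask what happens when the angle changes.

    import math

    # A minimal turtle-like micro-world: some state plus two actions.
    class Turtle:
        def __init__(self):
            self.x, self.y, self.heading = 0.0, 0.0, 0.0
            self.trail = [(0.0, 0.0)]

        def forward(self, distance):
            rad = math.radians(self.heading)
            self.x += distance * math.cos(rad)
            self.y += distance * math.sin(rad)
            self.trail.append((round(self.x, 3), round(self.y, 3)))

        def turn(self, degrees):
            self.heading = (self.heading + degrees) % 360

    # A learner's first experiment: a square. The next question --
    # "what if I turn 144 degrees instead?" -- is what drives the learning.
    t = Turtle()
    for _ in range(4):
        t.forward(10)
        t.turn(90)
    print(t.trail)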


Talk 71: What Cognitive Scientists Need to Know about Virtual Machines.
(Presented at CogSci'09).
Alternative title: Virtual Machines and the Metaphysics of Science (To be presented at Metaphysics of Science'09)
Available HERE (PDF).

(Still being revised and extended.)
Revised version presented at Metaphysics of Science Conference, Sept 2009

Presented at

Cognitive Science Conference 2009 CogSci'09, Amsterdam, 31st July 2009

There is an older presentation related to this here
"Virtual Machines in Philosophy, Engineering & Biology" (presented at WPE 2008).
A later version aimed at computer scientists is Talk 76: The history, nature, and significance of virtual machinery.

Abstract

Many psychologists, philosophers, neuroscientists and others interact with a variety
of man-made virtual machines (VMs) every day without reflecting on what that implies
about options open to biological evolution, and the implications for relations
between mind and body. This tutorial position paper introduces some of the roles of
different sorts of VMs, contrasting Abstract VMs (AVMs), which are merely mathematical
objects that do nothing, with running instances (RVMs), which interact with
other things and have parts that interact causally. We can also distinguish single
function, specialised VMs (SVMs), e.g. a running chess game or word processor, from
"platform" VMs (PVMs), e.g. operating systems which provide support for changing
collections of RVMs. (There was no space in the paper to distinguish two sorts of
platform VMs, namely operating systems that can support actual concurrent interacting
processes, and language run-time VMs which can support different sorts of
functionality, though each instance of the language run-time VM (e.g. a Lisp VM, a
Prolog VM) may not support multiple processes.)

The paper shows that the different sorts of RVMs play important but different roles
in engineering designs, including "vertical separation of concerns", and suggests
that biological evolution "discovered" problems that require VMs for their solution
long before we did. Some of the resulting biological VMs have generated philosophical
puzzles relating to consciousness, mind-body relations, and causation. Some new ways
of thinking about these are outlined, based on attending to some of the unnoticed
complexity involved in making artificial VMs possible.

The paper also discusses some of the implications for philosophical and cognitive
theories about mind-brain supervenience and some options for design of cognitive
architectures with self-monitoring and self-control, along with warnings about a kind
of self-deception arising out of the use of RVMs.
The 6 page conference paper (very compressed) is available here.
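
The AVM/RVM contrast can be miniaturised in code; the sketch below is my illustration, not the paper's. The class definition is like an abstract VM: a static specification that does nothing. Each instance, fed events over time, is like a running VM with its own causal history.

    # The class is an AVM-like specification; instances are RVM-like.
    class CounterVM:
        def __init__(self):
            self.count = 0

        def handle(self, event):
            if event == "tick":
                self.count += 1
            return self.count

    rvm1, rvm2 = CounterVM(), CounterVM()   # two running instances, one spec
    rvm1.handle("tick")
    rvm1.handle("tick")
    print(rvm1.count, rvm2.count)           # 2 0: distinct causal histories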


Talk 70: What Has Life Got To Do With Mind? Or vice versa? (PDF)
(Thoughts inspired by discussions with Margaret Boden.)

Presented at a seminar on Margaret Boden's work, Sussex University, 22 May, 2009.
Includes some reminiscences about Cognitive Science/AI at Sussex from 1965,
and a discussion of whether mind requires life, or life requires mind
(defined as information processing, or informed control).


Talk 69: Future Human-Like Robots: Requirements vs. Designs
Understand problems before you try to solve them
(Using iterated implementation if necessary)

Available HERE (PDF).

Invited talk at:
Session on 'The Ultimate Robot' part of FET'09 in Prague, April 2009.

Abstract

I am not trying to build a robot: I am trying to understand what problems evolution solved, how it solved them and whether the problems can be solved on computer-based systems.

One way of doing that is trying to build things, to find out what's wrong with our theories and what the problems are. But it is also necessary to keep looking at products of evolution, to compare them with what you have achieved so far.

Moreover, many of the problems come from the structure of the environment (e.g. the kinds of processes that do occur, that can occur, that can be produced or prevented, and the varieties of information that can be obtained by perceiving and acting in the environment). Most AI/Robotics/Cognitive Science researchers don't study the environment enough.

Related talks


Talk 68: Ontologies for baby animals and robots
From "baby stuff" to the world of adult science: Developmental AI from a Kantian viewpoint.

Latest version, presented at Brown University, on 10th June 2009 Available HERE (PDF).
Last modified: 27 May 2010

Older version (presented in Prague) available HERE (PDF).
Presented at

Workshop on Matching and Meaning, at AISB'09 Edinburgh 9th April 2009.
at Spring 2009 Pattern Recognition and Computer Vision Colloquium
April 23, 2009 Czech Technical University, Center for Machine Perception

Abstract

In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals, and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on the results of interacting with the environment. Future human-like robots will also need this. Pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates, can interact, often creatively, with complex structures and processes in a 3-D environment. That suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process (some of which are process-fragments composed of bits of stuff changing their properties, structures or relationships), and kinds of causal interaction; and (b) since they don't use a human communicative language, that they must use information encoded in some form that existed prior to human communicative languages, both in our evolutionary history and in individual development. Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension. The research reported here aims mainly to develop requirements for explanatory designs. The attempt to develop forms of representation, mechanisms and architectures that meet those requirements will be a long-term research project.
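
A deliberately simple sketch (my illustration, not the talk's model) of what "substantive ontology extension" could look like: when the agent's existing kinds persistently fail to predict what it observes, it posits a new kind rather than merely re-weighting the old ones.

    # The agent's theory: expected behaviour for each kind it knows about.
    def predict(kind):
        theory = {"rigid": "keeps shape", "liquid": "flows"}
        return theory.get(kind, "unknown")

    ontology = ["rigid", "liquid"]
    observations = ["stretches and springs back"] * 5   # e.g. a rubber band

    # Count observations that no known kind can account for.
    failures = sum(1 for obs in observations
                   if all(predict(k) != obs for k in ontology))
    if failures >= 3:               # persistent failure of the old kinds
        ontology.append("elastic")  # a new kind, not definable from the old
    print(ontology)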


Talk 67: This talk has taken several forms using slightly different titles.
Why (and how) did biological evolution produce mathematicians?
Available in PDF A4 Landscape Format
(Title used for presentation at University of Birmingham mathematics graduate conference 1st June 2009).

Presented at Nottingham LSRI Tuesday 2nd Feb 2010
If learning mathematics requires a teacher, where did the first teachers come from?
Slides for the talk (PDF, messy)
Video of the presentation at Nottingham LSRI 2nd Feb 2010.
(Includes Zeyn Saigol's refutation of my rubber-band star theorem.)

Also available as a (submitted) workshop paper here. (Comments welcome).

Alternative title: A New Approach to Philosophy of Mathematics:
Design a young explorer, able to discover "toddler theorems"
(Or: "The Naive Mathematics Manifesto").

Installed 16 Dec 2008 (Updated 24 Dec 2008; 30 Jan 2009; 15 Apr 2009, 7 May 2009, 25 May 2010)
(Likely to be revised in the light of comments received.)

Invited talk at Mathematics Graduate conference, June 2009

Invited talk at York CS department Wed 6th May 2009, (combined with part of talk on UKCRC Grand Challenge 5)
Previously presented at CISA seminar, Informatics, Edinburgh, Wed 8th April 2009.
At a joint meeting of the Language and Cognition Seminar and the Vision Club,
School of Psychology, University of Birmingham, Friday 12th December 2008

Presentation at Sussex University
An earlier version of the above talk on development of mathematical competences was given at University of Sussex, Tuesday 9th December 2008

The PDF slides for the Sussex presentation are here.
The presentation at Sussex, including part of the discussion, was recorded
on video by Nick Hockings and he kindly made the resulting
video available online (in three resolutions).

That is temporarily unavailable, but the medium resolution version is available here.

A link will be added here when Nick has found a new location.

Michael Brooks, a journalist who was present at the Sussex presentation wrote a report for the New Scientist here.
Unfortunately someone very silly at New Scientist gave it a totally inappropriate headline
and misrepresented my claims as being about making a mathematical robot, as opposed to
understanding human mathematical competences and their biological origins.
(I don't think it was Michael Brooks as he seemed to understand what I was saying.)

NOTE: The slides were much revised between the successive presentations.
Some versions start with a fairly detailed example experimental domain,
concerned with shapes that can and cannot be made with a rubber band and pins.
Later versions start with an introductory overview on the evolution of cognition.
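
The rubber-band domain also lends itself to computational exploration. The sketch below is my own reconstruction, not taken from the slides: a band stretched around pins settles on their convex hull, so any polygon a single band forms must be convex -- one route to the "theorem" that a star shape cannot be made that way.

    # Andrew's monotone-chain convex hull over pin coordinates.
    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def chain(seq):
            hull = []
            for p in seq:
                while len(hull) >= 2:
                    (ox, oy), (ax, ay) = hull[-2], hull[-1]
                    # Drop the middle point on a non-left (clockwise) turn.
                    if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                        hull.pop()
                    else:
                        break
                hull.append(p)
            return hull

        lower, upper = chain(pts), chain(reversed(pts))
        return lower[:-1] + upper[:-1]

    # Five outer star tips plus five inner vertices:
    star_pins = [(0, 3), (1, 1), (3, 1), (1, 0), (2, -2),
                 (0, -1), (-2, -2), (-1, 0), (-3, 1), (-1, 1)]
    print(convex_hull(star_pins))   # only the 5 outer tips remain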

Previous versions
The talks above build on and overlap with earlier presentations:

Abstract

Main theses [*] http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609
Natural and artificial meta-configured altricial information-processing systems, IJUC 2007


Talk 66: Virtual Machines in Philosophy, Engineering & Biology (at WPE 2008)
Available in PDF format (A4 Landscape).

A version (using 'flash') is also available on my 'slideshare.net' space. A more recent version of this was presented at CogSci'09 (31 July 2009) "What cognitive scientists need to know about virtual machines".
A later version aimed at computer scientists is Talk 76: The history, nature, and significance of virtual machinery.
Presented 11th November 2008 at The Workshop on Philosophy and Engineering (WPE 2008), 10-12 November 2008, Royal Academy of Engineering, London.

Longer version available above in Talk 64 on virtual machines.

A 6 page paper on this, accepted for CogSci'09 (Amsterdam, July-Aug 2009) is available here:
"What Cognitive Scientists Need to Know about Virtual Machines"


Talk 65: Superseded talk on: Assembling bits of stuff and bits of process,
in a baby robot's world
A Kantian approach to robotics and developmental psychology.

This is now superseded by a newer version presented in May 2009.
Old version Available HERE (PDF).

Originally intended as talk at:

Kickoff workshop for the CogX project (29 September to 3rd October, 2008, Portoroz, Slovenia)
But insufficient time was available to present the material.

Later slides, extending the material can be found in Talk 67 on toddler theorems, and Talk 68 on ontologies for baby animals and robots.

Abstract

These slides are based on the observation that current machine perceptual abilities and machine manipulative abilities are extremely limited compared with what humans and many other animals can do.

There are mobile robots that are impressive as engineering products, e.g. BigDog, the Boston Dynamics robot, and some other mobile robots that are able to keep moving in fairly rough terrain, including in some cases moving up stairs or over very irregular obstacles.

However, they all seem to lack any understanding of what they are doing, or the ability to achieve a specific goal despite changing obstacles, and then adopt another goal. For more detailed examples of missing capabilities see these web sites

As far as I know, none of the existing robots that manipulate objects can perceive what is possible in a situation when it is not happening, and reason about what the result would be if something were to happen.

Neither can they reason about why something is not possible.

I.e. they lack the abilities underlying the perception of positive and negative affordances.

They cannot wonder why an action failed, or what would have happened if..., or notice that their action might have failed if so and so had occurred part way through, etc., or realise that some information was available that they did not notice when they could have used it.


Talk 64: Why virtual machines really matter -- for several disciplines
(Or, Why philosophers need to be robot designers)
Available HERE (PDF) (A4 landscape).

Also available compressed for printing: 4 pages per A4 sheet

A more recent version of this, aimed mainly at philosophers, is Talk 73: Virtual Machines and the Metaphysics of Science.

A much shorter version was presented at The 2008 Workshop on Philosophy and Engineering.
(10-12 Nov 2008, Royal Academy of Engineering, London). The slides for that are here. Abstract here.

Previously presented at:

This is a revised, extended version of parts of previous presentations on virtual machines, information, and architectures, including

Abstract

One of the most important ideas (for engineering, biology, neuroscience, psychology, social sciences and philosophy) to emerge from the development of computing has gone largely unnoticed, even by many computer scientists, namely the idea of a running virtual machine (VM) that acquires, manipulates, stores and uses information to make things happen.

The idea of a VM as a mathematical abstraction is widely discussed, e.g. a Turing machine, the Java virtual machine, the Pentium virtual machine, the von Neumann virtual machine. These are abstract specifications whose relationships can be discussed in terms of mappings between them. E.g. a von Neumann VM can be implemented on a Universal Turing Machine. An abstract VM can be analysed and talked about, but, like a mathematical proof, or a large number, it does not DO anything. The processes discussed in relation to abstract VMs do not occur in time: they are mathematical descriptions of processes that can be mapped onto descriptions of other processes. In contrast, a physical machine can consume, transform, transmit, and apply energy, and can produce changes in matter. It can make things happen. Physical machines (PMs) also have abstract mathematical specifications that can be analysed, discussed, and used to make predictions, but which, like all mathematical objects, cannot do anything.

But just as instances of designs for PMs can do things (e.g. the engine in your car does things), so can instances of designs for VMs do things: several interacting VM instances do things when you read or send email, browse the internet, type text into a word processor, use a spreadsheet, etc. But those running VMs, the active instances of abstract VMs, cannot be observed by opening up and peering into or measuring the physical mechanisms in your computer.

My claim is that long before humans discovered the importance of active virtual machines (AVMs), long before humans even existed, biological evolution produced many types of AVM, and thereby solved many hard design problems, and that understanding this is important (a) for understanding how many biological organisms work and how they develop and evolve, (b) for understanding relationships between mind and brain, (c) for understanding the sources and solutions of several old philosophical problems, (d) for major advances in neuroscience, (e) for a full understanding of the variety of social, political and economic phenomena, and (f) for the design of intelligent machines of the future. In particular, we need to understand that the word "virtual" does not imply that AVMs are unreal or that they lack causal powers, as some philosophers have assumed. Poverty, religious intolerance and economic recessions can occur in socio-economic virtual machines and can clearly cause things to happen, good and bad. The virtual machines running on brains, computers and computer networks also have causal powers. Some virtual machines even have desires, preferences, values, plans and intentions that result in behaviours. Some of them get philosophically confused when trying to understand themselves, for reasons that will be explained. Most attempts to get intelligence into machines ignore these issues.


  • Talk 63: Kantian Philosophy of Mathematics and Young Robots
    Could a baby robot grow up to be a Mathematician and Philosopher?

    See also Talk 67: A New Approach to Philosophy of Mathematics: Design a young explorer, able to discover "toddler theorems"
    (This starts with more examples.)

    Available in PDF format.
    A version (using 'flash') is also available on my 'slideshare.net' space.

    Talk at 7th International Conference on Mathematical Knowledge Management Birmingham, UK, 28-30 July 2008
    http://events.cs.bham.ac.uk/cicm08/mkm08/
    University of Birmingham, 29 Jul 2008

    Proceedings paper online here

    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
    Kantian Philosophy of Mathematics and Young Robots
    ABSTRACT:
    A child, or young human-like robot of the future, needs to develop an information-processing architecture, forms of representation, and mechanisms to support perceiving, manipulating, and thinking about the world, especially perceiving and thinking about actual and possible structures and processes in a 3-D environment. The mechanisms for extending those representations and mechanisms are also the core mechanisms required for developing mathematical competences, especially geometric and topological reasoning competences. Understanding both the natural processes and the requirements for future human-like robots requires AI designers to develop new forms of representation and mechanisms for geometric and topological reasoning, to explain a child's (or robot's) development of understanding of affordances, and the proto-affordances that underlie them. A suitable multi-functional self-extending architecture will enable those competences to be developed. Within such a machine, human-like mathematical learning will be possible. It is argued that this can support Kant's philosophy of mathematics, as against Humean philosophies. It also exposes serious limitations in studies of mathematical development by psychologists.

    See also Talk 56 and

    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
    The Well-Designed Young Mathematician
    Artificial Intelligence, December 2008.


    Talk 62: Varieties of Meta-cognition in Natural and Artificial Systems
    Some pressures on design-space from niche-space
    Available HERE (PDF)

    Invited talk at
    Invited talk for Workshop on MetaReasoning: Thinking about Thinking at AAAI'08,
    Washington, 13-14 July 2008.

    The paper for the proceedings is available at
    http://www.cs.bham.ac.uk/research/projects/cogaff/08.html#805
    and
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802

    ABSTRACT:
    Some AI researchers aim to make useful machines, including robots. Others aim to understand general principles of information-processing machines whether natural or artificial, often with special emphasis on humans and human-like systems: They primarily address scientific and philosophical questions rather than practical goals. However, the tasks required to pursue scientific and engineering goals overlap considerably, since both involve building working systems to test ideas and demonstrate results, and the conceptual frameworks and development tools needed for both overlap. This paper, partly based on requirements analysis in the CoSy robotics project, surveys varieties of meta-cognition and draws attention to some types that appear to play a role in intelligent biological individuals (e.g. humans) and which could also help with practical engineering goals, but seem not to have been noticed by most researchers in the field. There are important implications for architectures and representations.
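
    One variety can be miniaturised in a few lines; this sketch is mine, not the paper's architecture. A meta-level monitors the object-level's recent record and intervenes by switching strategy after repeated failure.

        # Two object-level strategies for some task.
        def greedy(task):
            return task == "easy"

        def deliberate(task):   # slower but more general (here: always works)
            return True

        strategy, history = greedy, []
        for task in ["easy", "hard", "hard", "hard", "easy"]:
            history.append(strategy(task))
            # Meta-level: notice persistent failure, change the object level.
            if history[-2:] == [False, False] and strategy is greedy:
                print("meta: greedy keeps failing; switching strategy")
                strategy = deliberate
        print(history)   # [True, False, False, True, True]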



    Talk 61: Evolution, development and modelling of architectures for intelligent organisms and robots.
    Available HERE (PDF)

    Talk for Graduate School Seminar series, Biosciences, University of Birmingham, on 24th June 2008.


    Talk 60: Requirements for a Human-like Information Processing Architecture that Builds Itself by Interacting with a Rich Environment
    Available in HTML format and in PDF format

    Given on 9th June, at Birmingham Informatics CRN Workshop on Complexity and Critical Infrastructures - Environment focus.
    and earlier (May 13th 2008) at UIUC Complexity conference on Understanding Complex Systems


    Talk 59: Understanding the Functions of Animal Vision: What Are We Trying To Do?
    How Do Logic And Probability Fit Into The Bigger Picture?
    Available HERE (PDF) Warning: there seems to be an out of date version on citeseer.

    Invited talk at:
    Dagstuhl Seminar No. 08091, 24.02.2008-29.02.2008
    Logic and Probability for Scene Interpretation. Schloss
    Dagstuhl, Feb 25th 2008

    NOTE: a sequel to this talk is available here.

    Abstract

    http://www.cs.bham.ac.uk/research/projects/cogaff/dag08/


    Talk 58: What designers of artificial companions need to understand about biological ones.
    Expanded slides available HERE (PDF)

    Invited Presentation at Public Session of AISB'08,
    3rd April 2008, Aberdeen, Scotland

    Abstract

    The talk aims to:


    Talk 57: Seeing Possibilities: A new view of Empty Space
    Available HERE (PDF)

    Talk at: Intelligent Robotics Lab Seminar, Birmingham, 22nd Jan 2008

    Abstract

    A short history of AI vision research, introducing 'Generalised Gibsonianism (GG)', which allows for 'Proto-affordances' and use of vision in planning, reasoning and problem solving, based on seeing and manipulating possibilities. Closely related to Talk 56.


    Talk 56: Could a Child Robot Grow Up To be A Mathematician And Philosopher?
    Available HERE (PDF)

    Invited talk at:
    Thinking about Mathematics and Science Seminar, University of Liverpool. Monday 21 January 2008.

    Some old problems going back to Immanuel Kant (and earlier) about the nature of mathematical knowledge can be addressed in a new way by asking (a) what sorts of developmental changes in a human child make it possible for the child to become a mathematician, and (b) how this could be replicated in a robot that develops through exploring the world, including its own exploration of the world.

    This is relevant not only to philosophy of mathematics, developmental psychology, and robotics, but also to a future mathematical education strategy based on much deeper ideas about what a mathematical learner is than are available to current educators. How many educators could design and implement a learner?

    The slides have been substantially expanded since the talk, partly in the light of comments and criticisms received. This process is likely to continue. There are partial overlaps with several other talks here.

    Abstract

    The original abstract is here.

    A conference paper summarising some of the issues is here

    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
    Kantian Philosophy of Mathematics and Young Robots

    See also Talk 67


    Talk 55: Why Some Machines May Need Qualia and How They Can Have Them: Including a Demanding New Turing Test for Robot Philosophers
    Available HERE (PDF)

    Invited talk at:
    Symposium on AI and Consciousness: Theoretical Foundations and Current Approaches
    at AAAI Fall Symposium, Washington, 9-11 November 2007

    Abstract

    See online paper http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0705


    Talk 54: Diversity of Developmental Trajectories in Natural and Artificial Intelligence
    Available HERE (PDF)

    Invited talk at:
    Symposium on Computational Approaches to Representation Change During Learning and Development at AAAI Fall Symposium, Washington November 2007

    Abstract

    See the full paper
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0704


    Talk 53: Requirements for Digital Companions and their implications: It's harder than you think
    Available HERE (PDF)

    Invited talk at:

    University of Oxford Internet Institute, 26 Oct 2007
    Workshop on Artificial Companions in Society: Perspectives on the Present and Future Oxford 25th--26th October, 2007
    Organised by The Companions Project

    Abstract

    For the position paper see (revised version, published in 2010 in a book based on the workshop):
    http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#oii
    An early draft of the chapter is here:
    http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#711


    Talk 52: Evolution of minds and languages.
    What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)
    Available in PDF format: Latest version, reorganised 21 Mar 2014
    Also available four slides per page (2x2 PDF))

    A version (using 'flash') is also available on my 'slideshare.net' space.
    For Birmingham Language and Cognition seminar, School of Psychology, 19 Oct 2007
    Also presented on 2nd November at Mind as Machine, Continuing Education Weekend Course Oxford 1-2 Nov 2008

    Abstract

    Investigating the evolution of cognition requires an understanding of how to design working cognitive systems since there is very little direct evidence (no fossilized behaviours or thoughts).

    That claim is illustrated in relation to theories about the evolution of language. Almost everyone seems to have got things badly wrong by assuming that language must have started as primitive communication between individuals that gradually got more complex, and then later somehow got absorbed into cognitive systems.

    An alternative theory is presented here, namely that generalised languages (GLs) supporting (a) structural variability, (b) compositional semantics (generalised to include both diagrammatic syntaxes and contextual influences on semantics at every level) and (c) manipulability for reasoning, evolved first for various kinds of 'thinking', i.e. internal information processing. This is inconsistent with many theories of the evolution of language. It is also inconsistent with Dennett's account of the evolution of consciousness in Content and Consciousness (1969).

    See the slides for more detail.
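
    A toy sketch (mine, not from the slides) of what a GL's three properties could look like in miniature: structured expressions, compositional and context-sensitive semantics, and manipulability for reasoning, with no communication involved.

        # Meaning is computed from the parts of a structure plus a context.
        def meaning(expr, context):
            op = expr[0]
            if op == "obj":                      # leaf: look the object up
                return context[expr[1]]
            if op == "left-of":                  # relation between two parts
                a = meaning(expr[1], context)
                b = meaning(expr[2], context)
                return a[0] < b[0]
            if op == "not":
                return not meaning(expr[1], context)

        scene = {"cup": (1, 0), "jug": (4, 0)}   # context: object positions
        thought = ("left-of", ("obj", "cup"), ("obj", "jug"))
        print(meaning(thought, scene))           # True
        # Manipulability: transform the structure itself, e.g. negate it.
        print(meaning(("not", thought), scene))  # False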

    An earlier presentation in the School of Computer Science, in March 2007, is closely related to this:

    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702 (PDF)
    What is human language? How might it have evolved?
    This work based on collaboration with Jackie Chappell. See also
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703
    'Computational cognitive epigenetics'
    A. Sloman and J.Chappell (BBS 2007)


    Talk 51: Why robot designers need to be philosophers -- and vice versa
    Available in PDF format

    Presentation at University of Bielefeld on 10th October 2007
    Short talk at the Inauguration ceremony of the "Research Institute for Cognition and Robotics - CoR-Lab"
    This is an expanded version of the slides. Part of the argument is that control of complex systems, including complex robots, and animals, can be usefully mediated by virtual machines.

    Where such a virtual machine also acquires and uses information about itself, this can be useful, but it can also lead to the machine becoming philosophical and getting confused.

    A much expanded version of these slides is in Talk 64: Why virtual machines really matter -- for several disciplines


    Talk 50: Understanding causation in robots, animals and children: Hume's way and Kant's way.
    (Includes some methodological background and biological conjectures.)

    Available here, in PDF format, with videos.
    Presentation at CoSy MeetingOfMinds Workshop, Paris, Sept 2007


    Talk 49: Why symbol-grounding is both impossible and unnecessary, and why theory-tethering is more powerful anyway.
    (Introduction to key ideas of semantic models, implicit definitions and symbol tethering through theory tethering.)
    Available in PDF Format

    Also available on 'slideshare.net' (possibly an older version)
    Date Added: 23 Sep 2007
    Revised: 30 Nov 2007, 16 Jun 2008, 7 Jan 2010

    This is a revised, clarified and expanded version of a part of Talk 14, on Symbol Grounding vs Symbol Tethering.

    Revised after presentation at the University of Sussex 27 Nov 2007, and University of Birmingham 29 Nov 2007

    Also listed as COSY-PR-0705 on CoSy web site.

    Abstract

    This is, like Talk 14, an attack on concept empiricism, including its recently revived version, "symbol grounding theory".

    The idea of an axiom system having some models is explained more fully than in previous presentations, showing how the structure of a theory can give some semantic content to undefined symbols in that theory, making it unnecessary for all meanings to be derived bottom-up from (grounded in) sensory experience, or sensory-motor contingencies. Although symbols need not be grounded, since they are mostly defined by the theory in which they are used, the theory does need to be "tethered" if it is to be capable of being used for predicting and explaining things that happen, or making plans for acting in the real world. These ideas were quite well developed by 20th century philosophers of science, and I now attempt both to generalise those ideas so that they are applicable to theories expressed using non-logical representations (e.g. maps, diagrams, working models, etc.) and to begin to show how they can be used in explaining how a baby, or a robot, can develop new concepts that have some semantic content but are not definable in terms of previously understood concepts. There is still much work to be done, but what needs to be done to explain how intelligent robots might work, and how humans and other intelligent animals learn about the environment, is very different from most of what is going on in robotics and in child and animal psychology.
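
    A textbook-style logical illustration (my example, not from the slides): two axioms give the undefined symbol R some semantic content without grounding it in sensory experience.

        % Axioms constraining an undefined relation symbol R:
        \begin{align*}
          A_1 &:\; \forall x\, \neg R(x,x) \\
          A_2 &:\; \forall x\, \forall y\, \forall z\,
                 \bigl(R(x,y) \land R(y,z)\bigr) \rightarrow R(x,z)
        \end{align*}
        % Every model must interpret R as a strict partial order, so the
        % theory's structure already constrains R's meaning (implicit
        % definition), while leaving many models open. Tethering then
        % narrows the choice via the theory's predictive and explanatory
        % links to the world, not via a bottom-up sensory definition of R.

    (Irreflexivity plus transitivity jointly entail asymmetry, so R must be a strict partial order in every model.)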

    The addition of new explanatory hypotheses is abduction. Normally abduction uses pre-existing symbols. The simultaneous introduction of new symbols and new axioms (ontology-extending abduction) generates a very difficult problem of controlling search.

    Advertised abstract for Birmingham talk.

    See also this discussion on What's information?, and the ideas about virtual machine functionalism, here.


    Talk 48: Machines in the ghost
    Available in PDF format.

    Invited talk at ENF'2007, Emulating the Mind
    1st international Engineering and Neuro-Psychoanalysis Forum
    Vienna July 2007

    The full paper is available http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0702

    Abstract

    This paper summarises a subset of the ideas I have been working on over the last 35 years or so, about relations between the study of natural minds and the design of artificial minds, and the requirements for both sorts of minds.

    The key idea is that natural minds are information-processing machines produced by evolution. We still do not have a good understanding of what the problems were that evolution had to solve, nor what the solutions were: e.g. we do not know how many different kinds of information processing system evolution produced, nor what they are used for -- even in ourselves.

    What sort of information-processing machine a human mind is requires much detailed investigation of the many kinds of things minds can do.

    It is not clear whether producing artificial minds with similar powers will require new kinds of computing machinery or merely much faster and bigger computers than we have now. Having been studying the problems of visual perception for many years I don't believe that any model proposed so far, whether based on conventional computation, neural computation, or anything else is capable of explaining the phenomena of human visual perception, including what it achieves, how fast it achieves it, how it develops and how many non-visual tasks the visual system is used for (e.g. doing mathematics).[*]

    Insofar as some sorts of psychotherapy (including psychoanalysis) are analogous to run-time debugging of a virtual machine, in order to do them well, we need to understand the architecture of the machine well enough to know what sorts of bugs can develop and which ones can be removed, or have their impact reduced, and how.

    Otherwise treatment will be a hit-and-miss affair.

    This requires understanding how minds work when they don't need therapy -- a distant goal.

    [*] Some challenges for vision researchers are here:



    Talk 47: Causal competences in animals and machines
    (Including Humean and Kantian causal understanding.)

    Invited talks by Jackie Chappell and Aaron Sloman at WONAC'07: NSF/EU-funded Workshop on Natural and Artificial Cognition
    Pembroke College, Oxford, 24th-26th June 2007
    Presentations by both of us, along with abstracts, and also a post-workshop presentation on varieties of causal competence available in PDF format.
    1. Aaron Sloman:
      Evolution of two ways of understanding causation: Humean and Kantian. (PDF), Abstract (HTML)

    2. Jackie Chappell:
      Understanding causation: the practicalities -- Screen version with hyperlinks (PDF) Abstract (HTML)
      Print version without hyperlinks (PDF)

    3. Causal competences of many kinds (PDF)
      An incomplete draft version written after the workshop:


    Talk 46: Architectural and representational requirements for seeing processes and affordances.
    Expanded version of presentation Available HERE (PDF)

    Invited talk at:

    BBSRC funded Workshop on
    Closing the gap between neurophysiology and behaviour: A computational modelling approach
    University of Birmingham, United Kingdom
    May 31st-June 2nd 2007
    A paper for the proceedings is online here (PDF).
    Abstract
    Over several decades I have been trying, as a philosopher-designer, to understand requirements for a robot to have human-like visual competences, and have written several papers pointing out what some of those requirements are and how far all working models known to me are from satisfying them. This included a paper in 1989 proposing replacing 'modular' architectures with 'labyrinthine' architectures, reflecting the varieties of interconnectivity between visual subsystems and other subsystems (e.g. action control subsystems, auditory subsystems).

    One of the recurring themes has been the relationship between structure and process. For instance, doing school Euclidean geometry involves seeing how processes of construction can produce new structures from old ones in proving theorems, such as Pythagoras' theorem. Likewise, understanding how an old-fashioned clock works involves seeing causal connections and constraints related to possible processes that can occur in the mechanism. In contrast, performing many actions involves producing processes (e.g. grasping), seeing those processes, and using visual servoing to control the fine details. This need not be done consciously, as in posture control and many other skilled performances. Some processes transform structures discretely, e.g. by changing the topology of something (adding a new line to a diagram, separating two parts of an object), others continuously (e.g. painting a wall or blowing up a balloon).

    Another theme that has been evident for many decades is the fact that percepts can involve hierarchical structure, although not all the structures should be thought of as loop-free trees; e.g. a bicycle doesn't fit that model, even though to a first approximation most animals and plants do (e.g. decomposition into parts that are decomposed into parts, etc.). Less obviously, perception (as I showed in chapter 9 of The Computer Revolution in Philosophy) can involve layered ontologies, where one sub-ontology might consist entirely of 2-D image structures and processes, whereas another includes 3-D spatial structures and processes, and another includes the kinds of 'stuff' of which objects are made and their properties (e.g. rigidity, elasticity, solubility, thermal conductivity, etc.), to which can be added mental states and processes, e.g. seeing a person as happy or sad, or as intently watching a crawling insect. The use of multiple ontologies is even more obvious when what is seen is text, or sheet music, perceived using different geometric, syntactic, and semantic ontologies.

    What did not strike me until 2005, when I was working on an EU-funded robot project (CoSy), is what follows from the combination of the two themes: (a) the content of what is seen is often processes and process-related affordances, and (b) the content of what is seen involves both hierarchical structure and multiple ontologies. What follows is a set of requirements for a visual system that makes current working models seem even further from what we need in order to understand human and animal vision, and also in order to produce working models for scientific or engineering purposes.

    One way to make progress may be to start by relating human vision to the many evolutionary precursors, including vision in other animals. If newer systems did not replace older ones, but built on them, that suggests that many research questions need to be rephrased to assume that many different kinds of visual processing are going on concurrently, especially when a perceived process involves different levels of abstraction concurrently, e.g. continuous physical and geometric changes relating parts of visible surfaces and spaces at the lowest level, discrete changes, including topological and causal changes, at a higher level, and in some cases intentional actions, successes, failures, near misses, etc. at a still more abstract level. The different levels use different ontologies, different forms of representation, and probably different mechanisms, yet they are all interconnected, and all in partial registration with the optic array (not with retinal images, since perceived processes survive saccades).

    The slides include a speculation that achieving all this functionality at the speeds displayed in human (and animal) vision may require new kinds of information-processing architectures, mechanisms and forms of representation, perhaps based on complex, interacting, self-extending, networks of multi-stable mutually-constraining dynamical systems -- some of which change continuously, some discontinuously.

    See also these challenges for vision researchers listed below [*]


    Talk 45 (Poster): Consciousness in a Multi-layered Multi-functional Labyrinthine Mind
    A4 slides available HERE (PDF)

    This was a poster presentation at

    PAC-07 Conference, 1-3 July 2007, Bristol, on
    Perception, Action and Consciousness confronting the dual-route (dorsal/ventral) theory of visual perception and the enactivist view of consciousness.
    Abstract available online (HTML)


    Talk 44: Some requirements for human-like visual systems, including seeing processes, structures, possibilities, affordances, causation and impossible objects.
    Available HERE (PDF)

    Invited talk at
    COSPAL Workshop Aalborg, 14th June 2007 on
    Cognitive Systems: Perception, Action, Learning


    Talk 43: COSY-PR-0702: What is human language? How might it have evolved?

    Aaron Sloman
    Slides for a seminar presented in Birmingham on 5th Mar 2007
    Follow link for abstract and PDF


    Talk 42: COSY-PR-0701: What's a Research Roadmap For? Why do we need one? How can we produce one? (PDF)
    building-roadmaps-sloman.pdf (PDF)
    Previously at:
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0701
    Also available (using flash format) on slideshare.net
    http://www.slideshare.net/asloman/how-to-build-a-research-roadmap

    Aaron Sloman
    This is a much expanded version of a presentation at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007.
    For more on the Research Roadmap project see:
    http://www.eucognition.org/wiki/index.php?title=Research_Roadmap
    Follow link for abstract and PDF


    Talk 41: COSY-PR-0604: Evolution of ontology-extension:
    How to explain internal and external behaviour in organisms, including processes that develop new behaviours. (PDF)

    Aaron Sloman, with much help from Jackie Chappell
    In collaboration with members of the EU CoSy Robotic Project
    Presented to combined Biosciences and AINC seminar, University of Birmingham, 9th Oct 2006, and in Edinburgh 7th Dec 2006.
    Follow link for abstract and PDF


    Talk 40 (Poster): COSY-PR-0603: Putting the Pieces of AI Together Again (PDF)

    Poster for Member's Poster Session: AAAI'06, Boston, July 2006.
    Follow link for abstract and PDF


    Talk 39 (Poster): COSY-PR-0602: How an animal or robot with 3-D manipulation skills experiences the world (PDF)

    Poster for ASSC10, Oxford June 2006.
    Follow link for abstract and PDF


    Talk 38 (Poster): COSY-PR-0601: Acquiring Orthogonal Recombinable Competences(PDF)

    Poster for COGSYS II Conference, Nijmegen, April 2006
    Follow link for abstract and PDF


    Talk 37: FUNDAMENTAL QUESTIONS - THE SECOND DECADE OF AI
    Towards Architectures for Human-like Machines Available HERE (PDF)
    (Overlaps with several previous talks)

    Expanded version of invited presentation at
    Symposium on 50 years of AI, at the KI2006 Conference Bremen, June 17th 2006

    A version (using 'flash') is also available on my 'slideshare.net' space.

    Abstract: An abstract for the talk is online here.


    Talk 36: TWO VIEWS OF CHILD AS SCIENTIST: HUMEAN AND KANTIAN

    Presentation to Language and Cognition Seminar, School of Psychology, U. of Birmingham. October 14th 2005
    Follow the link above to get to abstract and PDF file.


    Talk 35: COSY-PR-0505: A (POSSIBLY?) NEW THEORY OF VISION
    Available HERE (PDF)
    Combining several old theories
    Generating many new problems and research tasks.

    Installed: October 2005
    Last Updated: 17 Feb 2007
    Seminar in School of Computer Science University of Birmingham 13th October 2005,
    Imperial College London on 25th October 2005,
    Aston University on 28th October 2005,
    Osnabrück, Germany 16th November 2005
    (Closely related to presentations on affordances, ontologies, causation, child as scientist, and later presentations on vision.)
    Abstract:
    The key idea is this: whereas I previously thought (like many others) that vision involved concurrently analysing and interpreting structures at different levels of abstraction, using different ontologies at the different levels (as explained in the summary of the Popeye program in Chapter 9 of 'The Computer Revolution in Philosophy' (1978)), it now looks as if that was an oversimplification. Vision should instead be seen as involving analysis and interpretation not just of structures but also of processes at different levels concurrently, which sometimes implies running several simulations concurrently at different levels of abstraction, using different ontologies -- in partial registration with sensory data where appropriate, and sometimes also with motor signals.

    The talk explains what this means, what it does not mean, presents some of the evidence, summarises some of the implications, and points to some of the (many) unsolved problems, including unsolved problems about how this could be implemented either on computers or in brains. The presentation briefly lists some of the many precursors of the theory, but does not go into detail.
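
    A highly schematic sketch (my illustration, not the talk's model) of two simulations running concurrently at different levels of abstraction, in partial registration: a continuous metrical level, and a discrete relational level that only changes when a relation changes.

        # Continuous level: positions (1-D for brevity). Discrete level: contact.
        scene = {"hand": 0.0, "cup": 5.0}
        topology = {"touching": False}

        for _ in range(6):
            scene["hand"] += 1.0                 # continuous process: hand moves
            touching = abs(scene["hand"] - scene["cup"]) < 0.5
            if touching != topology["touching"]:
                topology["touching"] = touching  # discrete level updates only
                print("discrete event: touching ->", touching)   # on change
        print(scene, topology)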

    The slides will go on being revised and extended in the light of comments and criticisms. It soon became clear that the topic is much broader than vision, but I have left the title. One of the implications concerns our understanding of causation, and our learning about causation, discussed in the next presentation. There are also implications regarding visual/spatial reasoning. The work of Rick Grush reported in BBS 2004 is very closely related to the ideas presented here.
    See http://mind.ucsd.edu/papers/intro-emulation/intro-em.pdf

    The theory is also very closely related to theories about the development of mathematical competences presented above, and also presentations on perception of affordances and proto-affordances here.


    Talk 34: TUTORIAL ON INTEGRATION AT EC COGNITIVE SYSTEMS 'KICKOFF' CONFERENCE,
    (Bled, Slovenia, 28-30 October 2004)
    Available HERE (PDF)

    Aaron Sloman


    Talk 33: THE ALTRICIAL-PRECOCIAL SPECTRUM FOR ROBOTS

    Talk at IJCAI-05 5th Aug 2005 (Chappell and Sloman), based on this paper in the conference proceedings.
    Slides Available HERE (PDF)
    (Overlaps with several previous talks)
    The presentation at the conference included a number of movies.

    The Birmingham CoSy Web Site includes several sequels to this paper. See also talks on understanding of causation in animals and machines at WONAC 2007.


    Talk 32: ROYAL SOCIETY OF EDINBURGH MEETING: Artificial Intelligence: In your Life Today
    Available HERE (PDF)
    (Overlaps with several previous talks)

    Talk at RSE 5th Aug, Co-located with IJCAI-05, Edinburgh. Event Web Site is here


    Talk 31: ARCHITECTURES FOR HUMAN-LIKE MACHINES

    Talk at Goldsmiths 19th Jan 2005

    Abstract:

    Much discussion of the nature of human minds is based on prejudice or fear of one sort or another -- sometimes arising out of 'turf wars' between disciplines, sometimes out of dislike of certain theories of what we are, sometimes out of religious concerns, sometimes out of ignorance of what has already been learnt in various disciplines, sometimes out of over-reliance on common sense and introspection, or what seems 'obviously' true. But one thing is clear to all: minds are active, changing entities: you change as you read this abstract and you can decide whether to continue reading it or stop here. I.e. minds are active machines of some kind.

    So I propose that we investigate, in a dispassionate way, the variety of design options for working systems capable of doing things that minds can do, whether in humans or other animals, in infants or adults, in normal or brain-damaged people, in biological or artificial minds. We can try to understand the trade-offs between different ways in which complete systems may be assembled that can survive and possibly reproduce in a complex and changing environment (including other minds). This can lead to a new science of mind in which the rough-hewn concepts of ordinary language (including garden-gate gossip and poetry) are shown not to be wrong or useless, but merely stepping stones to a richer, deeper, collection of ways of thinking about what sorts of machines we are, and might be.

    This will also help to shed new light on the recent (confused) fashion for thinking that emotions are 'essential' for intelligence. It should also help us to understand how the concerns of different disciplines, e.g. biology, neuroscience, psychology, linguistics, philosophy, etc. relate to different layers of virtual machines operating at several different levels of abstraction, as also happens in computing systems.

    Other talks in this directory elaborate further on some of the themes presented.


    Talk 30: VARIETIES OF MEANING

    Talk to Language and Cognition seminar, the University of Birmingham, 5th Nov 2004

    Available here

    This talk explains why 'symbol tethering' (which treats most of meaning as determined by structure, with experience and action helping to reduce indeterminacy) is more useful for explicit forms of representation and theorising than 'symbol grounding' (which treats all meaning as coming 'bottom-up' from experience of instances, and which is just another variant on the old philosophical theory 'concept empiricism', defended by empiricist philosophers such as Locke, Berkeley and Hume, and refuted around 1781 by Kant).

    NOTE: following a suggestion from Jackie Chappell, I now use the phrase 'symbol tethering' instead of 'symbol attachment'.

    Since writing this I have discovered another attack on concept empiricism on the web page of Edouard Machery. See Concept Empiricism: Taking a Hard Look at the Facts.

    This talk overlaps in part with Talk 49 and Talk 14

    The talk was originally entitled 'Varieties of meaning in perceptual processes', but I did not manage to get to the perceptual processes part, which is being developed in this paper.

    These slides are likely to be updated when I have time to complete the planned section on varieties of meaning in perceptual mechanisms.


    Talk 29: UKCRC GRAND CHALLENGE 5: 'ARCHITECTURE OF BRAIN AND MIND'

    Overview presentation at Grand Challenges Conference April 2004 (Also at Edinburgh University in November 2004).


    Talk 28: DO INTELLIGENT MACHINES, NATURAL OR ARTIFICIAL, REALLY NEED EMOTIONS?
    Revised: 14 Jan 2014

    Abstract

    Since the publication of the book "Descartes' Error" in 1994 by Antonio Damasio, a well-known neuroscientist, it has become very fashionable to claim that emotions are necessary for intelligence. I think the claim is confused and the arguments presented for it fallacious.

    Part of the problem is that many of the words we use for describing human mental states and processes (including 'emotion' and 'intelligence') are far too ill-defined to be useful in scientific theories. Nevertheless there are many people who LIKE the idea that emotions, often thought of as inherently irrational, are required for higher forms of intelligence, the suggestion being that rationality is not all it's cracked up to be. But wishful thinking is not a good basis for advancing scientific understanding.

    Another manifestation of wishful thinking is people attributing to me opinions that are the opposite of what I have written in things they claim to have read.

    So I propose that we investigate, in a dispassionate way, the variety of design options for minds, whether in animals (including humans) or machines, and try to understand the trade-offs between different ways of assembling systems that survive in a complex and changing environment. This can lead to a new science of mind in which the rough-hewn concepts of ordinary language (including garden-gate gossip and poetry) are shown not to be wrong or useless, but merely stepping stones to a richer, deeper, collection of ways of thinking about what sorts of machines we are, and might be.

    For more on this see http://www.cs.bham.ac.uk/research/cogaff/

    This overlaps considerably with

    See also:
        Beyond shallow models of emotion, in
        Cognitive Processing: International Quarterly of Cognitive Science,
        Vol 2, No 1, pp. 177-198, 2001
        http://tinyurl.com/BhamCog/00-02.html#74
    and this review/comment:
        http://www.ce3c.com/emotion/?p=106
    


    Talk 27: REQUIREMENTS FOR VISUAL/SPATIAL REASONING

    Talk to language and cognition seminar, Birmingham, Oct 2003
    Available in two formats using Postscript and PDF here:

    Abstract

    This is yet another set of slides about the role of vision and spatial understanding in reasoning, but with especial emphasis on affordances and the fact that since the possibilities for action and the affordances are different at different spatial scales, and in different contexts, our understanding of space will have different components concerned with those different scales and contexts.

    What is it that an 18 month old child has not yet grasped when he cannot see how to join two parts of a toy train, despite having excellent vision and many motor skills? And what changes soon afterwards, when he has learnt how to do it?

    This overlaps considerably with Talk 7 and Talk 21 on Human Vision

    See also these More recent slides on Two views of child as scientist: Humean and Kantian (October 2005).


    TALK 26: WHAT ARE INFORMATION-PROCESSING MACHINES?
    WHAT ARE INFORMATION-PROCESSING VIRTUAL MACHINES?
    Notes from a workshop on models of consciousness, Sept 2003, and a presentation in York, Feb 2004 -- updated from time to time since then e.g. for talk in Birmingham 16 Oct 2008.

    The slides are available in two formats:

    Abstract

    For many years, like many other scientists, engineers and philosophers, I have been writing and talking about "information-processing" systems, mechanisms, architectures, models and explanations. Since the word "information" and the phrase "information-processing" are both widely used in the sense in which I was using them, I presumed that I did not need to explain what I meant. Alas, I was naively mistaken:

    The conceptual confusions related to these notions lead to spurious debates, often at cross-purposes, because people do not recognize the unclarity in their concepts and the differences between their usages and those of other disputants. I found evidence for this at two recent workshops I attended, both of which were in other ways excellent: the Models of Consciousness Workshop in Birmingham and the UK Foresight Interaction workshop in Bristol, both held in the first week of September 2003.

    What I heard in that week, often heard in previous discussions, finally provoked me to bring together a collection of points in "tutorial" mode. Hence these slides, developing a number of claims, including these:

    This is work in progress. Comments and criticisms welcome. The presentation will be updated/improved from time to time. These slides are closely related to a presentation attacking the notion of 'symbol grounding' and proposing 'symbol tethering' instead. (There are also older slides attacking the notion of 'symbol grounding' (Talk 14).)

    I also have some online notes on What is information? Meaning? Semantic content?
    Now a book-chapter:

    What's information, for an organism or intelligent machine? How can a machine or organism mean?, in
    Information and Computation, Eds. G. Dodig-Crnkovic and M. Burgin, World Scientific, New Jersey.
    http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905

    See also Talk 12: Supervenience and Implementation.


    Talk 25: ARCHITECTURE-BASED PHILOSOPHY OF MIND (ASSC7 Version)
    What kind of virtual machine is capable of human consciousness?


    Originally presented as an invited talk at ECAP03, Glasgow, 28th March 2003 (see Talk 23), then at the University of Notre Dame in April 2003, then revised and reorganised for an invited talk at ASSC7, May-June 2003:
    http://www.cs.memphis.edu/~assc7/

    DRAFT INCOMPLETE SET OF SLIDES, STILL BEING MODIFIED (3 Jun 2003).
    Available in two formats using Postscript and PDF here:

    And in a version for printing two slides per page (if you insist!):

    Abstract

    Most people think that because they experience and talk about consciousness they have a clear understanding of what they mean by the noun "consciousness". This is just one of many forms of self-deception to be expected in a sufficiently rich architecture with reflective capabilities that provide some access to internal states and processes, but which could not possibly have complete self-knowledge. This talk will approach the topic of understanding what a mind is from the standpoint of a philosophical information-engineer designing minds of various kinds.

    A key idea is that besides physical machines that manipulate matter and energy there are virtual machines that manipulate information, including control information. A running virtual machine (for example, a running instance of the Java virtual machine) is not just a mathematical abstraction (like the generic Java virtual machine). A running virtual machine includes processes and events that can interact causally with one another, with the underlying physical machine, and with the environment. People rely on the causal powers of such virtual machines when they use the internet, use word processors or spelling checkers, or use aeroplanes with automatic landing systems. So they are not epiphenomenal.

    Such a virtual machine may be only very indirectly related to the underlying physical machine and in particular there need not be any simple correlations between virtual machine structures and processes and physical structures and processes. This can explain some of the alleged mystery in the connections between mental entities and processes and brain entities and processes.

    We'll see how some designs for sophisticated information-processing virtual machines are likely to produce systems that will discover in themselves the very phenomena that first led philosophers to talk about sensory qualia and other aspects of consciousness. This can serve to introduce a new form of conceptual analysis that builds important bridges between philosophy, psychology, neuroscience, biology, and engineering. For instance, qualia can be accounted for as internally referenced virtual machine entities, which are described using internally developed causally-indexical predicates that are inherently incommunicable between different individuals.

    All this depends crucially on the concept of a virtual machine which despite being virtual has causal powers.
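
    The following toy interpreter is my own illustrative sketch, not anything from the slides: a minimal virtual machine, written in Python, in which a virtual-machine event (a threshold being crossed) alters subsequent behaviour and produces an external effect, even though no single fixed physical location is "the" accumulator or "the" alarm state -- both are realised indirectly in the interpreter's data structures.

        # A minimal sketch: a tiny virtual machine whose virtual events --
        # rule firings on an abstract "alarm" state -- have causal powers.

        class TinyVM:
            def __init__(self, program):
                self.program = program      # virtual machine instructions
                self.acc = 0                # a virtual register: no fixed physical address
                self.alarm_raised = False   # a virtual-machine-level state

            def step(self, instr):
                op, arg = instr
                if op == "ADD":
                    self.acc += arg
                elif op == "CHECK":
                    # A virtual event (threshold crossing) with causal powers:
                    # it changes later VM behaviour and produces external output.
                    if self.acc > arg and not self.alarm_raised:
                        self.alarm_raised = True
                        print("alarm: accumulator exceeded", arg)

            def run(self):
                for instr in self.program:
                    self.step(instr)

        TinyVM([("ADD", 5), ("CHECK", 3), ("ADD", 2), ("CHECK", 3)]).run()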

    Papers and talks providing background to the presentation can be found here:
    http://www.cs.bham.ac.uk/research/cogaff/

    For more information on the Association for the Scientific Study of Consciousness, see http://assc.caltech.edu/


    Talk 24: VARIETIES OF AFFECT AND LEARNING IN A COMPLETE HUMAN-LIKE ARCHITECTURE


    Presented at Stanford Symposium on Advances in Cognitive Architectures March 22-23 2003.
    Also presented at University of Notre Dame, 3rd April 2003. This overlaps with Talk 3.
    Available in two formats using Postscript and PDF here:

    Abstract

    Recent research on different layers in an integrated architecture, using differing forms of representation, different types of mechanisms, and different information, to provide different functional capabilities, suggests a way of thinking about classes of possible architectures (the CogAff schema), tentatively proposed as a framework for comparing and contrasting designs for complete systems. An exceptionally rich special case of the schema, H-Cogaff, incorporating diverse concurrently active components, layered not only centrally but also in its perceptual and action mechanisms, seems to accommodate many features of human mental functioning, explaining how our minds relate to many different aspects of our biological niche.

    This architecture allows for more varieties of learning and development than are normally considered, and also for more varieties of affective states, including different kinds of pleasures, pains, motives, evaluations, preferences, attitudes, moods, and emotions. These differ according to which portions of the architecture are involved, what their effects are within that and other portions of the architecture, what sorts of information they are concerned with, and how they affect external behaviour. These ideas have implications both for applications of AI (e.g. in digital entertainments, or in the design of learning environments), and for scientific theories about human minds and brains.

    Here's a sketch of H-Cogaff

    For more on these ideas see these talks http://www.cs.bham.ac.uk/research/cogaff/talks/
    And the Cognition and Affect project papers http://www.cs.bham.ac.uk/research/cogaff/
    A relevant paper


    Talk 23: ARCHITECTURE-BASED PHILOSOPHY OF MIND (ECAP03 Version)
    What kind of virtual machine is capable of human consciousness?


    Presented at ECAP03 Glasgow 28th March 2003.
    Modified version also presented at University of Notre Dame, 4th April 2003. Presented in a different way at ASSC7, in Talk 25 below.
    Available in two formats using Postscript and PDF here:

    Abstract

    Most people think that because they experience and talk about consciousness they have a clear understanding of what they mean by the noun "consciousness". This is just one of many forms of self-deception to be expected in a sufficiently rich architecture with reflective capabilities that provide some access to internal states and processes, but which could not possibly have complete self-knowledge. This talk will approach the topic of understanding what a mind is from the standpoint of a philosophical information-engineer designing minds of various kinds.

    We'll see how some designs are likely to produce systems that will discover in themselves the very phenomena that first led philosophers to talk about sensory qualia and other aspects of consciousness. This can serve to introduce a new form of conceptual analysis that builds important bridges between philosophy, psychology, neuroscience, biology, and engineering. It depends crucially on the concept of a virtual machine which despite being virtual has causal powers. Papers and talks providing background to the presentation can be found here:
    http://www.cs.bham.ac.uk/research/cogaff/


    Talk 22: THE IRRELEVANCE OF TURING MACHINES TO ARTIFICIAL INTELLIGENCE


    Presented at School of Computer Science theory seminar, Birmingham, Friday 28th Feb 2003, and at University of Nevada Reno, 20 March 2003
    Available in two formats using Postscript and PDF here:

    Abstract

    The claim that the development of computers and of AI depended on the notion of a Turing machine is criticised. Computers were the inevitable result of the convergence of two strands of technology with a very long history: machines for automating various physical processes and machines for performing abstract operations on abstract entities, e.g. doing numerical calculations or playing games.

    Some of the implications of combining these technologies, so that machines could operate on their own instructions, were evident to Babbage and Lovelace, in the 19th century. Although important advances were made using mechanical technology (e.g. punched cards in Jacquard looms and in Hollerith machines used for manipulating census information in the USA) it was only the development of new electronic technology in the 20th century that made the Babbage/Lovelace dream a reality. Turing machines were a useful abstraction for investigating abstract mathematical problems, but they were not needed for the development of computing as we know it.

    Various aspects of these developments are analysed, along with their relevance to AI (which will use whatever information-processing technology turns up, whether computer-like or not). I'll discuss some similarities between computers, viewed as described above, and animal brains. This comparison depends on a number of distinctions: between energy requirements and information requirements of machines, between physical structure and virtual machine structure, between ballistic and online control, between internal and external operations, and between various kinds of autonomy and self-awareness. In passing, I defend Chomsky's claim that humans have infinite competence (e.g. linguistic, mathematical competence) despite performance limitations. The same holds for virtual machines in computers.
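
    To illustrate the competence/performance point (my own example, not the talk's): the recursive definition below has unbounded competence -- it specifies a correct output for every n -- while its actual performance is bounded by the interpreter's recursion depth and available memory, not by anything in the definition itself.

        # "Infinite competence" meets finite performance.

        def balanced(n):
            """Return the string a^n b^n; defined for every n >= 0."""
            if n == 0:
                return ""
            return "a" + balanced(n - 1) + "b"

        print(balanced(3))            # 'aaabbb' -- competence realised
        try:
            balanced(10 ** 6)         # competence outruns performance
        except RecursionError:
            print("performance limit reached; the competence is unchanged")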

    These engineering ideas, which owe nothing to Turing machines, or the mathematical theory of computation, are all intuitively familiar to software engineers, though rarely made fully explicit. The ideas are important both for the scientific task of understanding, modelling or replicating human or animal intelligence and for the engineering applications of AI, as well as other applications of computers. I think Turing himself understood all this.

    The talk is partly based on this paper:

    A. Sloman, 'The irrelevance of Turing machines to AI' in Matthias Scheutz, Ed., Computationalism: New Directions MIT Press, 2002. (Also online at http://www.cs.bham.ac.uk/research/cogaff/),


    Talk 21: HUMAN VISION --- A MULTI-LAYERED MULTI-FUNCTIONAL SYSTEM


    This was a presentation at a symposium of the British Machine Vision Association http://www.bmva.ac.uk/ on Reverse Engineering the Human Vision System: Biologically Inspired Computer Vision Approaches, held in London on 29 January 2003.
    Available in two formats using Postscript and PDF here (large files because of images included): See also slides for A (possibly) new theory of vision (October 2005)

    Abstract

    I try to show how a full account of human vision will have to analyse it as a multi-functional system doing very different kinds of processing in parallel, serving different kinds of purposes. These include various kinds of processing that we share with animals that evolved much earlier. In particular there are: processes linked to purely reactive mechanisms such as posture control and saccadic triggers; processes providing "chunks" at different levels of abstraction both in the 2-D and 3-D domains; processes providing "parsed" descriptions of complex multi-component structures (e.g. seeing a pair of scissors, reading a sentence); processes categorising types of motion (e.g. watching a swaying branch before jumping onto it, or an approaching predator); processes recognising very abstract functional and causal properties and relations (support, pushing, constraining); processes concerned with detecting various sorts of mental states in other information processors (predators, prey, and conspecifics in social species); and processes concerned with categorising things that don't exist but could exist, e.g. seeing possibilities for action, possible effects of various changes, and other visual "affordances" (generalising J.J. Gibson).

    Most research on vision, whether in AI, psychology, or neuroscience tends to be very narrowly focused on particular tasks requiring particular forms of representation and particular algorithms.

    The multi-functional viewpoint presents a framework for trying to bring different research programmes together, posing new, very demanding constraints because of the great difficulty of designing such complex systems in an integrated fashion.
    More detailed presentations are in papers in the CogAff directory. Some of the other talks listed here are also relevant.


    Talk 20: WHEN WILL REAL ROBOTS BE AS CLEVER AS THE ONES IN THE MOVIES?


    This was originally a presentation at the 2003 Conference of the Association for Science Education (ASE) held at The University of Birmingham 3rd-5th January 2003.
    Also given as an open lecture in the School of Physics and Astronomy, Birmingham 12th Feb 2003, and at the Theoretical Physics Colloquium, Department of Applied Mathematics and Theoretical Physics, Cambridge University.
    Available in two formats using Postscript and PDF here (large files because of images included):

    Abstract

    During the second half of the 20th Century, many Artificial Intelligence researchers made wildly over-optimistic claims about how soon it would be possible to build machines with human-like intelligence. Some even predicted super-human intelligent machines, which might be a wonderful achievement or a disaster, depending on your viewpoint. But we are still nowhere near machines with the general intelligence of a child, or a chimpanzee, or even a squirrel, although many machines easily outperform humans in very narrowly defined tasks, such as playing certain board games, checking mathematical proofs, solving some mathematical problems, solving various design problems, and some factory assembly-line tasks.

    This talk attempts to explain why, despite enormous advances in materials science, mechanical and electronic engineering, software engineering and computer power, current robots (and intelligent software systems) are still so limited. The main reason is our failure to understand what the problems are: what collection of capabilities needs to be replicated. We need to understand human and animal minds far better than we do. This requires much deeper understanding of processes such as perception, learning, problem-solving, self-awareness, motivation and self-control. We also need to extend our understanding of possible architectures for information-processing virtual machines. I shall outline some of the less obvious problems, such as problems in characterising the tasks of visual perception, and sketch some ideas for architectures that will be needed to combine a wide variety of human capabilities. This has many implications for the scientific study of humans, and also practical implications, for instance in the teaching of mathematics. It also has profound implications for philosophy of mind.


    Talk 19: TOWARDS HUMAN-MACHINE SYMMETRY


    A talk on how to get more intelligence into user interfaces, thereby reducing the asymmetry between humans and machines. Presented to second-year AI and CS students at Birmingham, December 2002.
    Available in three formats using Postscript and PDF here:

    Abstract

    This is a first draft of a talk on interface design that I expect to go on improving over time. It is in part motivated by hearing many talks on interface design that fail to pay any attention to questions about the kinds of information processing mechanisms that humans use when interacting with machines (or with one another). This often leads to bad designs.


    Talk 18: WHAT IS SCIENCE? (CAN THERE BE A SCIENCE OF MIND?)

    Presented at the launch of Cafe Scientifique for Birmingham, at the Midlands Arts Centre, Birmingham, 25th October 2002. The slides were expanded after the meeting.
    Available in two formats using Postscript and PDF here:

    Abstract

    This presentation gives an introduction to philosophy of science, though a rather idiosyncratic one, stressing science as the search for powerful new ontologies rather than merely laws. You can't express a law unless you have an ontology including the items referred to in the law (e.g. pressure, volume, temperature). The talk raises a number of questions about the aims and methods of science, and about the differences between the physical sciences and the science of information-processing systems (e.g. organisms, minds, computers). It asks whether there is a unique truth or final answers to be found by science, whether scientists ever prove anything (no -- at most they show that some theory is better than any currently available rival theory), and why science does not require faith (though obstinacy can be useful). The slides end with a section on whether a science of mind is possible, answering yes, and explaining how.

    See also presentations on virtual machines, e.g. my talk at WPE 2008.


    Talk 17: AI And The Study Of Mind

    Presented at a symposium organised by the Computer Conservation Society at the Science Museum, London on 11th October 2002.
    CCS Symposium: Artificial Intelligence -- Recollections of the Pioneers
    Available in two formats using Postscript and PDF here: The slides, expanded after the symposium, present some autobiographical notes and summarise briefly the history of AI as the new science explaining what minds are and how they work.

    A more detailed record of the meeting, with slides of other speakers and pictures, can be found here: http://www.aiai.ed.ac.uk/events/ccs2002/


    Talk 16: MORE THINGS THAN ARE DREAMT OF IN YOUR BIOLOGY: Information processing in biologically-inspired robots.
    By Aaron Sloman and Ron Chrisley

    Presented at EPSRC/BBSRC International Workshop on Biologically-Inspired Robotics: The Legacy of W. Grey Walter 14-16 August 2002, HP Bristol Labs, UK
    Available in two formats using Postscript and PDF here:

    Abstract

    This paper is concerned with some methodological and philosophical problems related both to the long-term objective of building human-like robots (like those 'in the movies') and to short- and medium-term objectives of building robots with the capabilities of more or less intelligent animals. In particular, we claim that organisms are information-processing machines, and thus information-processing concepts will be essential for designing biologically-inspired robots. However, identifying relevant concepts is non-trivial, since what an information processor is doing cannot in general be determined simply by observing it. A phenomenon that we label 'ontological blindness' often gets in the way. We give some examples to illustrate this difficulty. Having a general framework for describing and comparing agent architectures may help. We present the CogAff schema as a first draft framework that can be used to help overcome some kinds of ontological blindness by directing research questions.
    The full paper is at the Cognition and Affect web site.


    Talk 15: CAN WE DESIGN A MIND?

    Keynote talk presented at AI in Design Conference: AID02 Cambridge, UK, 15th July 2002.
    Available in two formats using Postscript and PDF here: (Still undergoing revision. Last changed 17 Jul 2002. Comments welcome.)

    Abstract

    Evolution, the great designer, has produced minds of many kinds, including minds of human infants, toddlers, teenagers, and minds of bonobos, squirrels, lambs, lions, termites and fleas. All these minds are information processing machines. They are virtual machines implemented in physical machines. Many of them are of wondrous complexity and sophistication. Some people argue that they are all inherently unintelligible: just a randomly generated, highly tangled mess of mechanisms that happen to work, i.e. they keep the genes going from generation to generation.

    I attempt to sketch and defend an alternative view: namely that there is a space of possible designs for minds with an intelligible structure, and that features of this space constrained what evolution could produce. The CogAff architecture schema gives a first approximation to the structure of that space of possible (evolvable) agent architectures. H-CogAff is a special case that (to a first approximation) seems to explain many human capabilities.

    By understanding the structure of that space, and the trade-offs between different special cases within it, we can begin to understand some of the more complex biological minds by seeing how they fit into that space. Doing this properly for any type of organism (e.g. humans) requires understanding the affordances that the environment presents to those organisms -- a difficult task, since in part understanding the affordances requires us to understand the organism at the design level, e.g. understanding its perceptual capabilities.

    This investigation of alternative sets of requirements and the space of possible designs should also enable us to understand the possibilities for artificial minds of various kinds, also fitting into that space of designs. And we may even be able to design and build some simple types in the near future, even if human-like systems are a long way off.

    (This talk is closely related to several of the previous talks, e.g. on emotions, on consciousness, on perception, on architectures.)

    There's a brief report on some of this work by Michael Brooks, in the New Scientist of 25th Feb 2009:
    http://www.newscientist.com/article/mg20126971.800-rise-of-the-robogeeks.html
    Unfortunately it emphasises the engineering potential more than the scientific and philosophical goals -- due to space limitations, I understand.


    Talk 14: Getting meaning off the ground: symbol grounding vs symbol attachment/tethering

    This talk was presented first at Birmingham on Monday 11th March 2002 then in a much revised form at MIT Media Lab on Friday 15th March 2002.

    A revised version of a subset of the presentation was produced in September-November 2007 as Talk 49, on model-based semantics and why theory tethering is better than symbol grounding. For most people that will be a better introduction to this topic.

    Available in four formats using PDF and Postscript (which may need to be inverted) here:

    Note added 23 Aug 2005: Jackie Chappell persuaded me that instead of the phrase 'Symbol Attachment' I should use 'Symbol Tethering'. That is explained more clearly in Talk 49.

    Abstract

    This presentation attacks concept empiricism, the theory that all concepts are abstracted from experience of instances or defined in terms of concepts previously understood, recently re-invented and called "symbol-grounding" theory. The attack is closely related to the philosopher Kant's attack on concept empiricism, when he argued that concepts are required in order to have experience, and therefore not all concepts can be derived from experience. Within this framework we explain how a person blind from birth can understand colour concepts, for example.

    A newer talk on 'Varieties of Meaning' presents additional arguments and explains some of the ideas in more detail.

    Several other presentations here (e.g. the presentation on information processing virtual machines) are also relevant.

    A related discussion paper (HTML) asks how a learner presented with a 2-D display of a rotating Necker cube could develop the 3-D ontology as providing the best way to see what's going on.
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nature-nurture-cube.html
    (including pointers to some online rotating cubes!).

    A simpler example of continuously moving linear objects projected onto a 2-D discrete array:
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/simplicity-ontology.html
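
    As a rough sketch of the kind of stimulus that note discusses (my reconstruction, not code from the note): a line whose position varies continuously is sampled onto a discrete binary array, and the learner's problem is to posit a continuously moving object behind the succession of discrete frames.

        # A continuously moving vertical line sampled onto a discrete 2-D grid.

        WIDTH, HEIGHT, STEPS = 12, 4, 5

        def frame(x_continuous):
            """Project a vertical line at real-valued x onto a discrete grid."""
            col = round(x_continuous) % WIDTH
            return [["#" if c == col else "." for c in range(WIDTH)]
                    for _ in range(HEIGHT)]

        x, velocity = 0.0, 2.3          # continuous position and speed
        for t in range(STEPS):
            for row in frame(x):
                print("".join(row))
            print()
            x += velocity               # motion is continuous; frames are discrete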

    Related discussion papers and presentations on the CoSy robot project web site include

    Two papers written a few years before Harnad's symbol grounding paper presented a draft version of a theory explaining how a machine can use symbols to refer to things in a way that does not require causal connection with those things. Both papers presuppose an understanding of the way a formal system can determine a set of Tarskian models.
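
    The point about Tarskian models can be made concrete with a small propositional example (mine, not from those papers): a set of axioms determines the set of interpretations satisfying it, and adding axioms shrinks that set, so the theory increasingly constrains what its undefined symbols can mean, without any causal connection to the things referred to.

        # A formal theory determines its set of (Tarskian) models.

        from itertools import product

        SYMBOLS = ["raven", "black", "bird"]

        def models(axioms):
            """All truth assignments over SYMBOLS satisfying every axiom."""
            result = []
            for values in product([False, True], repeat=len(SYMBOLS)):
                world = dict(zip(SYMBOLS, values))
                if all(axiom(world) for axiom in axioms):
                    result.append(world)
            return result

        theory = [lambda w: (not w["raven"]) or w["bird"]]       # ravens are birds
        print(len(models(theory)))                               # 6 models remain
        theory.append(lambda w: (not w["raven"]) or w["black"])  # ravens are black
        print(len(models(theory)))                               # 5: meaning tightened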


    Talk 13: Artificial Intelligence and Philosophy
    (New version Talk 109)

    This was a lecture to first year AI students at Birmingham, Dec 11th 2001, on AI and Philosophy, explaining how AI relates to philosophy and in some ways improves on philosophy. It was repeated December 2002, December 2003, October 2004, October 2005, each time changing a little. It introduces ideas about ontology, architectures, virtual machines and how these can help transform some old philosophical debates.

    Available in PDF here:


    Talk 12: Supervenience and Implementation

    Talk presented at the University of Birmingham, in September 2000, then at the University of Nottingham in November 2001.
    Revised October 2003, October 2007.

    Available in two formats using Postscript and PDF here:

    Abstract

    The slides introduce some problems about the relations between virtual machines and physical machines. I attempt to show how the philosophers' notion of "supervenience" is related to the engineer's concept of "implementation", and the computer scientist's notion of "virtual machine". This is closely related to very old philosophical problems about the relationship between mind and matter (or mind and brain).

    Virtual machines are "fully grounded" in physical machines without being identical with or in other ways reducible to them.

    One popular way of trying to understand virtual machines makes use of a common notion of 'functionalism'. This is often explained in terms of a virtual machine that has a state-transition table. This notion is criticised as inadequate and compared with a more sophisticated notion of a virtual machine that has multiple states of different sorts changing and interacting concurrently on different time-scales: Virtual Machine Functionalism (implicitly taken for granted by software engineers, but unfamiliar to many philosophers and others who discuss functionalism).
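
    The contrast can be sketched in a few lines of Python (my illustration, using hypothetical names): the first fragment captures the "one global state plus transition table" picture; the class below it has several coexisting states that change and interact on different time-scales, which is closer to what Virtual Machine Functionalism requires.

        # The "atomic state" picture: one global state and a transition table.
        TABLE = {("calm", "insult"): "angry", ("angry", "apology"): "calm"}
        state = "calm"
        state = TABLE.get((state, "insult"), state)   # the whole mind = one state

        # Virtual Machine Functionalism instead posits many coexisting states
        # changing and interacting concurrently on different time-scales:
        class MultiComponentVM:
            def __init__(self):
                self.percepts = []     # fast-changing
                self.mood = 0.0        # slow-changing, decays gradually
                self.goals = {"stay-safe"}

            def tick(self, event):
                self.percepts.append(event)          # updates every tick
                if event == "threat":
                    self.mood += 1.0                 # interacts with perception
                    self.goals.add("flee")           # and with deliberation
                self.mood *= 0.9                     # slow decay: a longer time-scale

        vm = MultiComponentVM()
        for e in ["noise", "threat", "noise"]:
            vm.tick(e)
        print(vm.mood, vm.goals)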

    Multi-component virtual machines are ubiquitous in our common sense ontology, though we don't normally notice, e.g. when we talk about social, political, and economic processes. Some philosophers argue that virtual machine events are purely "epiphenomenal" and therefore cannot have any effects.

    A rebuttal of this view requires a satisfactory analysis of the concept of "cause" -- one of the hardest unsolved problems in philosophy. A partial analysis is sketched, and shown to accommodate parallel causation in hierarchies of virtual machines. This allows mental causes to produce real effects. This should be no surprise to software engineers and computer scientists, who frequently build virtual machines precisely because they can have desired effects. Philosophers with no knowledge of computing often find this very hard to understand. A corollary is that the training of philosophers needs to be improved, and probably the training of psychologists also.
    See also Talk 5 (IJCAI Tutorial on Philosophy, 2001), and Talk 26 on Information-processing virtual machines.


    Talk 11: Artificial Intelligence development environments

    This was a lecture to first year AI students at Birmingham, Dec 4th 2001, on how AI programming is both like and unlike other forms of software engineering and how this influences design of AI languages and development environments. Since then it has been presented several times, with minor revisions.

    Available in three formats using Postscript and PDF here:


    Talk 10: What is Artificial Intelligence?

    A talk prepared for students who have applied for our Artificial Intelligence degree course, explaining what AI is, distinguishing AI as science and AI as engineering, and summarising some of the sub-fields of AI.

    The slides are available in Postscript and PDF here:

    See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/aiforschools.html


    Talk 9: Varieties of Consciousness

    Presented at Oxford University Consciousness Society, 24th Oct 2001. A modified version was presented to the CS/AI School seminar, Birmingham, on 8th Nov 2001.

    Many of the other talks overlap with this. Talk 23 is a follow-up to this, as is Talk 25.

    The slides are available in Postscript and PDF here:

    If reading the files using a postscript viewer, such as "gv" you may need to change the orientation (e.g. to seascape).


    Talk 8: Evolvable, Biologically Plausible Visual Architectures

    Presented at BMVC01 (British Machine Vision Conference, Sept 2001).

    The slides are available in Postscript and PDF here:

    See also slides for A (possibly) new theory of vision (October 2005)

    Abstract:

    Much work in AI is fragmented, partly because the subject is so huge that it is difficult for anyone to think about all of it. Even within sub-fields, such as language, reasoning, and vision, there is fragmentation, as the sub-sub-fields are rich enough to keep people busy all their lives. However, there is a risk that results of isolated research will be unsuitable for future integration, e.g. in models of complete organisms, or human-like robots. This paper offers an architectural framework for thinking about the many components of visual systems and how they relate to the whole organism or machine. The viewpoint is biologically inspired, using conjectured evolutionary history as a guide to some of the features of the architecture. It may also be useful both for modelling animal vision and designing robots with similar capabilities.

    If reading the files using a postscript viewer, such as "gv" you may need to change the orientation (e.g. to seascape).


    Talk 7: When is seeing (possibly in your mind's eye) better than deducing, for reasoning?

    Presented at CS & AI Theory seminar, Birmingham, Sept 2001
    Also at BCS/SGAI meeting, City University London, March 2006
    The slides are available in Postscript and PDF here: See also these More recent slides on Two views of child as scientist: Humean and Kantian (October 2005).

    If reading the files using a postscript viewer, such as "gv" you may need to change the orientation (e.g. to seascape).


    Talk 6: Invited Talk on architectures for human-like agents, presented at a one-day conference at Nokia Research Centre, Helsinki, on 8th June 2001

    The slides are available in Postscript and PDF here:
    These slides were revised in August 2006, partly taking into account ideas from two recent papers with Jackie Chappell

    COSY-TR-0502: The Altricial-Precocial Spectrum for Robots
    COSY-TR-0609: Altricial Self-organising Information-processing systems


    Talk 5: TUTORIAL ON Philosophical foundations: Some key questions
    Presented at IJCAI01, Seattle, 5th Aug 2001

    Information about this tutorial, presented jointly with Matthias Scheutz, can be found, along with Postscript and PDF versions of the slides for the tutorial (also in the tutorial booklet) here: http://www.cs.bham.ac.uk/~axs/ijcai01



    Talk 4a: Debate on This house believes that robots will have free will
    Available HERE (PDF)
    (Added: 24 Aug 2007, after rediscovering my slides!)

    A version (using 'flash') is also available on my 'slideshare.net' space.

    An International AI Symposium in memory of Sidney Michaelson was organised by the British Computer Society, Edinburgh Branch, on 7th April 2001.
    Reviewed here (with pictures).

    Abstract

    The event ended with a debate on the motion:
    "This house believes that robots will have free will"
    The review states:
    The formal part of the proceedings concluded with a debate. Getting this off the ground was no mean task. Can you imagine getting a bunch of academics to agree what they will debate and who will propose and oppose the motion? The email trail this exercise generated, including debating the voting strategy, became a marathon in itself. However, we achieved agreement, and Harold Thimbleby, Chris Huyck and Yorick Wilks spoke for, and Mike Brady, Aaron Sloman and Mike Burton spoke against the motion "This house believes that robots will have free will". The debate was chaired by Ian Ritchie (recent past president of BCS) who skilfully kept the speakers to time. A vote was taken before and after the debate. Before, the Ayes had a big majority, but at the final count outcome was evens: a good way to end.
    A picture of the opposing team is here.
    Two more serious papers on this topic are here


    Talk 4: HOW TO UNDERSTAND NATURAL MINDS OF MANY KINDS.

    This invited talk was presented at a workshop on Adaptive and interactive behaviour of animals and computational systems (AIBACS): organised by EPSRC and BBSRC at Cosener's House, Abingdon, on 28-29th March 2001.

    The slides are available in Postscript and PDF here:

    If reading the files using a postscript viewer, such as "gv" you may need to set the page size to A3.


    Talk 3: VARIETIES OF AFFECT AND THE CogAff ARCHITECTURE SCHEMA

    This talk was presented in the symposium on Emotion, cognition, and affective computing, at the AISB 2001 conference held at the University of York, March 2001.

    A revised version was presented at University College London on 19th Jun 2002 (Gatsby Centre and Institute for Cognitive Neuroscience).
    This overlaps with Talk 24.

    The slides are available in Postscript and PDF here:

    New version (June 2002)

    Old version (April 2001)

    Abstract

    In the last decade and a half, there has been a steadily growing amount of work on affect in general and emotion in particular, in empirical psychology, cognitive science and AI, both for scientific purposes and for the purpose of designing synthetic characters, e.g. in games and entertainments.

    Such work understandably starts from concepts of ordinary language (e.g. "emotion", "feeling", "mood", etc.). However, these concepts can be deceptive: the words appear to have clear meanings but are used in very imprecise and systematically ambiguous ways. This is often because people use explicit or implicit pre-scientific theories about mental states and processes which are incomplete or vague. Some of the confusion arises because different thinkers address different subsets of the phenomena.

    More sophisticated theories can provide a basis for deeper and more precise concepts, as has happened in physics and chemistry following the development of new theories of the architecture of matter which led to revisions of our previous concepts of various kinds of substances and various kinds of processes involving those substances.

    In the Cognition and Affect project we have been exploring the benefits of developing architecture-based concepts of mind. We start by defining a space of architectures generated by the CogAff architecture schema, which covers a variety of information-processing architectures, including, we think, architectures for insects, many kinds of animals, humans at different stages of development, and possible future robots.

    In this framework we can produce specifications of architectures for complete agents (of various kinds) and then find out what sorts of states and processes are supported by those architectures. Thus for each type of architecture there is a collection of "mental concepts" relevant to organisms or machines that have that sort of architecture.

    Thus we investigate a space of architectures linked to a space of possible types of minds, and for some of those minds we find analogues of familiar human concepts, including, for example, "emotion", "consciousness", "motivation", "learning", "understanding", etc.

    We have identified a special type of architecture H-Cogaff, a particularly rich instance of the CogAff architecture schema, conjectured as a model of normal adult human minds. The architecture-based concepts that H-Cogaff supports provide a framework for defining with greater precision than previously a host of mental concepts, including affective concepts, such as "emotion", "attitude", "mood", "pleasure" etc. These map more or less loosely onto various pre-theoretical versions of those concepts.

    For instance, H-Cogaff allows us to define at least three distinct varieties of emotions: primary, secondary and tertiary emotions, involving different layers of the architecture, which we believe evolved at different times. We can also distinguish different kinds of learning, different forms of perception, and different sorts of control of behaviour, all supported within the same architecture.
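
    A toy rendering of this layer-based classification (mine; the CogAff papers present it informally, not as code) labels an emotional episode by the highest architectural layer it engages:

        # Architecture-based concepts: classify a state by the layers involved.
        # Layer names follow the CogAff papers: reactive, deliberative,
        # meta-management.

        def emotion_variety(layers_involved):
            """Classify an emotional episode by the highest layer it engages."""
            if "meta-management" in layers_involved:
                return "tertiary"      # e.g. loss of control of attention
            if "deliberative" in layers_involved:
                return "secondary"     # e.g. anxiety about a planned action
            return "primary"           # e.g. a reactive startle

        print(emotion_variety({"reactive"}))                 # primary
        print(emotion_variety({"reactive", "deliberative"})) # secondary
        print(emotion_variety({"reactive", "deliberative",
                               "meta-management"}))          # tertiary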

    A different architecture, supporting a different range of mental concepts might be appropriate for exploring affective states of other animals, for instance insects, reptiles, or other mammals. Human infants probably have a much reduced version of the architecture which includes self-bootstrapping mechanisms that lead to the adult form.

    Various kinds of brain damage can be distinguished within the H-Cogaff architecture. We show that some popular arguments based on evidence from brain damage, purporting to show that emotions are needed for intelligence, are fallacious, because they don't allow for the possibility of common control mechanisms underlying both tertiary emotions and intelligent control of thought processes. Likewise we show that the widely discussed theory of William James, which requires all emotions to involve experience of somatic states, fails to take account of emotions that involve only loss of high-level control of mental processes, without anything like experience of bodily states.

    We have software tools for building and exploring working models of these architectures, but so far model construction is at a very early stage.

    Further details can be found here http://www.cs.bham.ac.uk/research/cogaff/


    Talk 2: SIMAGENT: TOOLS FOR DESIGNING MINDS
    A toolkit for philosophers and engineers

    The slides are available in Postscript and PDF here:

    Revised version March 2007

    The toolkit is also described here in more detail. http://www.cs.bham.ac.uk/research/poplog/packages/simagent.html
    Movie demonstrations of the toolkit are available here http://www.cs.bham.ac.uk/research/poplog/figs/

    The slides are modified versions of slides used for talks at a Seminar in Newcastle University in September 2000, at talks in Birmingham during October and December 2000, Oxford University in January 2001, IRST (Trento) in 2001, Birmingham in 2003 to 2007, and York University in Feb 2004.

    Abstract

    The SimAgent toolkit, developed in this school since about 1994 (initially in collaboration with DERA) and used for a number of different projects here and elsewhere, is designed to support both teaching and exploratory research on multi-component architectures for both artificial agents (software agents, robots, etc.) and also models of natural agents. Unlike many other toolkits (e.g. toolkits associated with SOAR, ACT-R, PRS) it does not impose a commitment to a particular class of architectures but allows rapid-prototyping of novel architectures for agents with sensors and effectors of various sorts (real or simulated) and many different kinds of internal modules doing different sorts of processing, e.g. perception, learning, problem-solving, generating new motives, producing emotional states, reactive control, deliberative control, self-monitoring and meta-management, and linguistic processing.

    The toolkit supports exploration of architectures with many sorts of processes running concurrently, and interacting in unplanned ways.

    One of the things that makes this possible is the use of a powerful, interactive, multi-paradigm extendable language, Pop-11 (similar in power and generality to Common Lisp, though different in its details). This has made it possible to combine within the same package support for different styles of programming for different sub-tasks, e.g. procedural, functional, rule-based, object oriented (with multiple inheritance and generic functions), and event-driven programming, as well as allowing modules to be edited and recompiled while the system is running, which supports both incremental development and testing and also self-modifying architectures.
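
    SimAgent itself is written in Pop-11, so the following Python fragment is only a schematic analogue (all names are mine, not the toolkit's API) of the pattern described above: each agent owns several condition-action rulesets, and a scheduler gives every agent a time-slice in each cycle, so that processes can interact in unplanned ways.

        # A schematic agent-scheduler in the SimAgent style (illustrative only).

        class Agent:
            def __init__(self, name, rulesets):
                self.name = name
                self.data = {"percepts": [], "motives": []}
                self.rulesets = rulesets            # e.g. perception, deliberation

            def run_cycle(self, world):
                for ruleset in self.rulesets:       # each module gets a turn
                    for condition, action in ruleset:
                        if condition(self, world):
                            action(self, world)

        def scheduler(agents, world, cycles):
            for _ in range(cycles):                 # discrete time-slices
                for agent in agents:                # unplanned interactions emerge
                    agent.run_cycle(world)

        # Tiny demo: an agent notices food in the world and adopts a motive.
        see_food = (lambda a, w: "food" in w,
                    lambda a, w: a.data["percepts"].append("food"))
        want_food = (lambda a, w: "food" in a.data["percepts"],
                     lambda a, w: a.data["motives"].append("eat"))

        bot = Agent("bot", [[see_food], [want_food]])
        scheduler([bot], {"food"}, cycles=1)
        print(bot.data["motives"])                  # ['eat']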

    A collaborative project between Birmingham and Nottingham is producing extensions to support distributed agents using the HLA (High Level Architecture) platform.

    The talk will give an overview of the aims of the toolkit, show some simple demonstrations, explain how some of it works, and provide information for anyone who wishes to try using it.

    The talk may be useful to students considering projects requiring complex agent architectures.

    FURTHER INFORMATION


    Talk 1: VARIETIES OF EVOLVABLE MINDS
    OR
    How to think about architectures for human-like
    and other agents
    OR
    How to Turn Philosophers of Mind into Engineers

    This talk was presented in Oxford on 22nd Jan 2001 in the seminar series of the McDonnell-Pew Centre for Cognitive Neuroscience

    The slides are available in Postscript and PDF here:

    Also presented at the University of Surrey, 7 Feb 2001, and in a modified form at a "consultation" between Christian scientists and AI researchers at Windsor Castle, Feb 14-16, 2001.

    The slides are modified versions of slides used for talks at ESSLLI in August 2000, at a Seminar in Newcastle University in September 2000, at a seminar in Nottingham University November 2000.


    BACK TO LIST OF CONTENTS AND POINTERS TO COSY TALKS


    OTHER COLLECTIONS OF SLIDES

    Slides for IBM Symposium March 2002:
    Architectures and the spaces they inhabit

    Two invited talks were given at a workshop followed by a conference on architectures for common sense, at the IBM T.J. Watson Research Center, New York, on 13th and 14th March 2002. The slides have been collected into a single long file.

    The other main speakers at the Conference were John McCarthy and Marvin Minsky.

    The slides attempt to explain (in outline) what an architecture is, what virtual machine functionalism is, what architecture-based concepts are, what the CogAff architecture schema is, what is in the H-Cogaff (Human-Cogaff) architecture, how this relates to different sorts of emotions and other mental phenomena, how architectures evolve or develop, trajectories in design space and niche space, and what some of the very hard unanswered questions are.

    UK Grand Challenge Project Proposal 2002

    Papers and slides prepared for the workshop in November 2002
    http://www.cs.bham.ac.uk/research/cogaff/gc

    And a more detailed specification:
    http://www.cs.bham.ac.uk/research/cogaff/manip/

    Presentation at DARPA Cognitive Systems Workshop Nov 2002
    How to Think About Cognitive Systems: Requirements and Designs

    http://www.cs.bham.ac.uk/research/cogaff/darpa02/




    NOTES and related references.

    NOTE: Both Postscript and PDF versions of slides should have several coloured slides. If the colours in the postscript version don't show up when you read it in Netscape, try saving the file and reading it with "gv". (This is probably a problem only on 8-bit displays.) The colours are not crucial: they merely help a little.


    Further papers on the topics addressed in the slides can be found in the Cognition and Affect Project directory http://www.cs.bham.ac.uk/research/cogaff/

    Comments and criticisms welcome.

    Our Software tools are available free of charge with full sources in the Free Poplog directory: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html



    ACKNOWLEDGEMENTS


    Some of this work arises out of, or was done as part of, a project funded by the Leverhulme Trust on
    Evolvable virtual information processing architectures for human-like minds (Oct 1999 -- June 2003)
    described here.

    The ideas are being developed further in the context of the EC-Funded CoSy project which aims to improve our understanding of design possibilities for natural and artificial cognitive systems integrating many different sorts of capabilities. CoSy papers and presentations are here.


    Creative Commons License
    This work is licensed under a Creative Commons Attribution 2.5 License.
    If you use or comment on our ideas please include a URL if possible, so that readers can see the original (or the latest version thereof).


    Last updated: 29 Nov 2009; 7 Jan 2010; 21 Jan 2010; 18 Feb 2010; 8 Mar 2010; 12 Mar 2010;
    28 Mar 2010; 13 May 2010; 19 May 2010; 23 Jul 2010; 27 Jul 2010; 8 Aug 2010; 15 Aug 2010;
    24 Sep 2010; 26 Sep 2010; 30 Sep 2010; 24 Dec 2010; 16 Jan 2011; 23 Feb 2011; 27 Feb 2011;
    5 Apr 2011; 26 Aug 2011; 30 Aug 2011; 16 Sep 2011; 15 Nov 2011; 1 Feb 2012; 21 Sep 2012;
    1 Dec 2012; 4 Dec 2012; 1 Jan 2013; 24 Jan 2013; 3 Mar 2013; 20 May 2013; 5 Jan 2014; 14 Jan 2014

    Maintained by Aaron Sloman
    School of Computer Science
    The University of Birmingham