Why symbol-grounding is both impossible and unnecessary, and why theory-tethering is more powerful anyway
Aaron Sloman, School of Computer Science, University of Birmingham
Date and time: Thursday 29th November 2007 at 16:00
Location: UG40, School of Computer Science
Concept empiricism is an old, very tempting, and mistaken theory, going back to David Hume and his precursors, recently re-invented as "symbol-grounding" theory and endorsed by many researchers in AI and cognitive science, even though it was refuted long ago by the philosopher Immanuel Kant (in his Critique of Pure Reason, 1781).
Roughly, concept empiricism states:
- All concepts are ultimately derived from experience of instances
- All simple concepts have to be abstracted directly from experience of instances
- All non-simple (i.e. complex) concepts can be defined in terms of simple concepts using logical and mathematical methods of composition
Symbol grounding theories may add extra requirements, such as that the experience of instances must use sensors that provide information in a structure that is close to the structure of the things sensed. This is closely related to sensorimotor theories of cognition, which work well for much of insect cognition. People are tempted by concept empiricism because they cannot imagine any way of coming to understand concepts other than by experiencing instances or by defining new concepts explicitly in terms of old ones.
My talk will explain how Kant's refutation was elaborated by philosophers of science attempting to explain how theoretical terms like 'electron', 'gene', 'valence', etc. could have semantic content, and will go on to show that there is an alternative way of providing semantic content: a theory can implicitly define the undefined symbols it uses. The meanings are partly indeterminate insofar as a theory can have more than one model. The indeterminacy can be reduced by 'tethering' the theory using 'bridging rules' that link the theory to evidence. This does not require symbols in the theory to be 'grounded'. A tutorial presentation of these ideas is available here: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
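The model-theoretic point above can be made concrete with a toy sketch (my own illustration, not part of the talk): a small "theory" constrains an undefined predicate P over a three-element domain without pinning it down, so several interpretations satisfy it; a hypothetical "bridging rule" linking P to an observation then eliminates some of those models, reducing (but not eliminating) the indeterminacy.

```python
from itertools import product

# Toy domain and an undefined unary predicate P, represented as a
# truth-value assignment over the domain.
domain = [0, 1, 2]

# "Theory": axioms that constrain P without explicitly defining it.
#   Axiom 1: P holds of at least one element.
#   Axiom 2: P does not hold of every element.
def satisfies_theory(P):
    return any(P[x] for x in domain) and not all(P[x] for x in domain)

# Every assignment of truth values to P is a candidate interpretation.
interpretations = [dict(zip(domain, vals))
                   for vals in product([False, True], repeat=len(domain))]

models = [P for P in interpretations if satisfies_theory(P)]
print(len(models))    # 6 -- the theory alone leaves P's meaning indeterminate

# Hypothetical "bridging rule": tie P to evidence, e.g. element 0 is
# observed to be a P-instance.
def bridging_rule(P):
    return P[0]

tethered = [P for P in models if bridging_rule(P)]
print(len(tethered))  # 3 -- tethering narrows, but does not fix, the meaning
```

The residual plurality of tethered models mirrors the claim that tethering reduces indeterminacy without requiring each symbol to be individually 'grounded' in experience.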
It turns out that there is a growing community of researchers who reject symbol grounding theory and are moving in this direction. This has implications for forms of learning and development in both robots and animals, including humans.