Yet another (more enjoyable?) behavioural test for intelligence.
I have previously argued in favour of a
theory not a test, but ...
If you can't beat them, join them.
FIRST DRAFT -- COMMENTS WELCOME
Installed: 28th Nov 2014;
4 Feb 2017: (Added ref to Michael Rescorla on Computational Mind in SEP.)
25 Jul 2015: (Added tongue in cheek and Sheffield Anton
12 May 2015: (Added reference on blind mathematicians)
22 Mar 2015: (Added link to Ron Chrisley's page)
30th Nov 2014; 2 Dec 2014; 22 Dec 2014
I have also repeatedly argued not only that Alan Turing did not propose a
behavioural test for intelligence (he was far too intelligent to do that
[*]) but that
what we really need are tests for a good theory of human
intelligence that is relevant to a wide variety of forms of human
learning, development, society and culture, in the way that the theory of Turing
machines is relevant to a very wide range (indeed an infinite variety) of Turing
machines, and Turing machine behaviours.
[*]See The Mythical Turing Test
Theories about different sorts of intelligence (e.g. ant intelligence, squirrel intelligence, crow intelligence) require different sorts of tests. The Turing-inspired Meta-Morphogenesis project aims to encompass all of them.
Proposing a behavioural test for intelligence based on the ability to replicate human behaviour is partly like proposing a test for a planet's ability to support life based on the proportion of life forms that planet shares with ours. It's a silly form of cherry-picking, a charge that I hope my proposal below will not merit, despite its crucial use of cherries.
I expect most readers of this document will have seen cartoon pictures indicating the relative numbers of neuronal connections between brain and mouth, supporting my claim (below) about the profound role of the mouth (including lips, tongue, cheeks, jaw and other movable parts) in human intelligence. The claim itself, however, is not based on evidence from brain science, but on chewing over everything I know about the role of a mouth in a typical human life: from sucking to get precious milk, through exploring a vast collection of portions of the immediate environment (including toes, fingers, earthworms and bits of clothing), to playing an increasingly active role in the consumption of increasingly varied types of food, and then later (in most but not all humans) learning to talk, in some cases in more than one human language, in some cases with enormous variation in speed, tone and volume (which in itself may have nothing much to do with intelligence).
Having recently been chewing over the flaws in (surprisingly popular) theories that put too much emphasis on supposed connections between intelligence and sensory-motor morphology (embodied cognition, enactivism, situated cognition, "New AI", etc.), I've decided to put my money where my mouth is, swallow my pride, and, albeit somewhat reluctantly, propose a behavioural test for intelligence, the "Chewing Test for Intelligence", partly inspired by a test reportedly used by a famous Oxford college for selecting its Fellows. Perhaps it should be called "The Oxford Chewing Test for Intelligence".
It is rumoured that All Souls College selects Fellows by inviting the best applicants for dinner in College, and serving cherry pie for dessert, in a rimless bowl. The Fellows, but not the candidates, get portions of pie containing only previously stoned cherries. The candidates have to deal with the cherry stones (i.e. the pips).
The new test is applied to both humans and robots. It requires robots to have a human-like mouth (which most humans have by default), including tongue, lips, teeth, jaw mechanism, and a waste outlet for chewed cherry flesh. Passing the test involves being able to put N cherries in one's mouth, chew them, and swallow the cherry flesh without swallowing or otherwise disposing of any of the stones, or breaking any parts of the mouth. A weaker version of the test could allow one or more teeth to be broken on impact with a cherry stone. How the number N is selected is explained below.
Obviously designers will not be allowed to submit robots with special oral apparatus designed only to perform this task, though selection of required collateral capabilities is a task for the future. It might, for example, include checking whether the number of cherries in one's mouth is a prime number.
A wide variety of humans of varying shapes, sizes, ages, and cultures, will first be given this test, with N = 1, then 2, then 3, etc. until it is clear which value of N is the largest that the majority of the tested humans can cope with. Let's call that Nh. (Some further work is required to specify what should count as coping with N cherries in this context. A possibility might be managing to deal with N cherries and then going on to perform normal robot functions, such as cleaning carpets. A more challenging version would require both to be done in parallel.)
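For the computationally inclined, the Nh-determination procedure above can be sketched as a simple loop. Everything here is hypothetical: the subject pool, the `copes_with` predicate (standing in for an actual cherry trial), and the choice of strict majority as the success criterion.

```python
def estimate_Nh(subjects, copes_with, threshold=0.5):
    """Return the largest N such that a majority of subjects cope with N cherries.

    `subjects` is a list of test participants; `copes_with(subject, n)` is a
    hypothetical predicate returning True if the subject deals with n cherries
    without swallowing a stone (and, optionally, while cleaning a carpet).
    """
    n = 0
    while True:
        passed = sum(1 for s in subjects if copes_with(s, n + 1))
        if passed / len(subjects) <= threshold:
            return n  # the majority failed at n + 1, so Nh = n
        n += 1
```

For instance, modelling each subject by a maximum cherry capacity, `estimate_Nh([3, 5, 6, 7, 8], lambda s, n: n <= s)` yields 6: a majority copes with 6 cherries, but only 2 of 5 cope with 7.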
Pilot experiments suggest that Nh is likely to exceed 5, with cherries of types I have encountered. It may well turn out that the value of Nh is highly culture-dependent, being very low for cultures deprived of cherries and much higher for cultures with a passion for listening to poetry readings while eating cherries from a bag, at performances where cherry stone receptacles are not provided.
Readers wishing to propose a specific target value for Nh should email me, citing evidence. Perhaps future research will reveal an algebraic formula for deriving Nh from demographic data. In that case, instead of there being only one Chewing Test for intelligence we shall have Chewing Tests for English intelligence, for Scottish intelligence, for Indian intelligence, etc.
Robot candidates for this test will also be given variants of the test with N cherries, including N = 1, 2, etc. up to Nh, or some higher value (in the interests of scientific research rather than testing for intelligence). Future research will be required in order to select the appropriate value for robots manufactured, or designed, in one culture and tested in another. Alternatively it may be possible to devise procedures for robot brain-washing to ensure that each machine is tested using the value of Nh relevant to its current place of employment.
Those that cope with Nh cherries will be deemed to have passed the Chewing Test. Any robot designer at least 30% of whose robots manage to pass the test will be acknowledged as an intelligent robot designer. (A future wealthy sponsor may be willing to provide suitable plaques to be awarded.)
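The designer criterion amounts to a simple pass-rate check. The 30% threshold comes from the text above; the list of per-robot results is, of course, hypothetical:

```python
def is_intelligent_designer(robot_results, threshold=0.30):
    """Decide whether a designer qualifies as intelligent.

    `robot_results` is a hypothetical list of booleans, one per robot
    submitted, True if that robot passed the Chewing Test with Nh cherries.
    At least `threshold` (30%) of the robots must pass.
    """
    if not robot_results:
        return False  # no robots submitted, no plaque
    return sum(robot_results) / len(robot_results) >= threshold
```

One robot passing out of three submitted (a rate of about 33%) would just clear the bar; one out of four would not.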
Since I don't expect any robots able to take part in this test will exist for at least 10 years, and those will all fail, I suggest that the test be advertised and run once every 10 years till the end of the century. At that stage an international Chewing Test panel (containing humans and some robot philosophers if any have been produced by then) will decide whether and how the process should continue in the next century. If no humans are still alive on this planet by then, the panel may contain only robots interested in this problem.
Thank you for your attention.
A.1. NOTE ON HUMAN VARIETY
The philosophy that ties cognition closely to sensory-motor morphology could be contrasted with an alternative view based on two ideas (discussed in more detail in other papers):
The human population is very varied not only as regards superficial features such as skin colour, hair colour, height, weight, vocal timbre, etc. but also in deep ways connected with genetic abnormalities or results of illness or injury, in some cases before or soon after birth. As a result there are humans born blind, deaf, blind and deaf, missing arms, missing legs, conjoined as twins, with cerebral palsy, seriously deprived by illness or injury while very young, or by drugs, such as thalidomide, taken by mothers during pregnancy. But despite all those surface differences their brains may develop common structures that are to a large extent based on instantiating shared abstract patterns provided through the genome -- not innate knowledge of specific places, but innate a-modal meta-knowledge about what places are and how they can be related. (Compare the meta-knowledge about language provided by the genome and used in developing knowledge of specific languages with many different details of semantics, vocabulary and syntax. See the example of the Nicaraguan deaf children below.)
These children were not deriving linguistic knowledge empirically from a
pre-existing linguistic community. Their shared genome allowed them to
collaborate in creating a rich new sign language. I have argued elsewhere that
rich sign languages must have evolved before spoken languages, and internal
languages used for percepts, control of complex actions, formation of
intentions, plans, reflection on past experience etc. must have evolved before
languages for communication.
Talk 52: Evolution of minds and languages.
Some of those unusual individuals manage (with varying amounts of help) to surmount their disadvantages, so as to lead rich and satisfying lives, and some of them become famous, including, for example, Helen Keller (who lost sight and hearing very young), Alison Lapper (the artist and writer born without arms), Esref Armagan (the painter born blind), Abigail and Brittany Hensel (the conjoined twins), and many others. (All of these, and more, can be found via internet search engines.)
An interesting discussion of blind mathematicians can be found here, with links to further online information: http://m-phi.blogspot.co.uk/2011/07/what-is-it-like-to-be-blind.html
If the fashionable claim that cognition is inherently bound up with sensory-motor morphology were true, that would suggest some deep chasm between the minds of humans with "normal" physiology and those with varying degrees and kinds of divergence from those norms. For example, it would seem to imply that individuals with normal vision and motor control cannot converse about Euclidean geometry or topology with those born blind, or without hands. (I have heard a well-known researcher go further and claim that people born blind cannot understand spatial structures and relationships.)
It may be true that the rich sensory and motor capabilities deployed out of sight within a normal mouth do not directly support notions like straightness, or metrics for distance, area, volume, angle, curvature, velocity etc. However, the same is true of biological mechanisms involved in auditory, visual, haptic, tactile, or kinaesthetic perception of spatial structures and relations. Many vision researchers seem to be seriously misled by the fact that video cameras generally provide visual input in the form of rectangular arrays of measurements, quite unlike biological visual systems.
Far from Euclidean spatial structure (including notions of straightness, parallelism, etc.) being inherent in the biological mechanisms, a complex process of development (through learning, invention of new technology for measurement, and social evolution), is required to build our familiar Euclidean spatial ontology, e.g. for use in various practical tasks, such as way-finding or building houses using initially scattered materials.
This is clearly not a sensory-motor ontology but an a-modal ontology: the distance between two trees has nothing specific to do with whether the gap is seen, crawled along, estimated using outstretched arms, or measured in some other way.
If this is correct, researchers aiming to explain multi-modal integration of sensory information, e.g. in terms of statistical relationships between different sensory and motor streams, are misguided if they claim that that is all that's going on, ignoring the construction of a-modal, multi-functional stores of information, e.g. about local geography. Robotic techniques for SLAM (Simultaneous Localisation and Mapping) illustrate some forms of a-modal integration of sensory-motor information to achieve new power.
How cognitive systems develop abilities to think about and make use of such properties and relations inherent in spatial structures, is a non-trivial research topic. I have not yet encountered any plausible candidates based on sensory-motor theories, though there are obvious alternatives that depend on use of measuring rods, procedures for comparing objects by lining them up, and use of other external objects and processes.
In that case there is no obvious reason why the same ontology, and associated theory, could not be built by intelligent agents with different sensory-motor mechanisms but engaged with the same rich environment. This, after all, is how our increasingly versatile and accurate devices and procedures for assigning geometric properties and relations to objects have evolved over past centuries. Compare the ways of using length, angle, volume, etc. available to ancient and medieval builders of houses, churches, temples, bridges, etc. and those available to modern scientists and engineers. Their tools and techniques can change without what they are referring to changing.
The moral for communication between humans with different sensory-motor organs and capabilities should be obvious.
Note added 4 Feb 2017
A useful antidote to anti-computational philosophical prejudices can be found in this Stanford Encyclopedia article: Michael Rescorla on Computational Mind
A.2. NOTE ON TONGUE CONTROL
Anyone snorting at a cheeky child "Control your tongue!" needs to be aware of the complexities and difficulties involved in tongue control. A human tongue has several muscles controlled by the brain, using information flowing in both directions: from sensors in the tongue and other parts of the mouth to the brain, and from the brain to the muscles of the tongue and other parts of the mouth. These muscles and nerves enable your tongue to be used not only for spoken communication, but also for a considerable variety of other actions, including cleaning teeth in different parts of the mouth, detecting some kinds of tooth damage and various kinds of food stuck in the mouth, manipulating objects in the mouth (e.g. cherry stones, and foods that need to be manipulated during chewing), sucking from a source of fluid, and squeezing objects (e.g. by pressing an object against the roof of the mouth, or against teeth). What humans can do with their tongues is partly similar to what elephants do with their trunks, and what an octopus can do with its tentacles. In addition there are many other kinds of tongue control involved in speech and singing.
As far as I know, the important role of the tongue in human sensing and acting has been completely ignored by researchers attempting to build humanoid robots, including those who accept the slogan that replicating human cognition will require replication of human morphology. It follows that if their theories are correct their robot projects are failures.
Some other animals, e.g. giraffes, have much longer and more versatile tongues, which can grasp vegetation and pull to detach it from the rest of the plant.
A.3. NOTE ON WHIMSY
It's possible that readers unfamiliar with British whimsy will need to be warned that not everything in this document is written with a straight face. Had it been a more serious document it would have drawn attention to the important distinction between online intelligence (on which researchers emphasising embodied cognition, enactivism, and the like tend to focus) and offline intelligence, used in considering possible actions, including planning multi-step actions, without actually performing any, and in discovering and proving theorems in various branches of mathematics, designing aeroplanes and skyscrapers, inventing deep scientific theories, composing music or poems in one's head, enjoying music or poetry without moving, and many more.
For more on the difference between online and offline intelligence see this
abstract for an invited presentation at a "Computers and Minds" workshop in
Edinburgh, 21st Nov 2014.
BACKGROUND AND REFERENCES
(Needs to be expanded)
Ron Chrisley on Extended Mind
Michael Rescorla on Computational Mind
Rescorla, Michael (2016), "The Computational Theory of Mind", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.).
Added 5 Jan 2015:
For more links see: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/mm-background.html
a.sloman [@] cs.bham.ac.uk
School of Computer Science
The University of Birmingham