Abstract for AISB 2000: How to Design a Functioning Mind
AUTHOR: Kerstin Dautenhahn
Current address: Department of Cybernetics, University of Reading
From 1/4/2000: Department of Computer Science
University of Hertfordshire, College Lane,
Hatfield, Hertfordshire AL10 9AB
TITLE: Design issues of Biological and Robotic "Minds"
What are minds for? The evolutionary perspective.
Designing a functioning mind can benefit from analysing the
conditions and constraints which have shaped the evolution of
animal minds. Minds are certainly attributed to members of Homo
sapiens (and some evidence suggests that several other hominid
species with "minds" may have existed), but other candidates
exist among mammals (e.g. non-human apes, dolphins, elephants)
and birds (e.g. parrots and members of the crow family).
Interestingly, species which we describe as possessing a "mind"
are all highly social. Even the "solitary" lifestyle of
Orangutans (who nevertheless seem to be highly social in their
ability to recognise and interact with each other) is rather a
secondary adaptation to a particular environment which demands a
spatially "distributed" social organisation. The Social
Intelligence Hypothesis, which has been discussed in primatology
for decades, claims that primate intelligence evolved primarily in
adaptation to social complexity, i.e. in order to interpret,
predict and manipulate conspecifics (Byrne and Whiten, 1988).
However, as Richard Byrne recently pointed out (Byrne, 1997),
this hypothesis can account for the evolution of primate
intelligence, but not for the specific human kind of
intelligence. Here, narrative psychology and studies on the
development of autobiographic memory and a "self" might offer an
explanation: evidence suggests that "stories" are the most
efficient and natural human way to communicate, in particular to
communicate about others (Bruner, 1991). The Narrative
Intelligence Hypothesis, which I recently suggested (Dautenhahn,
1999), proposes that the evolutionary origin of communicating in
stories was correlated with increasing social dynamics among our
human ancestors, in particular the necessity to communicate about
third-party relationships (which in humans reaches the highest
degree of sophistication among all apes, e.g. gossip).
Implications for designing a functioning mind are: a) minds need
to be designed as social minds, and b) a human-style social mind
needs to be able to communicate in "stories".
The case of autism: Diversity and Adaptive Radiation of "Minds".
Autism is a specific disorder which (among other things) results
in significant deficits in the social domain: people with autism
generally have great difficulty relating to other people and
interacting socially in an appropriate way (Baron-Cohen, 1995).
However, some (high-functioning) people with autism can cope
very well with their lives, pursue careers, etc. Thus, rather
than considering people with autism as having "defective" minds,
I suggest viewing them as having minds that are not defective
but different from those of other people. Similarly, other non-human
animals might possess minds equally "powerful" as ours, but
different, and often difficult to study due to our limited
understanding of the animals and their environments (and due to
the fact that humans are very active in destroying biological
diversity, including our closest relatives). Natural evolution
supported diversity and adaptive radiation, so that different
minds might have evolved in adaptation to particular
environmental constraints (biotic and abiotic), each thus
creating a particular niche. A "general purpose animal" does not exist.
Implications for designing functioning minds: 1) A single
architecture is unlikely to be the "solution" to designing a
functioning mind. The design spaces of natural and artificial minds are
still to be discovered, but we can expect a high degree of
diversification. 2) For artificial minds, a number of constraints
(e.g. body shape of robots) are under our (the designer's)
control, and this synthetic approach could further our
understanding of minds, complementing the investigation of
animal minds. 3) Just as the notion of a "fitness landscape"
helps biologists understand evolution and speciation, a similar
concept could be developed to describe and evaluate the
"fitness" of artificial systems.
The project AURORA (supported by EPSRC, Applied AI Systems, Inc.,
and the National Autistic Society in the UK) investigates how an
autonomous robotic platform can be developed as a remedial tool
for children with autism (Dautenhahn 1999; Werry and Dautenhahn
1999). A deliberately non-humanoid robot with a simple behaviour
repertoire is used as a toy, providing an "entertaining" and
playful context in which the child can practise interactions
(specific issues addressed include attention span
and eye contact). The project, which I initiated just one year
ago, poses several challenges for developing an artificial mind: 1)
the idea is that the robot's "mind" grows and its complexity
increases during interactions with the child: how can we "grow" a
mind rather than using a prespecified architecture, e.g. how can
narrativity develop, what are its precursors? 2) the role of
physical interaction and embodiment: what can the physical robot
provide that cannot be studied using software systems? 3) can
the robot ultimately serve as a "mediator" between the child and
the teacher and/or other children? 4) what is the relationship
between the internal control architecture (the "artificial mind")
and the way the robot behaves believably and "mindfully"?
AURORA project: http://www.cyber.rdg.ac.uk/people/kd/WWW/aurora.html
S. Baron-Cohen (1995) Mindblindness: An Essay on Autism and
Theory of Mind. Cambridge, MA: MIT Press.
J. Bruner (1991) The Narrative Construction of Reality. Critical
Inquiry 18(1), 1-21.
R.W. Byrne, A. Whiten (1988) Machiavellian Intelligence: Social
Expertise and the Evolution of Intellect in Monkeys, Apes and
Humans. Oxford: Clarendon Press.
R.W. Byrne (1997) Machiavellian Intelligence. Evolutionary
Anthropology 5, 172-180.
K. Dautenhahn (1999) Robots as Social Actors: AURORA and The
Case of Autism, Proc. CT99, The Third International Cognitive
Technology Conference, August, San Francisco, 359-374.
K. Dautenhahn (1999) The Lemur's Tale - Story-Telling in Primates
and Other Socially Intelligent Agents. Proc. "Narrative
Intelligence", AAAI Fall Symposium 1999, Technical Report
I. Werry, K. Dautenhahn (1999): Applying robot technology to the
rehabilitation of autistic children. In: Proceedings SIRS99, 7th
International Symposium on Intelligent Robotic Systems '99.