This is part of the Meta-Morphogenesis project:
http://tinyurl.com/CogMisc/meta-morphogenesis.html
Offers of collaboration welcome.
(DRAFT: Liable to change: Please do not save copies -- save a pointer.)
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/evolution-info-transitions.html
A PDF version can be produced on request.
Or use 'print to file' in Firefox (and possibly other browsers).
__________________________________________________________________________________________
It is not uncommon for biologists and others interested in evolution to discuss and
investigate evolutionary transitions that produced new physical forms, or new sensorimotor
organs, or new physical behaviours. What is not so common, and is much harder to do, is to
identify transitions that produced new forms of information-processing, including new
information contents, new forms of representation, new sources of information, new ways
of transforming or deriving information, and new ways of using information.
The attempt to identify and analyse those transitions in information-processing is
the Meta-Morphogenesis project, so named because the mechanisms that produce the
transitions sometimes produce new mechanisms for producing such transitions: for
instance, some of the types of evolution, learning and development that exist on
earth now are themselves products of evolution, learning and development, and did
not exist in the earliest life forms.
This document presents and attempts to explain the importance of a growing collection of
examples of transitions in information-processing capabilities in evolution, in
development, in learning, in society/culture, and perhaps also in ecosystems. The
transitions created by information engineers since the 1940s could also be regarded as
products of biological evolution (like the cathedrals built by termites), but for now they
are used merely to illustrate types of information-processing phenomena. Recent
information-processing technology provides several pointers to problems and solutions that
previously turned up in biological evolution (e.g. the advantages of control by virtual
machines rather than physical machines, when virtual machines are easier to design,
monitor, debug, modify, extend and combine with other mechanisms, as explained here.)
Others have asked some of the questions raised here, but I am trying to collect a wide
variety of examples of transitions that may show patterns not visible to researchers
focusing on narrower sets of examples.
_______________________________________________________________________________________
Some related work (a tiny subset)
Although the scope of this project seems larger than that of any comparable project, this
is not the first work concerned with the evolution of information-processing mechanisms.
A similar concern can be found in many other publications, e.g. here's a tiny sample:
Modularity in Development and Evolution
Eds. Gerhard Schlosser, Gunter P. Wagner
University of Chicago Press, Chicago, 2004
Living is information processing; from molecules to global systems,
K.D. Farnsworth and J. Nelson and C. Gershenson, 2012,
http://arxiv.org/abs/1210.5908
Stuart Kauffman,
At home in the universe: The search for laws of complexity,
Penguin Books, 1995,
Chapter 15 of Margaret A. Boden,
Mind As Machine: A history of Cognitive Science (Vols 1--2),
Oxford University Press, 2006,
Note added 2 Aug 2013: I have been reading Merlin Donald's 2002 book
A Mind So Rare: The Evolution of Human Consciousness
The book is spoilt by excessive rants against reductionism, and a seriously
ill-informed account of symbolic computation, but is a superb introduction to many
of the evolutionary transitions that involve information-processing, e.g. Chapter 4.
Donald seems to understand the importance of the fact that what exists now, e.g.
in human minds, builds on many layers of previously evolved function and mechanism
which may be shared with many other species. He raises many important questions
about how and why various features of human minds evolved, even though he lacks
(or lacked) the engineering expertise to provide deep answers.
_______________________________________________________________________________________
Beyond the organism's boundaries
A common thread in the work on evolution of information processing is the importance
not only of the sensorimotor morphology of organisms, and the mechanisms in brains
and nervous system, but also the nature of an organism's environment, the problems it
poses, the opportunities it provides, and the kinds of information-processing systems
required for dealing with it.
In the last few decades there has been much emphasis on the importance of embodied
cognition, or enactivism. I think it will turn out that much of the work done under
that banner, especially the polemical pronouncements, merely illustrate the dangers
of following narrow fads instead of trying to get a deep understanding of the variety
of design requirements for organisms and robots, and the variety of possible
solutions and their trade-offs.
In particular a narrow approach to the study of embodied cognition tends to emphasise
the importance of "online intelligence" as if "offline intelligence" either did not
exist or had no major biological function, whereas I argue that it is crucial to
understanding the variety of types of affordance and their perception and use (going
far beyond the ideas of James Gibson on affordances). This is also essential to
understanding human mathematical and scientific theory-building competences, for
example. The distinction between online and offline information-processing is
discussed further below.
For a more detailed critique see:
Some Requirements for Human-like Robots:
Why the recent over-emphasis on embodiment has held up progress
In Creating Brain-like Intelligence,
Eds. B. Sendhoff, E. Koerner, O. Sporns, H. Ritter, and K. Doya, pp. 248--277,
http://tinyurl.com/BhamCosy/#tr0804
__________________________________________________________________________________________
Sources of variety in types of Meta-Morphogenesis:
For any biological (e.g. genetic) changes B1, B2, B3,.. etc. and for
any environmental states or changes E1, E2, E3,... there can be influences
of the following forms ...
__________________________________________________________________________________________
It is clear that evolution, learning, development, and cultural changes produce new
biological information used in reproduction and in many forms of behaviour.
However, the mechanisms for producing new forms of information-processing have themselves
been changed -- including new forms of reproduction, learning, development, cultural
change, and "unnatural selection" mechanisms such as mate-selection, animal and plant
breeding, and more recently cloning and use of genetic manipulation to control
reproduction.
The meta-morphogenesis project seeks to identify (a) all such changes in information
contents, and information-processing mechanisms and their consequences, especially the
many unobvious changes that are needed to answer old philosophical questions and shed
light on the relations between nature and nurture and relations between minds and brains,
and (b) the processes and mechanisms that drove those changes.
If identifying all (including future changes) is impossible, we can attempt to identify as
diverse a range as possible.
It may be necessary to start with relatively coarse-grained transitions and gradually home
in on details.
In the earliest phases of evolution, the mechanisms, and the changes in information
[Figure: Ideally where we should start....
(Picture by NASA on Wikimedia: protoplanetary-disk.jpg)]
Conjecture: In similar ways, new products of biological evolution, and products of
its products, enhance evolution's ability to produce more complex products.
This document presents some examples of transitions in information-processing competences,
starting with very simple cases and moving to increasingly complex examples, but without
presuming that there's a fixed order in evolution, or development. The diversity of
possible trajectories is clearly indicated by human learning and development and by
differences in evolutionary lineages. Whether there are any absolute restrictions on
possible trajectories is a question to be investigated later.
NOTE: A failure to recognise diversity in developmental and learning trajectories can
ruin educational systems for many learners.
Many of the transitions in biological information-processing are closely connected with
Some of the competences are illustrated and discussed briefly here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
Other transitions in information-processing were required to allow attention to be
switched between objects, events and processes in the environment and objects, events and
processes in perceivers, for example the ability to notice, when looking at unchanging
external structures from a moving viewpoint, the changeable intermediate results of
perceptual processing, such as aspect ratios, optical flow patterns, texture gradients,
and assumed but unperceived parts, e.g. 'far sides' of objects. Such changes in contents
of awareness have produced philosophical puzzles about the relationships between
experience and reality, since ancient times. (Think of Plato's Cave, for example.)
But those are much later developments, certainly in evolutionary time, and possibly also
in individual development of humans, since it's not obvious that newborn infants have such
capabilities.
extensions in the types of information contents that can be represented and used
(changing ontologies)
extensions in the forms of representation that can be used for expressing
or storing information -- including the use of virtual machinery in which
information structures are created, manipulated, and used
extensions in the forms of derivation of new information from old
extensions in the mechanisms for exploring varieties of information contents
extensions in the sensory mechanisms that are available for acquiring
information of various sorts (including information about internal states
of the organism)
extensions in the control mechanisms that are available for producing actions
using motor mechanisms and internal mechanisms, such as attention-switching
mechanisms, motive generating mechanisms, and many more
extensions in the uses to which information can be put
(this can include extensions to new physical environments that can
be perceived, created, controlled, or modified)
extensions in the architectures of information processing systems
(e.g. changes that allow new kinds of processing to occur concurrently,
or new kinds of interactions between different sub-systems, such as one
sub-mechanism monitoring or modulating another, or changes that allow
task- and environment-dependent changes of architecture.)
See the CogAff project
extensions in the ability to deal with other information-processors
This requires evolution or development of meta-semantic competences of
various kinds, some of which may be inwardly directed.
(See examples below.)
extensions in collaborative information-processing, including communication
There are complex 'emergent' features of collaborative or collective
information-processing in swarms, flocks, hives, and the production of
termite cathedrals.
There is a different sort of collaborative information processing when only a small
number of individuals interact, e.g. a few carnivores hunting grazing mammals,
or two humans discussing how to solve a problem.
There must have been complex communications between internal
subsystems long before whole individuals communicated as humans do.
Additions to verbal/linguistic forms of representation used for thinking, perceiving,
communicating, and possibly other purposes. A readable overview document by
William A Woods implicitly (and unintentionally) provides reminders of some of the
unobvious types of complexity that might have been added over evolutionary time to
language-using capabilities, including transitions that now happen much more
quickly during individual language learning/development, supported in some way by
genetic meta-competences. (See Chappell and Sloman 2007)
Note: not all changes are extensions -- some forms of evolution or development may involve
loss or reduction of previously available competences.
Jump to CONTENTS list
__________________________________________________________________________________________
This older, semantic use of the word "information" is completely different from the
relatively new usage introduced by Shannon in 1948, which subsequently confused
philosophers, composers, scientists, and many others.
In this document I never use the word in Shannon's sense.
Moreover, most of the attempts to define the older meaning are either erroneous, or
circular, or else misleading in various ways. Like many powerful theoretical terms (e.g.
"matter", "energy", "gene", "electrical charge", "valence",...) the word "information"
cannot be explicitly defined. Rather it is implicitly defined by the theories in which it
occurs, which through their structure partially identify a class of models, which can be
more precisely identified by adding links with observation and experiment to the theory
--
a process I call "theory tethering", not to be confused with the seriously misleading notion
of "symbol grounding". The concept of information is discussed more fully in
A. Sloman, What's information, for an organism or intelligent machine?
How can a machine or organism mean?,
In, Information and Computation, Eds. G. Dodig-Crnkovic and M. Burgin,
World Scientific, pp.393--438, 2011
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905
The notion of "representation" is often defined in a very narrow way, e.g. by specifying
A. Sloman, 'The mind as a control system',
in Philosophy and the Cognitive Sciences,
Eds. C. Hookway and D. Peterson, pp. 69--110, CUP, 1993,
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#18
A. Sloman,
What enables a machine to understand?,
Proceedings 9th IJCAI, Los Angeles, pp. 995--1001, 1985,
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#4
__________________________________________________________________________________________
Jump to CONTENTS list
__________________________________________________________________________________________
Added: 14 Jan 2013
Place holder: roles of information in control
We can distinguish different sorts of functional roles for mechanisms involved in use of
information for control, in biological and non-biological systems. Examples:
Many thinkers discussing information processing or computation consider only formal
manipulations of structures within some sort of machine, e.g. a Turing machine or
computer. This raises questions about how any semantic content can be involved in what the
machine does. We can answer those questions by explaining how information can be related
to control, whether in organisms or human-made machines. Ian Wright attempts to present
and extend these ideas in this slideshare presentation (slides and audio):
http://www.slideshare.net/wrighti/sloman2011-slides
Such uses of everyday language in asking scientific questions can be seriously misleading
because the concepts are not based on a deep explanatory theory, and as a result group
together things that are superficially similar but deeply different (like sharks and
whales, both originally thought of as fish) or treat as different things that have deep
commonalities, e.g. use of manufactured tools, like hammers, cutters, spears, and use of
kinds of pre-existing matter to perform manipulations on other objects, including the use
of body parts: e.g. the use of one hand to hold an object that is being peeled by the
other hand has deep similarities with the use of a space between two rocks, or a
manufactured vice, to hold the object being manipulated.
More subtly, asking questions about whether or when human infants, or other animals, have
or acquire concepts like "enduring object", "causation", "false belief", "number",
"error", or
"emotion", will typically cause researchers to group together processes, competences and
mechanisms that are deeply different, or to fail to notice similarities between examples that
are superficially different, like the similarities between marine mammals and land mammals
that were initially not noticed.
Another source of deep traps is the word (or concept) "language". When researchers ask
which animals can learn or use a language?
or
when do children start to use language?
they nearly all make use of a shallow common-sense notion of "language" as primarily
concerned with communication.
Another example is the ordinary concept "teaching", which some biologists have attempted
to apply to animal behaviours, often ending up squabbling about what is or is not real
teaching. (Compare Nigel Franks on teaching in tandem-running ants.)
Since I cannot help using ordinary language I try to specify what I am asking,
conjecturing, or proposing by giving examples. But in many cases I am likely to be guilty
of the mistakes I have just criticised, and I welcome critical analysis of examples, from
the point of view of a designer of working systems, showing that my examples need
re-organisation or re-labelling. The ultimate test will be the ability to contribute to a
broad, deep and precise explanatory theory that can be applied both to the explanation
of natural phenomena and to the construction of working models.
Conjecture: Learn about possibilities before learning about utility
The more complex the organism and the more possible internal and external actions
available, the more important it is to explore possibilities and their consequences
before those explorations have any evident utility. (See the discussion of
architecture-based motivation, below.)
__________________________________________________________________________________________
Jump to CONTENTS list
__________________________________________________________________________________________
Below is a very sketchy summary list of examples of transitions in biological information
processing in evolution, development, learning, etc., some of which also involve
transitions in physical structure or new sensors, many of which are related to
changes in the environment, e.g. new problems, challenges, dangers or opportunities.
In some cases the transition is initially merely related to acquisition or
manipulation of information, without any practical application, though, as the
history of mathematics shows repeatedly, such 'useless' changes can later provide the
basis for massive practical advances.
[NB: the numbering of points below is likely to change, as new items are inserted
and old ones rearranged, or merged, or split.]
It is likely that several homeostatic mechanisms developed in the earliest life forms and
E. Bremer,
"Synthesis and uptake of compatible solutes as a microbial defence against osmotic
and temperature stress",
http://www.uni-marburg.de/fb17/fachgebiete/mikrobio/mpireport_bremer.pdf
(There are very many biological mechanisms that do this, some relatively simple, e.g.
phototropism, geotropism, hydrotropism??, others much more complex, e.g. carnivores
seeking prey that can move, or animals seeking mates.
Note that in general change detection requires more complex mechanisms than mere
detection: e.g. it may require storage of previous information to be compared with
new information. So puzzlement about change blindness is mis-directed: change detection,
not non-detection, is what primarily requires explanation.)
-- combined to detect more complex phenomena
-- used to detect unrelated phenomena in parallel, leading to possible conflicts
in reactions (e.g. choosing what to consume, what to avoid).
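The point that change detection, unlike simple detection, requires stored prior information can be made concrete with a minimal sketch. All names and thresholds here are illustrative, not taken from any biological model:

```python
class Detector:
    """Stateless detection: classifies only the current stimulus."""
    def detect(self, stimulus, threshold=0.5):
        return stimulus > threshold

class ChangeDetector:
    """Change detection needs memory: the previous reading must be
    stored and compared with the new one."""
    def __init__(self):
        self.previous = None            # stored earlier information

    def changed(self, stimulus, tolerance=0.1):
        prev, self.previous = self.previous, stimulus
        if prev is None:                # nothing stored yet: no change signal
            return False
        return abs(stimulus - prev) > tolerance

d, cd = Detector(), ChangeDetector()
readings = [0.2, 0.2, 0.9, 0.9]
detections = [d.detect(r) for r in readings]   # stateless: [False, False, True, True]
changes = [cd.changed(r) for r in readings]    # stateful: [False, False, True, False]
```

The stateless detector keeps signalling while the stimulus stays high; the change detector signals only at the transition, and only because it stores the prior value.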
Compare S.S. Stevens' "scales" of measurement (1946).
Far more relations are important in interacting with a complex environment than
unary predicates, yet very many writers seem to assume that all or most concept
formation is formation of unary concepts (predicates), e.g. straight, square, box,
shoe, house, dog, etc.
Contrast different ways of generating new concepts:
Unfortunately current AI students seem not to learn about some of the deep pioneering work
done in the late 1960s and early 1970s in which use of structural descriptions, including
comparisons of structural descriptions was central, for example the work by T.G. Evans
on a geometric analogy program
which inspired work by P. Winston on learning structural descriptions from examples,
and G.J. Sussman on A Computational Model of Skill
Acquisition, referring to thinking skills,
not physical skills (both of which are
important in humans and many other species).
Jump to CONTENTS list
__________________________________________________________________________________________
One of the deep questions related to this is how the differences are represented between
and similar examples relating to past and future, or different locations (what could
or could not happen here and there).
It is sometimes proposed that such information contents require use of a modal logic,
that adds operators such as "possible", "impossible", "necessary" and "contingent" to a
formalism for expressing facts. But that presumes that all information is represented
propositionally (in a form expressible in sentences). For many reasons, including the
observations about mathematical competences below, I suspect the ability to
think and reason about counterfactuals uses architectural extensions (illustrated by the
work of John Barnden and Mark Lee on counterfactuals and ATT-Meta).
Frank Guerin informed me that his three and a half year old son asked at a restaurant
"Why didn't we get them last time?" when the restaurant provided wet serviettes for wiping
children's hands. This requires
There are anecdotes about other animals being able to remember past events involving
individuals who have helped or harmed them. [REFs needed.]
How do you know that if a vertex of a planar triangle moves along a median away from the
opposite face the area of the triangle must increase, no matter what the size,
shape, orientation, colour or location of the triangle? Compare the two cases (a) and (b)
in the figure below. I suggest this uses deep functions of animal vision that have mostly
been ignored.
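The claim about the median can also be checked numerically. The sketch below (coordinates chosen arbitrarily) computes the area from the cross product and confirms that it grows as the vertex slides along the median away from the opposite side; this illustrates the theorem, though of course it is no substitute for the visual/spatial reasoning discussed above:

```python
def area(a, b, c):
    """Area of a triangle from the cross product of two edge vectors."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

b, c = (0.0, 0.0), (4.0, 1.0)                 # the fixed opposite side
m = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)    # midpoint of that side
a = (1.0, 3.0)                                # the movable vertex

areas = []
for t in [1.0, 1.5, 2.0]:                     # t=1 is the original vertex;
    v = (m[0] + t * (a[0] - m[0]),            # larger t moves it further out
         m[1] + t * (a[1] - m[1]))            # along the median
    areas.append(area(v, b, c))

# The area grows with t -- in fact linearly, because the vertex's height
# above the line BC scales linearly as it moves along the median.
assert areas[0] < areas[1] < areas[2]
```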
Abilities to perceive and reason about possibilities and constraints on
possibilities
in a mathematical context are deeply connected with the ability to perceive and reason
about affordances, which must have evolved earlier. This sort of requirement is one among
many aspects of cognition that are blindly ignored (as opposed to being temporarily
postponed) by most researchers on "embodied" or "enactive" cognition.
More examples are
e.g. here,
here
and here.
Compare "toddler theorems" about how what's visible through a doorway to another room
changes as you move your location relative to the doorway in various directions.
A draft overview of some of the functions of vision in humans is under construction here.
A larger project is to identify major transitions in biological visual information-processing
since the earliest forms. I have many online papers and presentations related to
functions of vision, and will later attempt to organise them. One way to do that is in
terms of the 3x3 CogAff Schema grid, (outlined here), which
combines three columns
of functionality:
and three layers of functionality, listed here from the bottom up:
Many evolutionary and developmental transitions are concerned with either adding new kinds
of functionality within these layers or columns, or connecting functionality in different
parts of the grid, across columns or layers, to develop more complex systems, e.g.
producing social actions (such as smiling, beckoning, teaching) that involve not just the
low level motor control system but also meta-semantic competences generating intentions
and actions, and visually interpreting actions and responses of other agents.
Because of the nature of this grid there are many possible sequences in which particular
competences can be added by evolutionary and developmental processes.
Although the CogAff grid/schema provides a useful framework for thinking about design
alternatives it must be considered as a very crude approximation, especially the obviously
inadequate implication that there are only 9 major subdivisions among types of information
processing.
Much is known about physical, chemical, and morphological aspects of many kinds of eyes,
and also about their functions, which typically depend on the needs of the organism, its
optical sensor morphology, the features of the environment (including available food,
predators, mate-features), the actions of which the organism is capable, and the available
types of information processing mechanism.
Many animals have two or more eyes, and in some cases these seem to operate as
independent sensors. But humans, and various other animals, including primates, hunting
mammals, and many birds seem to be able to use two eyes pointing roughly in the same
direction to drive two collaborating streams of information processing to compute
distances of perceived objects by triangulation.
Unfortunately, Julesz and others discovered that humans are able to see 3-D structures in
random dot stereograms, and this led many researchers to assume that the methods
required for doing that, by first finding low level correspondences in the images, are used
for all stereo vision. However, most natural scenes do not produce random dot patterns on
the retina, and it is easy to confirm that a great deal of 3-D structure can be seen
monocularly (e.g. try wearing an eye-shield for a few hours). So it is at least possible
that animal systems use the results of monocular perception to identify corresponding
locations in the left and right percepts and use those correspondences to perform
triangulation. I offer this merely as an illustration of how easily an experimental
discovery leading to a large tranche of computational modelling can distract research
attention away from an important biological function.
Conjecture: binocular depth perception first occurred as a transition from monocular
perception that made use of monocular structure to find corresponding items in left and
right visual fields to use for triangulation. Later, additional mechanisms evolved to deal
with cases like textured surfaces or sand dunes, where the monocular percepts do not
provide sharp points of comparison. Normally the two mechanisms function in parallel.
If this conjecture is correct it could lead to much improved stereo vision systems in
robots, primarily using the results of monocular vision.
(Perhaps that has already been done.)
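The conjectured pipeline -- monocular identification of features first, then triangulation from the matched pairs -- can be sketched in a toy pinhole-camera model. All numbers, names and the idealised geometry below are illustrative assumptions, not a model of any real visual system:

```python
FOCAL = 500.0      # assumed focal length, in pixel units
BASELINE = 0.06    # assumed separation of the two eyes/cameras, in metres

def project(point, eye_x):
    """Horizontal image coordinate of a 3-D point (x, y, z) seen by a
    pinhole camera displaced eye_x along the x axis."""
    x, y, z = point
    return FOCAL * (x - eye_x) / z

def depth_from_match(left_u, right_u):
    """Triangulate depth from the disparity of a matched feature."""
    disparity = left_u - right_u
    return FOCAL * BASELINE / disparity

# Features identified monocularly (here simply given labels); the label
# match supplies the correspondence, as in the conjecture above.
scene = {"cup": (0.1, 0.0, 0.8), "door": (0.4, 0.0, 3.0)}
left  = {name: project(p, 0.0)      for name, p in scene.items()}
right = {name: project(p, BASELINE) for name, p in scene.items()}

for name, (x, y, z) in scene.items():
    recovered = depth_from_match(left[name], right[name])
    assert abs(recovered - z) < 1e-9   # triangulation recovers the depth
```

The point of the sketch is only that no low-level correlation over random-dot-style patterns is needed once features have been matched by some other (e.g. monocular) route.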
Both of these miss the requirement to identify 3-D features of shapes that may or may not
be visible from all views, and which may or may not be relevant to possible uses and
behaviours of the object, and may take account of various combinations of topological
properties of the objects, metrical properties of the object, qualitative semi-metrical
properties, e.g. constancy, increase or decrease of curvature of part of a surface, or
"phase transitions" in orientation or curvature, e.g. regions where curvature changes from
concave to convex or vice versa, regions where curvature is constant, and many more.
--
When can a typical human infant use vision to take in the information required to answer
such questions? Which other species can do it? What forms of representation and
capabilities had to evolve to make such tasks possible?
Many mathematically well educated researchers assume that animals (and intelligent
machines) must express all those spatial properties and relationships using an ontology
that assumes an all encompassing space, with global metrics for length, area, volume,
curvature, orientation, angle, etc.
However, it is far from obvious that many animals (or any animals) can do that, and
moreover there are many unsolved problems about how to derive such information from visual
and other sensor data. Perhaps that is a poor analysis of the problems evolution and its
products solved.
For various reasons, to be explained later, I suspect that the ability to think about,
make use of, or acquire information expressed in terms of such global metrics, e.g. using
a global cartesian coordinate frame, or a global polar coordinate frame, is a very late,
relatively sophisticated achievement (only developed in 1637 and thereafter by Descartes,
Fermat, and their successors -- without which Newton's mechanics would have been
impossible).
An alternative ontology might instead make use of collections of spatial and topological
relationships between objects, and object parts, where the relationships could be binary,
ternary, etc., with partial orderings of size, area, volume, angle, distance,
direction, straightness, curvature, regularity, where some of the relationships are
detected and represented in far more detail than others, e.g. relationships between
objects or surfaces (including surfaces of manipulators) in the immediate environment, or
relationships between objects on which actions are being, or are intended to be performed.
A network of partial orderings of size, distance, could be enhanced by semi-metrical
relationships, e.g. A is longer than B, and the difference is more than three times and
less than four times the length of B. If B is a pace for a walking animal that could be
relevant to choosing routes for walking. Different kinds of information about partial
orderings might be relevant to grasping and manipulating objects in the immediate environment.
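A minimal sketch of such a network of partial orderings, assuming only directly observed "longer than" relations and deriving further comparisons by transitivity, with no global metric anywhere (the object names are hypothetical):

```python
from collections import defaultdict

class PartialOrder:
    """A network of observed 'longer than' relations; further relations
    are derived by transitive search, not by comparing numbers."""
    def __init__(self):
        self.longer = defaultdict(set)   # x -> things observed shorter than x

    def observe(self, a, b):
        """Record the direct observation that a is longer than b."""
        self.longer[a].add(b)

    def is_longer(self, a, b):
        """Is a known, directly or transitively, to be longer than b?"""
        seen, stack = set(), [a]
        while stack:
            x = stack.pop()
            if b in self.longer[x]:
                return True
            for y in self.longer[x] - seen:
                seen.add(y)
                stack.append(y)
        return False

po = PartialOrder()
po.observe("branch", "stick")
po.observe("stick", "twig")
assert po.is_longer("branch", "twig")        # derived by transitivity
assert not po.is_longer("twig", "branch")    # not derivable
assert not po.is_longer("branch", "stone")   # unrelated: ordering unknown
```

Note that the third query returns "unknown/false" rather than an answer: unlike a global metric, a partial ordering leaves some comparisons genuinely undetermined, which seems closer to what limited observation provides.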
There's lots more to be said about the alternatives, their biological uses, their
evolution, their development in individuals, the forms of representation used, the forms
of reasoning, their roles in perception of different sorts (visual, haptic, auditory, or
multi-modal perception, or a-modal reasoning), and about how organisms differ. E.g most of
this would be impossible for microbes.
I suspect this ability to perceive and reason about semi-metrical partial orderings is
part of what accounts for the early discoveries leading to Euclidean geometry, including
the examples summarised here, and that in humans many transitions in representation
of spatial structures, relationships, processes and interactions occur in the first few
years of life that have not been noticed or studied by developmental psychologists.
(Though Piaget seems to have thought about some of them.) They also have not been
noticed by roboticists, especially 'enactivist' roboticists who focus mainly on online
intelligence ignoring offline intelligence, briefly mentioned below.
Too often researchers think that what they do effortlessly needs no explanation -- so they
look for explanations of failures (e.g. change blindness, lack of "conservation", etc.)
instead of first looking for explanations of successes, without which it is
impossible to construct explanations of what goes wrong.
Some examples of matter-manipulation competences:
I suspect that biological evolution changed the information processing architectures of
some organisms so as to allow more intelligent 'look ahead' to guide choices, or to allow
different exploration strategies to be selected explicitly on the basis of information
available instead of being 'hard-wired' in search strategies.
Such transitions have happened many times in the history of programming language
development. An example was the transition from the Planner AI language to the Conniver
language at MIT in the early 1970s. [Ref: Sussman and McDermott, 1972]
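The contrast between hard-wired and explicitly selectable exploration strategies can be illustrated with a toy graph search in which the strategy is ordinary data the system could inspect and change. The example is mine, loosely in the spirit of the Planner-to-Conniver transition, not code from either language:

```python
from collections import deque

def search(start, goal, neighbours, strategy="breadth"):
    """Search a graph; the exploration strategy is an explicit parameter
    the caller (or the system itself) can select, not a fixed regime."""
    frontier = deque([start])
    seen = {start}
    while frontier:
        # Breadth-first treats the frontier as a queue, depth-first as a
        # stack; because the choice is explicit data, a system could
        # switch strategies on the basis of information available.
        node = frontier.popleft() if strategy == "breadth" else frontier.pop()
        if node == goal:
            return True
        for n in neighbours.get(node, []):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return False

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
assert search("a", "d", graph, strategy="breadth")
assert search("a", "d", graph, strategy="depth")
assert not search("a", "e", graph)
```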
There are other transitions where failures discovered during a search process can be found
to be detectable at an earlier stage, an example being the process of "compiling critics"
modelled in Sussman's Hacker program [[REF]].
Compare recording 'ill-formed' substrings during parsing to constrain future search, and
the development of 'caching' mechanisms, mentioned below.
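As a toy illustration (my own, not Sussman's actual Hacker program), here is a planner over a number-line domain that records states proven to be dead ends, so that later branches consult the record and fail early instead of re-exploring:

```python
def plan(state, goal, moves, dead_ends):
    """Search for a move sequence from state to goal, recording states
    proven unsolvable (cf. 'compiled critics' and ill-formed substrings
    recorded during parsing)."""
    if state == goal:
        return []
    if state > goal or state in dead_ends:   # overshoot or known failure
        return None
    for name, delta in moves:
        tail = plan(state + delta, goal, moves, dead_ends)
        if tail is not None:
            return [name] + tail
    dead_ends.add(state)        # remember: no plan exists from here
    return None

moves = [("add3", 3), ("add4", 4)]
dead = set()
print(plan(0, 10, moves, dead))   # ['add3', 'add3', 'add4']
print(sorted(dead))               # [9] -- a discovered dead end
```

The caching is sound here only because moves never decrease the state, so a recorded failure is permanent; in richer domains the 'critic' must record the reason for failure, not merely the failed state.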
[[Other examples of transitions from cognition to meta-cognition needed.]]
Finding out when such transitions occurred in evolution will be much harder. Some cases of
occurrence in individuals may turn out to be examples of what Karmiloff-Smith refers to as
"Representational Re-description" in Beyond Modularity (1992)
Conjecture: There are MANY more transitions made explicitly by programming language
designers that are analogous to transitions made implicitly in changes of biological
information processing.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac
1. Evolution of two ways of understanding causation: Humean and Kantian (PDF)
2. Understanding causation: the practicalities
3. Causal competences of many kinds
Jump to CONTENTS list
__________________________________________________________________________________________
These mechanisms, and related mechanisms involving architecture-based motivation discussed
above, are probably deeply involved in the later development of processes involving
aesthetic enjoyment -- creating, observing, taking part in processes and structures that
do not necessarily directly serve any obviously biological need, e.g. some of the play
in young mammals.
[Much more needs to be said about this.]
For example, servo control can make use of physical/mechanical compliance (as in use
of padded skin, or physically compliant manipulators) or virtual compliance
e.g. allowing perceived changes in relationships to affect changes in applied forces.
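A toy sketch of 'virtual compliance' (my own illustration, with unit mass and timestep as simplifying assumptions): the applied force is computed from the perceived error, like a software spring, so the effector yields smoothly instead of being driven rigidly to the target:

```python
def virtual_compliance_step(position, target, stiffness=0.5):
    """One servo step with 'virtual compliance': the applied force is
    proportional to the perceived error -- a software spring -- rather
    than a rigid position command."""
    error = target - position
    force = stiffness * error        # soft response to perceived change
    return position + force          # unit mass, unit timestep (toy model)

pos = 0.0
for _ in range(20):
    pos = virtual_compliance_step(pos, 10.0)
print(round(pos, 3))   # converges smoothly toward 10.0
```

Lowering `stiffness` makes the controller more compliant (slower, gentler convergence); physical compliance achieves a similar effect through material properties rather than computation.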
Such advances in online intelligence do not necessarily provide advances in offline
intelligence, e.g. the ability to think about the past, or the future, or what might have
happened under different conditions. Varieties of deliberation are discussed here.
Many products of robotics research show very impressive online intelligence without any
offline intelligence, e.g. the amazing BigDog robot built by Boston Dynamics.
See also this discussion of some of Karen Adolph's work on young children:
http://tinyurl.com/CogMisc/online-and-offline-creativity.html
Note (modified 25 Nov 2012):
Serious muddles about ventral and dorsal "streams" of visual processing arise
from failure to understand the different information processing requirements for
(a) servo-control based on transient, constantly changing (mostly scalar?)
information, and
(b) acquiring and storing information for possible multiple uses at different
times, including describing, planning, answering questions, etc.
Referring to these as "where" and "what" functions betrays a deep but common
failure to understand requirements for working systems of different sorts. The
more recent replacement of these labels with the labels "action" and "perception"
betrays a failure to appreciate the variety of functions of perception, including
its role in online intelligence.
See On designing a visual system
Examples: a robot like Boston Dynamics' BigDog produces very impressive behaviours.
But it does not know what it has done, what it will do, what it hasn't done but could have
done, why it did the one and not the other, what would have happened if it had selected a
different option, what options might be available in a few seconds' time, what the
consequences of those various options would be, and much more.
Many questions need to be answered:
What sorts of evolutionary transitions led to such counterfactual-metacognitive
capabilities in humans?
Which other animals have them?
At what stage do they develop in children, and how?
How does all that relate to the "proto-mathematical" ability to look at a triangle and ask
what would happen to the area if one of the vertices moved relative to the opposite side,
as illustrated here?
Note: there is much more to be said about "offline intelligence" and how almost all
the research inspired by a concern with embodiment, dynamical systems, enactivism (etc.)
fails to address some of the deepest aspects of biological intelligence (a recurring theme
here).
So the ordinary concept of "language", like the ordinary concept of "tool-use", does not
pick out a well defined scientifically useful class of phenomena, and diverts attention
away from deep similarities between things for which we do not use the same label in
ordinary speech. (Compare the unobvious similarity between graphite and diamond.)
Sometimes educational policies that try to emphasise 'understanding' at the expense of
'memorising' miss an extremely important function of memorising as an aid to altering the
level of complexity of what the learner can understand.
Presumably there was some sort of evolutionary transition between being able to work
out plans or solutions to problems and having mechanisms for storing results of such
computations for future use.
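A minimal sketch of such a storage mechanism, using Python's standard `functools.lru_cache` as the cache (the `route_cost` planning computation is a hypothetical stand-in for real planning work):

```python
import functools

@functools.lru_cache(maxsize=None)
def route_cost(start, end):
    """Stand-in for an expensive planning computation; the cache stores
    each result so a previously solved problem is answered by lookup,
    not by re-planning."""
    return abs(end - start) * 2

route_cost(1, 9)      # computed the hard way
route_cost(1, 9)      # retrieved from the cache
print(route_cost.cache_info().hits)   # 1
```

The transition the paragraph conjectures is exactly the addition of such a layer: the ability to compute a result existed first; the mechanism for keeping and reusing it is a separate, later acquisition.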
A closely related, but more subtle development is the ability to remember discoveries
about what does not work, so as to reduce the risk of following false trails in planning,
reasoning, designing, doing mathematical reasoning, etc. Compare the work of Sussman on
'compiling' critics mentioned above.
Transitions of this sort occurred several times during the development of programming
languages in the 20th century. A tutorial introduction to use of patterns in programs
manipulating list structures is
here.
Grammars can be used not only for linear structures, like sentences, but also for things
like networks. Some early AI research in vision (in the 1960s) made use of "web grammars",
i.e. grammars for networks or graph structures, to express the contents of visual
percepts. Many researchers seem to assume that grammars are relevant only to languages
used for communication.
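A toy sketch (my own, not code from the 1960s vision literature) of a single 'web grammar' production: it matches a labelled subgraph (here, one edge between two labelled nodes) and rewrites it, in the way a string grammar rule rewrites a substring:

```python
def apply_rule(nodes, edges, match, new_label):
    """One graph-grammar production: wherever an edge joins nodes whose
    labels match the pattern, rewrite that edge into a path through a
    freshly created node -- rewriting a subgraph, not a substring."""
    a_lab, b_lab = match
    for (a, b) in sorted(edges):
        if nodes[a] == a_lab and nodes[b] == b_lab:
            fresh = f"n{len(nodes)}"          # fresh node name
            nodes[fresh] = new_label
            edges.remove((a, b))
            edges.update({(a, fresh), (fresh, b)})
            return True
    return False

# A percept graph: two 'corner' nodes joined by a direct edge.
nodes = {"c1": "corner", "c2": "corner"}
edges = {("c1", "c2")}
apply_rule(nodes, edges, ("corner", "corner"), "edge-segment")
print(sorted(nodes.values()))   # ['corner', 'corner', 'edge-segment']
```

Repeatedly applying such productions in reverse is parsing: recovering a derivation that explains how the perceived network could have been generated.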
For more complex species, evolution seems to have "discovered" the advantages,
especially as life-spans increase, of more powerful ways of enhancing the genetically
specified design, to cope better with threats, opportunities and constraints in each
individual's environment, i.e. replacing pre-configured with meta-configured competences.
In some cases, e.g. in humans and some other altricial species, evolution also seems to
have discovered the advantages of not only slowing down physical development while
information processing mechanisms adapt to each individual's circumstances, but also
staggering the onset of various kinds of later learning that build on the products of
earlier learning: delayed activation of a meta-cognitive learning mechanism allows it to
start looking for patterns in what "lower order" mechanisms have discovered when the
patterns are richer and more stable, instead of wasting effort analysing patterns that are
spurious, because based on too few instances and tests. This may be especially important in
cases where learning cannot easily be undone. These ideas are developed in a little more
detail in this paper.
These are crude analyses: far more details, based on far more examples, are needed.
All living things have semantic competences insofar as they can use any information at
all, whether with external or purely internal referents.
A subset seem to have meta-semantic competences regarding themselves or others. These
competences may be genetically fixed for some species and in others may develop under
multiple influences (meta-configured competences, mentioned above). In humans many kinds
of social/cultural education, and in some cases therapy can enhance meta-semantic
competences, whether self-directed or other-directed (e.g. getting better at telling
whether your actions are upsetting someone).
Human infants (and perhaps the young of some other species) need to develop a variety of
meta-semantic competences, some self-directed, some other-directed, some combined with
counterfactual reasoning (e.g. "what could I have done differently?", "How would A have
responded if I had not done X?", "Can A see this part of X?", "Can A tell what I can
see?", "What does A think B did?"). Psychologists have used the label "mind-reading" for
this sort of capability, but mostly restricted it to a small set of competences involved
in working out what another believes or thinks, especially in situations where they don't
have up to date evidence. This is just one of many cases where a fashion for a particular
kind of research has spread because it is easy to vary experimental details, without ever
thinking about the kinds of mechanisms required to make any of the competences involved
possible at all.
For example, meta-semantic competence requires an architecture that supports referential
opacity as well as referential transparency -- the usual default. Referential transparency
refers to properties of representations where replacing item A in a larger representation
referring to object O with item B also referring to O makes no difference to what is
represented by the whole structure, and whether it is true or false. For example, if Fred
is chairman of the club, then if it's true that the chairman of the club is a cricketer,
then it is also true that Fred is a cricketer. But in a referentially opaque context, e.g.
"Joe believes that the chairman of the club is a cricketer" replacing "the chairman of the
club" with "Fred" can turn a true statement into a false one, or vice versa.
Some researchers favour trying to model such effects by extending the language used with a
new operator (e.g. "believes that") and modifying normal inference rules. I suspect that
what is really needed is a change in the architecture, to support a separation between
information structures accepted as true (beliefs) and the information structures that
represent possibilities that are not accepted. This is essential for planning, and for
perception of affordances.
A reverse process seems to me to be far more common and far more important: some feature
or competence is produced in members of a species by natural selection. Then later,
because the genetically-specified competence is too specific to be useful in enough
different situations, the competence may be split into some general framework, provided by
the genome, and context-specific details acquired by some sort of adaptation to the
details of the environment. (E.g. locomotion that evolved for relatively flat terrain
might be replaced by a general competence to acquire locomotion suited to the individual's
environment, which may be rocky, or on a mountain slope, etc.)
In some cases this could lead to an inherited group of partly similar competences being
split into a number of inherited sub-competences that can be combined in different ways,
using learning mechanisms to find the combinations that are useful for an individual's
environment. (EXAMPLES NEEDED. REF Deacon?).
In more sophisticated cases, instead of learning (e.g. by experimentally finding out
what works), a process of creative problem-solving or planning may enable individuals to
work out new ways of combining fragments of old (learnt or inherited) competences.
This could have the effect that parts of the genome specifying a particular combination of
competences might become redundant because individuals who need that combination can
synthesise it through planning or learning, when needed, and perhaps synthesise a
combination better tailored to the particular environment than the previously evolved
version.
(This is an extension of Kenneth Craik's idea that being able to "run" internal models of
parts of the environment to find out the consequences of certain actions could save time,
effort, injury, and even life).
Kenneth Craik, The Nature of Explanation, 1943.
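Craik's idea can be sketched as a trivial forward model (my own illustration; the dynamics and the 'cliff' are hypothetical): candidate actions are "run" internally, and any whose predicted outcome is dangerous is rejected before anything is done in the real world:

```python
def forward_model(position, velocity, action, steps=10):
    """Internal simulation of a toy world: predict where the agent ends
    up if it applies a constant acceleration for some number of steps."""
    for _ in range(steps):
        velocity += action
        position += velocity
    return position

def choose_action(position, velocity, actions, cliff_at):
    """Craik-style selection: run each candidate action in the internal
    model and reject any whose predicted outcome crosses the cliff."""
    safe = [a for a in actions
            if forward_model(position, velocity, a) < cliff_at]
    return max(safe)   # among safe actions, go as fast as possible

print(choose_action(0.0, 1.0, [-0.5, 0.0, 0.5, 1.0], cliff_at=30.0))  # 0.0
```

The saving Craik pointed to is visible even here: the two rejected accelerations would have carried the agent over the cliff, a cost paid only inside the model.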
This covers a vast mixture of types of process, mechanism, form of representation,
information content, and uses of information, on many scales, for many purposes.
The vast majority of successful organisms on this planet, whether measured by
individual numbers, variety of species, or biomass, lack brains.
Brainless organisms provide both the base of food pyramids for others and, in some
cases, essential forms of symbiosis (e.g. bacteria in the human gut). Lacking brains
does not stop them processing information, e.g. in controlling their reactions to
their immediate environment and internal processes, including reproduction and
growth.
Even in organisms that have brains there is a vast amount of sub-organism control
(including homeostasis) and learning (adaptation) that does not use brain mechanisms,
e.g. in metabolism, reproduction, growth, brain development, immune reactions, and
many more -- but I suspect that only a tiny subset has so far been identified.
The majority of such cases, and certainly all the earliest cases, historically and
developmentally, seem to rest on molecular information-processing, for example
processes required for building brains, which, at least initially, cannot use
brains, though later in life that can change in various ways, as discussed briefly
in [*].
It is commonplace to ask how the physical changes (e.g. construction of new complex
molecules, or changes in the availability of oxygen) occurred.
But it is also important to ask "Where does all the information come from?" --
e.g.
the information specifying complex organisms used in their reproduction.
Compare Paul Davies,
The Fifth Miracle: The Search for the Origin and Meaning of Life, 1999
We should not assume the information all came in one large dollop when the earth was formed.
Exactly what information emerges may not be totally determined in the initial state,
if some of the interactions leading to new physical structures and new types of
information processing are physically unpredictable.
Another possibility is that external perturbations (e.g. asteroid impacts, or changes
in radiation reaching the planet) could significantly alter the environment in which
already evolved organisms continue evolving -- including changing the physical
properties of the environment or eliminating or reducing other relevant species, e.g.
prey or predators.
The latter would be a special case of a process that happens continually, namely the
environment for any species can be changed as a result of evolutionary changes in
other species in that environment, including prey (food), predators, parasites,
symbionts, etc.
Memes are self-reproducing information structures that move between information users
with capabilities for communication, imitation, teaching, learning, and related competences.
A full discussion would need to include the transitions that led to production of
information-processing machinery capable of supporting meme construction and
reproduction -- very different from the mechanisms involved in encoding, copying,
transmitting, using, interpreting information in genes.
Note: learning by imitation can be seen as a special case of this more general kind of
learning by external provocation and detection of new affordances.
(The information-processing requirements for learning by imitation are ignored by many who
regard that as an explanatory category.)
Russell Foster, Professor of Circadian Neuroscience at Oxford University, is obsessed with biological clocks. He talks to Jim al-Khalili about how light controls our wellbeing, from jet lag to serious mental health problems. Professor Foster explains how he moved from being a poor student at school to the scientist who discovered a new way in which animals detect light...
"... we may have been lavishing too much effort on hypothetical models of the mindIn Cognition and Reality, W.H. Freeman., 1976.
and not enough on analyzing the environment that the mind has been shaped to meet."
Jump to CONTENTS list
__________________________________________________________________________________________
Other documents introduce the general project and discuss conjectures about overlaps
between mechanisms originally used (pre-historically) to produce the mathematical
knowledge accumulated in Euclid's elements and mechanisms involved in non-human
animal intelligence and types of discovery pre-verbal children can make ("toddler
theorems"), which I think have unnoticed connections with J.J.Gibson's claim that a
major function of perception is discovery of affordances.
[*] Some very sketchy theoretical ideas about the nature-nurture issues related to toddler
theorems are presented in this paper published in IJUC in 2007:
http://tinyurl.com/BhamCosy/#tr0609
Jackie Chappell and Aaron Sloman
Natural and artificial meta-configured altricial information-processing systems
There's more on toddler theorems here
And many other local colleagues and students, including: Jeremy Wyatt, Achim Jung,
Dean Petters, Nick Hawes, Richard Dearden, Ales Leonardis, Rustam Stolkin, John Barnden,
Peter Hancox, Sebastian Zurek, Manfred Kerber, Veronica Arriola-Rios, Jon Rowe, Mark Ryan,
Peter Coxhead, William Edmondson, Ela Claridge, Peter Tino, Marek Kopicki,
And various members of the Theory group who have, from time to time, tolerated my
wild speculations...
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham