School of Computer Science
The University of Birmingham
http://www.cs.bham.ac.uk/~axs

Originally presented at a Royal Institute of Philosophy conference in 1992,
with proceedings later published as
Philosophy and the Cognitive Sciences,
Eds. Chris Hookway and Donald Peterson,
Cambridge University Press, Cambridge, UK, 1993, pp. 69--110,
Converted from 1992 FrameMaker PDF version 10 Apr 2014
(Some edits added. More to come.)
Many people who favour the design-based approach to the study of mind, including
the author in earlier work, have thought of the mind as a computational system, though
they don't all agree regarding the forms of computation required for mentality.
Because of ambiguities in the notion of 'computation' and also because it tends to be
too closely linked to the concept of an algorithm, it is suggested in this paper that we
should rather construe the mind (or an agent with a mind) as a control system
involving many interacting control loops of various kinds, most of them implemented
in high level virtual machines, and many of them hierarchically organised. (Some of
the sub-processes are clearly computational in character, though not necessarily all.)
A number of implications are drawn out, including the implication that there are many
informational substates, some incorporating factual information, some control
information, using diverse forms of representation. The notion of architecture, i.e.
functional differentiation into interacting components, is explained, and the
conjecture put forward that in order to account for the main characteristics of the
human mind it is more important to get the architecture right than to get the
mechanisms right (e.g. symbolic vs neural mechanisms): architecture dominates mechanism.
During the 1970s and most of the 1980s I was convinced that the best way to think of
the human mind was as a computational system, a view that I elaborated in my book The
Computer Revolution in Philosophy published in 1978. (Though I did point out that there
were many aspects of human intelligence whose explanation and simulation were still a very
long way off.)
At that time I thought I knew exactly what I meant by 'computational' but during the late
1980s, while trying to write a second book (still unfinished), I gradually became aware that I
was confused between two concepts. On the one hand there is a very precisely definable
technical concept of computation, such as is studied in mathematical computer science
(which is essentially concerned with syntactic relations between sequences of structures,
[April 1993 Page -- 2 -- The mind as a control system]
e.g. formally definable states of a machine or sets of symbols), and on the other hand there
is a more intuitive, less well-defined concept such as people use when they ask what
computation a part of the brain performs, or when they think of a computer as essentially a
machine that does things under the control of one or more programs. The second concept is
used when we talk about analog computers, for these involve continuous variation of
voltages, currents, and the like, and so there are no sequences of states.
Attempting to resolve the confusion revealed that there were not merely two but several
different notions of computation that might be referred to in claiming that the mind is a
computational system. Many of the arguments for and against the so-called 'Strong AI
Thesis' muddle up these different concepts and are therefore at cross purposes: despite
the passion of the conflicts, the positions argued for are often not actually inconsistent, as I've tried to show in
(Sloman 1992), which demonstrates that there are at least eight different interpretations of
the thesis, some obviously true, some obviously false, and some still open to investigation.
Eventually I realised that the non-technical concept of computation was too general, too
ill-defined, and too unconstrained to have explanatory power, whereas the essentially
syntactic technical concept was too narrow: there was no convincing reason to believe that
being a certain sort of computation in that sense was either necessary or sufficient for the
replication of human-like mentality, no matter which computation it was.
Being entirely computational in the technical sense could not be necessary for mentality
because the technical notion requires all processes to be discrete whereas there is no good
reason why continuous mechanisms and processes should not play a significant part in the
way a mind works, along with discrete processes.
Being a computation in the technical sense could not be sufficient for production of
mental states either. On the contrary, a static sequence of formulae written on sheets of
paper could satisfy the narrow technical definition of 'computation' whereas a mind is
essentially something that involves processes that interact causally with one another.
To see that causation is not part of the technical concept of computation, consider that
the limit theorems showing that certain sorts of computations cannot exist merely show that
certain sequences of formulae, or sequences of ordered structures (machine states) cannot
exist, e.g. sequences of Turing machine states that generate non-computable decimal
numbers. The famous proofs produced by Gödel, Turing, Tarski and others do not need to
make assumptions about causal powers of machines in order to derive non-computability
results. Similarly, complexity results concerning the number of steps required for certain
computations, or the number of co-existing memory locations, do not need to make any
assumptions about causation. Neither would adding any assumptions about computation as
involving causation make any difference to those results. Even the definition of a Turing
machine requires only that it has a sequence of states that conform to the machine's
transition table: there is no requirement that this conformity be caused or controlled by
anything, not even any mechanism implementing the transition table. All the mathematical
proofs about properties and limitations of Turing machines and other computers depend only
on the formal or syntactic relations between sequences of states. There is not even a
requirement that the states occur in a temporal sequence. The proofs would apply equally to
static, coexisting, sequences of marks on paper that were isomorphic to the succession of
states in time. The proofs can even apply to sequences of states encoded as Gödel numbers
that exist neither in space nor in time, but are purely abstract. This argument is elaborated in
Sloman (1992), as part of a demonstration that there is an interpretation of the Strong AI
thesis in which it is trivially false and not worth arguing about. This version of the thesis, I
suspect, is the one that Searle thinks he has refuted (Searle 1980), though I don't think any
researchers in AI actually believe it. There are other, more interesting versions that are left
untouched by the 'Chinese Room' argument.
Unfortunately, the broader, more intuitive concept of computation seems to be incapable
of being defined with sufficient precision to form the basis for an interesting, non-circular,
conjecture about the nature of mind. For example, if it turns out that in this intuitive sense
everything is a computer (as I conjectured, perhaps foolishly, in (Sloman 1978)), then saying
that a mind is a computer says nothing about what distinguishes minds (or the brains that
implement them) from other behaving systems, such as clouds or falling rocks.
I conclude that, although concepts and techniques from computer science have played a
powerful catalytic role in expanding our ideas about mental mechanisms, it is a mistake to try
to link the notion of mentality too closely to the notion of computation. In fact, doing so
generates apparently endless and largely fruitless debates between people talking at cross
purposes without realising it.
Instead, all that is needed for a scientific study of the mind is the assumption that there is
a class of mechanisms that can be shown to be capable of producing all the known
phenomena. There is no need for researchers in AI, cognitive science or philosophy to make
restrictive assumptions about such mechanisms, such as that they must be purely
computational, especially when that claim is highly ambiguous. Rather we should try to
characterise suitable classes of mechanisms at the highest level of generality and then
expand with as much detail as is needed for our purposes, making no prior commitments that
are not entailed by the requirements for the particular mechanisms proposed. We may then
discover that different sorts of mechanisms are capable of producing different sorts of minds,
and that could be a significant contribution to an area of biology that until now appears not to
have produced any hard theories: the evolution of mind and behaviour.
The purposes for which mental phenomena are studied and explained will vary from one
discipline to another. In the case of AI, the ultimate requirement is to produce working models
with human-like mental properties, whether in order to provide detailed scientific
explanations or in order to solve practical problems. For psychologists the goal may be to
model very specific details of human performance, including details that differ from one
individual to another, or from one experimental situation to another. For engineering
applications of AI, the goal will be to produce working systems that perform very specific
classes of tasks in well-specified environments. In the case of philosophy it will normally
suffice to explore the general nature of the mechanisms underlying mental phenomena down
to a level that makes clear how those mechanisms are capable of accounting for the peculiar
features of machines that can think, feel, take decisions, and so on.
That is the goal of this paper, though in other contexts it would be preferable to expand to
a lower level of detail and even show how to produce a working system, in a manner that
would satisfy the needs of both applied AI and detailed psychological modelling.
Since there are many kinds of control systems, I shall have to say what's special about a
mind. I shall also try to indicate where computation fits into this framework. I'll start by
summarising some alternative approaches with which this approach can be contrasted.
It is very hard to discuss or evaluate such analyses and theories, e.g. because
The real determinants of the mind are not conceptual requirements such as rationality,
but biological and engineering design requirements, concerned with issues like speed,
flexibility, appropriateness to the environment, coping with limited resources, information
retention capabilities, etc. We'll get further if we concentrate more on how it is possible for a
machine to match its internal and external processes to the fine structure of a fast-moving
environment, and less on what it is to be rational or conscious. Properties such as rationality
and intentionality will then emerge if we get our designs right. 'Consciousness' will probably
turn out to be a concept that's too ill-defined to be of any use: it will instead be replaced by a
collection of systematically generated concepts derived from theoretical analysis of what
different control systems can do.
This is closely related to what Dennett described as the 'design stance' (Dennett 1978).
It requires us to specify our theories from the standpoint of how things work: how perception
works, how motives are generated, how decisions are taken, how learning occurs, and so
on. Moreover, it requires us to specify these designs with sufficient clarity and precision that
a future engineer might be able to expand them into a working instantiation. Since this is very
difficult to do, we may, for a while, only be able to approximate the task, or achieve it only for
fragments of mental processes, which is what has happened in AI so far.
But the design stance does not require unique solutions to design problems. We must
keep an open mind as to whether there are alternative designs with interestingly varied
properties: abandoning Kant's idea of a 'transcendental deduction' proving that certain
features are necessary. Instead we can explore the structure of 'design space' to find out
what sorts of behaving systems are possible, and how they differ.
Adopting this stance teaches us that our ordinary concepts are inadequate to cope with
the full variety of kinds of systems and kinds of capabilities, states, or behaviour that can
emerge from exploratory studies of alternative designs in various kinds of environments, just
as they are inadequate for categorising the full variety of forms of mind found in biological
organisms, including microbes, insects, rodents, chimps and human beings. If we don't yet
know what mechanisms there may be, nor what processes they can produce, we can't
expect our language to be able to describe and accurately distinguish all the interestingly
different cases that can occur, any more than ordinary concepts can provide a basis for
saying when a foetus becomes a human being or when someone with severe brain damage
is no longer a human being. Our concepts did not evolve to be capable of dealing with such cases.
We should assess theories in terms of their ability to support designs that actually work,
as opposed to merely satisfying rationality requirements, fitting introspection, or 'sounding
convincing' to willing believers.
In true philosophical spirit we can let our designs, and our theorising, range over the full
space of possibilities instead of being constrained to consider only designs for systems that
already exist: this exploration of possible alternatives is essential for clarifying our concepts
and deepening our understanding of existing systems.
This is very close to the approach of AI, especially broad-minded versions of AI that
make no assumptions regarding mechanisms to be used. Both computational and non-
computational mechanisms may be relevant, though it's not obvious that there's a sharp
All of this implies that:
The idea of a complete system as having an atomic state with a 'trajectory' in phase
space is an old idea in physics, but it may not be the most useful way to think about a system
that is made of many interacting subsystems. For example a typical modern computer can
be thought of as having a state represented by a vector giving the bit-values of all the
locations in its memory and in its registers, and all processes in the computer can be thought
of in terms of the trajectory of that state-vector in the machine's state space. However, in
practice this has not proved a useful way for software engineers to think about the behaviour
of the computer. Rather it is generally more useful to think of various persisting sub-
components (strings, arrays, trees, networks, databases, stored programs) as having their
own changing states which interact with one another.
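The contrast can be made concrete with a toy sketch (the two subsystems and all names here are invented purely for illustration): each component keeps its own changing state, and one causally updates the other, with no need to describe the whole machine as a single point moving through one global state space.

```python
# Illustrative sketch: interacting subsystems with their own states,
# rather than one global state vector for the whole machine.

class Buffer:
    """Short-term store with its own local state."""
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

class LongTermStore:
    """Long-term store, updated at a different rate than the buffer."""
    def __init__(self):
        self.facts = set()

    def consolidate(self, buffer):
        # One subsystem's state change is caused by another's contents.
        self.facts.update(buffer.items)
        buffer.items.clear()

buffer = Buffer()
memory = LongTermStore()
buffer.push("door is open")
buffer.push("kettle boiling")
memory.consolidate(buffer)
```

Each subsystem here can change state asynchronously and at its own rate, which is exactly what the global state-vector picture obscures.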
So it is often more useful to consider separate subsystems as having their own states,
especially when the architecture changes, so that the set of subsystems, and substates, is
not static but new ones can be created and old ones removed. This leads to the following
Within the molecular approach we can identify a variety of functional sub-divisions
between sub-states and sub-mechanisms, and investigate different kinds of functional and
causal interactions. For example, we can describe part of the system as a long term
information store, another part as a short-term buffer for incoming information, another as
concerned with interpreting sensory input, another as drawing implications from previously
acquired information, another as storing goals waiting to be processed, and so on. The
notion of a global atomic state with a single trajectory is particularly unhelpful where the
various components of the system function asynchronously and change their states at
different rates, speeding up and slowing down independently of other subsystems.
Thus if dynamical systems theory is to be useful it will be at best a characterisation of
relatively low level implementation details of some of the subsystems. It does not provide a
useful framework for specifying how intelligent mind-like control systems differ from such
things as weather systems.
Moreover desire-like states can themselves be the objects of internal manipulation, for
instance when an agent suppresses a desire, or reasons about conflicting desires in
deciding what to do. Of course, in principle a defender of the dynamical systems analysis
could try to construe this as a higher level dynamical system with its own attractors
operating on a lower level one. Whether this way of looking at things adds anything useful
remains to be seen.
The control states listed above are not the only types of states to be found in intelligent
agents: they merely indicate the sorts of things that might be found in a taxonomy of
substates of an intelligent system. For complete specifications of control systems we would
need more than a classification of states. We would also need to specify the relationships
between states, such as:
These are merely some initial suggestions regarding the conceptual framework within
which it may be useful to analyse control systems in general and intelligent control systems
in particular. A lot more work needs to be done, including exploration of design requirements,
specifications, designs and mechanisms, and analysis of trade-offs between different
designs. All this work will drive further development of our concepts.
would be many lifetimes' work, but we can get some idea of what is involved by looking at
some special cases.
Thermostats provide a very simple illustration of the idea that a control system can
include substates with different functional roles. A thermostat typically has two control states,
one belief-like (B1), set by the temperature sensor, and one desire-like (D1), set by the control knob or dial.
Arguing whether a thermostat really has desires is silly: the point is that it has different
coexisting substates with different functional roles, and the terms 'belief-like' and 'desire-like'
are merely provisional labels for those differences, until we have a better collection of theory-
based concepts. More complex control systems have a far greater variety of coexisting
substates. We need to understand that variety. Thermostats are but a simple limiting case. In
particular they have no mechanisms for changing their own desire-like states, and there is no
way in which their belief-like states can include errors which they can detect, unlike a
computer which, for example, can create a structure in one part of its memory summarising
the state of another part: the summary can get out of date and the computer may need to
check from time to time by examining the second portion of memory, and updating the
summary description if necessary. By contrast the thermostat includes a device that directly
registers temperature: there is no check. A more subtle type of thermostat could learn to
predict changes in temperature. It would check its predictions and modify the prediction
algorithm from time to time, as neural nets and other AI learning systems do.
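A thermostat of the simple kind just described can be sketched as a toy program (a minimal illustration, with invented names): the belief-like substate B1 and the desire-like substate D1 are distinct components with different causal roles, and behaviour is driven by the relation between them.

```python
class Thermostat:
    """Minimal control system with a belief-like and a desire-like substate."""
    def __init__(self, setpoint):
        self.d1 = setpoint   # desire-like state: set from outside (the knob)
        self.b1 = None       # belief-like state: set by the sensor

    def sense(self, actual_temperature):
        # The environment causes the belief-like state; the thermostat
        # has no way to check or correct this registration.
        self.b1 = actual_temperature

    def act(self):
        # Behaviour depends on the *relation* between B1 and D1.
        if self.b1 < self.d1:
            return "heater on"
        return "heater off"

t = Thermostat(setpoint=20.0)
t.sense(17.5)
print(t.act())   # belief below desire
t.sense(21.0)
print(t.act())   # belief above desire
```

Note what the sketch lacks: no mechanism changes `d1` from inside, and no mechanism checks `b1` for error, which is precisely why the thermostat is a limiting case.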
Moving through design-space we find architectures that differ from the thermostat in the
kinds of sub-states, the number and variety of sub-states, the functional differentiation of
sub-states, and the kinds of causal influences on substates, such as whether the machine
can change its own desire-like states.
Systems with more complex architectures can simultaneously control several different
aspects of the environment. For example, the next figure represents a system involving three
independently variable states of the environment, E1, E2, E3, sensed using sensors S1, S2,
S3, and altered using output channels O1, O2, O3. The sensors are causally linked to belief-like internal states, B1, B2, B3, and the behaviour is produced under the influence of these
and three desire-like internal states D1, D2, D3. Essentially this is just a collection of three
independent feedback loops, and, as such, is not as interesting as an architecture in which
there is more interaction between control subsystems.
The architecture can be more complicated in various ways: e.g. sharing channels, using
multiple layers of input or output processing, self monitoring, self-modification, etc. Some of
these complications will now be illustrated.
An interesting constraint that can force internal architectural complexity occurs in many
biological systems and some engineering systems: instead of having separate sensors (Si)
and output channels (Oi) for each environmental property and its associated belief-like and
desire-like states (Ei, Bi, Di), a complex system might share a collection of Si and Oi between
different sets of Ei, Bi, Di, as shown in the next diagram. The sharing may be either simultaneous (with data
relevant to two tasks superimposed) or successive.
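Successive sharing can be sketched as a toy program (the two tasks and all names are invented for illustration): one input channel is time-shared between two belief-like states, so the architecture needs an extra internal routing state deciding which substate an incoming reading updates.

```python
class SharedChannelSystem:
    """Two control loops (temperature, light) sharing one input channel."""
    def __init__(self):
        self.beliefs = {"temperature": None, "light": None}
        self.attend_to = "temperature"   # internal routing state

    def switch_attention(self, target):
        # Sharing forces architectural complexity: something inside must
        # decide where readings from the shared channel are routed.
        self.attend_to = target

    def receive(self, reading):
        # The same physical channel updates different belief-like states
        # at different times (successive sharing).
        self.beliefs[self.attend_to] = reading

s = SharedChannelSystem()
s.receive(18.0)                 # routed to the temperature belief
s.switch_attention("light")
s.receive(0.7)                  # same channel, now updates the light belief
```

With three fully separate loops no such routing component is needed; sharing is what makes the internal architecture richer.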
Examples of shared input and output channels are:
The diagram above is an attempt to illustrate all this architectural richness in a visual system,
albeit in a very sketchy fashion.
In human beings some, but not all, of the intermediate perceptual information stores are
accessible to internal self-monitoring processes, e.g. for the purpose of reporting how things
look (as opposed to how they are), or painting scenes, or controlling actions on the basis of
visible relationships in the 2-D visual field. I believe that this is the source of the kinds of
experiences that make some philosophers wish to talk about 'qualia'. From this viewpoint,
qualia, rather than being hard to accommodate in mechanistic or functional terms, exist as
an inevitable consequence of perceptual design requirements. Of course, there are
philosophers who add additional requirements to qualia that make them incapable of being
explained in this way: but I suspect that those additional requirements also make qualia
figments of such philosophers' imaginations. Not pure figments, since such philosophical
tendencies are a result of the existence of real qualia of the sort described here.
Vision, or at least human-like vision, is not just a recognition or labelling process:
creation and mapping of structures is also involved, and this requires architectures and
mechanisms with sufficient flexibility to cope with the rapidly changing structures that occur
as we move around in the environment. I've tried to elaborate on all this in Sloman (1989),
arguing that, contrary to views associated with Marr, vision should not be construed simply as
being a system for producing information about shape and motion from retinal input. There
are other sources of information that play a role in vision, there are other uses to which
partial results of visual processing can be put (e.g. posture control, attention control), and
there are richer descriptions that the visual system itself can produce (e.g. when a face looks
happy, sad, dejected, beautiful, intelligent, etc.).
The internal information structures produced by a perceptual system depend not only on
the nature of the environment (E1, E2, etc.) but also on the agent's needs, purposes, etc.
(the Di) and conceptual apparatus. Because of this, different kinds of organisms, or even two
people with different information stores, can look at the same scene and see different things.
Many representational problems are still unsolved, including, for instance, the problem of how
arbitrary shapes are represented internally. Clues to human information structures and
processes come from analysing examples in great detail, such as examples of things we can
see, how they affect us, and what we can do as a result. I believe that every aspect of human
experience is amenable to this kind of functional analysis, and that supposed counter-
examples are put forward only because many philosophers do not have sufficient design
creativity: most of them are not good cognitive engineers!
For example, it may be that variations during construction of a plan of action, variations
during visual perception of a continuously moving object, and variations when wondering
what conclusions can be drawn from some puzzling evidence all require very different
internal structural changes, and that different sorts of sub-mechanisms are therefore required.
The kind of variability needed in Bi and Di states depends on both the environment (e.g.
does it contain things with different structures, things with changing structures, etc.?) and the
requirements and abilities of the agent. Compare the needs of a fly and of a person. Do flies
need to see structures (e.g. for mating)? Do they deliberately create or modify structures?
Rivers don't. There is lots more work to be done analysing the design requirements for
various organisms in terms of their functional requirements in coping with the environment
and with each other. This is one way in which to provide a conceptual framework for
investigating the evolution of mind-like capabilities of different degrees of sophistication.
'Architecture dominates mechanism'
The detailed mechanisms make only marginal differences as long as they support the design
features required for reasons given earlier, such as:
As we've argued above, 'virtual' machines in computers seem to have some of the
required features, including rich structural variability and the ability to change structures very
quickly. It may be that brains can also do this, though if they do it will most likely involve
another virtual machine, for it is not possible for networks of nerve cells to change their
structures rapidly. In computers the virtual machine structures are usually implemented in
terms of changing configurations of bit patterns in memory. Perhaps in brains it is done via
changing configurations of activation patterns of neurones. In computers the same
mechanisms are used for both short term and long term changes (except where long term
changes are copied into a slower less volatile memory medium such as magnetic disks and
tapes). In brains it seems likely that different mechanisms are used for long term and short
term changes. For example in some neural net models the long term changes require
changing 'weights' on excitatory and inhibitory links between neurones, and getting these
changes to occur seems to require much longer 'training' processes than the changing
patterns of activation produced by new neural inputs. (However, there are well-known
mnemonic tricks that produce 'one-shot' long-term learning.) It seems very likely that there
are other kinds of important processes used in brains including chemical processes.
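The difference in time-scales can be illustrated with a toy single-weight 'network' (the numbers and learning rule are arbitrary illustrative choices, not a model of any real brain mechanism): activations change instantly with each input, while the weight moves only slightly per training step.

```python
LEARNING_RATE = 0.01

def activate(weight, inp):
    # Fast process: a new activation is produced on every input.
    return weight * inp

def train_step(weight, inp, target):
    # Slow process: each training step nudges the weight only slightly
    # toward reducing the error on this one example.
    error = target - activate(weight, inp)
    return weight + LEARNING_RATE * error * inp

w = 0.0
for _ in range(100):
    w = train_step(w, inp=1.0, target=1.0)
# After 100 steps the weight has only crept partway toward the target
# mapping, whereas every activation along the way changed instantly.
```

The point is just the contrast in rates: structural (weight-level) change is orders of magnitude slower than the moment-to-moment activation patterns implemented on top of it.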
Whatever the actual biological implementation mechanisms may be it is at least
theoretically possible that the very same functional architectures are capable of being
implemented in different low-level mechanisms. It is equally possible that this is ruled out in
our physical world because some of the processes require tight coupling between high level
and low level machines, and it could turn out that in our universe the only way to achieve this
is to use a particular type of brain-like implementation. E.g. it could turn out that, in our
universe, only a mixture of electrical pathways and chemical soup could provide the right
combination of fine-grained control, structural variability and global control. I have no reason
to believe that there is such a restriction on possible implementations: I merely point out that
it is a possibility that should not be ruled out at this stage.
But we don't know enough about requirements, nor about available mechanisms, to
really say yet which infrastructures could and which couldn't work. These are issues still
requiring research (not philosophical pontificating!).
Many people feel that their concepts are so clear and precise that they can be used to
produce a sharp division in the world. That is, there is a major dichotomy like this:
Unfortunately, when they attempt to decide where the dividing line actually lies, they
generally find it so hard to provide one, especially one on which everyone will agree, that
many of them jump to the conclusion that the space is a smooth continuum with no
natural division, so that it is purely a matter of convenience where the line should be drawn.
So they think of design space like this:
This is a deep mistake: any software designer will appreciate that there are many
important discontinuities in designs. For instance a multi-branch conditional instruction in a
typical programming language can have 10 branches or 11 branches but cannot have 10.5
or 10.25 or 10.125 branches. Each condition-action pair is either present or not present.
Similarly, a machine can have skids for moving over the ground or it can have wheels,
but there is no continuous set of transformations that will gradually transform a vehicle on
skids into a wheeled vehicle: eventually there will be a discontinuity when the system
changes from being made of one piece to being made of pieces that can move against each
other (like an axle in a hole). If we think of biological organisms as forming a continuum then
we fail to notice that there is a very important research task to be done, namely to explore the
many design discontinuities in order to understand where they occur, what difference it
makes to an organism whether it is on one side or the other of the discontinuity, and what
kinds of evolutionary pressures might have supported the discontinuous jump. (Notice that
none of this is an argument in support of a creationist metaphysics: it is a direct consequence
of Darwinian theory that since acquired characteristics cannot be inherited there can only be
a finite number of designs occurring between any two points in time, and therefore there
must be many discontinuous changes, even if many of them are small discontinuities, such
as going from N to N+1 components where N is already large.)
However it could well turn out that some of the discontinuities were of major significance.
So we should keep an open mind and, for the time being assume that design space includes
a large number of discontinuities of varying significance, some far more important than
others. We could picture it something like this:
This picture is still too simple: e.g. it is single-layered, whereas different maps may be
required for different levels of design. There are still many design options and trade-offs that
we don't yet understand. We need a whole family of new concepts, based on a theory of
design architectures and mechanisms, to help us understand the relation between structure
and capability (form and function).
Within this framework we can construe different kinds of attention in terms of different
ways patterns of activity can be selected. The selection may involve changing which
information is analysed, how it is analysed (i.e. which procedures are applied), and selecting
where the results should go. Another example would be selecting which goals to think about
or act on, and, for selected goals, choosing between alternative issues to address, e.g.
choosing between working out whether to adopt or reject the goal, working out how urgent or
important it is, selecting or creating a plan for achieving it, etc.
Some selections will be based solely on what is desirable to the system or serves its
needs. However, sometimes two or more activities that are both desirable cannot both be
pursued because they are incompatible, such as requiring the agent to be in two places at
once, or looking in two directions at once or requiring more simultaneous internal processing
than the agent is capable of. The precise reasons why human thought processes are
resource-limited are not clear, but resource-limited they certainly are. So the control of
attention is important, and allowing control to be lost and attention to be diverted can
sometimes be disastrous. The architecture should therefore include mechanisms that can
filter out potential distractors.
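Such a filter can be caricatured in a few lines of Python. The 'insistence' scores, the numeric threshold, and the motivator names below are illustrative inventions of mine, not mechanisms proposed in this paper: the idea is simply that a busy agent raises its interrupt threshold so that low-insistence distractors never reach deliberation.

```python
# Illustrative sketch: 'insistence' values and the threshold rule are my
# inventions, not mechanisms proposed in the paper.

def filter_motivators(motivators, threshold):
    """Pass only motivators insistent enough to interrupt current processing."""
    return [m for m in motivators if m["insistence"] >= threshold]

current = [
    {"goal": "finish route plan", "insistence": 0.9},
    {"goal": "investigate noise", "insistence": 0.4},
    {"goal": "scratch itch",      "insistence": 0.1},
]

# A busy agent raises its threshold, so minor distractors never surface;
# an idle agent lowers it and attends to more.
print([m["goal"] for m in filter_motivators(current, threshold=0.5)])
print([m["goal"] for m in filter_motivators(current, threshold=0.2)])
```

Varying the threshold with the current processing load gives one crude model of how control of attention could be retained or lost.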
These remarks illustrate the kinds of problem that arise when one adopts the design stance
but would not normally occur to philosophers who do not. Their significance is that they
point to the need, in realistic, resource-limited agents, for mechanisms in terms of which
mental states and processes can be defined, mechanisms that would be totally irrelevant to idealised
agents that had unlimited processing capabilities and storage space. Thus insofar as it is
part of the job of philosophers to analyse concepts that we use for describing the mental
states and processes of real agents, and not just hypothetical imaginary ideal agents,
philosophers need to adopt the design stance.
This can be illustrated with the example of a certain kind of emotional state. I have tried
to show elsewhere (Sloman and Croucher 1981, Sloman 1987, Beaudoin and Sloman 1993)
that certain kinds of resource-limited systems can get into states that have properties closely
related to familiar aspects of certain emotional states, namely those in which there is a partial
loss of control of our own thought processes. Such capabilities would not be the product of
specific mechanisms for producing those states, but would be emergent properties of
sophisticated resource-limited control systems, just as saltiness emerges when chlorine and
sodium combine, and 'thrashing' can emerge in an overloaded computer operating system.
Our vocabulary for describing such emergent global states will improve with increased
understanding of the underlying mechanisms.
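The thrashing analogy can be made concrete with a toy model (my construction; the quadratic contention term is an arbitrary illustrative assumption). Nothing in the code mentions 'thrashing', yet useful throughput collapses once concurrency outgrows capacity, so the collapse is an emergent global state rather than the product of a dedicated mechanism:

```python
# Toy model of emergent 'thrashing' (my construction: the quadratic
# contention term is an arbitrary illustrative assumption).

def throughput(n_tasks, capacity=10.0, switch_cost=0.4):
    """Useful work per tick once pairwise contention overhead is paid."""
    overhead = switch_cost * n_tasks * (n_tasks - 1) / 2
    return max(0.0, capacity - overhead)

# Throughput first holds up, then collapses as concurrency grows,
# although no component of the model refers to 'thrashing'.
for n in (1, 3, 5, 8):
    print(n, throughput(n))
```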
There are many shallow views about emotional states, including the view that they are
essentially concerned with experience of physiological processes. If that were true then
anaesthetising the body would be a way to remove grief over the death of a loved one.
A deeper analysis shows, I believe, that what is important to the grieving mother (and
those who are close to her) is that she can't help thinking back about the lost child, and what
she might have done to prevent the death, and what would have happened if the child had
lived on, etc. There may also be physiological processes and corresponding sensory
feedback but in the case of grief they are of secondary importance. The socially and
personally important aspects of grief are closer to control states of a sophisticated
information processing system.
Several AI groups are now beginning to explore these issues. But there is much that we
still don't understand about design requirements relating to the sources of motivation and the
kinds of processes that can occur in a system with its own motivational substates.
It is often said that a machine could never have any goals of its own: all of its goals would
essentially be goals of the programmer or the 'user.' However, consider a machine that has
the kind of hierarchy of dispositional control states described previously, analogous to very
general traits, more specific but still general attitudes, preferences, and specific desire-like
states. Now suppose that it also includes 'learning' mechanisms such that the states at all
levels in the hierarchy are capable of being modified as a result of a long history of
interaction with the environment, including other agents. After a long period of interacting
with other agents and modifying itself at different levels in the control hierarchy such a
machine might respond to a new situation by generating a particular goal. The processes
producing that goal could not be attributed entirely to the designer. In fact, there will be such
a multiplicity of causes that there may not be any candidate for 'ownership' of the new goal
other than the machine itself. This, it seems to me, is no different from the situation with
regard to human motives which likewise come from a rich and complex interplay of genetic
mechanisms, parental influences and short and long term, direct and indirect effects of
interaction with the individual's environment, including absorption of a culture.
Issues concerning 'freedom of the will' get solved or dissolved by analysing types and
degrees of autonomy within systems so designed, so that the free/unfree dichotomy
disappears. (Compare Dennett 1984, Sloman 1978)
Exploration of important discontinuities in design-space could lead to the formulation of
important new questions about when and how these discontinuities occurred in biological
evolution. For example, it could turn out that the development of a hierarchy of dispositional
control states was a major change from simpler mechanisms permitting only one control loop
to be active at a time. Another discontinuity might have been the development of the ability to
defer some goals and re-invoke them later on: that requires a more complex storage
architecture than a system that always has only one 'adopted' goal at a time. Perhaps the
ability to cope with rapid structural variation in information stores was another major
evolutionary advance in biological control systems, probably requiring the use of virtual machines.
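The architectural difference involved in deferring and re-invoking goals can be sketched as follows (a hypothetical toy, not a design from the paper): a goal store in which adopting an urgent goal suspends, rather than destroys, the current one, unlike an architecture with only a single 'adopted' goal at a time.

```python
# Hypothetical sketch (not a design from the paper): a goal store in which
# adopting an urgent goal suspends, rather than destroys, the current one.

class GoalStore:
    def __init__(self):
        self.active = None       # the single goal currently being pursued
        self.suspended = []      # deferred goals awaiting re-invocation

    def adopt(self, goal):
        if self.active is not None:
            self.suspended.append(self.active)   # defer, don't discard
        self.active = goal

    def complete(self):
        # When the active goal finishes, re-invoke the most recently deferred one.
        self.active = self.suspended.pop() if self.suspended else None

store = GoalStore()
store.adopt("reach food")
store.adopt("flee predator")   # urgent goal displaces but preserves the old one
store.complete()               # predator escaped: the deferred goal returns
print(store.active)            # -> reach food
```

A one-goal architecture would have to discard "reach food" on adopting the urgent goal; the extra storage is what makes deferral and later re-invocation possible.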
One implication of the claim that there's not just one major discontinuity, but a large
collection of different discontinuities of varying significance is that many of our concepts that
are normally used as if there were a dichotomy cannot be used to formulate meaningful
questions of the form 'Which organisms have X and which organisms don't?', 'How did X
evolve?' 'What is the biological function of X?' This point can be made about a variety of
substitutes for X, e.g. 'consciousness', 'intelligence', 'intentionality', 'rationality', 'emotions'.
However, a systematic exploration of the possibilities in design space could lead us to
replace the supposed monolithic concepts with collections of different concepts
corresponding to different combinations of capabilities. Detailed analysis of the functional
differentiation of substates and the varieties of process that are possible could produce a
revised vocabulary for kinds of mental states and processes. Thus, instead of the one ill-
defined concept 'consciousness' we might find it useful to define a collection of theoretically
justified precisely defined concepts C1, C2, C3... Cn, which can be used to ask scientifically
answerable questions of the above forms.
This evolution of a new conceptual framework for talking about mental states and
processes could be compared with the way early notions of kinds of stuff were replaced by
modern scientific concepts as a result of the development of the atomic theory of matter.
A machine with the sort of architecture sketched above would be inherently unstable: internal states are constantly in flux, even without external stimulation.
Most of the 'behaviour' of such a machine would then be internal (including changes within
virtual machines). Moreover, since most of the causal relationships between external stimuli
and subsequent behaviour in such a system would be mediated by internal states, and since
these states are in a state of flux, the chance of finding interesting correlations between
external stimuli and responses would be very low, making the task of experimental
psychology almost hopelessly difficult.
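A toy model (entirely my construction) shows why such stimulus-response correlations would be elusive: the same stimulus meets a different internal state each time, because the state changes even between stimuli.

```python
# Toy illustration (entirely my construction): responses to a fixed stimulus
# vary because the internal state keeps changing even between stimuli.

class FluxAgent:
    def __init__(self):
        self.mood = 0                         # internal state, always in flux

    def tick(self):
        self.mood = (self.mood + 3) % 7       # unprompted internal 'behaviour'

    def respond(self, stimulus):
        return (stimulus + self.mood) % 2     # response mediated by inner state

agent = FluxAgent()
responses = []
for _ in range(6):
    agent.tick()                              # internal flux between stimuli
    responses.append(agent.respond(stimulus=1))

print(responses)   # the identical stimulus yields both 0s and 1s
```

An experimenter who could observe only the stimulus and the response would find no reliable mapping between them, even though the system is entirely deterministic.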
For similar reasons, there would not necessarily be any close correspondence between
internal control states such as the Bi and Di, and external circumstances and behaviour. So,
for such a system, inferring inner states from behaviour with any reliability is nearly
impossible. Moreover, if many of the important control states are states in virtual machines
there won't be much hope of checking them out by opening up the machine and observing
the internal physical states either. This provides a kind of scientific justification for
philosophical scepticism about other minds.
Thus, even if design-based studies lead to the development of a new systematic
collection of concepts for classifying types of mental states and processes it may be very
difficult to apply those concepts to particular cases. This could be put in the form of a
paradox: by taking the design stance seriously we can produce reasons why the design
stance is almost impossible to apply to the understanding of particular individuals which we
have not designed ourselves.
If some of the internal processes are 'self-monitoring' processes that produce explicit
summary descriptions of what's going on (inner percepts?) these could give the agent the
impression of full awareness of his own internal states. But if the self-monitoring processes
are selective and geared to producing only information that is of practical use to the system,
then they will no more give complete and accurate information about internal states and
processes than external perceptual processes give full and accurate information about the
structure of matter. Thus the impression of perfect self-knowledge will be an illusion.
Nevertheless the fact that all this happens could be what explains the strong temptation to
talk about 'qualia' felt by many philosophers. I have previously drawn attention to the special
case of this where internal monitoring processes can access intermediate visual databases.
More generally, a host of notions involving sentience, self-monitoring capabilities, high-
level control of internal and external processes including attention, and the ability to direct
attention internally, including attending to 'qualia', could all be accounted for by a suitable
information-processing control system.
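The selectivity of such self-monitoring can be sketched in a few lines (all field names below are hypothetical): the 'inner percept' summarises only what is of practical use, so the agent's self-model is necessarily incomplete however complete it feels.

```python
# Hedged sketch: a selective self-monitor (all field names are hypothetical).

full_state = {
    "goal_queue_length": 12,
    "arousal": 0.7,
    "edge_detector_activations": [0.1, 0.9, 0.3, 0.8],  # intermediate data
    "register_bank": [42, 17, 99],                      # implementation detail
}

def self_monitor(state):
    """Produce the agent's 'inner percept': selective, hence incomplete."""
    return {
        "busy": state["goal_queue_length"] > 5,
        "agitated": state["arousal"] > 0.5,
    }

percept = self_monitor(full_state)
# The agent 'knows' it is busy and agitated, but its self-model reveals
# nothing about register contents or intermediate visual data.
print(percept)   # -> {'busy': True, 'agitated': True}
```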
When we have a good design-based theory of how complex human-like systems work it
could lead us to many new insights concerning ways in which they can go wrong. This could,
for example, help us to design improved teaching and learning strategies, and strategies for
helping people with emotional and other problems. If we acquire a better understanding of
mechanisms underlying learning, motivation, emotions, etc. then perhaps we can vastly
improve procedures in education, psychotherapy, counselling, and teaching psychologists
about how minds work (as opposed to teaching them how to do experiments and apply statistical techniques).
By now readers will be aware that questions of these forms ('Which organisms or machines
have X and which don't?') are based on the unjustified assumption that we have a precisely defined concept which generates a dichotomous
division. This is an illusion, just like all the other illusions that bedevil philosophical
discussions about mind. It's an illusion because our ability to represent or think about things
is not a monolithic ability that is either wholly present or wholly absent in every other organism
or machine. Rather it's a complex collection of (ill-understood) capabilities, different subsets
of which may be present in different designs.
One group of relevant capabilities involves the availability of sub-mechanisms with
sufficiently varied control states for particular representational purposes. The kinds of
variability in the mechanisms required for intermediate visual perception are likely to be quite
different from the sorts of variability required for comparing two routes, or thinking about what
to do next week. There are probably far more organisms that share with us the former
mechanisms than share the latter. We can label this the 'structural richness' requirement.
Another group of requirements involves functional diversity of uses of the representing
structures. Humans can have states in which they perceive things, wonder about things (e.g.
is someone in the next room?), desire things (e.g. wanting a person to accept one's marriage
proposal) or plan sequences of actions. Being able to put information structures to all these
diverse uses requires an architecture that supports differentiation of roles of sub-
mechanisms. Some organisms will have only a small subset of that diversity in common with
us, others a larger set. A bird may be capable of perceiving that there are peanuts in a
dispenser in the garden, but be incapable of wondering whether there are peanuts in the
dispenser or forming the intention to get peanuts into the dispenser. (Of course, I am
speaking loosely in saying what it can see: its conceptual apparatus may store information in
a form that is not translatable into English. It's hard enough to translate other human
languages into English!)
What exactly are the syntactic and functional requirements for full human-like
intentionality, i.e. representational capability? I don't yet know: that's another problem on
which there's work to be done, though I've started listing some of the requirements in
previous papers (Sloman 1985, 1986). One thing that's clear is that any adequate theory of
how X can use Y to refer to Z is going to have to cope with far more varied syntactic forms
than philosophers and logicians normally consider: besides sentential or propositional forms
there will be all the kinds of representing structures that are used in intermediate stages of
sensory processing. Thus an adequate theory of semantics must account for the use of
pictorial structures and possibly also more abstract representational structures such as
patterns of weights or patterns of activation in a neural net.
What convinces me that the problems of filling in the story are not insuperable is the fact
that there are clearly primitive semantic capabilities in even the simplest computers, for they
can use bit patterns to refer to locations in their memories, or to represent instructions, and
they can use more complex 'virtual' structures to represent all sorts of things about their own
internal states, including instructions to be obeyed, descriptions of some of their memory
contents, and records of their previous behaviour. A machine can even refer to a non-
existent portion of its memory if it constructs an 'address' that goes beyond the size of its
memory. With more complex architectures they will have richer, more diverse semantic capabilities.
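These primitive semantic capabilities are easy to exhibit. In the sketch below (illustrative, not drawn from the paper), a bit pattern is interpreted as an address referring to a memory cell, and a well-formed pattern can 'name' a cell that does not exist:

```python
# Illustrative sketch: a bit pattern interpreted as an address refers to a
# memory cell, and a well-formed pattern can 'name' a cell that does not exist.

memory = [0] * 8                     # 8-cell memory; valid addresses 0..7

def deref(address_bits):
    """Interpret a bit string as an address and fetch its referent."""
    address = int(address_bits, 2)
    if address >= len(memory):
        return None                  # a syntactically valid 'name' with no referent
    return memory[address]

memory[5] = 99
print(deref("101"))    # refers to cell 5 -> 99
print(deref("1101"))   # address 13 lies beyond memory: reference without referent
```

The second call is the machine analogue of referring to something non-existent: the 'name' is perfectly well-formed even though nothing answers to it.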
Being able to refer to things outside itself, or even to non-existent things like the person
wrongly supposed to be in the next room or the action planned for tomorrow which never
materialises, requires the machine to have a systematic and generative way of relating
internal states to external actual and possible entities, events, processes, etc. Although this
may seem difficult in theory, in practice fragmentary versions of such capabilities are already
possessed by robots, plant control systems and other computing systems that act semi-
autonomously in the world (Sloman 1985,1986). Of course, they don't yet have either the
syntactic richness or the functional variety of human representational capabilities, but the
question how to extend their capabilities is to be treated as an engineering design problem.
Instead of proving that something is or is not possible, philosophical engineers, or design-oriented philosophers, should expect to find a range of options with different strengths and weaknesses.
Anyone who tries to prove that it is impossible to create a machine with semantic
capabilities risks joining the ranks of those who 'knew' that the earth was flat, that action at a
distance was impossible, that space satisfied Euclidean axioms, that no uncaused events
can occur, or that a deity created the universe a few thousand years ago.
I am grateful for comments and criticisms of earlier versions of this paper and related papers,
made by colleagues and students in the Cognitive Science Research Centre, the University of Birmingham.