POSTED 17 Jan 2003

Replies to comments are listed at the end
Last updated: 28 Jan 2003
Some broken links fixed: 4 Jun 2012


To: PSYCHE-B@LISTSERV.UH.EDU
Subject: pre-requisites for discussing consciousness

Here are some thoughts provoked by the recent (January 2003) spurt of messages on consciousness (subject: 'Memories are made of this') on the Psyche-b mailing list.

I've learnt over the years that in order to avoid various conceptual, methodological, and empirical errors it is useful if people who discuss consciousness (and other kinds of mental phenomena) have fairly deep knowledge in a number of fields, including these:

1. Software engineering (especially AI software engineering):

Knowing what software engineers know about virtual machines, for instance, is useful in order to avoid a host of philosophical errors about possible forms of mind/body relationships, and ill-informed views about requirements for the existence of mental states and processes or their having causal powers. [I've put this first because it is the one most often lacking in students of consciousness, which leads to the most common collections of confusions. I have some further notes on this below.]

2. Ethology:

In order to avoid concentrating too much on the human case (or worse, just the adult academic human case) instead of seeing how the human case fits into a much bigger picture.

(E.g. flea consciousness, crow consciousness, chimp consciousness, whale consciousness)

We could include developmental psychology here, insofar as infant humans are in many ways like another species.

3. Biology:

From a biological viewpoint human capabilities are the result of millions of years of evolution and depend on much that we share with other animals. Evolvability is a useful constraint on theories about minds, brains, consciousness, and related topics.

4. Anthropology:

In order to avoid the short-sightedness that follows when common assumptions about the nature of mind in our own culture are taken to be universally believed and (therefore?) true. (Kathy Wilkes has argued that nothing like our word 'conscious' existed in European culture till a few centuries ago and that it is still lacking in some cultures. See K. Wilkes, 'Losing Consciousness', in T. Metzinger (ed), Conscious Experience, 1995. Others have argued that our notion of 'mind' did not exist in ancient Greece.)

5. Neuroscience:

In order to understand something about the implementation constraints on adequate theories of mind. (E.g. it is not enough to read typical summaries of how connectionist AI models work -- brains are much richer.) But we have to be sceptical also: neuroscientists are not necessarily trained to think about relevant kinds of virtual machines. (So they produce simplified theories about 'executive functions', for instance.)

6. Neuropsychiatry:

In order to be informed about the variety of components of mind that can be separately damaged or disabled (genetically or otherwise) and how those different sorts of malfunctions, or variations in functionality, can manifest themselves. E.g. what appears to be a unitary capability, such as seeing an object in front of you, may turn out to be composed of several sub-capabilities that can be separately disabled.

7. Anaesthesiology:

In order to learn about the wide variety of types of states that can be produced between normal full consciousness and profound unconsciousness.

8. Robotics:

In order to appreciate some of the design problems that had to be overcome by evolution in producing human minds and brains, and those of many other species (e.g. explaining how Betty the crow does what she does; watch the video if you have not seen it):

http://news.bbc.co.uk/1/hi/sci/tech/2178920.stm
http://news.bbc.co.uk/media/video/38185000/rm/_38185047_crow_08aug_vi.ram

What are the problems in designing a robot that can do what Betty did? Designing robots helps to expose things that need explaining which otherwise can seem obvious. How do humans do what Betty did?
        http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk20
E.g. is it possible to represent branching possible futures where the branches (almost) form a continuum as in the crow's environment? How might crows (and humans) deal with this?
Robot vision is a largely unsolved problem:

        http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk8
        http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#gibson

9. Psychology:

This provides useful extensions to what most of us know already, as long as you reject the currently widespread naive empiricist methodology that often constrains psychological theorising and testing. (That methodology rests partly on physics-envy combined with ignorance of how the physical sciences have actually developed, leading to bad philosophy of science, e.g. operationalism, and to excessive striving to meet the requirements of statistics packages rather than those of science.)

10. Linguistics:

To fill in all the gaps in the other disciplines concerning what languages are and how they work and how varied they are, and how little we currently understand about them. Psycholinguistics adds further detail regarding processes involved in language.

11. Literature:

In order to be reminded of the wide variety of types of mental phenomena that are possible - as reported by novelists, playwrights, and poets, each case presented in far more detail than scientists can usually manage. (David Lodge discusses some of these relationships in his recent collection of essays, Consciousness and the Novel.) E.g. some types of emotion that are impossible according to widely held theories are reported by sharp-eyed novelists, such as long-term emotions that can be dormant much of the time, like grief and jealousy.

12. Philosophy:

In order to become sensitive to conceptual muddles and confusions (especially those that are pervasive in discussions of consciousness, free will, emotion, supervenience, etc.) and in order to learn techniques for exposing hidden assumptions, ambiguities, vagueness, etc. (As long as you don't necessarily accept the philosophical theories of mind that are current, whose proponents usually suffer from ignorance concerning a large subset of the other topics listed here, especially software engineering and AI.)

13. Quantum mechanics:

In order to be able to understand why most of the arguments attempting to prove the relevance of quantum mechanics are irrelevant. However, for this it is not sufficient to understand QM. You also need to be an expert philosopher and to know a lot about the other topics.

14. Mathematics (continuous and discrete) and Logic

(Not just arithmetic and statistics). In order to be able to think at varying levels of abstraction with great precision, including understanding distinctions that are not easily expressible in ordinary language (e.g. it can be hard to explain to a non-mathematician how a process can simultaneously have decreasing velocity and increasing acceleration or how a finite state automaton can implement an automaton with many more states e.g. using sparse arrays, or how a finite machine can essentially implement an infinite machine, e.g. using a stack).
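The last of those distinctions can be made concrete with a small sketch (in Python; the example is illustrative, not part of the original posting): a finite program equipped with a stack can recognise arbitrarily deep bracket nesting, which no fixed finite-state automaton can do, because the stack supplies an unbounded number of effective states.

```python
# A finite program that implements an (in principle) infinite machine:
# a recognizer for balanced brackets. The program text is finite, but
# the stack gives it unboundedly many distinguishable states, so no
# finite-state automaton can match it on all inputs.

def balanced(s: str) -> bool:
    stack = []
    pairs = {')': '(', ']': '['}
    for ch in s:
        if ch in '([':
            stack.append(ch)                  # remember an open bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                  # unmatched or wrongly nested
    return not stack                          # every bracket was closed

print(balanced('([()[]])'))   # True
print(balanced('([)]'))       # False
```

However deeply the brackets nest, the same few lines of code cope; the "extra states" exist only in the growing stack.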

We may need to develop new kinds of mathematics to cope with the complex dynamics of information processing systems.

15. Sociology:

In order to learn about how factions and in-groups form and interact, so that we can more easily detect when we are acting as members of such groups and behaving more like territorial defenders than good seekers after knowledge.

Social sciences can also provide knowledge of large scale virtual machines containing social/political/economic entities and relationships.


I can't claim to be expert in all those fields, as I have been studying for less than 50 years. But I know enough about most of them to see how they are all required for people who want to understand brains and minds.

NOTE: Why is software engineering so important?

Because one of the major advances in the twentieth century was our discovery that besides machines that manipulate matter or energy there are also machines, both natural and artificial, that manipulate information (not in the Shannon/Weaver sense but in the sense in which information involves semantic content -- i.e. it is about something.)

Long before we discovered that, biological evolution had created myriad varieties of information processors. Most scientists are not trained to think about information processing systems and are therefore seriously handicapped in theorising about organisms. In contrast, people doing various kinds of software engineering and AI have gained considerable experience in specifying, designing, implementing, testing, debugging, analysing, describing and explaining a steadily increasing variety of complex information processing systems, though not yet any as complex as human minds. Interaction between the two communities is desirable.

In particular, software designers have had to learn how two (or more) very different kinds of reality (with very different ontologies) can co-exist and interact causally in a complex system, namely virtual machines and physical machines.

For example, a virtual machine containing abstract interacting entities such as

symbols, images, numbers, sentences, paragraphs, fonts, spelling checkers, rules, rule-interpreters, plans, proofs, goals, intentions, priorities, constraints, algorithms, etc.

can be fully implemented in an entirely different sort of machine that is completely describable in the language of the physical sciences

(involving wires, transistors, currents, voltages, wave-forms, geometrical and temporal relationships, atoms, molecules, chemical processes, quantum events, etc. etc.)

Note: outsiders sometimes think of a virtual machine as a program, a textual object. Certainly text can specify a virtual machine, just as it can specify a physical machine. But text is passive, whereas a running virtual machine can make an airliner land safely, find solutions to mathematical problems, handle incoming and outgoing email, re-organise a disk filing system, control a chemical plant....

So virtual machines are not just passive collections of abstractions, like propositions, proofs, definitions, plans, rules etc. Such things could not land an airliner. However, combined with running interpreters, schedulers, event handlers, garbage collectors, compilers, mail servers, web servers, firewalls, timers, device controllers... they can do many things.
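A toy illustration of that contrast (a hypothetical Python sketch, invented for this note): the rules below are passive data structures that achieve nothing by themselves; only when the interpreter `run` animates them does behaviour result.

```python
# Passive rules (mere data) plus a running interpreter. The rules are a
# toy production system: each rule is a (condition, action) pair over a
# set of facts. Names and rules are invented for illustration.

rules = [
    (lambda facts: 'rain' in facts and 'umbrella' not in facts,
     lambda facts: facts.add('umbrella')),
    (lambda facts: 'umbrella' in facts,
     lambda facts: facts.add('dry')),
]

def run(facts, rules):
    """Fire every applicable rule repeatedly until no rule changes anything."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                before = len(facts)
                action(facts)
                changed = changed or len(facts) != before
    return facts

print(run({'rain'}, rules))   # the rules, once interpreted, produce effects
```

Printed out on paper, `rules` could cause nothing; combined with the running interpreter, the same structures make things happen.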

Virtual machines in computers or brains may contain abstract, non-physical entities, but that does not prevent them having causal powers, just as other more familiar virtual machines do, e.g. economic inflation, unemployment, ignorance, prejudice, generosity, kindness, greed and superstition are all components of powerful socio-economic virtual machines. They can all produce effects.

Reportability

If minds are complex virtual machines in which many concurrently active information processing mechanisms interact, then some of what has been said in previous discussions about the reportability requirement takes no account of what we know about virtual machines.

An experienced software engineer can easily produce a system in which a complex virtual machine is running whose externally observable behaviour is wholly incapable of providing ANY evidence as to what's going on. More commonly, systems are produced whose externally visible behaviour reveals only a small subset of what is going on in the virtual machines producing that behaviour.

(Hence the extreme difficulty of debugging some systems! Many Microsoft users have suffered from this.)
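A minimal illustration (an invented Python sketch): the function below builds up rich internal state -- a table of primes -- but its only externally observable output is a single uninformative token, from which none of that internal structure could be recovered.

```python
# A running system whose externally observable behaviour reveals almost
# nothing about its internal virtual machine. Internally it computes a
# table of primes by trial division; externally it reports only 'done'.

def busy_but_silent(limit=1000):
    primes = []
    for n in range(2, limit):
        # n is prime iff no smaller prime p with p*p <= n divides it
        if all(n % p != 0 for p in primes if p * p <= n):
            primes.append(n)
    # The rich internal state (the list of primes) never reaches any
    # output channel; an external observer sees only this:
    return 'done'

print(busy_but_silent())   # prints: done
```

No amount of watching the output channel would tell you whether the machine was computing primes, sorting images, or doing nothing at all.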

The requirement for behavioural evidence for mental states and processes comes from out-of-date empiricist philosophy. A more sophisticated philosophy of science allows far more subtle and indirect relations between evidence and theory, as frequently arose in the history of physics. (E.g. Imre Lakatos (1970), Criticism and the Growth of Knowledge.)

Premature rejection of theories for lack of evidence, or even for the impossibility of direct evidence, is not good science. (When rival theories are both consistent with the empirical facts it may take decades, or longer, before the theories and data are developed enough for one theory to be shown to be better than the other.)

An undetectable virtual machine could be proving sophisticated mathematical theorems, or generating and interpreting huge image arrays, without being connected to any output devices capable of reporting what's going on.

Even if there are input/output devices enabling an external observer to probe some of what's happening, their bandwidth may be grossly inadequate to the task of reporting everything going on within the virtual machine. It could even be physically impossible to get sufficient bandwidth without perturbing the internal mechanisms.

The alternative approach of trying to find out what's happening by opening up the system and observing the processes occurring at the physical implementation level may also be inadequate, for instance because describing the virtual machines requires the use of a different ontology from that of the physical sciences.

Except in simple cases, it is not generally possible to read off what the high level ontology in a running virtual machine is just by studying patterns in the physical system that implements that virtual machine. Decompiling is possible only in the very simplest cases. In general an astronomical combinatorial search will be required in order to find any useful high level description, along with high to low mapping rules, consistent with the measurable physical traces over an extended period.
Compare: M. Scheutz, 'When physical systems realize functions...', Minds and Machines, 9(2), pp. 161-196, 1999.

The system itself need not know what it is doing: it may lack mechanisms for self-observation that would be capable of internally recording everything that is happening. (In fact recording everything including the recording processes would lead to an infinite regress, which is physically impossible to implement.)

Even a system that records everything at one level of detail may not contain descriptive apparatus using an ontology that would enable it to describe itself at another level. E.g. internal monitoring might describe every bit-level operation yet be incapable of detecting that at a higher level there is a process searching for a proof of some mathematical theorem.

(Compare the brains of most animals, young children, and even adult humans. Humans would lack the ontology to describe all their information-processing states even if they had physical circuits to monitor them. We get along for normal purposes with crude approximations. These have a habit of being imported unwittingly and uncritically into scientific discussions.)

Even if a system can describe some of its own virtual machine processes, that might be a result of a self-bootstrapping process in which it starts by recording low level events and then induces a high level ontology for describing its own VM processes. It need not be possible for such a system ever to convey to external observers what its ontology for self-description is. Whatever it says we may misunderstand because our ontology for mental states is different. This kind of radical incommunicability of mental contents does not imply the non-existence of those contents, except in a simplistic empiricist philosophy.

(Even if we can create a precise mathematical model of what's going on inside the machine, if we cannot run that model on our own brains we'll not grasp how the world, including the inner world, looks to the machine, i.e. what it is like to be that sort of machine. Or a bat.)

However, we may be able to formulate partial descriptions of what's going on provided that we have a good meta-theory for types of virtual machine contents, which we can use to generate potential explanatory theories. Generating such theories will require extreme creativity (like physicists building theories about the hidden nature of physical reality -- which often requires creative extension of the explanatory ontology, sometimes expressible accurately only with the use of mathematics).

Theory generation cannot be done by simple induction from lots of observations, even when detailed observations are possible, which they typically will not be.

If we are presented with an externally observable but previously unknown information processor (a mouse, a chimp, a human infant, an adult from another culture, a colleague from another discipline) we can attempt to guess what the appropriate ontology is for describing its virtual machine and then invent a way of mapping that onto a physical implementation (subject to known constraints on implementation mechanisms). We can then check whether the physical observations and observed interactions with the environment are consistent with the guess.

But when theory and proposed implementation are not consistent with observed data that does not disprove the high level theory for there may be a type of implementation (mapping between the levels) that we have not yet thought of. (Experienced software engineers who have not used AI programming languages sometimes find the things done in AI systems inconceivable: e.g. a program recompiling part of itself while it is running. Some computer scientists even think it is wicked to teach AI languages because they allow so much freedom to the programmer!)

Neural correlates of consciousness:

People who don't understand these issues often assume that there must be simple correlations between the virtual machine (e.g. mental) entities, sub-mechanisms, events, or processes, and components or features of the physical machine (like assuming there must be NCCs).

However we now know that the existence of (narrowly defined) NCCs is not generally a requirement for the physical implementation of an abstract machine.

A simple example is the implementation, using sparse array techniques, of an array containing more components than there are physical particles in the universe. This can be done on your desktop computer.
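For instance (a hypothetical Python sketch): a dict-backed sparse array can present 10**100 addressable cells while physically storing only those that differ from a default value, so almost none of its virtual components have distinct physical correlates.

```python
# A sparse "array" with 10**100 cells -- far more than the number of
# particles in the observable universe -- implemented as a dict that
# physically stores only the cells holding non-default values.

class SparseArray:
    def __init__(self, size, default=0):
        self.size = size
        self.default = default
        self.cells = {}                      # only non-default cells exist physically

    def __getitem__(self, i):
        if not 0 <= i < self.size:
            raise IndexError(i)
        return self.cells.get(i, self.default)

    def __setitem__(self, i, value):
        if not 0 <= i < self.size:
            raise IndexError(i)
        if value == self.default:
            self.cells.pop(i, None)          # reclaim the physical storage
        else:
            self.cells[i] = value

a = SparseArray(10**100)
a[10**99] = 42
print(a[10**99], a[7])   # prints: 42 0
```

Every one of the 10**100 virtual cells is readable and writable, yet at any moment only a handful have any physical realisation at all.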

More subtle examples include dynamically evaluated structures, recursive structures, relocatable virtual memory components, entities with distributed implementations, and multi-level implementations (a hierarchy of virtual machines).

For example, it's easy to create a virtual machine in which an object X contains objects P, Q, R, where P contains X, Y and Z. I.e. X is a component of an object, P, which is a component of X. However this is not possible if X and P are physical objects. Thus there will not be a mapping from virtual to physical components that preserves containment relations.
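This is easy to demonstrate (an illustrative Python fragment, not from the original): two virtual objects can each 'contain' the other, a relation no pair of physical containers can stand in.

```python
# Mutual "containment" in a virtual machine: X contains P while P
# contains X -- trivially constructible here, physically impossible
# for real boxes. The objects are invented for illustration.

x = {'name': 'X', 'contains': []}
p = {'name': 'P', 'contains': [x]}   # P contains X
x['contains'].append(p)              # ... and X contains P

# Each object is now a component of the other:
assert p in x['contains'] and x in p['contains']
```

Any mapping from these virtual objects to disjoint physical regions would have to make each region enclose the other, so no containment-preserving mapping exists.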

Anyone who expects to find neural correlates of consciousness may be as misguided as people assuming that all virtual machine entities and processes in computers have consistent physical correlates.


The ideas I have summarised (probably inadequately) would have been metaphysical nonsense a century ago.

Now they are commonplace among sophisticated engineers who use them in building, designing, debugging, testing, analysing and deploying complex virtual machines. (For instance you are using such virtual machines in composing, sending and receiving email.)

We had better learn how to use these ideas, along with deeper more powerful versions yet to be invented, in order to build good theories of mind and the relation between mind and brain.

Or even to ask good questions.

For more on this, including how to deal with objections to multi-level causation, see this tutorial on philosophy of AI (co-authored with Matthias Scheutz):
http://www.cs.bham.ac.uk/research/projects/cogaff/ijcai01

and this paper on architecture-based conceptions of mind
http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html#57

We need a far better understanding of the space of possible virtual machines, and how they occur in different kinds of biological systems.

When we have a good idea about what sorts of things can exist we can address issues about how to test for their existence. Premature concern with testability can stifle scientific creativity and lead to shallow theories.

When we understand the space of possible virtual machines better we can then refine and extend our ideas of consciousness to make them more precise in different ways (relative to different architectures).

We can then ask precisely which kinds of consciousness exist in flies, crows, newborn infants, adult humans in different cultures, and robots of the future, and answer this by investigating which sorts of virtual machines they contain.

If you are worried about how virtual machines that don't manifest all their details in behaviour could be selected by evolution, just consider that if something provides biological advantages
(a) it may have all sorts of side effects that have nothing to do with those advantages,
(b) it can produce those advantages by going through very complex processes whose individual steps need not independently produce biological advantages,
(c) it may solve a problem for which there are multiple different solutions, so that the mechanism used cannot be read off the solution.

I'll put these notes online here http://www.cs.bham.ac.uk/research/projects/cogaff/misc/consciousness-requirements

and will add notes and comments (including critical comments) later, below.

Aaron
===
Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, UK
EMAIL A.Sloman AT cs.bham.ac.uk
PAPERS: http://www.cs.bham.ac.uk/research/cogaff/
FREE TOOLS: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
TALKS: http://www.cs.bham.ac.uk/~axs/misc/talks/
FREE BOOK: http://www.cs.bham.ac.uk/research/cogaff/crp/


Comments and replies to comments

19 Jan 2003: Reply to Mark Jonathan Horn

22 Jan 2003: Reply to Stan Klein's question about qualia.

Is it science or just philosophy?

Patrick Wilken posted a comment on this in which he wrote:

With all due deference to the virtue of machine minds, this seems to be one of those endless "I said, you said, she said..." discussions. If we can't cash out in concrete terms how virtual machine minds give real insight to the working of human brains (esp., consciousness) then I think we should move this discussion off list.

This provoked George McKee (who seems largely to agree with my two postings on virtual machines), to post a reply to Patrick which started:

I can try a short answer. Virtual machines represent a class of architectural structure for mental processing that is not easily derivable from current models of brain functioning. Yet from behavioral evidence such as high-level symbol-manipulation tasks, virtual machines are clearly one of the dynamic architectural structures that brains are capable of supporting.

His response saved me the task of repeating a number of points about the biological relevance of virtual machines.

Jeff Dalton remained unconvinced and argued

But virtual machines do not interact with the host machine, nor are they linked with it, as if they were some kind of separate entity.

VMs are implemented via physical changes in the host machine (in its disks, main memory, etc). In effect, you get a different physical machine, just as you would if you "hard-wired", but by changing the state of a general-purpose machine instead.

and continued in a related vein.

In this he is implicitly stating the identity theory of the relation between virtual and physical machines.

I'll respond in more detail shortly. The bulk of the answer is in the Tutorial on Philosophical Foundations of AI that Matthias Scheutz and I presented at IJCAI01.