ABSTRACT OF INVITED TALK BY AARON SLOMAN FOR THE WORKSHOP ON
CONSCIOUSNESS AT ELSINORE, DENMARK, AUGUST 1997

T H E   B R A I N   A N D   S E L F
Copenhagen and Elsinore, Denmark, August 18-24, 1997
====================================================
Details on the web at: http://www.zynet.co.uk/imprint/elsinore
=======================================================================

Invited Speaker at Elsinore Workshop:
Aaron Sloman, Professor of Artificial Intelligence and Cognitive Science
School of Computer Science, The University of Birmingham, B15 2TT, UK
http://www.cs.bham.ac.uk/~axs
A.Sloman@cs.bham.ac.uk

Title: Bridging The Explanatory Gap: "virtual machine functionalism"

Abstract:

This talk has four main themes.

(a) Mere correlations, however well established, explain nothing: a
stronger, deeper connection is needed between matter and mind. However,
since the physical and the mental involve quite different,
definitionally unrelated, ontologies, we cannot expect a mathematical
or deductive connection. I shall try to sketch an intermediate
alternative based on the notion of "implementation", the engineering
equivalent of the philosophical notion of "supervenience". We can
sometimes explain X by showing how it is implemented in Y: a link that
is stronger than correlation but weaker than deduction (a toy
illustration is sketched after point (c) below). X is then a "virtual"
or "abstract" machine. X and Y may or may not be computational
(digital, discrete, serial, symbolically programmed) machines. We
still have much to learn about possible types of information
processing engines and their implementation requirements.

(b) Many of our concepts have hidden complexities that elude naive
reflection or introspection, e.g. "continuity" and "simultaneity".
Likewise our concepts of mental states and processes (including
"qualia") actually refer implicitly to collections of coexisting,
interacting states and processes in powerful "virtual machines", and
to causal relationships between those states and processes. (Kant
expressed this differently: intuitions as involving concepts.
Wittgenstein: the substratum of experience is mastery of a technique.)
In short, a conscious ghost must contain a machine (e.g. a
concept-applying, schema-applying engine). Limitations in our
mechanisms of introspection hide all this and encourage self-delusion,
including the delusion that containing such a machine is compatible
with being a "zombie". Imagining such a possibility proves as little
as imagining the possibility of absolute global simultaneity.
Sometimes deeper understanding reduces what we can imagine.

(c) This form of functionalism is all about internal states and does
not require any external behaviour. E.g. it allows that in principle a
disconnected or disembodied mind of a mathematician could be
passionately concerned with investigations in pure number theory,
constantly exploring conjectured theorems, trying to produce
counterexamples to conjectured non-theorems, searching for more
general, more elegant, and deeper theoretical frameworks, etc. Such a
mathematician, without eyes, ears, fingers, or voice, could be
delighted with successful proofs, disappointed or even depressed on
discovering fallacies in its arguments, hopeful about lines of
investigation that look promising, surprised at unexpected
relationships, frustrated at repeated failures, and so on. Sorrow does
not require physical tears or a drooping mouth. A disconnected
geometer might, however, benefit from internally generated visual
qualia.
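The toy illustration promised in (a), rendered here in Python purely
to fix ideas (the class names and the details are invented for the
illustration; nothing in the argument hangs on them): one and the same
trivial virtual machine, a counter, can be realised in two quite
different underlying forms, while the virtual-machine level
description stays the same.

    # Toy illustration of "implementation": one abstract (virtual)
    # machine, a trivial counter with increment and reset operations,
    # realised in two quite different substrates. The virtual-machine
    # states are characterised by their causal/functional roles, not
    # by the form of the implementation.

    class AbstractCounter:
        """The virtual machine: specified only by what its states do."""
        def increment(self): raise NotImplementedError
        def reset(self): raise NotImplementedError
        def value(self): raise NotImplementedError

    class RegisterCounter(AbstractCounter):
        """Implementation 1: a single integer 'register'."""
        def __init__(self): self._n = 0
        def increment(self): self._n += 1
        def reset(self): self._n = 0
        def value(self): return self._n

    class TallyCounter(AbstractCounter):
        """Implementation 2: a growing collection of tally marks, a
        quite different form realising the same virtual machine."""
        def __init__(self): self._marks = []
        def increment(self): self._marks.append(object())
        def reset(self): self._marks.clear()
        def value(self): return len(self._marks)

    # Both implementations satisfy exactly the same virtual-machine
    # level description, although no description of integer registers
    # deductively entails anything about collections of tally marks.
    for machine in (RegisterCounter(), TallyCounter()):
        machine.increment(); machine.increment()
        assert machine.value() == 2

No analysis of the concept of an integer register yields the concept
of a collection of tally marks, or vice versa, yet each implements the
same abstract machine: the link between the levels is implementation,
not deduction and not mere correlation.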
It's not external connections but an appropriate internal ARCHITECTURE
that makes possible the coexisting, mutually influencing states and
processes that constitute mental states. Though the architecture may
exist in a virtual machine, it will need at some level to be
implemented in (supervenient on) a physical architecture, which could,
however, be totally different in form. (Most philosophers get the
requirements for supervenience wrong.)

(d) Different minds involve different architectures. Not all
architectures suffice for typical human mental phenomena. Although
there are many types of more or less abstract machines (with diverse
architectures) that can be implemented in (supervenient on) physical
machines (e.g. insect minds, spreadsheets, the internet), not all have
the "semantic bootstrapping" ability to interpret themselves as
engines with semantic states. Evolution clearly discovered long ago
the power of such intentional engines and how to implement them in
physical engines. We are now slowly groping towards rediscovery of
such designs, with some way to go, though we can already see how
different architectural layers can coexist in a complex virtual
machine, e.g. reactive, deliberative and self-managing layers (a toy
sketch is given after the note below). There is not a unique design
solution: many varieties of such self-interpreting engines with
different types of powers are possible. Some of them are also
self-transforming, so that they develop and change in significant
ways, e.g. the minds of human infants, and cultures.

When we fully understand points (a) to (d) we shall understand both
what the gap is that needs explaining and how to bridge it. Only
future research can show whether this claim is correct or not.

Note: Not all human mental states are possible in a disconnected
architecture: e.g. successful reference to spatio-temporal
particulars, such as thinking about the Eiffel Tower, requires causal
embedding in the environment containing those particulars. Such states
are not purely mental. There may not be time to discuss the
significance of this point, which leads to tangled philosophical
"twin-earth" discussions.
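The toy sketch promised in (d), again in Python with invented names,
offered only as an aid to imagination and not as a serious design:
reactive, deliberative and self-managing layers coexisting and
causally influencing one another within a single virtual machine.

    # Toy sketch of coexisting, mutually influencing architectural
    # layers in one virtual machine: a reactive layer responds
    # immediately to stimuli, a deliberative layer evaluates options,
    # and a self-managing (meta-management) layer monitors the others
    # and can redirect deliberation. All names are invented for this
    # illustration.

    class Agent:
        def __init__(self):
            self.alarm = False                          # reactive state
            self.current_goal = None                    # deliberative state
            self.self_model = {"recent_failures": 0}    # meta-level state

        def reactive(self, stimulus):
            # Fast and automatic: may interrupt deliberation.
            if stimulus == "threat":
                self.alarm = True
                self.current_goal = "withdraw"

        def deliberative(self, options):
            # Slower and considered: adopt a goal unless interrupted.
            if not self.alarm and options:
                # stand-in for a real evaluation procedure
                self.current_goal = max(options, key=len)

        def meta_manage(self, succeeded):
            # Monitors the other layers and alters their dispositions.
            if not succeeded:
                self.self_model["recent_failures"] += 1
            if self.self_model["recent_failures"] > 2:
                self.current_goal = "re-plan"   # redirect deliberation
                self.self_model["recent_failures"] = 0

    agent = Agent()
    agent.deliberative(["explore", "rest"])  # deliberative layer picks a goal
    agent.reactive("threat")                 # reactive layer interrupts it
    for _ in range(3):
        agent.meta_manage(succeeded=False)   # repeated failure is noticed...
    print(agent.current_goal)                # ...and deliberation is redirected

Even in so crude a sketch the three layers have different time-scales
and different access to one another's states, and the states of each
are what they are partly because of their causal relations to states
in the other layers, which is the point of (b) and (d) above.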