[This file may be converted to HTML later]

From Aaron Sloman Wed Jan 22 05:04:51 GMT 2003
To: PSYCHE-B@LISTSERV.UH.EDU
Subject: Re: Memories are made of this (and a virtual machine question)

Stan aimed a question at me. Its answer is quite complex. I don't know if I can make it clear and convincing, or even just clear, but I'll try.

> Speaking of metaphysical issues I have one for Aaron. What evidence
> do you have that virtual machines tell us about the qualia (feels)
> aspect of consciousness. I think that is the aspect that we have been
> discussing.

Individual virtual machines produced by this or that software engineer won't tell us anything specific about qualia or consciousness. Likewise a virtual machine that proves algebraic theorems will probably not tell you much about a virtual machine that controls a chemical plant or an aeroplane's automatic landing system, or one that paints pictures, like the AARON program of Harold Cohen. (You can download it or watch it paint from a distance here:
    http://www.kurzweilcyberart.com/ )

However, learning about virtual machines, including how to design them, debug them, document them, extend them and criticise them, exposes you to powerful new ways of thinking about processes, mechanisms and causation. We can then combine these new ideas and techniques with information gained from lots of other disciplines.

Of course, ideas concerning virtual machines are not completely new. Freud's id, ego and superego were virtual machines, not physical machines. (Perhaps he thought some aspects of the id were physical: I don't know.) Long before Freud there were philosophers, logicians and others who speculated about relations between physical and non-physical machines. For instance, Aristotle thought of souls (minds) as somehow intimately linked to the body: the soul is the form of the body, but not all bodies had souls of the same kind, since only some could do reasoning, have beliefs, etc.
    http://azaz.essortment.com/aristotlesoulp_rmeb.htm

In between we get Shakespeare saying that there is no art to find the mind's construction in the face, which is true in general, even though he allowed Lady Macbeth to chide her husband for letting others see his mind (especially his guilt) in his face:

    'Your face my thane is as a book, where men may read strange matters'

And when the great psychologist wrote:

    Love is not love which alters when it alteration finds

he was not talking about brains (what did Shakespeare know about the amygdala?) but about a virtual machine in which emotional states and attitudes can sometimes be changed on the basis of new information (finding alteration in a person can change your attitude to that person), even though there are some virtual machine states that are not so easily dislodged, e.g. love -- like the poor sods who go on loving the one who beats them up. (Powerful attractors in a state space? But it will have to be a state space for a machine whose states include semantic contents.)

There have also been economists, historians, anthropologists, etc. who study social, economic and political virtual machines and their interactions. I guess entomologists are another case. So, as someone else pointed out, people have been studying virtual machines of many kinds for centuries past.

What is new (in the last half century) is that whereas

- previously people were studying 'from the outside' very complex systems whose character was very hard to understand and whose relationship to the underlying physical reality was a source of many unsolved puzzles (e.g. about how non-physical causes can produce physical effects),

- now we can design and build and debug and modify and completely explain a class of virtual machines where things are very much clearer, including VMs that can do things that previously no artificial machine could do, only humans (or other animals, in some cases).
What is more, we can use these simple cases to refute incorrect theories that have confused thinking about the more complex virtual machines, e.g. theories which claim that every component of a virtual machine must correspond to a physical part of the implementing physical machine, and theories that claim that virtual machine entities and events cannot have physical effects, since they assume that only physical causes can have physical effects. (A calculation or piece of reasoning based on information gained through sensors can cause a virtual machine controlling a chemical plant to send a signal that closes a valve, or increases the speed of a pump, for instance.)

But more importantly, we have begun to discover some of the huge variety that is possible in virtual machines, enabling us to explore richer and richer variants, including hybrid virtual machines with complex architectures that include quite different sub-machines: for instance, machines combining operations on discrete structures (trees, graphs, lists of words forming sentences, equations, etc.), machines that operate on image arrays that are treated as samples of a continuous image, and machines involving highly concurrent networks with continuously varying levels of activation or connection strengths (as in various kinds of neural nets).

Thus, instead of being confused amateurs studying virtual machines that are almost completely beyond our comprehension, we are developing professional concepts, formalisms, tools and theories (including mathematical theories) that enable us to construct ever more sophisticated virtual machines that we don't merely talk about but actually build, and demonstrate working, and in the process discover their limitations and get ideas about further requirements and further design possibilities for meeting those requirements.
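The valve-closing example can be sketched in a few lines of code. This is just my own toy illustration (the class and names are invented for this message, not anyone's actual plant-control software): an event in the virtual machine -- a comparison of sensor-derived information against a limit -- causes a physical effect via the machinery that implements the machine.

```python
class PlantController:
    """Toy virtual machine controlling an imaginary chemical plant.

    Events in this virtual machine (comparisons, decisions) cause
    physical effects: the returned signal would drive an actuator.
    """

    def __init__(self, pressure_limit):
        self.pressure_limit = pressure_limit
        self.valve_open = True          # mirrors the state of a physical valve

    def step(self, pressure_reading):
        # A virtual-machine event: reasoning over sensor information.
        if pressure_reading > self.pressure_limit and self.valve_open:
            self.valve_open = False     # the decision, a non-physical event...
            return "close_valve"        # ...emits a signal with physical effects
        return "no_action"

controller = PlantController(pressure_limit=8.0)
print(controller.step(5.0))    # no_action
print(controller.step(9.5))    # close_valve
```

Nothing mysterious happens here, of course: the point is that it is entirely natural to say the *calculation* caused the valve to close, even though the calculation is not itself a physical part of the computer.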
Unfortunately some people who have studied Turing machines, or who have learnt to write programs in Fortran, or have read an introductory book on expert systems, think they therefore understand all there is to know about virtual machines that can be run on computers. Nobody understands all there is to know about them: the subject is still in its infancy (and so-called proofs of their limitations are based on false assumptions).

Moreover, most of the interesting things to be learnt are not just *general* principles relevant to all virtual machines, but what the *specific* requirements are for virtual machines with specific human capabilities. E.g. in designing AARON (no relation) Harold Cohen had to think very hard, over many years, about how he experiences shapes and relationships between shapes, and later on how he experiences colours. (For many years he had to add colours by hand to the pictures produced by the machine.) Of course, we can't say that his program has human experiences, especially aesthetic experiences, but it must have some primitive grasp of spatial structure and relationships, including things like relations between human body parts, and flowers, tables, doors, etc., in order to be able to assemble the components of the pictures it makes.

Likewise, some of my colleagues are trying to develop virtual machines that can find mathematical proofs more successfully than existing theorem provers, and in order to do that they have to reflect on their own experiences in searching for proofs, which requires far more than shallow knowledge of the low-level syntactic structure of legal proofs. In particular, they are trying to give the machine a kind of intuitive grasp of the high-level structure of proofs, so that the machine can first sketch out abstract proofs (proof plans) and then try to fill in the details, sometimes even using geometrical shapes as models, just as human mathematicians sometimes do (e.g.
proving that the sum of the first N odd numbers is a perfect square, by thinking of ways of arranging dots in square arrays).

I don't know whether AARON works out a sketchy plan for its picture before working out where the detailed lines go, but that ability to move up and down between different levels of abstraction is a feature common to a lot of human thinking. So replicating that would be part of the task of a machine that shared qualia with a mathematician: reasoning qualia?

However, the notion of qualia that has come out of philosophy and been uncritically accepted by so many scientists is too ill-defined for it to be clear what sort of virtual machine would have qualia. People can't even agree as to whether humans have them, or whether house-flies have them -- an indication that the notion may involve conceptual confusion.

I suspect that the deepest theory we'll ever have about qualia will be a theory that tells us how to build robots with virtual machines so like those of humans that the robots go through the very same virtual machine states and processes that led philosophers to invent the notion of qualia, feels, what it is like to experience something, etc. That's still a long way off, as the task is so unclear.

However, a possible line of development flows from the observation that humans can switch attention between all sorts of things. This might be implemented in a virtual machine that selects different kinds of information to process and processes it in different ways: e.g. switching between attending to the door and attending to the window, and, while attending to the door, switching between attending to its shape, its colour, its texture, its style, etc. (Virtual Machine Attention System: VMAS).
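To make the VMAS idea concrete, here is a deliberately minimal sketch (the class and method names are my own invention for this message, not a proposed design): a machine that selects which object and which aspect of it to process, records a trace of its own attending, and can then turn attention onto that trace -- the crude beginning of 'inner' attention.

```python
class VMAS:
    """Toy Virtual Machine Attention System.

    Outward attention selects an object and an aspect of it to process;
    inward attention examines the record of the machine's own processing.
    """

    def __init__(self, scene):
        self.scene = scene        # maps object -> aspect -> value
        self.trace = []           # history of attending, for inward attention

    def attend(self, obj, aspect):
        # Outward attention: select one kind of information and process it.
        value = self.scene[obj][aspect]
        self.trace.append((obj, aspect, value))
        return value

    def attend_inward(self):
        # Inward attention: the object attended to is the machine's
        # own recent processing, not anything in the environment.
        return list(self.trace)

scene = {"door": {"shape": "rectangular", "colour": "green"},
         "window": {"shape": "square", "colour": "clear"}}
vmas = VMAS(scene)
vmas.attend("door", "shape")
vmas.attend("door", "colour")
vmas.attend("window", "shape")
print(vmas.attend_inward()[-1])    # ('window', 'shape', 'square')
```

Obviously nothing here has experiences: the point is only to show the architectural move -- the same selection-and-processing machinery pointed sometimes at the environment and sometimes at intermediate results of the machine's own processing.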
Suppose you built a robot that could not only move around and see things using something like TV cameras, but, using VMAS, could also switch its attention around: sometimes attending to past events, sometimes thinking about what may happen in the future, sometimes looking at the shapes of objects, sometimes attending to their colours or textures. Suppose that it could also turn its attention *inwards*, for instance noticing what it was thinking about, or remembering how it had thought about this topic previously, or examining not the external objects perceived visually but some of the internal intermediate results of visual processing.

Such a robot might then notice that not only do things in the environment change, e.g. if they are moved, painted, broken, etc., but even when nothing in the environment changes something can change in its perception of the environment: e.g. where experienced edges of objects meet, or how acute the angle appears where two edges of a table seem to meet, or whether one thing feels warmer than another to the touch. Some of these are facts about visual appearances that humans don't notice until they are trained to draw and paint objects realistically. The same might be true of a robot: it might go through life never noticing the non-rectangularity in the experienced shape of a table top (as a young child apparently doesn't).

But if the robot does manage to switch its 'inner' attention onto those aspects of how it perceives external objects, it may start thinking: there is something in me which is the content of my sensory experience, and which is distinct from the external objects that stimulate me to have those contents. The more intelligent robots will short-circuit a few hundred years of philosophy and reinvent most of the philosophical theories about consciousness, qualia, the relation between mind and body, whether mental events can cause physical events, and whether they have free will.
However, producing robots that are capable of replicating the internal states and processes that have led to human theories and debates about consciousness will require an information processing architecture that is capable of supporting all those states and processes. At present we don't really know what that architecture will be like. Some of us have first-draft speculations, e.g. the H-Cogaff architecture described on my web site, and the similar but slightly different architecture described in Minsky's draft book, The Emotion Machine, on his web site:

    http://web.media.mit.edu/~minsky/

At this stage, besides the task of learning how to design sufficiently complex and realistic virtual machines and show how they might be implemented in brains, we have the additional and possibly harder task of analysing the *requirements* for those machines. It's hard for many reasons, including the fact that we don't know, and can't easily find out, what's actually going on in human virtual machines (for the sorts of reasons that have been mentioned previously, including limited bandwidth for external communication, and also restrictions on internal self-monitoring and self-reporting). Moreover, the enormous versatility of human virtual machines means that almost anyone studying them will miss out on a lot. But different investigators, especially investigators from different disciplines, can learn different things, and we can pool the results to begin to build up a list of specifications for the design of a machine with human-like internal (virtual machine) processes.

Of course, some people (and some robots) will argue that even when we have a complete or nearly complete set of specifications, and have built a working system that meets the specifications, not only in its externally observable behaviour but also in its internal virtual machine operations, it is still missing something that humans (or really intelligent robots) have.
When the virtual machines in our robots go through processes of attending inwardly (using VMAS) and reasoning about those processes, and those lead them to the conclusion that our virtual machine design could not possibly explain what is going on inside *them*, then we'll have good reason to think our design is at least approximately right as a theory of how human minds work (until a better theory comes along).

That will still leave the problem of explaining in detail how the virtual machine is implemented in brains. That is not the same thing as finding neural correlates of bits of virtual machines: in general the implementation of virtual machines will be more holistic. But that's another story.

I'll attach this message to
    http://www.cs.bham.ac.uk/research/cogaff/misc/consciousness-requirements/

Aaron