From Aaron Sloman Sat Aug 17 23:49:23 BST 1996
To: PSYCHE-D@IRIS.RFMH.ORG
Subject: QM in Stapp&Sarfatti vs Hameroff and Penrose

After Stan Klein's reaction to my comments on the possible explanatory
role for quantum mechanisms, he and Jack Sarfatti had some private
correspondence, and then Jack circulated a comment to a fairly wide list
of email addresses, suggesting that Henry Stapp had already
*established* that no classical explanation of consciousness and qualia
is possible.

I don't think Henry has established any such thing.

I have not read his book but I have read some of his articles at
    http://www-physics.lbl.gov/~stapp/stappfiles.html
and I have corresponded with him.

My impression is that his claims about the impossibility of such an
explanation are primarily based on the argument that somehow it follows
from classical physics that there can be no functional role for
consciousness. From this it would follow, for example, that there could
be no evolutionary pressure for the development of consciousness.

The argument is suspect since

(a) All the arguments I have seen that claim there is no causal
functional (e.g. biological) role for consciousness are either invalid
(usually because of a failure to look closely enough at the actual
capabilities involved in being conscious, noticing, attending,
perceiving, deciding, etc.) or else follow from an explicit or implicit
stipulative definition of consciousness (or qualia) that *trivially*
makes them incapable of causally interacting with anything, in which
case it is far from obvious that anything corresponding to THAT
definition exists in reality.

(People say "yes it does, I am directly aware of it". But what they are
reporting there does have effects, e.g. their reports. There are
philosophical arguments to show that attempts to identify classes of
occurrences with no causal powers at all are ultimately incoherent. But
I admit these are not knock-down arguments -- some people simply reject
them.
But they can't expect the rest of us to pay much attention to new
scientific theories based on the claim that they can introspect events
with no causal powers.)

(b) All the arguments that have been put forward by philosophers until
very recently which purport to show that consciousness (and/or qualia)
cannot have a functional role or causal consequences are based on
premises that have nothing to do with whether classical physics is true
or false. E.g. the argument that different colour qualia might be
swapped without that being noticed, or the arguments that zombies are
conceivable.

There's a further oddity in that it seems to me that some of the QM
consciousness theorists start by arguing from the causal independence of
consciousness to prove that classical physics can't account for
consciousness and yet almost in the same breath bring in consciousness
as part of the *dynamics* of their shiny new version of quantum
mechanics, which presupposes that consciousness is NOT causally
disconnected. I am not sure Henry falls into that inconsistency. I am
also not sure he doesn't!

He also uses another form of argument, namely that classical physics
does not entail the existence of consciousness, and therefore we need
non-classical physics to explain consciousness. But that is shown to be
fallacious by the fact that there are events and processes that can
occur in a virtual (or abstract) machine (e.g. events like re-formatting
a paragraph in a word processor, or building a parse tree in a compiler,
or taking a pawn in a computer chess game) which is implemented in a
computer working only on classical principles.

E.g. he wrote in "The Evolution of Consciousness", one of the most
recent papers in his web site:

> Rational analysis of this problem hinges on one central fact:
> classical mechanics does not entail the existence of consciousness.
> .....
> There is nothing within the classical physical principles that
> provides a basis for deducing how a physical system ``feels''---for
> deducing whether it is happy or sad, or feels agony or delight.

The fact that you cannot on the basis of classical physics prove that
paragraphs get reformatted, that pawns are taken, etc. does not mean
that you need non-classical physics for such events to occur.

This point is part of a more general point that all sorts of virtual
machines involving events and processes of a certain type T1 might be
capable of being *implemented* in a lower level virtual machine of type
T2, even though there is no way of proving on the basis of the laws of
T2 that the events of T1 must occur.

(By the way "virtual" does not contrast with "real". Virtual machines
are real enough, and events within them can have real causal powers.
They help us produce nicely formatted documents and they control the
landing of planes, for example.)

This general anti-reductionist point arises at many levels. E.g. from
the facts of physics and of human psychology, whatever they are, you
cannot prove that a certain sort of social system must exist. Yet social
systems are *implemented* in sets of human beings (plus the relevant
physical and biological environment, etc.).

[A point of clarification: The word "implementation" is here being
stretched so that it does not require an implementor, as has already
happened with the word "design". An implementation is a relationship
between two or more classes of designs, neither of which need have a
designer. A working implementation ultimately depends on a physical
layer, but that doesn't mean that, for example, physics can be used to
derive the laws of what's implemented. From physics you can't prove
anything about what's legal in a chess machine. You can change the rules
of the game without changing the laws of physics.
There are more subtle points about deriving properties in a *particular*
implementation, which I'll ignore for now.]

If conscious events and processes, including those involving qualia, are
remotely like events in a high level virtual machine, then their
existence cannot be derived from *either* classical *or* quantum
physics. (The "laws" of consciousness in different animals might be
different, even though they are all implemented in the same physical
world.)

A further argument that is used (by Henry and others) is that classical
physics is dynamically complete, and therefore conscious events could
not have any causal powers. It is then assumed that by putting
consciousness deep into the dynamics of quantum physics this can be
overcome. But that's a solution to a non-problem.

The fact that classical physics is complete does not prevent events in a
word processor from having causal powers that lead eventually to a
change in format of a physical object, e.g. a printed page or a page
displayed on a screen. Similarly the events in virtual machines in a
plant control system can cause a valve to open or shut, leading to a
change in chemical and physical processes (and possibly a disastrous
change if the software has a bug!).

(Incidentally I've argued in my paper on actual possibilities
    http://www.cs.bham.ac.uk/~axs/misc/actual.possibilities.html
that in a version of classical physics that rejects the assumption of
infinite precision of physical states, the dynamics may not be complete.
The assumption of infinite precision is so odd and implausible that I am
inclined to think even classical physics is radically indeterministic.
But I may have got something wrong, and it's not central to the
argument.)

As far as I can tell Jack Sarfatti's approach is very similar to
Henry's: i.e. bring consciousness deep into the dynamics in order to be
able to show how certain events involving consciousness must occur in
physical systems (e.g.
brains) and can affect those physical systems. But you don't need to
make consciousness part of physics to achieve either. You just have to
understand the notion of implementation and the kinds of relationships
that can exist between levels of processes. [NB: this point is neutral
between connectionist and non-connectionist implementations of mental
capabilities.]

There's a further problem in that even if they are right in postulating
these extra elements in the base level dynamical theory of physics, the
use of our ordinary word "consciousness" or anything remotely like it is
totally unjustified.

It's OK to talk about charm, and spin, and super-strings, when people
know that these are not meant to be literal uses of the ordinary words
"charm", "spin", "string", but simply useful mnemonics for physicists.
But it's not OK to use an old word for a totally new sort of phenomenon
and then claim you have a theory about the old phenomena: like claiming
modern physics explains charm in humans.

People find it easier to get away with that sort of (unintentional and
unconscious, I am sure) sleight of hand with "consciousness" because the
noun, as I've pointed out before, has no clear, generally agreed,
denotation, but covers different clusters of phenomena in different
contexts (making ALL one or two line definitions of "consciousness"
worthless). So people can offer their new definitions of consciousness
and not see any clear inconsistency with their own prior usage, because
that was so complicated and indeterminate.
But if the new things postulated are NOTHING like itches, tingles,
experience of red patches, worries about paying the bills, finding
something funny, and all the other kinds of things that are typical
cases of what we are normally talking about when we talk about
consciousness, experience, awareness, etc., then even the vagueness of
the word "consciousness" is no justification for seriously claiming it
is applicable with the same meaning both in ordinary life and in new
quantum theory.

My impression is that Penrose and Hameroff don't make that particular
mistake. I.e. they don't start off by assuming they've got something
they can call consciousness deep down there in the mechanics of all
physical reality. Rather they postulate new sorts of dynamics
characterised independently of the notion of consciousness and try to
use that to show how, in very special physical circumstances, phenomena
with the appropriate structural characteristics can be implemented on
the basis of their mechanics.

I must say that I do not know enough physics and mathematics to be
confident that I have understood what I have read about Orch OR (e.g. in
the Journal of Consciousness Studies): my maths is very old and limited,
and I work at a very intuitive level in trying to grasp these things,
but at that level their approach looks very different. Perhaps one of
the four could comment on that observation?

However, Penrose (though maybe not Hameroff) does apparently want to
make a spurious link between the notion of consciousness and the ability
to see the truth of Gödel formulae. This is spurious for two reasons.

(a) Most conscious people and animals are not able to see the truth of
advanced metamathematical theorems. So the most he can claim is that
mathematicians need quantum gravity mechanisms even if (e.g.) human
infants and chimps don't.

(b) The claim that for a given (omega-)consistent formal system F, its
Gödel sentence G(F) is true is spurious, taken in itself.
The sentence is true only in the so-called "standard" models of F. There
are necessarily also many non-standard models of F in which it is false,
and not-G(F) is true. This just follows from the technical result that
G(F) is not derivable from F. Thus if F is consistent then so is the
system F' defined as F&(not-G(F)). And therefore F' has a model, which
is also a model of F. In this model of F, G(F) is false. By constructing
G(F'), and doing the same with it, and repeating indefinitely, you can
see that there are infinitely many models of F in which G(F) is false.

(There are other problems that are not spurious: e.g. how does a child's
grasp of sets, and of infinity, including the set of natural numbers,
develop?)

I conclude that all the stuff about Gödel and decidability is a complete
red herring when we are trying to understand how chimps, and human
infants, and most adult humans, and perhaps many other animals, can be
aware of events in the environment and in themselves, and can learn and
feel pain and solve problems and do all the other things that the word
"consciousness" can refer to.

Note that even if this criticism is correct it does not imply that
there's anything wrong with Orch OR: I can't see any real connection
between Orch OR and Gödel.

As earlier discussions have indicated there may be open issues about
whether neural systems based on classical physics can account for the
fine details of the capabilities of humans and animals. (Stan Klein has
said yes, others have said no. I don't know enough so I have an open
mind, being a mere philosopher.)

That's a level at which *real* scientific debate can occur: identify the
phenomena to be explained, using as much precision as possible, and then
check (a) whether there's independent evidence for the theory and (b)
whether the theory can actually explain the phenomena (without having to
use handwaving to make the link).
The level of precision required to characterise what needs to be
explained will have to go way beyond what is found in typical
discussions of consciousness, qualia etc. by philosophers and
scientists.

Aaron

=======================================================================

From Jeff Dalton (jeff@aiai.ed.ac.uk) Wed, 21 Aug 1996 17:32:44 BST
Subject: Re: QM in Stapp&Sarfatti vs Hameroff and Penrose
In-Reply-To: Aaron Sloman's message of Sat, 17 Aug 1996 23:49:24 +0100
http://www.ai.sri.com/~connolly/psyche-list/0906.html

> After Stan Klein's reaction to my comments on the possible explanatory
> role for quantum mechanisms, he and Jack Sarfatti had some private
> correspondence, and then Jack circulated a comment to a fairly wide list
> of email addresses, suggesting that Henry Stapp had already
> *established* that no classical explanation of consciousness and qualia
> is possible.
>
> I don't think Henry has established any such thing.
>
> [...]
>
> My impression is that his claims about the impossibility of such an
> explanation are primarily based on the argument that somehow it follows
> from classical physics that there can be no functional role for
> consciousness. From this it would follow, for example, that there could
> be no evolutionary pressure for the development of consciousness.
>
> The argument is suspect since
>
> (a) All the arguments I have seen that claim there is no causal
> functional (e.g. biological) role for consciousness are either invalid
> [...] or else follow from an explicit or implicit
> stipulative definition of consciousness (or qualia) that *trivially*
> makes them incapable of causally interacting with anything, in which
> case it is far from obvious that anything corresponding to THAT
> definition exists in reality.
But Aaron, the argument is not "consciousness has no functional role /
doesn't interact with anything; therefore we need QM". Instead, it
starts from something like this:

  "it follows from classical physics that there can be no functional
  role for consciousness"

(here I'm quoting what you say above). Now we need to get to the
conclusion that we must look to QM. Here's an easy way to get there:

  But consciousness does have a functional role. (Then do modus
  tollens.)

Since you're arguing that consciousness does have a functional role, you
seem to be supporting the (or at least _an_) argument for QM rather than
(as you seem to think) undermining it.

-- jd

From Aaron Sloman Tue Aug 27 23:48:10 BST 1996
To: PSYCHE-D@IRIS.RFMH.ORG
Subject: QM in Stapp&Sarfatti vs Hameroff and Penrose

Having returned from a couple of conferences, I am trying, belatedly, to
catch up. One of the responses to my message of 17th August about the
ideas of Stapp, Sarfatti, Penrose and Hameroff came from Jeff Dalton
(jeff@aiai.ed.ac.uk), who wrote, on Wed, 21 Aug 1996:

> But Aaron, the argument is not "consciousness has no functional
> role / doesn't interact with anything; therefore we need QM".
> Instead, it starts from something like this.
>
> "it follows from classical physics that there can be no functional
> role for consciousness" .....
>
> Now we need to get to the conclusion that we must look to QM.
> Here's an easy way to get there:
>
> But consciousness does have a functional role. (Then do modus
> tollens.)

That would work only if the first premiss were OK. I guess my message
was not clear enough, as I was making too many points at once. In
particular, I dispute this premiss, which Henry Stapp tries to
establish, and which Jack Sarfatti claimed he had established:

(A)
> "it follows from classical physics that there can be no functional
> role for consciousness"

No such thing follows from classical physics.
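For clarity, the structure of the exchange can be set out compactly (my own rendering, not a formalisation from either message; the abbreviations CP and FR are introduced here purely for illustration):

```latex
% A compact rendering of the exchange (my notation, not from the
% original messages).  Let CP = "classical physics is the whole story"
% and FR = "consciousness has a functional role".
%
% Jeff's proposed route is modus tollens:
%   (A)  CP -> not FR     [the disputed premiss]
%        FR               [consciousness does have a functional role]
%   -------------------------------------------------------------
%        not CP           [so we must look beyond classical physics]
\[
  (A)\colon\; CP \rightarrow \lnot FR, \qquad FR \;\therefore\; \lnot CP
\]
```

The inference itself is valid; the whole dispute is over premiss (A).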
I tried to point out that the sorts of arguments that Henry uses in his
alleged proof of (A) are invalid for exactly the same reason as any
parallel argument purporting to show that

(B)
    "it follows from classical physics that there can be no functional
    role for algorithms or datastructures in high level virtual
    machines, such as word processors, compilers, operating systems,
    factory control systems, chess machines, etc."

is invalid.

We KNOW that in principle computers implemented using classical
principles can implement parsers, compilers, operating systems, flight
control systems, factory management systems, word processors, etc., and
we KNOW that in such machines there can be non-physical objects, events
and processes, e.g. algorithms, data-structures, planners,
reason-maintenance systems, image interpretation systems, plan execution
systems, decision making systems, planning processes, recognition
processes, decision processes, inferences, etc., all of which have
causal powers and functional roles, because we made them thus and use
them in making our systems work (e.g. controlling the output of a
chemical plant, or a robot body).

We also know that the existence of these things, like the existence of
consciousness, cannot be proved on the basis of the laws of physics
(classical or quantum mechanical), for the simple reason that there's a
semantic gap: e.g. there are no laws linking the concept of a "pawn" or
"check-mate" or "access-privilege" with concepts of physics, just as
there are no laws linking the concept of "itch", or "decision", or
"belief" or "humiliation", or "personality" with the laws of physics.

The alleged "holistic", "non-local", etc. properties of mental states
that are *supposed* to be incompatible with classical physics are also
features of many high level virtual machines which are perfectly
compatible with classical physics.
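The semantic-gap point can be made concrete with a minimal, hypothetical sketch (mine, not from any of the papers under discussion): a classical program in which "capturing a pawn" is a perfectly real event with causal consequences inside the virtual machine, although no law of physics mentions pawns.

```python
# Hypothetical sketch of a virtual-machine event: "capturing a pawn".
# The event exists, and has causal consequences (board state, capture
# record), at the level of the virtual machine, although nothing in the
# physics of the host computer refers to pawns.

class MiniChess:
    def __init__(self):
        # A toy two-square "board": white pawn on square 0, black on 1.
        self.board = {0: "white pawn", 1: "black pawn"}
        self.captured = []

    def capture(self, frm, to):
        """A virtual-machine event: one piece takes another."""
        attacker = self.board.pop(frm)
        victim = self.board.pop(to)
        self.board[to] = attacker      # attacker occupies the square
        self.captured.append(victim)   # causal consequence: a record
        return f"{attacker} takes {victim}"

game = MiniChess()
print(game.capture(0, 1))   # the event has observable effects
print(game.captured)
```

The physics of the hardware is untouched by any of this; what makes the event a "capture" is its role in the virtual machine, not any physical law.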
Thus we have existence proofs that (B) is false, and I challenge anyone
to show that there is an argument for (A) that does not exactly parallel
invalid arguments for (B).

Unfortunately physicists who wish to discuss things that are really
outside the realm of physics use ill-defined, sloppy concepts like
"consciousness" as if they could be the subject of rigorous proofs, when
there are no concepts capable of playing the required role, for reasons
I've given in previous messages.

Either they should produce a precise mathematically defined concept
(like "spin"), in which case they are not talking about what
philosophers, psychologists, and others are talking about (itches,
visual experiences, beliefs, desires, states of indecision, etc.), or
they should analyse the ordinary familiar concepts, in which case (I
claim) they cannot really prove anything about the links with precisely
defined mathematical theories of physicists. What goes wrong is that
they slide (unwittingly?) between the two.

The premise (A) may be convincing to (a) people who do not know about
layers of implementation and (b) people who think consciousness is
causally disconnected from everything else. But it seems to me to be
false and the arguments in support of it totally unconvincing.

Here's one of Henry's arguments (actually a rhetorical question) from
the 1995 paper (listed below):

| 2.1 Thoughts are fleeting things, and our introspections concerning
| them are certainly fallible. Yet each one seems to have several
| components bound together by certain relationships. These components
| appear, on the basis of psycho-neurological data (Kosslyn, 1994), to
| be associated with neurological activities occurring in different
| locations in the brain. Hence the question arises: How can neural
| activities in different locations in the brain be components of a
| single psychological entity?

Compare my parallel:

| A file-management process (e.g.
checking whether your program can access my files) in an operating
| system has several components bound together by certain
| relationships. These high level virtual machine processes are
| typically implemented in digital processes occurring in different
| locations in a computing system. Hence the question arises: How can
| digital activities in different locations in the machine be
| components of a single virtual machine process?

The answer is to be found in the theory and practice of modern computer
science and software engineering. (It's not a simple answer.)

NB: I am not claiming that the brain does not require quantum mechanisms
(for I suspect it does, but for contingent, not logical, reasons). I am
claiming only that the arguments SO FAR put forward purporting to show
that classical mechanisms are inadequate are based on both:

(a) a shallow and loose analysis of mental concepts (ignoring all their
important properties, i.e. those involving control functions of the
mind), and

(b) apparent ignorance of systems in which there are many layers of
implementation, with quite different ontologies in different layers.
(Other examples are the causal powers of poverty, of injustice, of good
government, of economic inflation, etc., all of which are compatible
with various low level implementation mechanisms.)

I guess I am going to have to find time in the not too distant future to
produce a detailed text-based criticism of papers in this directory:
    http://www-physics.lbl.gov/~stapp/stappfiles.html
including these two, which I recommend people to read:

    http://www-physics.lbl.gov/~stapp/39241-UNABRIDGED.tex
    The Evolution of Consciousness (Unabridged version) (1996)

    http://www-physics.lbl.gov/~stapp/36574.txt
    Why classical mechanics cannot naturally accommodate consciousness
    but quantum mechanics can (1995)

E.g. the former uses this argument:

| I have argued above that classical mechanics does not entail the
| existence of consciousness.
| The reason was that classical mechanics does not contain any
| reference to psychological qualities, and hence there is no way that
| one can deduce from the principles of classical mechanics alone that
| any activity that classical mechanics entails is necessarily
| accompanied by a psychological activity.

Compare my parody:

| classical mechanics does not entail the existence of rules of
| chess. Therefore one cannot deduce from classical mechanics
| that any activity that classical mechanics entails is
| necessarily a game of chess.

Does that prove that classical computers cannot be used to implement
systems that play chess, and which use strategies when they play, obey
rules of chess, can distinguish a drawn position from a lost position,
can detect a threat and try to avoid it, etc.? No it doesn't!

Part of the problem is the futile search for a logical *entailment*
relation between ontological levels. Another part of the problem is the
consideration of too few options. E.g. Henry writes, in the same paper:

| Since classical mechanics is dynamically complete, with respect to
| all the variables with which it deals, namely the so-called
| ``physical'' variables, one has, with respect to the phenomenal
| elements of nature, four options: 1) identify the phenomenal elements
| with certain properties or activities of the physical quantities; 2)
| say that these phenomenal elements are not identically the same as
| any physical property or activity, but are companions to certain
| physical properties or activities, and that their presence in no way
| disrupts the classical dynamics; 3) accept some combination of 1) and
| 2); or 4) accept that phenomenal elements do affect the dynamics,
| rendering classical dynamics invalid.

Which of these options would allow for the existence of strategies,
decisions, inferences, in a chess player implemented in a classical
(non-quantum) computer? Or the existence of a file management process in
a classical computer?
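As a concrete (hypothetical) illustration of that last question: here is a toy "file-management process" of my own devising, whose components (a user table, a permission table, an audit log) live in different parts of the machine's memory yet together constitute a single virtual-machine process, checking an access privilege, with causal consequences of its own.

```python
# Hypothetical sketch: a toy file-management process.  Its components
# are held in different "locations" (separate data structures), yet they
# form one logical process whose outcome has causal consequences (the
# decision, the audit record).  Nothing here requires more than
# classical computation.

users = {"aaron": "staff", "guest": "visitor"}                   # one location
permissions = {"staff": {"read", "write"}, "visitor": {"read"}}  # another
audit_log = []                                                   # a third

def can_access(user, action):
    """A single virtual-machine process assembled from components
    stored in different places."""
    role = users.get(user)
    allowed = action in permissions.get(role, set())
    audit_log.append((user, action, allowed))   # causal consequence
    return allowed

print(can_access("aaron", "write"))   # True
print(can_access("guest", "write"))   # False
```

The question "how can activities in different locations be components of a single process?" has a mundane answer here: the unity of the process is a matter of its organisation and function, not of spatial contiguity in the hardware.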
We know such things can occur and are causally efficacious, as are the
processes in many other computer-based systems that control real robot
bodies or complex machinery.

In fact, none of this vocabulary does justice to the complexity and
subtlety of the relationship between layers in an *implementation
hierarchy*. I had previously assumed that that relationship was widely
understood nowadays, but perhaps I was wrong, and I'll have to produce a
detailed exposition later on.

I had thought it might be enough to produce a few high level pointers to
Henry's articles, and then let people read the articles which present
these invalid arguments. I now realise I was being naive, and perhaps
irresponsible! I'll try to do a more thorough job, later, though it will
take some time.

Aaron
---
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL   A.Sloman@cs.bham.ac.uk
Phone: +44-121-414-4775 (Sec 3711)       Fax: +44-121-414-4281