From Aaron Sloman Fri May 16 20:44:09 BST 1997
Message to Henry Stapp and others.

Hello Henry,

Thanks for your interesting comments on my comments. There are a few points I should reply to before I get swamped with other things. Unfortunately this reply is too long...

[Henry wrote]
> Many thanks for your comments. I shall revise my paper to take
> them into account. I did make a brief comment to the effect
> that the community of observers could be enlarged: once one
> goes over to the ontological stance there is the obvious need
> to enlarge the set of systems that can be associated
> with reductions, in order to deal with just the problems you
> raise.

I don't see how this addresses the problem. I suspect I will never understand QM properly, as I don't know enough mathematics and probably won't learn it at my age! But I don't see how any extension that is similar in spirit to your talk of a community of observers will ever be able to cope with the state of the universe before humans, or for that matter any other kind of conscious entity, existed (unless you are tempted by Bishop Berkeley's answer).

In my previous criticism I failed to point out that the problem of allowing unobserved physical processes to exist arises not only before humans or any other conscious agents exist, but also after we are all dead, and also in all the uninhabited nooks and crannies of the universe now.

Penrose, as I understand him from a lecture I attended, attempts to deal with the problems not by postulating a role for consciousness in the dynamics, but simply a role for processes on different scales (e.g. the scales of brains and measuring devices, and perhaps chunks of matter that existed before there were brains or measuring devices). To me that seems much more likely to work out as a coherent theory than bringing something as obscure and ontologically distinct as consciousness into physics.

[Henry]
> Very germanely, you say:

[Aaron]
> > "And I have no problems with the notion that processes in high level
> > virtual machines can be causally efficacious .......--- even though
> > the low level machines in which these virtual machines are implemented
> > are causally complete.
> > ....

My wording was a bit misleading. I should have written "even if" not "even though", for I don't claim they are causally complete.

[Henry]
> But is it reasonable to think that, in the case of human brains,
> there are two parallel levels of causation each of which is really
> COMPLETELY dynamically complete within itself?

NB: I did not mean to imply that conscious processes, or even the collection of conscious and unconscious mental processes, form an internally *complete* dynamical system. They obviously don't. A virtual machine may be affected in many ways by other levels, including having explicit holes for communication with the low level, e.g. a plant control system that takes in sensor readings. Equally, a low level implementation machine can include holes through which higher level machines can intervene: a striking example is a computer that allows high level software to change its microcode dynamically. (There used to be a computer called the Orion that was designed to be reconfigured by its software while running, so that different sorts of tasks could be optimised by using different sorts of machine instructions.)
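To make this two-way traffic between levels concrete, here is a toy sketch in Python (my own illustration, not a model of the Orion or of any real plant controller): a tiny virtual machine in which both kinds of "hole" are explicit, one letting the environment push sensor readings up into the machine, and one letting the running software rewrite the machine's own instruction table.

    # Toy illustration only: a tiny virtual machine with explicit "holes"
    # between levels.

    def make_vm():
        state = {"acc": 0, "sensor": 0}

        # The instruction table plays the role of microcode: a mapping from
        # opcodes to operations. High level code may rewrite it while running.
        table = {
            "add": lambda st, arg: st.__setitem__("acc", st["acc"] + arg),
            "read_sensor": lambda st, arg: st.__setitem__("acc", st["sensor"]),
        }

        def inject_sensor(value):
            # Hole from the low level up: the environment intrudes on the VM.
            state["sensor"] = value

        def redefine(opcode, fn):
            # Hole from the high level down: running software reconfigures
            # the instruction set (the Orion-style case).
            table[opcode] = fn

        def run(program):
            for opcode, arg in program:
                table[opcode](state, arg)
            return state["acc"]

        return inject_sensor, redefine, run

    inject_sensor, redefine, run = make_vm()
    inject_sensor(7)                      # a low level event the VM did not originate
    print(run([("read_sensor", None), ("add", 3)]))   # -> 10
    redefine("add", lambda st, arg: st.__setitem__("acc", st["acc"] + 2 * arg))
    print(run([("read_sensor", None), ("add", 3)]))   # -> 13: same program, altered machine

Neither level is complete unto itself here, yet there is nothing mysterious about the traffic between them.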
I regard it as obvious that the mental machine cannot be causally complete, since if I sit on a pin a new pain starts which cannot be accounted for completely on the basis of the previous state of the mental virtual machine architecture. Likewise my current visual experiences, hunger, sleepiness, etc. etc. Similarly, though poverty is able to cause crime, there's nothing causally complete about the socio-economic level: droughts, volcanoes, meteor impacts, the evolution of new diseases, etc. can all make a huge difference.

Even if you define a virtual machine X mathematically in such a way that it is causally complete in the abstract (e.g. the Prolog or Lisp virtual machine), in any *actual* implementation events can occur in the implementation machine Y that intrude on the virtual machine, e.g. intermittent hardware faults in a chess computer might make it get some calculation wrong, and make a different move as a result (a toy sketch of this appears below). So virtual machines can be kicked sideways by events at the implementation level. (The notorious Pentium floating point bug is a slightly different example, where the implementation went wrong.)

[Henry]
> As I see it, the classical stance is that the smallest scale
> dynamics really is dynamically complete, and that higher-level
> descriptions of human brains (and tornadoes) can be very
> useful, but are not exactly and completely accurate.

There are two points here: (1) the completeness of classical mechanics, and (2) the possible flaws in our accounts of high level virtual machines.

I'll start with (2). There are different sources of inaccuracy in high level descriptions that need to be distinguished:

(a) We could have the theory regarding the architecture and powers of the high level virtual machine wrong, or partly wrong. (That corresponds to the current state of psychological knowledge. I suspect our knowledge of tornadoes is not nearly so gappy, but I am not sure.)

(b) We might have a perfectly good general theory without knowing all the details in a particular case, e.g. we don't know all the data-structures and tuning parameters set up in a particular operating system, and we don't know the detailed conditions affecting a particular tornado.

(c) We have the information, but we don't bother to describe it, since an approximate description is OK for the purpose. (Lots of teaching examples and everyday communication are like that.)

(d) The high level virtual machine is so closely coupled with the implementation machine that there's a vast amount of buffeting and redirection going on, so that our high level descriptions constantly get out of date even if they start off correct.

I suspect descriptions of tornadoes would suffer from at least (b), (c) and (d). It's an interesting empirical question whether the mind-brain relation is like (d). I suspect that in normal well-functioning minds it isn't, but in various diseases (e.g. Parkinson's) (d) can be very important. However, this cannot be checked out till we have overcome problem (a), which is still very serious!

To sum up: the problems of accuracy and completeness in our high level descriptions have different kinds of sources and have nothing specific to do with any assumption about classical mechanics.
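Here is the toy sketch promised above (my own invented example, with made-up scores, not any real chess program): a deterministic move chooser, run once cleanly and once with a simulated one-bit fault in a stored score, selects different moves.

    # Toy illustration only: a deterministic move chooser "kicked sideways"
    # by a simulated one-bit fault at the implementation level.

    def evaluate(move):
        # Stand-in for a chess evaluation function: fixed, deterministic scores.
        return {"e4": 40, "d4": 38, "c4": 35}[move]

    def choose_move(moves, faulty_move=None):
        best_move, best_score = None, None
        for m in moves:
            score = evaluate(m)
            if m == faulty_move:
                score ^= 1 << 6            # one flipped bit in the stored score
            if best_score is None or score > best_score:
                best_move, best_score = m, score
        return best_move

    moves = ["e4", "d4", "c4"]
    print(choose_move(moves))                    # e4: the virtual machine's own logic
    print(choose_move(moves, faulty_move="d4"))  # d4: the implementation intruding

The virtual machine's own rules are untouched; what changed the outcome was an event at the implementation level.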
Return now to point (1), the completeness of classical mechanics. First I must make clear that I have no reason to defend CM, since it seems to be false! My only claim is that what makes it false has not been shown convincingly to have anything to do with consciousness.

It seems to me that there are two sorts of classical mechanics, not always distinguished: the official explicit version and an unofficial implicit version.

(a) The explicit, theoretical, ideal classical mechanics, which includes mathematically continuous motion and infinitely precise measurements, or at least infinitely precise physical states.

(b) The practical, implicit, actually useful classical mechanics, as applied by physicists and engineers in making things and predicting things: this sort of "classical" physics has no way of ever making infinitely precise measurements and has no reason to presume that any physical quantity actually has an infinitely precise value, and neither could it use such a presumption anyway.

Let's call these two IPCM (infinite precision classical mechanics) and RPCM (reduced precision classical mechanics). I can't be more precise about RPCM just now, except that it rejects the notion of infinite precision in physical states. I.e. the mathematical continuum (the set of real numbers) is only an approximate model for kinds of states and kinds of variation.

Though implicit in applications of physics, I don't think RPCM is ever taught explicitly, but it seems to me to have profound consequences that are very different from those of IPCM. In particular, the lack of infinite precision combined with the mathematics of chaos (sensitive dependence on initial conditions) would mathematically entail REAL physical indeterminacy in many non-linear feedback systems and devices like a double pendulum, pin-ball games, the weather, etc. (a numerical sketch appears below).

In both IPCM and RPCM global organisation can impose constraints on detailed processes. In particular, it is clearly possible to design global structures which partly control the indeterminacy arising out of noisy microstructure (e.g. noise reducing circuits in amplifiers, and statistically reliable gambling machines and lottery devices). But if RPCM is correct we can add that global organisation can also have a distinct causal role in reducing physical indeterminacy. However, all this would need to be worked out in detail mathematically, using some form of non-standard arithmetic while preserving the equations of CM, and I am not a mathematician, so I don't know exactly what this would look like. We'd need a way of combining the equations of CM with new types of state descriptions (i.e. variable values) that don't involve infinite precision, though that doesn't mean they have to be discrete either.

Is RPCM an old idea or have I invented something new? If it's new, is it coherent? Even if it is coherent it is presumably false, since I assume it cannot explain some things that QM does. The only relevance it has here is as part of the exploration of logical possibilities in trying to understand varieties of types of supervenience.

Question: does QM presuppose infinite precision in physical reality at some level of description, e.g. at the level of the wave function? Could there be a version of QM that did not make this presupposition (FQM)? How would it be different? I'll return to RPCM below.
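Here is the numerical sketch promised above. It only illustrates sensitive dependence: the logistic map is a convenient stand-in for the double pendulum or pin-ball machine, and ordinary floating point is a crude stand-in for limited physical precision; nothing hangs on the particular map or numbers.

    # Toy illustration only: two states that agree far beyond any practical
    # measurement precision soon disagree completely under a chaotic map.

    def iterate(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    x1 = 0.123456789
    x2 = x1 + 1e-12            # a difference far below any practical measurement
    for steps in (10, 30, 50):
        print(steps, abs(iterate(x1, steps) - iterate(x2, steps)))
    # The gap grows roughly exponentially; after about 50 steps the two
    # histories are unrelated.

If there is no fact of the matter about digits beyond some finite precision, then after enough steps there is no fact of the matter about which trajectory the system is on: that is the REAL indeterminacy I have in mind.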
[Henry]
> And, as I see it, the quantum stance is that the full causation
> can be conceptualized as consisting of many levels of
> causation, but that none of them alone is exactly accurate and
> complete, and hence that none of them alone can be identified
> as the true reality. The level of causation that includes
> human thoughts is not COMPLETELY isolatable from the chemical
> and physical processes upon which it rides, in the way that a
> process run by a computer program is.

There's nothing special about QM here, since, as indicated above, in CM high level machines can be influenced by low level details and global structure can constrain detailed behaviour. In RPCM high level control can even fill causal gaps by reducing indeterminacy. Similarly, in computing systems, as explained above, the different levels can influence each other (e.g. in the Orion). Not all these influences are desirable (e.g. influences due to hardware faults), and various kinds of error correcting memory and redundant processor designs attempt to increase the isolation between levels, but they never achieve it perfectly (a toy sketch of the idea follows below). (Similar remarks apply to brains, as tragically evidenced by the effects of brain damage or brain disease.) It's also possible to have low level hardware controls to limit the powers of high level software, e.g. access control mechanisms, interrupt mechanisms that enable a scheduler to prevent any process hogging the machine, etc.
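Here is the promised sketch of the isolation point (my own toy, not a model of any real memory or processor design): majority voting over redundant copies, the idea behind error correcting memory and triple modular redundancy. It reduces the rate at which low level faults reach the higher level machine, but never to zero.

    # Toy illustration only: redundancy increases, but does not perfect,
    # the isolation between levels.
    import random

    def store_with_faults(bit, fault_rate):
        # Three independent copies of one bit, each possibly flipped.
        return [bit ^ 1 if random.random() < fault_rate else bit for _ in range(3)]

    def read_with_vote(copies):
        # The higher level machine only ever sees the majority value.
        return max(set(copies), key=copies.count)

    random.seed(0)
    trials, fault_rate = 100000, 0.05
    errors = sum(read_with_vote(store_with_faults(1, fault_rate)) != 1
                 for _ in range(trials))
    print(errors / trials)   # about 3 * 0.05**2 = 0.0075: far better than 0.05,
                             # but the isolation between levels is still imperfect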
[Henry]
> Neither are these chemical and physical processes
> completely independent of the high-level process that involves
> thoughts: there is mutual interdependence. Hence thoughts are
> not causally superfluous.

On this we agree in broad outline, though our detailed models differ, and, as I've tried to show, the possibility of such mutual interdependence arises even within classical mechanics, because of the way global organisation can implement high level control functions even if at the lowest level physics were causally complete (as in IPCM). If there is low level indeterminacy, I don't see how what we understand by *thoughts* could have any direct impact on reducing it at that level (e.g. directly influencing an energy change in a hydrogen atom), though both biological systems and modern engineering designs show *many* ways in which global organisation of complex physical systems (classical or non-classical) can generate wholly new emergent patterns, including patterns that cannot be described in the language of physics.

It is perfectly consistent with this view to claim that the physical universe existed long before there were any of the particular patterns that implement thoughts. In that situation the biological and engineered control systems simply did not exist, and physical processes then exhibited none of the high level patterns that such systems produce. I.e. where there are no minds, physics grinds merrily away nevertheless. If it does so in a fashion that's partly indeterminate, as RPCM and QM both seem to imply, then no matter: the dynamics is then incomplete, but that doesn't stop the system working. Your theory doesn't seem to allow this, unless you reformulate it to say that consciousness plays a role in the dynamics ONLY where the right kinds of mechanisms happen to exist, e.g. brains. But that's not what your paper said.

[Henry]
> ...
> However, the "ordinary notion of causality", namely completeness
> of the microlevel taken alone, is a holdover from earlier
> centuries.

And one of the reasons that I sketched RPCM above is that I believe that, at least *implicitly*, applied physicists and engineers (including makers of gambling machines and probably even gamblers) may well have been using a different collection of assumptions from the IPCM of Pascal (and Newton?).

Nothing I've said should be taken to imply that the microlevel is causally complete. My point is simply that we don't NEED causal incompleteness to allow for the causal powers of high level structures. However, if there is causal incompleteness (as in RPCM and QM?) then that adds marginally to the causal role of high level structures. To put it another way: complex "boundary conditions" can have causal powers without requiring any gaps in the laws through which they work. The high level patterns of behaviour so produced may be inexplicable from the laws of physics alone: they depend on the patterns in the boundary conditions. Physics does not determine boundary conditions.

[Henry]
> Maybe we should strive to unravel the questions of these seemingly
> almost independent parallel levels of causation by paying attention
> to the empirical facts that nature offers us, and the efforts of
> scientists to make sense of these facts.

Yes. I would suggest also paying attention to the efforts of engineers to make USE of these facts. However, we need to distinguish two kinds of independence.

(a) Henry's dualist mechanics, which allows thoughts to have some dynamics independently of the dynamics of the rest of physics, while interacting with and influencing the rest. This is close to the metaphysical theory sometimes called interactionism, except that you also apparently want physics to be expanded to include both streams of events (i.e. thoughts as part of the dynamics of quantum physics).

(b) Aaron's emergent mechanics, which allows features of the global organisation of physical systems (e.g. global boundary conditions) to have powers that are not part of the laws of physics, though they work through them. These global patterns and their consequences are not explained by the laws of physics, because the laws of physics do not determine that any particular such structure (any particular set of boundary conditions) should exist. Physics alone cannot predict that a chess computer will ever exist, for example. Nor that a human brain will exist. (A toy illustration of (b) follows below.)

More importantly, it is part of (b) that there are features of such global organisation, and of the behaviour it produces, that cannot even be *described* in the language of physics. The language of physics does not include "checkmate", and it cannot be defined by any logical combination of physical concepts. Likewise the concepts "thought", "emotion", "itch", "decision", "inference", etc. I think you want (a), whereas I conjecture that (b) is the road to truth.

[Henry]
> Recent developments in physics over the past
> twenty years, based on detailed experimental data, have caused physicists
> to believe that there is another level of causation, far below the
> level of the known particles and fields, with things again delicately
> arranged so that the higher level provides an approximate
> causal description that is ALMOST independent of the details of
> the lower level dynamics. This seems to be nature's pattern.

This all seems to me to be consistent with (b). Does this theory require physical reality to be infinitely precise at any level?

[Henry]
> The key question is whether the lower levels can be complete
> unto themselves. Quantum theory seems to be saying NO!

Likewise RPCM! And even the causal completeness of IPCM allows the possibility of mutual interdependence between levels and the possibility of physically inexplicable virtual machines with the power to change physical states via the implementation machine.
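Here is the toy illustration of (b) promised above. It uses Conway's Game of Life rather than physics (my choice of example, not anything in your paper): the low level rule is identical in both runs; only the initial configuration, i.e. the "boundary conditions", differs; and the vocabulary that best describes the outcomes ("block", "blinker") is no part of the rule, just as "checkmate" is no part of physics.

    # Toy illustration only: one fixed low level rule, different boundary
    # conditions, different high level patterns.
    from collections import Counter

    def step(cells):
        # cells is a set of (x, y) coordinates of live cells; this rule is
        # the whole of the "physics" and never changes.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for x, y in cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        return {c for c, n in neighbour_counts.items()
                if n == 3 or (n == 2 and c in cells)}

    block = {(0, 0), (0, 1), (1, 0), (1, 1)}     # one boundary condition
    blinker = {(0, 0), (1, 0), (2, 0)}           # a different boundary condition

    print(step(block) == block)                  # True: a stable "block"
    print(step(blinker) != blinker and
          step(step(blinker)) == blinker)        # True: an oscillating "blinker"
    # Neither "block" nor "blinker" appears anywhere in the rule itself.

The rule leaves entirely open which configurations ever occur, just as physics leaves open whether chess computers or brains ever exist.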
[Henry]
> The higher are not simply constructs of
> the lower;

Here is probably the source of all our disagreement. I think your use of the word "simply" is a giveaway. You would not use it if you really grasped the depth of the extremely NON-simple notion of a higher level control mechanism being implemented in a lower one, e.g. the possibility of the higher level machine being conceptually independent of the lower even when totally implemented in the lower, as I've tried to explain above. This is not a case of X "simply" being a construct of Y.

> neither the higher nor the lower levels are complete in themselves.

My position does not need the lower level to be incomplete in its equations, but only in its specification of (1) the actual "boundary conditions" and (2) the concepts that are useful for describing global features of boundary conditions and the global features of the processes they generate. To describe and explain these we introduce new virtual machines with their own ontologies, a point not understood before this century as far as I know, though past talk of biological emergence may have been an attempt to get there.

[I once had a discussion with a theoretical physicist, I can't recall who, maybe Tony Leggett -- with whom I once taught a philosophy of science course -- who remarked that the chemical properties of complex molecules defined a similar level not reducible to physics. He, or someone else, also remarked to me that the macro phenomena of low temperature physics appeared to have a similarly emergent character, not reducible to the physics of the sub-microscopic. I can't remember the examples he gave me.]

[Henry]
> Our thoughts seem to occupy some middle place, and we are
> trying to reach out and find this place. Must we in this
> difficult venture adhere steadfastly to the
> microcausal-completeness idea of seventeenth, eighteenth, and
> nineteenth century physics that are, on the basis of empirical
> evidence, rejected by the physicists of today?

Nope. I hope I've never said anything that implies that I adhere to any such thing. But rejecting it does not force one to your position. I've been trying to show that there is an alternative. I constantly have the impression that you don't see it.

I think Pat Hayes has also been pointing at something like this position, but I am not sure he would agree with every detail of what I've written, and I felt some of his examples failed to bring out clearly the difference between mere aggregation and the emergence of important new virtual machines.

Dennett seems to me to have a slightly different position, because, for example, he argues that all talk of qualia is just a muddle, whereas I think there's a strong element of truth at the bottom of what philosophers say about it, and I'm trying to show what could explain that truth. Also he defends what he calls the "intentional stance" as somehow accounting for or justifying our ascription of mentality, whereas I think it is essential to adopt the "design stance" and see how a mind is a special type of control system, which justifies talk of beliefs, desires, etc.

You also made some comments on my abstract for the Danish workshop. I have already gone on much too long, so I'll simply respond to this:

> You suggest that `X is implemented in Y', and sort of identify
> that with `X supervenes on Y'. But as regards our differences
> everything seems to hang on exactly what your definition of
> "implementation" is:

I don't yet have a definition!
I think good definitions come only after you've got good theories for them to fit into. So I've claimed only that there are lots of examples of implementation around us, some occurring naturally and some produced by us, and that we need to study these examples carefully and increase our understanding of the different sorts of implementation in the cases that are not excessively complex, before we graduate to trying to understand what is probably the most complex case in the universe, the implementation of the human mind in the human brain. But too many people want to fly before they can walk, and they end up grossly oversimplifying and producing totally unconvincing theories.

> What is the connection
> between these two levels of causal description?

I've tried to give partial answers above. Note that not all cases are alike. A tornado is different in many ways from a chess program. A chess program which does not learn is very different from one that does. An operating system is very different from any sort of chess program. A self-tuning operating system is different from one that has to be tuned by human operators. Etc.

> Chalmers gives a pretty detailed definition of supervenience,
> which you seem to be suggesting is not satisfactory: you say
> that "Most philosophers get the requirements of supervenience
> wrong."

At one level there's an agreed and fairly simple definition: X is supervenient on (implemented in) Y if nothing can change in X unless there's a change in Y. But various philosophers add extra conditions which they then use to prove various theories they are committed to, e.g. that minds cannot be supervenient on, or implemented in, brains. Examples of mistaken views (not all held by all philosophers) include these:

If X is implemented in Y then Y must be at least as complex as X
If X is implemented in Y then there must be systematic correlations between events in X and events in Y
Part-whole relations in X must be mapped onto part-whole relations in Y
If X is wholly implemented in Y then X must be causally superfluous
Only physical causation is real causation

All of these are false.

Anyhow, I was not thinking of Chalmers. It's a long time since I've read anything of his, and all I can remember from what I did read was that I found it wholly unconvincing (sorry Dave!). I'll have to do more homework before I go to Denmark, but in any case Dave will be there and I'll listen. If he shows I am wrong, I'll happily admit error, for I have no real *desire* for the truth to be the way I say it is or any other way. I am only concerned to demolish invalid arguments or theories that don't fit the facts.

Must go now as I should have been home an hour ago.

Apologies if I overload anyone's disk or brain...

Aaron

PS I've just received your revised paper. A quick skim over the changed portions doesn't make me think that any of my comments above are now out of date.