http://www.cs.bham.ac.uk/~axs/misc/stapp.consciousness

This file contains three messages posted to psyche-d (which is relayed to sci.psychology.consciousness). First is a message from Henry Stapp to me attempting to characterise my views and comment on them. This included a message that he had posted to psyche-d and then withdrew, along with an accompanying note to me. Next is my reply to the points he made in the accompanying note. After that is my reply to the points made in the message he withdrew. Subsequently he agreed that the whole correspondence should be posted.

Note: psyche-d discussions are archived at
http://www.ai.sri.com/~connolly/psyche-list/

From STAPP@kelvin.lbl.gov Thu Feb 13 17:00 GMT 1997
Date: Thu, 13 Feb 1997 7:47:11 -0800
From: STAPP@theorm.lbl.gov
To: A.SLOMAN@CS.BHAM.AC.UK
Cc: STAPP@theorm.lbl.gov
Message-Id: <970213074711.2a80159d@theorm.lbl.gov>
Subject: Re: Qualia & Form Perception

From Aaron Sloman Sat Feb 22 09:45:22 GMT 1997
To: psyche-d@rfmh.org
Subject: Qualia & Form Perception

Dear Psyche-d readers,

On Thursday 13th Feb, Henry Stapp sent me the message below. A few days later I responded to him, then he agreed that the whole correspondence should be posted to psyche-d. I shall send my reply after sending this, and I apologise for any overlap with discussions that have occurred in the meantime.

Aaron
=======================================================================
From: STAPP@theorm.lbl.gov
Date: Thu, 13 Feb 1997 7:47:11 -0800

Dear Aaron;

After a long absence from psyche-d I did read your Feb 10 posting and sent a reply that I will append to this one. However, upon reconsideration I had Patrick kill it, as I thought that I had completely misunderstood your position. But let me now state to you privately what I take your position to be, and state my objection to it.

You do not deny the existence of consciousness, but rather claim that it IS a certain functional activity of the brain/body.
Take a weather system of hurricanes and storms etc. One can describe it in terms of motions of atoms and molecules etc., but a more useful description for some purposes would be in terms of high pressure and low pressure areas and other gross features that could be the basis of a high-level causal description. Your idea is that the description of brain activity in terms of one's conscious experience is, at least in first approximation, rather like this high-level description of a weather system. Am I close?

If so, then my objection is this: I do not object at all if you do not try to maintain also that the lowest-level description (or an approximation to the lowest-level description that is completely adequate for the discussion of the mind-body problem) is in terms of the concepts of classical physics. But if you hold that one can take a classical physical description to be in principle, in this context, completely adequate for describing a mind/body system at the micro level, then my objection to the analogy is that the classical-physics description is, within the classical physics framework, complete: every aspect of the weather system is in principle deducible from the micro-description in terms of the motions of atoms, ions, molecules, electrons, and electromagnetic fields, and the laws and principles of classical mechanics provide in principle the foundation for a complete understanding of the causal relationships in the high-level complexes. But one cannot DEDUCE the presence of the experiences of pain, or sadness or joy from the principles of classical physics and the micro-description: one can POSTULATE certain connections between the classical-physics description and experiences, but cannot deduce them from the principles of classical physics and the micro-description in terms of the local elements of the classical-physics description.
Thus the existence and form of conscious experience is not a logically necessary concomitant to the classical-mechanics description of a body/brain in which conscious experience is present, whereas the existence and form of a hurricane is entailed by the classical-mechanics description of a situation in which a hurricane is present. For this reason I regard the classical-mechanics description of nature to be deficient in the context of a study of the mind/brain problem: a good theory of closely related phenomena should hang together better than that.

Best regards,
Henry

I append my recalled posting to psyche-d:

Aaron Sloman's tack (Feb 10) seems to be to admit into his realm of discourse about what we would normally call "one's conscious experience" all of the functional properties that go along with one's conscious experience, such as memory/recall, global access, pictorial or linguistic representability, etc., but to ban or banish the idea of "my conscious experience", which for most of us is the core reality that seems to lie at the center of this functionality. Since we are committed to a scientific approach, Aaron's approach seems hard to fault, because it promises to be able to deal with all of the functionality associated with one's conscious experiences without embroiling us in the difficulties associated with the attempt to bring the concept of consciousness itself into scientific theory.

Still, admitting the concept of one's conscious experience into the realm of theoretical discourse does bring certain theoretical advantages. For one thing, it provides clues to the functioning of our brains. There are, ab initio, many possible ways that one might invent to try to account for human gross behaviour patterns.
The information provided by the existence and structure of one's own experience can provide important clues as to how the internal processing is organized: an adequate account of brain processing must include a description of the organizational structures that can account for one's "seemings": for what one seems to experience. Thus these "seemings" serve as clues to the organization of brain processing.

But these "seemings" are more than mere clues. Explaining these seemings is necessary for a satisfactory theory. For within the web of functionally related theoretical structures one must in the end identify the realities against which to test the theory. These can include observable gross patterns of behavior, and a rather satisfactory theory might be one that merely aims to explain these observable gross patterns of behavior in terms of internal processing structures. But there are in nature not only these observable gross patterns of behavior, and the internal processes that occur in our brains and bodies but that are not "seemings" (i.e., are not conscious experiences): there are also these "seemings".

Now it may well be true that a theory of internal process can be constructed quite well without initially paying any special attention to these special effects, or spelling out how they are related to the vast internal process within which they are embedded: it may well be that the most expeditious way to proceed with the construction of a theory of internal process is to ignore these seemings in the initial phase of construction of the theory. But in the end these seemings are part of reality and, presumably, part of the internal process. They are part of what must be explained by a complete theory of the internal process. Yet they are certainly not identifiable with the whole of the internal process. So there is a bona fide problem of explaining how they are related to the whole internal process.
It is in fact not knowable before the construction of an adequate theory of the internal process whether it will be best to first ignore these "seemings", and incorporate them later as special and perhaps bizarre side effects, or whether it will be best to take them as essential features that lie at the very core of our high-level internal processing. Plausibility arguments can be given for each of these approaches. In the end some people's intuitions on this methodological issue will be better than those of others. That's how science progresses.

Henry P. Stapp
http://www-physics.lbl.gov/~stapp/stappfiles.html

From Aaron Sloman Fri Feb 14 12:05:51 GMT 1997
To: STAPP@theorm.lbl.gov

From Aaron Sloman Sat Feb 22 10:58:52 GMT 1997
To: psyche-d@rfmh.org
Subject: Re: Qualia & Form Perception (reply to Stapp)
Cc: STAPP@theorm.lbl.gov

This was my first reply (sent 14th Feb) to Henry Stapp's message of 13th Feb. After sending this one I sent a shorter summary of points of agreement and disagreement, which I'll post after this. In this message I've corrected some typos, changed the format to better fit the html psyche-d archive, and added a few explanatory notes all starting "[Note added" and ending "]".

===

Dear Henry,

Thanks for your useful comments.

[Henry]
> You do not deny the existence of consciousness, but rather
> claim that it IS a certain functional activity of the
> brain/body.

Not exactly. I claim there is no single IT that is consciousness. Rather our ordinary concept actually refers to a mish-mash of different more or less closely related phenomena which do not all necessarily occur together (e.g. different subsets occur in different organisms and machines). But yes, as far as I can tell at present, each of those phenomena is, or is an aspect of, some functional activity or activities implemented in the brain/body (the combination is important, since some of them involve fairly tight control loops including physical devices like eyes and fingers).
[Note added 22nd Feb: some reactive mechanisms may be composed entirely of continuous feedback loops. However, deliberative mechanisms (as I've defined them) inherently involve the *additional* ability to assemble complex structures in discrete steps, and to use associations between discrete structures. I.e. these deliberative mechanisms are inherently digital. When the two are combined, analog to digital and digital to analog converters are required. This acknowledges points made by other posters in the last few days about the role of feedback loops. I find it useful to think of a mind as essentially a control system, though early control theory used too small a range of mechanisms, as Norbert Wiener acknowledged in the second (1961) edition of his 1948 book.]

[Henry]
> Take a weather system of hurricanes and storms etc. One can
> describe it in terms of motions of atoms and molecules etc, but
> a more useful description for some purposes would be in terms
> of high pressure and low pressure areas and other gross features
> that could be the basis of a high-level causal description.

I am not simply concerned with what is useful for external observers and predictors, but with deeper features that distinguish different sorts of systems that we find in our world. There's a big difference between hurricanes and animals like fleas or monkeys. In living systems the most important thing about them, which explains what they are, how they came to exist and how they behave and develop, is the fact that they have, acquire, store, modify, transform and use information, both in keeping themselves and others alive and also in reproducing. [Note added: and of course doing many other things which serve these ends, and some which don't, but may have hitched a ride on the goal-achieving mechanism.]

I think this is especially true of mobile animals. I think it is also true of plants, but to a much lesser extent. I am not sure I fully understand the difference. [Note added 22nd Feb.
I've recently learnt about infected plants "communicating" with other plants by releasing a gas that "warns" uninfected plants to start preparing their defences: a very primitive sort of communication of information.]

By contrast, hurricanes are not essentially information processing systems, as far as I know, though they are energy transformers and they impact on their environment. These are two properties they share with animals. They also have a certain coherence and stability (at least for a time), as does the planetary system (for a longer time), and this may make them appear essentially similar to animals. But I think this is a superficial similarity, since they are not information processing systems. E.g. I don't think hurricanes and planetary systems ever make use of any specification of a future state to be achieved or of a remote object that has been sensed as something to eat, mate with, escape from, inspect to acquire more information, etc. The only influence that a remote object can have on a hurricane is energy transfer. (Of course energy transfer is also involved in sensing, but that's just an implementation detail.) [Note added: if it turned out that hurricanes could select routes that enable them to reach locations where they'd receive an energy boost allowing them to survive longer, then I'd have to change my views about them.]

[Henry]
> Your idea is that the description of brain activity in
> terms of one's consciousness experience is, at least
> in first approximation, rather like this high-level
  ^^^^^^^^^^^
> description of a weather system. Am I close?

Yes: though I would not express it like that because I'd like to stress the differences as well as the similarities. [Note added: See my response to Pat Hayes with subject line "A fourth type of answer". Talk about differences of degree, or "rudimentary" types of consciousness, can be misleading if the features that are missing are not explicitly indicated.
The space of possible mechanisms we are discussing is not a seamless continuum.]

[Henry]
> If so, then my objection is this: I do not object at all if you
> do not try to maintain also that the lowest-level description
> (or an approximation to the lowest-level description that is
> completely adequate for the discussion of the mind-body problem)
> is in terms of the concepts of classical physics.

Actually I have a completely open mind on this. After all I believe that at present the lowest level description that's scientifically justified must be quantum mechanical: that's how things are. Maybe something deeper will emerge in physics in 100 or 800 years time? But all that is equally true of computers, hurricanes, digestive processes, etc. What's not clear is which features of quantum mechanics make a difference.

I have objections to the arguments which purport to make a link between quantum mechanics and so-called consciousness. My problem is that they all start either from (a) a specification of "consciousness" which I do not recognize at all, or find incredibly vague, or from (b) a specification which describes only a tiny subset of the abilities of a small subset of animals (e.g. the ability of some humans to understand Gödel's incompleteness theorem), or they make inference leaps that I find incomprehensible between the different levels of description.

[Henry]
> But if you hold that one can take a classical physical
> description to be in principle, in this context, completely
> adequate for describing a mind/body system at the micro level,

Actually I don't believe that classical physics is adequate to describe how computers work in detail. But maybe that's irrelevant for now.
> then my objection to the analogy is that the classical-physics
> description is, within the classical physics framework complete:
> every aspect of the weather system is in principle deducible
> from the micro-description in terms of the motions of atoms,
> ions, molecules, electrons, and electromagnetic fields,

I presume this is true of many of the high level concepts used to describe hurricanes, though I don't know whether the butterfly effect is capable of enabling non-classical phenomena to produce global effects that could not have been predicted classically. I suspect your arguments are not affected by that.

However, when hurricanes are described as "dangerous", "causing damage", or "economically disastrous", new concepts are introduced that depend on higher level systems that cannot be defined in physical terms and whose properties cannot be deduced by logic from those of physics (though they are implemented in the physical world). That's because they use socio-economic concepts, which refer to a different level of reality from physics. [Note added: these terms are "relational". They describe relations between elements in a complex causal network. Similarly, the terms we use to describe mental states and processes, e.g. "believing", "noticing", "considering", "selecting", "enjoying", are relational in that they describe relations between elements in a complex causal network, which nobody so far has characterised adequately. In both cases describing the causal networks requires a vocabulary that cannot be defined in terms of the vocabulary of physics. But in both cases the networks are *implemented* in physical systems. Incidentally I suspect the variety of types of networks that are entirely continuous is far more limited than those that include discrete/digital sub-mechanisms, and evolution "discovered" that and put it to good use.]

> ...
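[Editorial illustration, not part of the original correspondence.] The combination of continuous reactive loops with a discrete deliberative layer, mentioned in the notes above, can be sketched in a toy program. Everything here is invented for illustration (the function names, the step size, the gain): a sensor reading is quantized (analog to digital), a symbolic comparison picks a symbolic action, and the action is converted back into a continuous signal (digital to analog).

```python
# Toy sketch of a hybrid reactive/deliberative loop. All names and
# numbers are hypothetical, chosen only to make the A/D and D/A
# boundaries visible.

def quantize(x, step=0.5):
    """A/D conversion: map a continuous reading onto a discrete symbol."""
    return round(x / step)

def deliberate(symbol, goal_symbol):
    """Discrete layer: compare symbolic states, choose a symbolic action."""
    if symbol < goal_symbol:
        return "increase"
    if symbol > goal_symbol:
        return "decrease"
    return "hold"

def emit(action, gain=0.4):
    """D/A conversion: turn the symbolic action back into a continuous signal."""
    return {"increase": gain, "decrease": -gain, "hold": 0.0}[action]

# The reactive loop: sense, convert, deliberate, convert back, act.
reading, goal = 0.0, 3.0
for _ in range(20):
    action = deliberate(quantize(reading), quantize(goal))
    reading += emit(action)

print(round(reading, 1))  # settles near the goal, then holds
```

The only point of the sketch is structural: the deliberative step operates on discrete symbols, so converters are needed at both boundaries, exactly the role the note assigns to A/D and D/A conversion.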
[Henry]
> But one cannot
> DEDUCE the presence of the experiences of pain, or sadness
> or joy from the principles of classical physics and the
> micro-description: one can POSTULATE certain connections between the
> classical-physics description and experiences, but cannot deduce
> them from the principles of classical physics and the
> micro-description in terms of the local elements of the
> classical-physics description.

For me that's the wrong question. I don't think most of the interesting properties of things implemented in the physical world can be DEDUCED from the principles of any sort of physics. [Note added: because the implicitly relational concepts used in describing different sorts of causal networks cannot all be defined in terms of physical concepts.]

If you and I have a game of chess, then no matter which medium we use for it our game is inevitably implemented in physical systems. However, I do not believe that any logical deduction, starting from a *purely physical* description of that system and its history, together with *laws of physics*, and nothing else, can ever have as its conclusion something like: White (Stapp) checkmated Black (Sloman) in 25 moves.

[Henry]
> Thus the existence and form of
> conscious experience is not a logically necessary concomitant
> to the classical-mechanics description of a body/brain in which
> conscious experience is present,

nor a logically necessary concomitant to any other physical description, I would add. I.e. I don't see any difference between classical and quantum mechanics here. (Though you have tried to make me see it! Perhaps I just lack the intelligence to understand???)

> ...whereas the existence and form
> of a hurricane is entailed by the classical-mechanics description
> of a situation in which a hurricane is present

OK, subject to the various qualifications above.
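[Editorial illustration, not part of the original correspondence.] The chess point above can be made concrete with a toy sketch. The encodings below are entirely invented: the same "physical-level" state (here just a list of small integers standing in for bit patterns) supports different high-level descriptions only relative to an interpretation mapping, and the mapping itself is not part of the physical description.

```python
# Toy sketch: one low-level state, two high-level readings.
# The list of integers plays the role of a purely physical description;
# each interpretation mapping is extra, non-physical vocabulary.

state = [75, 81, 82, 78, 66, 80]  # the "physical" description

# Interpretation 1: treat each number as an ASCII character code.
as_text = "".join(chr(n) for n in state)

# Interpretation 2 (hypothetical chess encoding, invented here):
# map the same numbers onto chess-piece names.
PIECES = {75: "King", 81: "Queen", 82: "Rook",
          78: "Knight", 66: "Bishop", 80: "Pawn"}
as_chess = [PIECES[n] for n in state]

print(as_text)   # prints "KQRNBP"
print(as_chess)
```

Nothing in the list of integers by itself settles whether it "is" a string or a row of chess pieces; a conclusion like "White checkmated Black" needs the chess-level ontology, which no deduction from the numbers alone supplies.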
> For this reason I regard the classical-mechanics description of nature > to be deficient in the context of a study of the mind/brain problem: > a good theory of closely related phenomena should hang together better > than that. I agree with you (and Jeffrey Gray, incidentally) that we need to have a deep understanding of relationships between (A) "high level" systems (abstract virtual machines), which include internal and external causal relationships and (B) lower level systems, e.g. the underlying physical reality, and various intermediate level virtual machines where appropriate. This requires understanding how (A) can be implemented in and supervenient on (B). [Note added: lots of philosophers have tried to characterise the possible forms of such relationships, e.g. Bill Robinson makes an interesting attempt in Mind 1990, among others. But unfortunately they tend to start from simple mechanical systems (e.g. alarm clocks) which are fairly easy to characterise, and then jump straight on to the much harder problem of the relation between mind and brain. If only they'd stop to sharpen their tools on compilers, wordprocessors, spreadsheets, operating systems, computer networks, plant control systems (roughly in order of increasing complexity) etc. then maybe they could produce something more relevant to the task.] I also agree with your (and Jeffrey Gray's) implicit claim that this deep understanding [of the relationship between levels] will not simply amount to the discovery of empirical correlations: that's why I find all this talk about "the neural correlates of consciousness" very unsatisfactory. It is explanatory underpinnings we want, or implementation theories, not just empirical correlations, which are what you get from shallow experimental science. [Note added: "shallow" does not mean "easy" or "worthless". Shallow science is often an essential precursor to the deep advances. 
Shallow science paints the dot picture that the deep reconceptualisations allow us to view in a new way.] However, this does not mean that the deep connections are deductions.

The discussions are complicated by the fact that we don't yet have any clear idea regarding what sort of (A) we are talking about, though some people are convinced that they do have a clear idea and I argue that they are just unwittingly deceiving themselves. Maybe we could make progress if we spent more time trying to characterise what sort of (A) we are talking about, and trying to clarify what sorts of explanatory relationships other than deducibility and mere correlation can exist between different levels of reality. (Or, if you prefer de dicto mode, between levels of description.)

I hope to read your original draft later. I have to rush now. If you think it worthwhile we could post this or some modified version to psyche-D. I don't suppose you'll be at the Elsinore workshop in August? I hope we do get a chance to meet and discuss things face to face some time.

[Further comments to follow]

Aaron

From Aaron Sloman Sat Feb 22 11:11:39 GMT 1997
To: psyche-d@rfmh.org
Cc: STAPP@theorm.lbl.gov
Subject: Re: Qualia & Form Perception

This is my second response to Henry Stapp, sent to him on 14th Feb, after which he agreed that the whole correspondence should be posted to psyche-d. As before, there are some notes added, in square brackets.

====

Dear Henry,

I've now had a first quick look at the message you posted and recalled, and I will confirm what you presumably concluded yourself, namely: I do not deny that there are ways things seem to us. I agree with you that there are mental states and processes that cannot be defined EITHER in terms of externally observable relations between environment and behaviour OR in terms of physical brain processes. I.e.
they constitute a level of reality that is not deductively reducible, just like poverty, crime, ownership of houses, and kings checked by bishops in chess. I also agree that we have internal access to a subset of those mental states and processes in ourselves. I also agree that there's a special subset of mental states which can be described as how things seem to us, about which we cannot be mistaken (at the time) since it is true by definition that how things seem to us is how things seem to us. I.e. we cannot be wrong about how things seem to us. But that's a trivial logical truth that is easily mistaken for something profound. I am even prepared to admit that something like this is true: > admitting the concept of one's conscious experience into the > realm of theoretical discourse does bring certain theoretical > advantages. For one thing, it provides clues to the functioning > of our brains. except that I'd want to put in various qualifications about what sorts of things might be referred to by the phrase "one's conscious experience". Nevertheless, there are many phenomena concerning sensory experience and ways in which it can change which provide clues as to what's going on in the brain. Also when these experiences change they can provide clues regarding the nature of damage to the brain. [Note added 22 Feb: as I remarked in a recent posting responding to Stan Klein, even a robot with visual qualia might provide clues as to functioning and malfunctioning of mechanisms deep in its visual system if it could talk about its conscious experience, e.g. reporting difficulties in binocular fusion.] Some people who object to my functionalism seem to think that I wish to define all functions in terms of impact on external behaviour. I.e. they confuse functionalism with some form of behaviourism. In fact I think many functional roles of internal states and processes can ONLY be defined in terms of their causal relations to other INTERNAL states and processes. I.e. 
my form of functionalism is inconsistent with all forms of behaviourism that I know. To characterise, in broad terms, this collection of internal causal relations is to characterise an architecture. I.e. there are non-physical architectures whose nature we need to understand. Although they are non-physical they are implemented in physical systems. That's a kind of reduction, but does not involve deducibility. [Note added: for the reasons explained in the notes added in my previous message, because the webs of causal relationships use an ontology not definable in terms of physical concepts.]

I suspect there's nothing here that you disagree with. [Note added: except perhaps the point about non-deducibility. See below.]

However, you want to go beyond this.

(a) You probably want to make additional statements about the nature of these internal mental states. I suspect I may agree with some of them and not with others. E.g. if you think zombies can exist that have mental states and processes with all the properties that I talk about yet lacking real seemings or experiences then I find that incomprehensible: I have no idea what it is that they are supposed to lack if they have everything else.

(b) You want to make claims about the inadequacy of classical physical systems to provide an implementation for these mental states and processes.

(c) You believe that some sorts of mental phenomena are intimately involved in the underlying dynamics of physical reality (so that perhaps some of physical reality is implemented partly in mental mechanisms???).

As regards (a) I have never understood what it is supposed to be that zombies lack if they have internal states with all the features and causal properties of desires, intentions, beliefs, visual experiences, pains, pleasures, etc. that I talk about.
[Note added: I think many people who object to this sort of viewpoint have not understood what sorts of functionally defined mental states are at issue: they assume I am talking either about observable behaviour, or about something like what happens in a computer's CPU. They find it hard to conceive of an architecture made of a rich network of high level unobservable dispositions, which may or may not be implementable on a computer as far as we can tell at present.]

As regards (b) I have never seen anything that looks like a valid proof that there's an inadequacy in classical physics. [Note: I've previously admitted that machines implemented using only classical physics might turn out to be inadequate to support human-like mentality -- but not for any reason that I've seen proposed so far. Maybe there are other reasons, e.g. the need to compress enormous information stores into very small spaces with very low energy consumption, while preserving robustness?]

As regards (c) I don't know enough physics and mathematics to understand the detailed arguments, but I find it hard to understand what sort of mental reality or consciousness is supposed to have existed in the physical world long before there were any conscious animals, etc.

Aaron

PS Assuming Henry does not object, I shall store this correspondence, starting with his original message, in:
http://www.cs.bham.ac.uk/~axs/misc/stapp.consciousness

Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk
Phone: +44-121-414-4775 (Sec 3711)
Fax: +44-121-414-4281