It is easy for people who have grown up, as we have, in an intellectual environment which takes scientific materialism for granted to be dismissive of old-fashioned views of the mind, such as those held by René Descartes (see chapter 1). He provided a picture of a human being as a combination of a material body and an immaterial soul. The body functioned according to strict mechanistic physical laws, while the soul was the seat of cognition, emotion, will, and sensation. Descartes wanted to defend the traditional canons of Christian belief against a rising tide of secular thought, a tide engendered by the growing successes of the young physical sciences in the seventeenth century. But he also had, as he saw it, some excellent reasons of a purely philosophical kind for believing in a soul, or `spiritual substance', to use his term.
His most famous argument was based on issues of doubt and certainty (Descartes, 1642). There is a lot that I can feel reasonably sure of, but little of which I can have absolute, unquestionable certainty. I assume that my physical body, and the various states that I observe it to undergo, are real occurrences in a real physical world. But all I can observe is a sequence of subjective sensations of these states. I cannot get behind the sensations to the physical reality which I assume to be their cause. I infer from a certain kind of sensation that something is pressing on my right foot. But amputees can have such sensations of non-existent `phantom limbs'. I sense myself as sitting at a desk, typing at a keyboard -- but maybe I am asleep, and merely dreaming of the episode. By some even more extravagant supposition, my firm belief that I do have a real body may be an illusion implanted in my mind by an all-powerful Evil Demon. This is highly unlikely, but the mere possibility of such a thing is enough to show that I do not know for certain that my body exists.
Can I, then, be certain that I exist at all? Perhaps this same Evil Demon has duped me into thinking that I exist, when I do not. Well, my body may be an illusion -- but what about my existence as such? Surely I must exist -- in some form or other -- in order to be able to think the thought ``Maybe I don't exist.'' Thus, Descartes concluded that the part of me which engages in reflection and cogitation must be independent from, and in a sense privileged with respect to, my physical body -- since it appears that it cannot be doubted away in the way that my body can. And that privileged part of me is my mind.
Another argument, inspired by Descartes, if not directly attributable to him, concerns, not cognition, but sensations, such as pain. Just pinch your arm hard, hard enough for it to hurt. Various bodily processes are presumably occurring: local traumas to blood vessels in your arm, impulses to your central nervous system, etc. (We are now once again assuming that you do have a body after all!) An expert physiologist could give a good hour's lecture on the various things which will be happening. But besides the physical processes, something else seems to be occurring, something of which you are directly aware, and to know about which you need no understanding of the physical processes: the feeling of pain itself. Indeed, according to followers of Descartes, this feeling of pain is, conceptually, or logically, completely distinguishable from any of the accompanying physical processes. An amputee who is feeling pain in a phantom limb is still feeling pain, even if the physical source of the pain is misidentified.
We take it as a well-founded scientific hypothesis that the pain results from a particular assortment of physical causes. But do we have any good reasons for equating the pain with those physical causes? Well, first, a small child can know she is in pain without knowing anything about the physical causes. Second, if you tried to observe all the physical causes of that child's pain, you would not, for all that, observe the pain itself. To observe the pain directly, surely, you would have to be the child -- for only she can observe her own pain. Pain -- and other mental states, such as thoughts, wishes, emotions, and so on -- seem to possess a special kind of subjectivity or `privacy', which physical processes do not. To identify the pain with the physical processes is, it is claimed, a confusion, the result of failing to think clearly enough about one's own personal experiences.
This, at least, is how people who agree with Descartes' philosophical outlook would argue. We may not feel like agreeing with Descartes that our mental processes reside in a `spiritual substance'. But it does seem difficult, after some reflection, to avoid accepting that mental processes are special in some way, and not completely reducible to physical processes.
Anyone who feels at all sympathetic to the above train of thought (I only ask you to be sympathetic, not to be convinced!) will find it hard to see how even the most intelligent or versatile of artificial intelligence programs could endow a digital computer with real mental states -- with genuine subjectivity.[+] Surely, it may be said, however closely a computer might model the outward manifestations of human thought processes, the machine would not really be thinking, since that involves being able to have subjective, inner processes. We certainly seem to have this subjective, inner experience of ourselves as mental beings. But how could a program provide a computer with similar sorts of subjective experience?
At first sight, what AI programs do is simply make computers perform in certain ways. But we saw in earlier chapters that AI programs do more than that. They also provide computers with an elaborate structure of internal representations, and internal mechanisms for manipulating those representations. But have we any right to conclude that a machine under the control of an AI program (even an extremely complex one) must also be having subjective experiences or conscious states? For surely that is what would be necessary in order for us to be able to say that such machines really do have mental states, and not just capabilities of performing in mind-like ways.
It might be said in reply to this that when we are talking of internal representations in computers we are, by that very token, talking of genuine mental states. But this would surely be a difficult position to defend. By internal representations we mean data-structures -- for example, list structures of various sorts -- which facilitate the transformation of different kinds of inputs (e.g., a sentence like ``What is the quickest way to Marble Arch?'') into appropriate outputs (e.g., the response ``Take the Victoria Line to Oxford Circus ...''). It would be a bad pun on the word `internal' to say that whatever internal representations are operating here must, because they are internal, be subjective states of consciousness. So if it is necessary to have subjective states of awareness in order to qualify as having a mind, it would not follow from the fact that machines can perform intelligently and operate with elaborate structures of internal representations (in the sense outlined), that they therefore must have minds.
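To make the point concrete, here is a toy sketch (in Python, and entirely my own illustration rather than anything from an actual AI system) of what is meant by an internal representation in this deflationary sense: a list structure that mediates between an input sentence and an output sentence. The parsing rule, the `ROUTE-QUERY' tag, and the route table are all hypothetical. Nothing about the structure invites description as a conscious state; it is `internal' only in the sense of being intermediate between input and output.

```python
def parse(question):
    """Turn a question into a simple list-structure representation."""
    words = question.rstrip("?").lower().split()
    # A single hard-wired pattern: "What is the quickest way to X?"
    if words[:5] == ["what", "is", "the", "quickest", "way"] and "to" in words:
        destination = " ".join(words[words.index("to") + 1:])
        return ["ROUTE-QUERY", destination]
    return ["UNKNOWN", question]

# A hypothetical route table standing in for the system's "knowledge".
ROUTES = {
    "marble arch": "Take the Victoria Line to Oxford Circus ...",
}

def respond(representation):
    """Map the internal representation to an output sentence."""
    kind, content = representation
    if kind == "ROUTE-QUERY" and content in ROUTES:
        return ROUTES[content]
    return "I don't know."

print(respond(parse("What is the quickest way to Marble Arch?")))
# -> Take the Victoria Line to Oxford Circus ...
```

The intermediate list `["ROUTE-QUERY", "marble arch"]` is precisely the sort of thing meant by a data-structure representation: it does real work in producing the right output, yet calling it a subjective state of awareness would be just the bad pun on `internal' that the text warns against.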
It looks at this stage as though we have to accept that, in order to undergo genuine mental processes of any kind, it is necessary to have subjective, conscious states of experience. Should we, then, conclude that computers and AI have nothing useful to tell us about the nature of the human mind, despite the promising and innovative nature of so much that is currently being discovered in AI research? This would be premature.
First, as we have seen at length in this book, and as was explored in detail in chapter 8, AI programs have been instructive in explaining and modelling the functional operations of thinking, even if we were not to accept that machines which are running such programs are literally thinking. Second, it may be that the points we have just looked at, which seem to argue against granting mentality to AI systems under any circumstances, do not survive closer examination. One possibility, for example, might be that there are (at least) two quite different kinds of mental states or mental properties. On the one hand, there would be subjective states of consciousness, which we have already talked about. But on the other hand, when we talk, in an idiomatic way, of `things going on in our minds', we are often talking, surely, not about qualitative states of awareness, but rather of cognitive operations of various kinds, of which we are not, or need not be, fully conscious. Perhaps processes of this latter sort are much closer to the internal processes of working AI systems.