I thought you might find this interesting (and that you might even be tempted to join in). The discussion started with the following passage by Pat Hayes from a Virtual Symposium on the Virtual Mind that will appear in Minds & Machines in a few months. The rest is self-explanatory. I've included only the abstract plus the pertinent passages from the Symposium. (A few messages were unfortunately not saved, but I think they are easily reconstructed from context.)

-- Cheers, Stevan Harnad

--------------------------------------------------------

[To Appear in: "Minds and Machines" 1992]

Virtual Symposium on the Virtual Mind

Patrick Hayes CSLI Stanford University

Stevan Harnad Psychology Department Princeton University

Donald Perlis Department of Computer Science University of Maryland

Ned Block Department of Philosophy and Linguistics Massachusetts Institute of Technology

ABSTRACT: When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real?

This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.

[text deleted]

HAYES: You have heard me make this distinction, Stevan (in the Symposium on Searle's Chinese Room Argument at the 16th Annual Meeting of the Society for Philosophy and Psychology in College Park, Maryland, June 1990). I now think that the answer is, No, Searle isn't a (possible) implementation of that algorithm. Let me start with the abacus, which is clearly not an implementation of anything. There is a mistake here (which is also made by Putnam (1975, p. 293) when he insists that a computer might be realized by human clerks; the same mistake is made by Searle (1990), more recently, when he claims that the wall behind his desk is a computer): Abacuses are passive. They can't actually run a program unless you somehow give them a motor and bead feelers, etc.; in other words, unless you make them into a computer! The idea of the implementation-independence of the computational level does not allow there to be NO implementation; it only suggests that how the program is implemented is not important for understanding what it does.

[text deleted]

Searle, J. R. (1990) Is the Brain a Digital Computer? Presidential Address. Proceedings of the American Philosophical Association.

---------------------------------------------------------


> Date: Wed, 18 Mar 92 08:12:10 -0800
> From: searle@cogsci.Berkeley.EDU (John R. Searle)
> To: harnad@princeton.edu (Stevan Harnad)
>
> Subject: Re: "My wall is a computer"
>
> Stevan, I don't actually say that. I say that on the standard Turing
> definition it is hard to see how to avoid the conclusion that
> everything is a computer under some description. I also say that I
> think this result can be avoided by introducing counterfactuals and
> causation into the definition of computation. I also claim that Brian
> Smith, Batali, etc. are working on a definition to avoid this result.
> But it is not my view that the wall behind me is a digital computer.
>
> I think the big problem is NOT universal realizability. That is only a
> SYMPTOM of the big problem. the big problem is : COMPUTATION IS AN
> OBSERVER RELATIVE FEATURE. Just as semantics is not intrinsic to syntax
> (as shown by the Chinese Room) so SYNTAX IS NOT INTRINSIC TO PHYSICS.
> The upshot is that the question : Is the wall (or the brain) a
> digital computer is meaningless, as it stands. If the question is "Can
> you assign a computational interpretation to the wall/brain?" the
> answer is trivially yes. you can assign an interpretation to anything.
>
> If the question is : "Is the wall/brain INTRINSICALLY a digital
> computer?" the answer is: NOTHING is intrinsically a digital computer.
> Please explain this point to your colleagues. they seem to think the
> issue is universal realizability. Thus Chrisley's paper, for example.
>
> Anyhow the reference is to my APA presidential address " IS the Brain a
> Digital Computer?" proceeding of the Am Philos Assoc, for 90 or 91.
> I will send you the paper formatted for troff.
> Best john

John, many thanks for the reference and the details of your view about computers/computation. I think another way to phrase the question is:

(1) What is computation? and

(2) What is the implementation of a computation?

The answer I favor would be that computation is formal symbol manipulation (symbols are arbitrary objects that are manipulated on the basis of formal rules that operate only on their arbitrary shapes).

Syntax is unproblematic (just as it is in mathematics): It consists of rules that apply only to the arbitrary shapes of symbols (symbol tokens), not to their meanings. The problem is deciding what is NONTRIVIAL symbol manipulation (or nontrivially interpretable symbol manipulation): A symbol system with only two states, "0" and "1," respectively interpretable as "Life is like a bagel" and "Life is not like a bagel," is a trivial symbol system. Arithmetic and English are nontrivial symbol systems.

The trick will be to specify formally how to distinguish the trivial kind of symbol system from the nontrivial kind, and I suspect that this will turn out to depend on the property of systematicity: Trivial symbol systems have countless arbitrary "duals": You can swap the interpretations of their symbols and still come up with a coherent semantics (e.g., swap bagel and not-bagel above). Nontrivial symbol systems do not in general have coherently interpretable duals, or if they do, they are a few specific formally provable special cases (like the swappability of conjunction/negation and disjunction/negation in the propositional calculus). You cannot arbitrarily swap interpretations in general, in Arithmetic, English or LISP, and still expect the system to be able to bear the weight of a coherent systematic interpretation.

For example, in English try swapping the interpretations of true vs. false or even red vs. green, not to mention functors like if vs. not: the corpus of English utterances is no longer likely to be coherently interpretable under this arbitrary nonstandard interpretation; to make it so, EVERY symbol's interpretation would have to change in order to systematically adjust for the swap. It is this rigidity and uniqueness of the system with respect to the standard, "intended" interpretation that will, I think, distinguish nontrivial symbol systems from trivial ones. And although I'm not sure, I have an intuition that the difference will be an all-or-none one, rather than a matter of degree.
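The swap test above can be rendered as a toy sketch (my own construction, not from the correspondence; the mini arithmetic system and its utterances are invented for illustration): the trivial one-bit system survives swapping its two interpretations, while even a tiny arithmetic fragment does not.

```python
# Trivial system: two states, two interpretations. Swapping them yields
# another perfectly coherent interpretation -- a "dual."
trivial = {"0": "Life is like a bagel", "1": "Life is not like a bagel"}
dual = {"0": trivial["1"], "1": trivial["0"]}

# Mini arithmetic: interpret single-digit strings like "1+1=2" and check
# whether they come out true under a given symbol-to-meaning mapping.
def true_under(utterance: str, interp: dict) -> bool:
    lhs, rhs = utterance.split("=")
    total = lambda expr: sum(interp[ch] for ch in expr if ch != "+")
    return total(lhs) == total(rhs)

utterances = ["1+1=2", "0+2=2", "2+0=2"]
standard = {"0": 0, "1": 1, "2": 2}
# Swap the interpretations of "0" and "2" and coherence collapses:
swapped = {"0": 2, "1": 1, "2": 0}

coherent_standard = all(true_under(u, standard) for u in utterances)
coherent_swapped = all(true_under(u, swapped) for u in utterances)
print(coherent_standard, coherent_swapped)  # True False
```

The trivial system has an arbitrary dual; the arithmetic fragment, small as it is, already resists the swap unless every other interpretation were adjusted to compensate.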

A computer, then, will be the physical implementation of a symbol system -- a dynamical system whose states and state-sequences are the interpretable objects (whereas in a static formal symbol system the objects are, say, just scratches on paper). A Turing Machine is an abstract idealization of the class of implementations of symbol systems; a digital computer is a concrete physical realization. I think a wall, for example, is only the implementation of a trivial computation, and hence if the nontrivial/trivial distinction can be formally worked out, a wall can be excluded from the class of computers (or included only as a trivial computer).

Best wishes, Stevan

---------

Cc: Allen.Newell@cs.cmu.edu, GOLDFARB%unb.ca@UNBMVS1.csd.unb.ca (Lev Goldfarb), carroll@watson.ibm.com (John M Carroll), dennett@pearl.tufts.edu (Dan Dennett), fb0m+@andrew.cmu.edu (Frank Boyle), haugelan@unix.cis.pitt.edu, hayes@sumex-aim.stanford.edu (Pat Hayes), searle@cogsci.berkeley.edu


> Date: Fri, 20 Mar 92 8:47:13 EST
> From: Herb Simon
> To: Stevan Harnad
> Subject: Re: What is computation?
>
> A non-trivial symbol system is a symbol system that can be programmed to
> perform tasks whose performance by a human would be taken as evidence of
> intelligence.
>
> Herb Simon

Herb, this Turing-like criterion surely fits most cases of computation (though perhaps not all: we might not want to exclude mindless rote-iterations or tasks so unlike human ones that we might not even be able to say whether we would judge them as intelligent if performed by a human). But even if your criterion were extensionally equivalent to nontrivial computation, it still would not tell us what nontrivial computation was, because it does not tell us what "tasks whose performance by a human..." are! In other words, if this were the right criterion, it would not be explicated till we had a theory of what the human mind can do, and how.

In general, although the human element certainly enters our definition of computation (trivial and nontrivial) in that the symbol system must be systematically interpretable (by/to a human), I think that apart from that our definition must be independent of human considerations. I think it should be just as unnecessary to draw upon a theory of how the human mind works in order to explain what computation is as it is unnecessary to draw upon a theory of how the human mind works in order to explain what mathematics (or engineering, or physics) is.

Stevan Harnad


> Date: Fri, 20 Mar 92 08:56:00 EST
> From: "John M. Carroll"
> Subject: What is computation?
>
> Ref: Your note of Wed, 18 Mar 92 14:20:51 EST
>
> through a surprising quirk of intentionality on the part of the
> internet, i got copied on an exchange between you and john searle.
> thanks! it was related i think to my own ontological worries --
> you're interested in whether and how objects can be interpreted as
> computations or their implementations, i'm interested in whether
> and how designed artifacts can be interpreted as theories of their
> intended human users, or as implementations of those theories. since
> i've already eavesdropped, how did searle reply to your 'duals' idea?
> cheers

Hi John (Carroll)! You didn't eavesdrop; I branched it to you and others (by blind CC) intentionally, because I thought you might be interested. I've gotten several responses so far, but not yet from Searle. Dan Dennett wrote that he had published a similar "duals" idea, which he called the "cryptographer's criterion," and Frank Boyle wrote that Haugeland had made a similar rigid interpretability proposal in "AI and the Western Mind." I made the suggestion independently several years ago in a paper called "The Origin of Words: A Psychophysical Hypothesis" and first thought about it in reflecting on Quinean underdetermination of word meaning and inverted spectra several years earlier.

Although the artifact-design/user-theory problem and the problem of what is a computer/computation have some things in common, I suspect they part paths at the same Platonic point where the truths of formal mathematics part paths from the purposes of their creators. (Lev Goldfarb responded with a similar suggestion: that explaining nontrivial computation requires a theory of inductive learning.)

Stevan Harnad

Date: Sat, 21 Mar 92 03:14:43 EST
To: roitblat@uhunix.uhcc.Hawaii.Edu (Herb Roitblat)

Herb (Roitblat), we disagree on a lot! I don't think a computer is the class of devices that can simulate other devices, or if it is, then that leaves me as uncertain what that class of devices is as before. I think a computer is a device that implements a nontrivial symbol system, and what makes a symbol system nontrivial is that it can bear the weight of one systematic interpretation (the standard one, and in a few special cases, some provable nonstandard ones). I think a grounded symbol system is one in which the interpretations of its symbols do not just square with what is in the mind of us outside interpreters, but also with what the system does in the real world. The nontrivial grounded symbol system that interests me is the robot that can pass the Total Turing Test (behave indistinguishably from ourselves).

We disagree even more on categories. I think the Roschian view you describe is all wrong, and that the "classical" view -- that categories have invariant features that allow us to categorize in the all-or-none way we clearly do -- is completely correct. Introspections about how we categorize are irrelevant (did we expect introspection to do our theoretical work for us, as cognitive theorists?), as are reaction times and typicality judgments. The performance capacity at issue is our capacity to learn to sort and label things as we do, not how fast we do it, not how typical we find the members we can correctly sort and label, not the cases we CANNOT sort and label, not the metaphysical status of the "correctness" (just its relation to the Skinnerian consequences of MIScategorization), and certainly not how we happen to think we do it. And the categories of interest are all-or-none categories like "bird," not graded ones like "big."

Cheers, Stevan

----------------

Date: Sun, 22 Mar 92 20:45:49 EST
From: "Stevan Harnad"
Subject: Re: What is computation?
Cc: bradley@ivy (Bradley W Dickinson), briansmith.pa@xerox.com, dennett@pearl.tufts.edu (Dan Dennett), has@cs.cmu.edu, hayes@sumex-aim.stanford.edu (Pat Hayes), smk@wjh12.harvard.edu, sontag@gauss.rutgers.edu, hatfield@linc.cis.upenn.edu, searle@cogsci.Berkeley.EDU

On: Nontrivial Computation, Nonarbitrary Interpretability, and Complexity


> Date: Sun, 22 Mar 92 17:07:49 -0500
> From: hatfield@linc.cis.upenn.edu (Gary Hatfield)
> To: harnad@Princeton.EDU, searle@cogsci.Berkeley.EDU
> Subject: Re: What is computation?
>
> Stevan: I don't see how you respond to John (Searle)'s point about observer-
> relativity. Indeed, your distinction between between trivial and
> nontrivial symbol systems appeals to an "intended interpretation,"
> which would seem simply to supply fuel for his fire. And your point
> about the wall being an implementation of a "merely trivial"
> computation is not clearly established: it depends on how you
> individuate the wall's computational states. John's claim (which is
> similar in some respects to one that Kosslyn and I made in our _Social
> Research_ paper, 1984, pp. 1025-6, 1032, and to Putnam's discussion in
> the appendix to his _Repn and Reality_) is that some state-assignment
> could be found for the wall in which it was performing any computation
> that you like, including any NONTRIVIAL computation (of course, the
> assignment might carve out physically arbitrary bits of the wall and
> count state transitions arbitrarily, from a physical point of view).
>
> Stephen and I, in our paper, contended that in order to avoid this sort
> of move the symbolist must argue that brains have non-arbitrary
> functional architectures, and that the functional architecture of our
> brain is so organized that it nonarbitrarily instantiates a serial
> digital computer. We then offered reasons for thinking that the brain
> doesn't have such an architecture.
>
> The crux of the matter is the status of function assignments. One might
> say that digital computers nonarbitrarily have a von Neumann
> functional architecture by offering a theory of the functions of
> artifacts according to which such functions are assigned relative to
> the intentions of designers or users. That might make sense of the
> intuition that commercial digital computers "intrinsically" are digital
> computers, though it wouldn't answer John's objection, because it still
> appeals to intentions in the function-assignment. But if one argued
> that there are natural functions, that biological systems
> nonarbitrarily instantiate one function rather than another, then the
> symbolists could claim (as I think Fodor and Pylyshyn did) that certain
> biological systems are naturally organized to compute with an
> architecture similar to that of digital computers. John denies that
> there are natural functions. However, for his "observer-relativity" and
> "no intrinsic computations" arguments to have bite, he must do more
> than simply assert that there are no natural functions. Indeed, for the
> purpose of arguing against the computationalists, it would seem that he
> should offer them a choice between no-natural-functions and a trivial
> theory, and natural-functions but an empirically false theory.
>
> Best, Gary

Gary, thanks for your comments. Although I can't demonstrate it formally (but then a lot of this is informal and nondemonstrative), I suspect that there is a homology between a nonarbitrary sense in which a system is a computer and (the implementation of) a nontrivial computation, both resting on similar combinatorial, complexity-theoretic considerations. Coherent, systematic alternative interpretations are hard to come by, if at all, precisely because fitting an interpretation to a physical system is not arbitrary. There is, after all, a difference between a random string of symbols (typed by a chimp, say) that is (PERHAPS, and surely tortuously) interpretable as a Shakespearean play and a nonrandom string of symbols that is readily interpretable as a Shakespearean play. The complexity-theoretic difference would be that the algorithm you would need in order to interpret the random string as Shakespeare would be at least as long as the random string itself, whereas in the case of the real Shakespeare it would be orders of magnitude shorter. Moreover, one epsilon of perturbation in the random string, and you're back to square one insofar as its interpretability as Shakespeare is concerned. Not so with nonrandom strings and their interpretations. So interpretations the path to which is NP-complete hardly seem worth more attention than the possibility that this message could be interpreted as Grand Unified Field Theory.
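The complexity-theoretic contrast can be made concrete with a crude, hedged stand-in (my sketch, not Harnad's): Kolmogorov complexity is uncomputable, but off-the-shelf compression gives a rough upper bound on description length. Structured text admits a description much shorter than itself; a random string of the same length does not. The sample texts below are invented for illustration.

```python
import random
import zlib

# Structured text: highly regular, so a short description exists.
structured = ("Shall I compare thee to a summer's day? "
              "Thou art more lovely and more temperate: ") * 8

# Random string of the same length over a comparable alphabet.
rng = random.Random(0)
alphabet = "abcdefghijklmnopqrstuvwxyz ?:'."
random_string = "".join(rng.choice(alphabet) for _ in range(len(structured)))

def compressed_ratio(s: str) -> float:
    """Compressed size / raw size: a crude proxy for description length."""
    raw = s.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

print(f"structured: {compressed_ratio(structured):.2f}")    # far below 1
print(f"random:     {compressed_ratio(random_string):.2f}")  # much higher
```

The point is only directional: the "algorithm" needed to regenerate the random string is nearly as long as the string itself, whereas the structured text compresses dramatically, and perturbing the random string by an epsilon destroys whatever tortuous interpretation had been fitted to it.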

I continue to think that we should be able to specify what (nontrivial) computation and computers are just as observer-independently as we can specify what flying, birds and airplanes are. The only way the observer ever got into it in the first place was because a nontrivial symbol system must be able to bear the weight of a coherent systematic interpretation, which is something an observer might happen to want to project onto it.

Best wishes,

Stevan

----------


> Date: Mon, 23 Mar 92 00:34:56 -0500
> From: hatfield@linc.cis.upenn.edu (Gary Hatfield)
>
> The claim that there are parsimonious and unpars. ways to "make sense"
> of input-output relations has long seemed to provide support for the
> belief that objective features of particular physical systems
> (computers, organisms) nonarbitrarily constrain content-ascriptions to
> those systems (e.g., Haugeland's Intro to _Mind Design_). The problem
> is that such arguments typically take as given some non-physical
> description of the inputs and outputs. Thus, Haugeland starts with
> "tokens," and in your reply to me you start with a "string of symbols"
> typed by a monkey. That doesn't speak to Searle's (or my) concerns, for
> one wants to know by what (non-observer-dependent) criteria you
> individuated the symbols. You need to argue that the physical state
> instantiating the symbols in the case of an actual Shakespeare play
> (including, say, the whole booklet; you aren't "given" symbol
> boundaries) has an internal coherence lacking in a given
> monkey-produced text *from a strictly physical point of view*. Here
> intuitions may diverge. But in any case, it seems to me that the
> defenders of non-trivial computation and non-arbitrary interpretation
> have the burden of starting their arguments from physical descriptions,
> without taking symbols for free.

Gary, Two-part reply: First, the bit-string generated by the black-white levels on the surface of the pages of a book looks like a reasonable default encoding (then, together with a character-recognition algorithm and an English parser the string is parsimoniously reduced to a non-random one). But if that default option strikes you as too "observer-dependent," then pull the observer's MIND out of it entirely and simply allow the CAUSAL interaction -- between the book's surface optical structure (as demonstrably distinct from, say, the molecular structure of its ink) and organisms' visual transducers -- to serve as the objective basis for "picking out" the default encoding.
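The default encoding can be sketched in a few lines (the grayscale values and threshold here are invented for illustration): thresholding the black/white levels of a page yields a canonical bit-string without any interpreter choosing symbol boundaries.

```python
# One scanned row of a page: 0 = black ink, 255 = white paper.
page_row = [12, 240, 235, 8, 10, 250, 245, 9]

THRESHOLD = 128  # assumption: a mid-scale cut separates ink from paper
bits = "".join("1" if px >= THRESHOLD else "0" for px in page_row)
print(bits)  # "01100110" -- fixed by the surface's optical structure
```

The encoding is determined by the physical contrast on the page (the same contrast the visual transducers respond to), not by any semantic interpretation of what the marks mean.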

This uses only the fact that these symbols are parts of "dedicated" systems in the world -- not that any part of the system has a mind or interprets them -- in order to do the nonarbitrary parsing (the NP-completeness of "rival" reductions takes care of the rest). This is no different from the isolation of an experimental system in physics -- and it leaves computation as mind-independent as physics.

And if you rejoin that physics has the same "observer-dependence" problem (perhaps even citing quantum mechanical puzzles as evidence [which I would reject, by the way]), my reply is that computation is in good company then, and computers are no more or less of a natural kind than stones, birds or electrons.

Stevan Harnad

------------------


> Date: Sun, 22 Mar 92 21:58:42 HST
> From: Herbert Roitblat
>
> HR: You're right we do disagree on a lot. But, then, I knew that when I
> signed on to this discussion. What surprised me, however, is that I
> misunderstood what we disagreed about even more than I thought.
> I do not understand the following:
>
> >SH: implements a nontrivial symbol system, and what makes a symbol system
> >nontrivial is that it can bear the weight of one systematic
> >interpretation (the standard one, and in a few special cases, some
> >provable nonstandard ones).
>
> HR: I suspect you mean something special by "one systematic
> interpretation," but I do not know what you mean.

As an example, consider arithmetic: the scratches on paper, consisting of "0", "1", "+" etc., the axioms (strings of scratches) and the rules of inference (applying to the scratches and strings of scratches). That's a formal symbol system. The scratches on paper (symbol tokens) are manipulated only on the basis of their shapes, not what they "mean" (e.g., "0" is an arbitrary shape, and we have rules about what we can do with that shape, such as "0 + 1 = 1 + 0").

That's the symbol system, and what we mean by numbers, equality, etc., is the systematic interpretation that can be PROJECTED onto those scratches on paper, and they will bear the weight of that interpretation. The very same scratches can also be given a few provably coherent "nonstandard" interpretations, but in general, rival interpretations simply won't fit. For example, you cannot take the same set of scratches and interpret "=" as addition and "0" as equality and still come up with a coherent interpretation.

The same is true with the Sonnets of Shakespeare as a set of symbols interpretable as English, vs some other putative systematic interpretation of the very same scratches on paper.


> >SH: I think a grounded symbol system is one in which the interpretations
> >of its symbols do not just square with what is in the mind of us
> >outside interpreters, but also with what the system does in the real
> >world.
>
> HR: I agree with most of this, but I think that it does not matter whether
> the system's symbols are interpretable or not, thus it does not matter
> whether they square with our expectations. I entirely endorse the
> idea that what is maximally important is what the system does.

It does matter for this discussion of what computation is, because computation is concerned only with systematically interpretable symbol systems, not random gibberish.


> >SH: The nontrivial grounded symbol system that interests me is the robot
> >that can pass the Total Turing Test (behave indistinguishably from
> >ourselves).
>
> HR: This is one source of our disagreement. I agree that the Turing test
> establishes a very high level of nontriviality, but I think that it is
> too high a level to be useful at this stage (a strategic issue) and
> is so high a level that it excludes much of what I find interesting.
> I would be happy with a system that MERELY (!?) passed the Turing test
> to the level of an ant or a rat or something like that. Why not just
> a gecko? I don't think you mean that only humans are nontrivial
> computers. I cannot hope to live up to such standards in order to
> enter the discussion. I am still basically a comparative psychologist
> with interests in psycholinguistics.
>
> By the way, "trivial" is a conceptually dangerous term. When we fail
> to understand something it is nontrivial. Once we understand it, it
> becomes trivial.

There is more than one Turing Test (TT) at issue, and the differences between them are critical. The standard TT is purely symbolic (symbols in, symbols out) and calls for indistinguishability in all symbolic performance only. The Total Turing Test (TTT) I have proposed in its place (Harnad 1989, 1991) calls for indistinguishability in all symbolic AND robotic (sensorimotor interactions with the world of objects and events) performance. A lot rides on the TT vs. TTT distinction.

Nonhuman species TTT's would of course be welcome, and empirically prior to human TTT's, but unfortunately we lack both the ecological knowledge and the intuitive capacity (based on shared human homologies) to apply the TTT confidently to any species but our own. (This doesn't mean we can't try, of course, but that too is not what's at issue in this discussion, which is about what COMPUTATION is.)

I didn't say humans were computers, nontrivial or otherwise (they might be, but it seems to me they're also a lot of other things that are more relevant and informative). The question was about what COMPUTERS are. And I think "nontrivial" is a very useful term, a reasonable goal for discussion, and does not merely refer to what we have already understood.


> >SH: We disagree even more on categories. I think the Roschian view you
> >describe is all wrong, and that the "classical" view -- that categories
> >have invariant features that allow us to categorize in the all-or-none
> >way we clearly do -- is completely correct.
> >And the categories of interest
> >are all-or-none categories like "bird," not graded ones like "big."
>
> HR: This is a fundamental disagreement. It seems to me that your intent
> to focus on the most clearly classical cases derives from your belief
> that classical cases are the paradigm. Variability from the classical
> case is just "performance error" rather than competence. Am I correct
> on this last point?

Incorrect. I focus on categorical (all-or-none) categories because I think they, rather than graded categories, form the core of our conceptual repertoire as well as its foundations (grounding).


> HR: Bird is no less trivial than mammal, but we are faced with the
> question of whether monotremes are mammals. Living things are an all
> or none category. Are viruses living things? The question is not
> whether you believe viruses to be living things, you could be
> mistaken. Are they living things in the Platonic sense that classical
> theory requires? Bachelor is another classic category. Is a priest a
> bachelor? Is someone cohabiting (with POSSLQ) a bachelor? Is an 18
> year old unmarried male living alone a bachelor? Is a homosexual male
> a bachelor? What are the essential features of a bachelor and can you
> prove that someone either does or does not have them?

Herb, I've trodden this ground many times before. You just said before that you were a comparative psychologist. The ontology of the biosphere is hence presumably not your data domain, but rather the actual categorizing capacity and performance of human beings (and other species). It does not matter a whit to the explanation of the mechanisms of this performance capacity what the "truth" about monotremes, viruses or priests is. Either we CAN categorize them correctly (with respect to some Skinnerian consequence of MIScategorization, not with respect to some Platonic reality that is none of our business as psychologists) or we cannot. If we can, our success is all-or-none: We have not said that cows are 99% mammals whereas monotremes are 80% mammals. We have said that cows are mammals. And monotremes are whatever the biological specialists (hewing to their OWN, more sophisticated consequences of MIScategorization) tell us they are. And if we can't say whether a priest is or is not a bachelor, that too does not make "bachelor" a graded category. It just means we can't successfully categorize priests as bachelors or otherwise!

We're modelling the cognitive mechanisms underlying our actual categorization capacity; we're not trying to give an account of the true ontology of categories. Nor is it relevant that we cannot introspect and report the features (perfectly classical) that generate our success in categorization: Who ever promised that the subject's introspection would do the cognitive theorist's work for him? (These are all lingering symptoms of the confused Roschian legacy I have been inveighing against for years in my writings.)


> The classic conceptualization of concepts is tied closely to the
> notion of truth. Truth can be transmitted syntactically, but not
> inductively. If features X are the definition of bachelor, and if
> person Y has those features then person Y is a bachelor. One problem
> is to prove the truth of the premises. Do you agree that the symbol
> grounding problem has something to do with establishing the truth of
> the premises?

Nope. The symbol grounding problem is the problem that formal symbol systems do not contain their own meanings. They must be projected onto them by outside interpreters. My candidate solution is robotic grounding; there may be others. Leave formal truth to the philosophers and worry more about how organisms (and robots) actually manage to be able to do what they can do.


> The truth of the premises cannot be proved because we have no
> infallible inductive logic. We cannot prove them true because such
> proof depends on proving the truth of the implicit ceteris paribus
> clause, and we just established that proof of a premise is not possible.
> We cannot be sure that our concepts are correct. We have no proof
> that any exemplar is a member of a category. I think that these
> arguments are familiar to you. The conclusion is that even classic
> categories have only variable-valued members, even they cannot truly
> be all-or-none.

The arguments are, unfortunately, familiar mumbo-jumbo to me. Forget about truth and ontology and return to the way organisms actually behave in the world (including what absolute discriminations they can and do make, and under what conditions): Successful (TTT-scale) models for THAT are what we're looking for. Induction and "ceteris paribus" have nothing to do with it!


> I think, therefore, that we are not justified in limiting discussion
> to only those categories that seem most clear, but that we would be
> served by developing a theory of conceptual representation that did
> not depend on artificial demarcations. I argue for a naturalistic
> theory of categories that depends on how people use conceptual labels
> (etc.). I argue that such use depends on a certain kind of
> computation, that given enough time, people could endorse a wide range
> of categorizations. The range of categorizations that they can
> endorse is the range of dimensions for which they represent the
> concept as having a value. My hunch is that the number of these
> dimensions that can be used at any one time for performing a given
> task is small relative to the number of dimensions that they know
> about for the concept. You seem more interested in characterizing
> the range of dimensions along which people can use their concept; I am
> more interested in the way in which they select those dimensions for
> use at the moment.

I'm interested in what mechanisms will actually generate the categorization capacity and performance of people (and animals). My own models happen to use neural nets to learn the invariants in the sensory projection of objects that will allow them to be categorized "correctly" (i.e., with respect to feedback from the consequences of MIScategorization). The "names" of these elementary sensorimotor categories are then grounded elementary symbols that can enter into higher-order combinations (symbolic representation), while inheriting the analog constraints of their grounding.
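To make the shape of such a mechanism concrete, here is a minimal sketch (mine, not Harnad's actual model; the two-feature "sensory projections" and the category names "edible"/"inedible" are invented for illustration) of a perceptron-style learner that extracts an invariant from its inputs using only corrective feedback from miscategorization:

```python
# A minimal sketch (NOT Harnad's actual model): a perceptron-style
# learner that extracts an invariant from hypothetical two-feature
# "sensory projections", driven only by feedback from the
# consequences of MIScategorization.

def train(samples, epochs=50, lr=0.1):
    """samples: list of ((x1, x2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = label - pred          # nonzero only on MIScategorization
            w[0] += lr * err * x1       # corrective feedback adjusts the
            w[1] += lr * err * x2       # invariant being extracted
            b += lr * err
    return w, b

def categorize(w, b, x):
    """The learned 'name' of the category, as a grounded label."""
    return "edible" if w[0]*x[0] + w[1]*x[1] + b > 0 else "inedible"

# Hypothetical data: membership depends on feature 1 only;
# feature 2 is irrelevant variation the learner must ignore.
data = [((0.9, 0.2), 1), ((0.8, 0.7), 1), ((0.1, 0.6), 0), ((0.2, 0.1), 0)]
w, b = train(data)
print(categorize(w, b, (0.85, 0.4)))   # "edible"
```

The point of the sketch is only that the invariant is found by the mechanism, not reported by the subject's introspection: after training, the learner ends up weighting the relevant feature and discounting the irrelevant one.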


> Finally, I have been thinking about symbol grounding in other
> contexts. Exactly what symbols do you think are grounded in human
> representation? It cannot be letters because no semantics is
> attributed to them. It cannot be words, because we understand
> paraphrases to mean the same thing, the same word has multiple
> meanings, etc. It cannot be sentences, because we are productive in
> our use of sentences and could utter an indefinite number of them.
> The symbols would have to be internal, variably mappable onto surface
> symbols, and as such, not communicable with great confidence to other
> individuals. You would argue (yes?) that they are finite and discrete,
> but highly combinable. You would not argue, I think, that they get
> their meaning through their reference to some specifiable external
> object or event (i.e., you would not get into the Golden Mountain
> conundrum). Is symbol grounding nothing more than whatever relationship
> allows one to avoid unpleasant consequences of misunderstanding and
> misclassification (your allusion to Skinner)?

I don't know what the ground-level elementary symbols will turn out to be; I'm just betting they exist -- otherwise it's all hanging by a skyhook. Nor do I know the Golden Mountain conundrum, but I do know the putative "vanishing intersections" problem, according to which my approach to grounding is hopeless because not even sensory categories (not to mention abstract categories) HAVE any invariants at all. My reply is that this is not an a priori matter but an empirical one, and no one has yet tried to see whether bottom-up sensory grounding of a TTT-scale robot is possible. They've just consulted their own (and their subjects') introspections on the matter. I would say that our own success in categorization is some inductive ground for believing that our inputs are not too underdetermined to provide an invariant basis for that success, given a sufficiently powerful category-learning mechanism.


> By the way, I am sorry for constantly putting words into
> your mouth, but it seems to me to be an efficient way of finding out
> what you mean.

In the event, it probably wasn't, but I managed to say what I meant anyway. I have an iron-clad policy of not sending people off to look up chapter and verse of what I've written on a topic under discussion; I willingly recreate it on-line from first principles, as long as my interlocutor does me the same courtesy -- and you haven't sent me off to chapter and verse either. I find this policy easy enough to be faithful to, because I don't have any ideas that cannot be explained in a few paragraphs (nothing longer than a 3-minute idea). Nor have I encountered many others who have longer ideas (though I have encountered many others who have been longer-winded or fuzzier about describing them).


> >SH: Introspections about how we categorize are irrelevant (did we expect
> >introspection to do our theoretical work for us, as cognitive
> >theorists?), as are reaction times and typicality judgments.
>
> HR: Introspections play no role in my conceptualization. You must have me
> confused with someone else. I am not even sure that I am conscious,
> let alone capable of introspection.

Regarding the non-role introspections play in your conceptualization, see what you asked me about the essential features of bachelors above. Why should I be able to introspect essential features, and what does it prove if I can't? All that matters is that I can actually sort and label bachelors as I do: Then finding the features I use becomes the THEORIST's problem, not the subject's.

I would suggest, by the way, that you abandon your uncertainty about whether anybody's home inside you, experiencing experiences (as I confidently assume there is in you, and am certain there is in me). Cartesian reasons alone should be sufficient to persuade you that experiencing uncertainty about whether there is somebody home in your own case is self-contradictory, because "uncertainty" or "doubt" is itself an experiential state.


> If this discussion is heading off in a direction irrelevant to your
> interests, we can wait for another more opportune time. I think that
> our discussion has taken roughly this course: What is computation?
> Computation is either any regular state change (my position) or it is
> the set of nontrivial operations involving a grounded symbol set
> (fair?). What is a grounded symbol set?
>
> Aloha. Herb

The discussion is about what computers/computation are, and whether there is any principled way to distinguish them from what computers/computation aren't. In one view (not mine), what is a computer is just a matter of interpretation, hence everything is and isn't a computer depending on how you interpret it. In my view, one CAN distinguish computers -- at least those that do nontrivial computation -- on a complexity-theoretic basis, because systematic interpretations of arbitrary objects are as hard to come by as chimpanzees typing Shakespeare.

Now once we have settled on what computers/computation are (namely, nontrivial symbol manipulation systems), we still face the symbol grounding problem: These nontrivially interpretable systems still do not "contain" their own interpretations. The interpretations must be projected onto them by us. A grounded symbol system is one whose robotic performance in the real world of objects and events to which its symbols can be interpreted as referring squares systematically with the interpretation. The symbol interpretations are then grounded in its robotic performance capacity, not just in our projections.

References (nonobligatory) follow.

Cheers, Stevan

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172.

Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context Springer Verlag.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press)

----------------------

Date: Sun, 29 Mar 92 18:39:31 EST
From: "Stevan Harnad"


> Date: Mon, 23 Mar 1992 15:35:43 -0500 (EST)
> From: Franklin Boyle
> To: "Stevan Harnad"
> Subject: Re: What is computation?
>
> The article by Haugeland contains the following text which is relevant to the
> discussion, though it wasn't intended to address the SS issue at hand:
>
> Suppose that, instead of giving the analogy [interpretation],
> I had just spelled out the rules, and then invited you to
> discover the interpretation. That would be a cryptarithmetic
> puzzle, or, more generally, a code cracking assignment. The
> principle of all such efforts, from deciphering ancient
> inscriptions to military cryptography, is finding a consistent reading
> such that the results reliably *make sense* [footnote to Quine].
> This requirement is by no means trivial or easy to meet; there
> are, for instance, no semantic interpretations attached to the
> chess or checkers systems. Hence, though an interpretation is
> never properly part of a formal system, the structure of a system
> strongly constrains possible interpretations; in other words, the
> relation between a formal system and a viable interpretation is
> not at all arbitrary.
> -- (p27, "Artificial Intelligence and the Western Mind,"
> in _The Computer and the Brain: Perspectives on
> Human and Artificial Intelligence_, J.R. Brink &
> C.R. Haden (eds).)
>
> He is speculating about original meaning (rather than the difference
> between trivial and non-trivial SS's) and the fact that computers
> represent a technological solution to the "Paradox of Mechanical
> Reason" ["either meanings matter to the manipulations, in which case
> reasoning is not really mechanical (it presupposes an homunculus); or
> else meanings do not matter, in which case the machinations are not
> really rational (they are just some meaningless "machine-like"
> interactions" (p23)] because they "take care of the formalism" so that
> "any meaning that can be taken care of by taking care of the rules will
> be *automatically* taken care of by the computer -- without any paradox."
> (p27).
>
> -Frank Boyle

Frank, thanks for the passage. As I noted earlier, not only I but also Dan Dennett came up with something like this independently. But I would stress that the uniqueness (or near-uniqueness, modulo duals) of the standard interpretation of a given symbol system, remarkable though it is (and this remarkable property is at the heart of all of formal mathematics), is not enough to make that interpretation intrinsic to the system: If the right interpretation is projected onto the system, the system will square systematically with it, but the projection will still be from outside the system. That's good enough for doing formal maths, but not enough for modelling a mind. For the latter you need AT LEAST a grounded symbol system.
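The point about interpretations being hard to come by can be illustrated with a toy cryptarithmetic puzzle of the sort Haugeland mentions. This is my own sketch (the puzzle TO + GO = OUT is just a convenient small example, not one from the exchange): a brute-force search over all digit-assignments shows how rare a systematic "reading" is.

```python
# Sketch of the "cryptographer's constraint": of the thousands of
# candidate readings of a small symbol system, only a tiny number
# (here, exactly one) square systematically with the arithmetic.
from itertools import permutations

def interpretations(words, total):
    """Return all digit-assignments under which sum(words) == total."""
    letters = sorted(set("".join(words) + total))
    firsts = {w[0] for w in words + [total]}
    found = []
    for digits in permutations(range(10), len(letters)):
        code = dict(zip(letters, digits))
        if any(code[f] == 0 for f in firsts):
            continue  # disallow leading zeros
        value = lambda w: int("".join(str(code[c]) for c in w))
        if sum(value(w) for w in words) == value(total):
            found.append({c: code[c] for c in letters})
    return found

# "TO + GO = OUT": of the 10*9*8*7 = 5040 candidate assignments,
# exactly one makes the arithmetic come out true (21 + 81 = 102).
solutions = interpretations(["TO", "GO"], "OUT")
print(len(solutions), solutions[0])
```

The search space grows factorially with the number of symbols, which is the intuition behind a complexity-based version of the constraint: systematic rival interpretations of a nontrivial symbol system are as hard to come by as the one standard interpretation was.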

Stevan Harnad

Date: Tue, 31 Mar 92 19:38:10 EST
From: "Stevan Harnad"
To: chrisley@oxford.ac.uk
Subject: Re: What is computation?


> From: Ronald L Chrisley
> Date: Wed, 25 Mar 92 16:13:22 GMT
> To: harnad@Princeton.EDU
> Cc: chrisley@oxford.ac.uk, dave@cogsci.indiana.edu
>
> Stevan:
>
> RC: Here are some of my thoughts on the first part of your recent message.
> Could you provide the context for this exchange between you and
> Searle? Did this dialogue take place on the symbol-grounding list?

SH: Ron, no, the "What is Computation" discussion was actually initiated by a reply by John Searle to a passage from a 4-way "skywriting" exchange that will be published in the journal Minds and Machines under the title "Virtual Symposium on the Virtual Mind." The authors are Pat Hayes, Don Perlis, Ned Block and me. The passage in question was by Pat Hayes, in which he cited John Searle as claiming his wall was a computer.

I will send you the full exchange separately. Meanwhile, you wrote:


> > Date: Wed, 18 Mar 92 08:12:10 -0800
> > From: searle@cogsci.Berkeley.EDU (John R. Searle)
> > To: harnad@princeton.edu (Stevan Harnad)
> >
> > Subject: Re: "My wall is a computer"
> >
> > JS: Stevan, I don't actually say that. I say that on the standard Turing
> > definition it is hard to see how to avoid the conclusion that
> > everything is a computer under some description. I also say that I
>
> RC: No, actually Searle argues that the standard notion of computation
> implies that everything is *every* computer. Thus, he claims that his
> wall could be seen as implementing Wordstar. But of course, there are
> good reasons for ruling out such bizarre interpretations: for one,
> they're not causal.
>
> > JS: think this result can be avoided by introducing counterfactuals and
> > causation into the definition of computation. I also claim that Brian
> > Smith, Batali, etc. are working on a definition to avoid this result.
> > But it is not my view that the wall behind me is a digital computer.
>
> RC: Nor is it anyone else's view. That's because the standard view is
> that the physics *does* constrain computational interpretations. If
> it isn't the explicit standard view, it is implicit in the notion of a
> Turing *machine*. And if Searle wants to contest that it isn't even
> implicit, then his arguments only establish the superiority of a
> theory of computer science that is physically grounded, *not* the
> incoherence of the notion that a particular form of computation is the
> essence of mind.

SH: If I may interpolate some commentary: I agree about the physical grounding as picking out this machine running WORDSTAR as a privileged interpretation. I would add only two remarks.

(1) I think (though I can't prove it) that there is probably a complexity-based way of picking out the privileged interpretation of a system as a computer running a program (rather than other, more arbitrary interpretations) based on parsimony alone.

(2) This discussion of what a computer is does not necessarily have any bearing on the question of what the mind is, or whether the brain is a computer. One could argue yes or no that computers/computation pick out a nonarbitrary kind. And one can independently argue yes or no that this has any substantive bearing on what kind of system can have a mind. (E.g., I happen to agree with Searle that a system will not have a mind merely because it implements the right computer program -- because, according to me, it must also be robotically grounded in the world -- but I disagree that there is no nonarbitrary sense in which some systems are computers and others are not. I.e., I agree with him about [intrinsic] semantics but not about syntax.)


> > JS: I think the big problem is NOT universal realizability. That is
> > only a SYMPTOM of the big problem. The big problem is: COMPUTATION IS AN
> > OBSERVER RELATIVE FEATURE. Just as semantics is not intrinsic to syntax
> > (as shown by the Chinese Room) so SYNTAX IS NOT INTRINSIC TO PHYSICS.
> > The upshot is that the question "Is the wall (or the brain) a
> > digital computer?" is meaningless, as it stands. If the question is "Can
> > you assign a computational interpretation to the wall/brain?" the
> > answer is trivially yes. You can assign an interpretation to anything.
>
> RC: This kind of equivocation is the reason why I delineated 3 ways in
> which one might understand the claim "the brain is a computer".
>
> One is that it admits of any computational description at all. If
> this were the extent of the claim for cognitive science, then it is
> indeed unenlightening, since even a stone could be seen as being a
> Turing Machine with only one state.
>
> A second way of interpreting the claim is that there is a class of
> Turing Machine descriptions that are sufficiently complex that we
> would consider them as descriptions of computers, more conventionally
> understood, and that the brain, as opposed to a stone or Searle's wall
> (they just don't have the right properties of plasticity, input/output
> connections, causally related internal states, etc), admits of one of
> these descriptions.
>
> A third way of understanding the cognitivist claim is: the brain
> admits of a computational description, and anything that has a mind
> must also admit of a similar computational description. This is not
> vacuous, since most things, including not only stones and Searle's
> wall, but also bona fide computers, will not admit of such a
> description.

SH: It seems to me that everything admits of a trivial computational description. Only things with a certain kind of (not yet adequately specified) complexity admit of a nontrivial computational description (and those are computers). Now things that have minds will probably also admit of nontrivial computational descriptions, hence they too will be computers, but only in a trivial sense insofar as their MENTAL capacities are concerned, because they will not be ONLY computers, and their noncomputational robotic properties (e.g., transducers/actuators and other analog structures and processes) will turn out to be the critical ones for their mental powers; and those noncomputational properties will at the same time ground the semantics of the system's symbolic states.


> > JS: If the question is: "Is the wall/brain INTRINSICALLY a digital
> > computer?" the answer is: NOTHING is intrinsically a digital computer.
> > Please explain this point to your colleagues. They seem to think the
> > issue is universal realizability. Thus Chrisley's paper, for example.

SH: I unfortunately can't explain this for Searle, because I happen to disagree with him on this point, although I do recognize that no one has yet come up with a satisfactory, principled way of distinguishing computers from noncomputers...


> RC: I think that we can make sense of the notion of something
> intrinsically being a digital computer. Searle's argument that we
> cannot seems to be based on the claim that anything can be seen as a
> computer. In that sense, the issue for Searle *is* universal
> realizability. That is, Searle seems to be claiming that since the
> property *digital computer* can be realized by any physical system,
> then nothing is intrinsically a digital computer, and so viewing the
> brain as one will have little value.
>
> I disagree, of course, and on several counts. For one thing, on the
> second way of understanding the claim that something is a computer,
> the property *digital computer* is not universally realizable. But
> even if everything were *some* kind of digital computer (on the first
> or second ways of understanding), that would not invalidate the
> computational approach to understanding the mind, since that approach
> seeks to understand what *kind* of computation is characteristic of
> the mental (the third way). In fact, it would be of some use to
> cognitive science if Searle could show that everything is some kind of
> computer, because there are some critics of cognitive science who
> argue that the brain cannot be viewed as a computer at all (Penrose?).
>
> Searle's only options are to endorse the coherence of the cognitivist
> claim (I am not claiming that it has been shown to be true or false,
> just that it is coherent and non-trivial), find another argument for
> its incoherence, or deny my claims that causality is relevant to
> computational interpretation, thus suggesting that cognitivism is
> vacuous since every physical system can be interpreted as being every
> kind of computer. And even if he argues that causality is irrelevant
> to a *particular* style of computational interpretation, he has to
> show that it is irrelevant to any notion of computation before he can
> rule out any computational approach to mind as being incoherent. Put
> the other way around, he would have to show that a notion of
> computation that takes causality seriously would ipso facto not be a
> notion of computation. This seems impossible. So it looks like
> Searle must try to reject cognitivism some other way, or accept it.
>
> I tried to make all this clear in my paper. Due to publisher's
> delays, there are still chances for revisions, if anyone would like to
> suggest ways that I could make these points more clear.
>
> One last thing: given the reluctance that some AI/CompSci/CogSci
> people have to taking causality, connections to the world, etc.
> seriously, I welcome and encourage Searle's points in some sense. I
> just wish he would see his arguments as establishing one type of
> cognitivism (embodied) to be preferable to another (formal).
>
> Much of what people do in AI/CompSci/CogSci is the former, it's just
> their theories of what they are doing that are the latter. I think
> the point of Searle's paper is not "Cognitivism is incoherent" but
> rather "If you want to be a cognitivist, your theories better take
> seriously these notions of causality, connections to the world, etc.
> that are implicit in your practice anyway".
>
> Perhaps Searle's points, cast in a different light, would not give
> people reason to abandon cognitivism, but would instead show them the
> way toward its successful development. As I said in my paper, "Searle
> has done us a service".
>
> Ronald L. Chrisley New College Oxford OX1 3BN

SH: I don't think you'll be able to get computer scientists or physicists excited about the factor of "causality" in the abstract, but IMPLEMENTATION is certainly something they think about and have views on, because a program is just an abstraction until and unless it's implemented (i.e., realized in a dynamical physical ["causal"] system -- a computer). But there's still not much room for a convergence of views there, because good "symbolic functionalists" hold that all the particulars of implementation are irrelevant -- i.e., that the same program can be implemented in countless radically different ways with nothing in common except that they are all implementations of the same computer program. Hence the right level to talk about is again the purely symbolic (computational) one. I happen to disagree with these symbolic functionalists insofar as the mind is concerned, but not because I think there is something magic about the "causality" of implementation; rather, I think a symbol system is just as ungrounded when it's implemented as when it's just scratches on static paper. The mere implementation of a program on a computer is the wrong kind of "causality" if a mind is what you're interested in implementing (or even an airplane or a furnace). What's needed is the robotic (TTT) power to ground the interpretations of its internal symbols in the robot's interactions with the real world of objects, events and states of affairs that its symbols are interpretable as being "about" (TTT-indistinguishably from our own interactions with the world). (I list some of the publications in which I've been trying to lay this out below.)

Stevan Harnad

------------------------------------------------------------

Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1990b) Against Computational Hermeneutics. (Invited commentary on Eric Dietrich's Computationalism) Social Epistemology 4: 167-172.

Harnad, S. (1990c) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321-327.

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context Springer Verlag.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press)

Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (1992) Learned Categorical Perception in Human Subjects: Implications for Symbol Grounding. Proceedings of Annual Meeting of Cognitive Science Society (submitted)

Harnad, S. Hanson, S.J. & Lubin, J. (1992) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. Proceedings of Annual Meeting of Cognitive Science Society (submitted)

Harnad, S. (1993, in press) Icon, Category, Symbol: Essays on the Foundations and Fringes of Cognition. Cambridge University Press.

---------------------------------------------


> Date: Tue, 31 Mar 1992 21:41:36 PST
> From: Pat Hayes
> Subject: Re: What is computation?
> To: Stevan Harnad
> Cc: chrisley@oxford.ac.uk
>
> Stevan-
>
> >SH: It seems to me that everything admits of a trivial computational
> >description.
>
> I have heard others say similar things, and Searle obviously believes
> something similar. Can you explain what you mean by this, and why you
> believe it? I cannot think of any sensible interpretation of this
> remark that makes it true. -- Pat

Pat, I think the trivial case is covered by Church's Thesis and Turing Equivalence. Consider a stone, just sitting there: it has one state; let's call it "0." Trivial computational description. Now consider a door: it has two states, open and shut; let's call one "0" and the other "1." Trivial computational description.

I think that's pretty standard, and has to do with how elementary the notion of computation is: it can trivially capture, among other things, every static or simple dynamic description of a physical system. Nontrivial computation, on the other hand, is where Searle and I diverge. I think that if someone can define nontrivial computation in a principled way, it will separate computers from noncomputers (the way trivial computation does not).
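A sketch of the contrast (my own illustration, not part of the original exchange): labeling the states of a stone or a door costs nothing, whereas even the smallest genuine machine has a trajectory that depends on the symbols fed to it.

```python
# The "trivial computational description" move: any system with
# identifiable states can be given symbolic labels, but nothing in
# its behavior depends on them.
stone = {"states": ["0"]}            # one state: trivially "computes"
door = {"states": ["0", "1"]}        # open/shut: also trivial

# Contrast: a (still tiny) two-state machine whose NEXT state is
# determined by the symbols it receives -- it flips on input "a"
# and stays put on input "b".
transition = {("0", "a"): "1", ("1", "a"): "0",
              ("0", "b"): "0", ("1", "b"): "1"}

def run(state, inputs):
    for symbol in inputs:
        state = transition[(state, symbol)]
    return state

print(run("0", "aab"))   # "0": the symbols, not just the labels, matter
```

The stone and the door satisfy the labeling exercise, but only the machine's behavior is a function of its symbols; that difference is roughly what Hayes's "behavior determined by the symbols stored in it" is pointing at.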

Unfortunately, I do not have such a principled criterion for nontrivial computation except that (1) I think it will have a complexity-theoretic basis, perhaps related to NP-Completeness of the search for systematic rival interpretations of nontrivial symbol systems that differ radically from the standard interpretation (or its provable "duals"); and (2), even more vaguely, I feel the difference between trivial and nontrivial computation will be all-or-none rather than a matter of degree.

Earlier in this discussion it was pointed out that both Haugeland and Dennett (and now apparently McCarthy before them, and perhaps even Descartes -- see below) have also proposed a similar "cryptographer's constraint" on the nonarbitrariness of a systematic interpretation of a nontrivial symbol system (like natural language).

Stevan Harnad


> Date: Sun, 29 Mar 1992 21:40 EDT
> From: DDENNETT@PEARL.TUFTS.EDU
> Subject: Re: Cryptographer's Constraint
> To: harnad@Princeton.EDU
>
> I'm not sure when I FIRST discussed the cryptographers' constraint and I
> don't remember whether John McCarthy spoke of it before I did, but probably,
> since in True Believers (1981, reprinted in THE INTENTIONAL STANCE, 1987)
> I cite McCarthy 1979 when I mention it (p.29fn in TIS). By the way, the
> point can also be found in Descartes!!
> DAN DENNETT

Date: Wed, 1 Apr 92 09:09:47 EST
From: "Stevan Harnad"
To: hayes@sumex-aim.stanford.edu


> Date: Tue, 31 Mar 1992 23:22:44 PST
> From: Pat Hayes
>
> Stevan-
>
> OK, I now see what you mean: but what makes you think that calling the
> state of the stone '0' has anything to do with computation? A computer
> is a mechanism whose behavior is determined (in part) by the symbols
> stored in it. But the behavior of the stone and the door are not
> influenced in any way by the 'symbols' that this exercise in
> state-naming hypothesises. So they aren't computers.
>
> Perhaps I am reaching towards what you are calling nontrivial
> computation: but it might be less confusing to just call this
> computation, and call 'trivial computation' something else, might it
> not? What motivates this trivialisation of the computational idea?
>
> Pat

Pat, alas, what "trivializes" the computational idea is Goedel, Turing, Church, Post, von Neumann, and all the others who have come up with equivalent formulations of what computation is: It's just a very elementary, formal kind of thing, and its physical implementation is equally elementary. And by the way, the same problem arises with defining "symbols" (actually, "symbol-tokens," which are physical objects that are instances of an abstract "symbol-type"): For, until further notice, these too are merely objects that can be interpreted as if they meant something. Now the whole purpose of this exercise is to refute the quite natural conclusion that anything and everything can be interpreted as if it meant something, for that makes it look as if being a computer is just a matter of interpretation. Hence my attempt to invoke what others have apparently dubbed the "cryptographer's constraint" -- to pick out symbol systems whose systematic interpretation is unique and hard to come by (in a complexity-based sense), hence not arbitrary or merely dependent on the way we choose to look at them.

I also share your intuition (based on the programmable digital computer) that a computer is something that is mechanically influenced by its internal symbols (though we differ on two details -- I think it is only influenced by the SHAPE of those symbols, whereas you think it's influenced by their MEANING [which I think would just put us back into the hermeneutic circle we're trying to break out of], and of course you think a conscious human implementation of a symbol system, as in Searle's Chinese Room, somehow does not qualify as an implementation, whereas I think it does). However, I recognize that, unlike in the case of formalizing the abstract notion of computation above, no one has yet succeeded in formalizing this intuition about physical implementation, at least not in such a way as to distinguish computers from noncomputers -- except as a matter of interpretation.

The "cryptographer's constraint" is my candidate for making this a matter of INTERPRETABILITY rather than interpretation, in the hope that this will get the interpreter out of the loop and let computers be computers intrinsically, rather than derivatively. However, your own work on defining "implementation" may turn out to give us a better way. To assess whether it succeeds, however, we're going to have to hear what your definition turns out to be! What you said above certainly won't do the trick.

One last point: As I've said before in this discussion, it is a mistake to conflate the question of what a computer is with the question of what a mind is. Even if we succeed in showing that computers are computers intrinsically, and not just as a matter of interpretation, there remains the independent problem of "intrinsic intentionality" (the fact that our thoughts are about what they are about intrinsically, and not just as a matter of interpretation by someone else). I, as you know, have recast this as the symbol grounding problem, and have concluded that, because of it, the implementation of a mind cannot possibly be merely the implementation of the "right" symbol system. There ARE other things under the sun besides computers, after all (indeed, confirming that is part of the goal of this exercise), and other processes besides (nontrivial) computation, and these will, I hypothesize, turn out to play an essential role in grounding MENTAL symbols, which are NOT sufficiently specified by their systematic interpretability alone: According to me, they must be grounded in the system's robotic interaction with the real world of objects that its symbols are "about," and this grounding must likewise square systematically with the interpretation of its symbols. If you wish, this is a more rigorous "cryptographer's constraint," but this time a physical one rather than merely a formal one. (Minds will accordingly turn out to be the TTT-scale class of "dedicated" computers, their "situated" "peripherals" and other analog structures and processes being essential substrates for their mental powers.)

Stevan Harnad

----------------


> Date: Wed, 1 Apr 92 09:59:32 -0500
> From: davism@turing.cs.nyu.edu (Martin Davis)
>
> Stevan,
>
> Thanks for keeping me posted on this debate.
>
> I don't really want to take sides; however, there is technically no real
> problem in distinguishing "non-trivial" computers. They are "universal"
> if endowed with arbitrarily large memory.
>
> I've written two papers (long long ago) on the definition of universality
> for Turing machines. The first was in the McCarthy-Shannon collection
> "Automata Studies." The second was in the Proc. Amer. Math. Soc.
>
> If you want the exact references I'll be glad to forward them. But you
> may think this not relevant. Martin

Martin, I'm sure what you wrote will be relevant, so please do send me the reference. But can you also tell me whether you believe computers (in the real world) can be distinguished from noncomputers in any way that does not depend merely on how we choose to interpret their "states" (I think they can, Searle and others think they can't)? Do you think memory-size does it? Can we define "memory" interpretation- independently (to exclude, say, ocean tides from being computers)? And would your memory-size criterion mean that everything is a computer to some degree? Or that nothing is a computer, but some things are closer to being one than others? -- Cheers, Stevan

------------------

From: Ronald L Chrisley
Date: Wed, 1 Apr 92 16:33:48 +0100

Stevan:

I think there is a large degree of agreement between us:

Date: Tue, 31 Mar 92 19:38:10 EST
From: Stevan Harnad

> From: Ronald L Chrisley
> Date: Wed, 25 Mar 92 16:13:22 GMT

SH: If I may interpolate some commentary: I agree about the physical grounding as picking out this machine running WORDSTAR as a privileged interpretation. I would add only two remarks.

(1) I think (though I can't prove it) that there is probably a complexity-based way of picking out the privileged interpretation of a system as a computer running a program (rather than other, more arbitrary interpretations) based on parsimony alone.

This may be true, but I think that "parsimony" here will probably have to make reference to causal relations.

(2) This discussion of what a computer is does not necessarily have any bearing on the question of what the mind is, or whether the brain is a computer. One could argue yes or no that computers/computation pick out a nonarbitrary kind. And one can independently argue yes or no that this has any substantive bearing on what kind of system can have a mind. (E.g., I happen to agree with Searle that a system will not have a mind merely because it implements the right computer program -- because, according to me, it must also be robotically grounded in the world -- but I disagree that there is no nonarbitrary sense in which some systems are computers and others are not. I.e., I agree with him about [intrinsic] semantics but not about syntax.)

I agree. I only mentioned the cognitivist's claim for perspective. Searle's claim that physics does not determine syntax is indeed distinct from his claim that syntax does not determine semantics. I'm very sympathetic with a grounded, embodied understanding of cognition. But that doesn't mean that I have to agree with Searle that the claim "mind is computation" is incoherent; it might just be wrong.

SH: It seems to me that everything admits of a trivial computational description.

I pretty much said the same when I said even a stone could admit of a computational description, and that such a notion of computation is unenlightening. But consider: perhaps the injustice in South Africa is something that does not even admit of a trivial computational description...

Only things with a certain kind of (not yet adequately specified) complexity admit of a nontrivial computational description (and those are computers). Now things that have minds will probably also admit of nontrivial computational descriptions, hence they too will be computers, but only in a trivial sense insofar as their MENTAL capacities are concerned, because they will not be ONLY computers, and their noncomputational robotic properties (e.g., transducers/actuators and other analog structures and processes) will turn out to be the critical ones for their mental powers; and those noncomputational properties will at the same time ground the semantics of the system's symbolic states.

This might be; but we should nevertheless resist Searle's following claim:

> > JS: If the question is: "Is the wall/brain INTRINSICALLY a digital
> > computer?" the answer is: NOTHING is intrinsically a digital computer.
> > Please explain this point to your colleagues. They seem to think the
> > issue is universal realizability. Thus Chrisley's paper for example.

(BTW: was Searle assuming that the others had read/heard of my paper?!)

SH: I unfortunately can't explain this for Searle, because I happen to disagree with him on this point, although I do recognize that no one has yet come up with a satisfactory, principled way of distinguishing computers from noncomputers...

I agree.

SH: I don't think you'll be able to get computer scientists or physicists excited about the factor of "causality" in the abstract, but IMPLEMENTATION is certainly something they think about and have views on, because a program is just an abstraction until and unless it's implemented (i.e., realized in a dynamical physical ["causal"] system -- a computer).

But Searle and Putnam have a point here: unless causality counts in determining what is and what is not an implementation, then just about anything can be seen as an implementation of anything else. So those interested in implementation will have to pay attention to causality.

But there's still not much room for a convergence of views there, because good "symbolic functionalists" hold that all the particulars of implementation are irrelevant -- i.e., that the same program can be implemented in countless radically different ways with nothing in common except that they are all implementations of the same computer program. Hence the right level to talk about is again the purely symbolic (computational) one.

But perhaps the point to be made is that there's a lot more involved in implementation than we previously realized. Symbolic functionalists knew that it placed some restriction on the physics; perhaps they just under-estimated how much.

I happen to disagree with these symbolic functionalists insofar as the mind is concerned, but not because I think there is something magic about the "causality" of implementation, but because I think a symbol system is just as ungrounded when it's implemented as when it's just scratches on static paper. The mere implementation of a program on a computer is the wrong kind of "causality" if a mind is what you're interested in implementing (or even if it's an airplane or a furnace). What's needed is the robotic (TTT) power to ground the interpretations of its internal symbols in the robot's interactions with the real world of objects, events and states of affairs that its symbols are interpretable as being "about" (TTT-indistinguishably from our own interactions with the world). (I list some of the publications in which I've been trying to lay this out below.)

Yes, I'm very sympathetic with your writings on this point. Even though the claim that everything realizes every Turing machine is false, that merely makes the claim "to have a mind is to implement TM No. xxx" coherent and false, not coherent and true. One still needs grounding.

But the reverse is also true. In a section of my paper ("Symbol Grounding is not sufficient"), I pointed out that one thing we can take home from Searle's paper is that without some appeal to causation, etc., in order to justify computational predicates, symbol grounding is mere behaviorism. We can agree with Searle on that and yet believe 1) that we *can* make the necessary appeals to causation in order to make sense of computational predicates (such appeals are implicit in our practice and theory); and 2) that symbol grounding, although not sufficient, is necessary for a computational understanding of mind.

Ronald L. Chrisley New College

---------------


> Date: Wed, 1 Apr 92 18:55:28 -0500
> From: davism@turing.cs.nyu.edu (Martin Davis)
> Subject: Re: What is a computation?
>
> Here are the references:
>
> ``A Note on Universal Turing Machines,'' {Automata Studies}, C.E.
> Shannon and J. McCarthy, editors, Annals of Mathematics Studies,
> Princeton University Press, 1956.
>
> ``The Definition of Universal Turing Machine,'' {Proceedings of the
> American Mathematical Society,} vol.8(1957), pp. 1125-1126.
>
> As for your argument with Searle (which I did try to avoid), my
> tendency is to place the issue in the context of the appropriate
> mathematical [idea] of "computer". I think it is a commonplace among
> philosophers that what appear to be purely empirical questions almost
> always really involve theoretical presuppositions.
>
> The two main contenders are finite automata and Turing machines. I
> suppose anything could be regarded as a finite automaton; I haven't
> really thought about it. But most agree today (this wasn't always the
> case) that it's the TM model that's appropriate. The counter-argument
> that real-world computers have finite memories is answered by noting
> that an analysis that defines a computer as having fixed memory size
> must say what kind of memory (ram? hard disk? floppies? tape?). In
> particular none of the theorems about finite automata have ever been
> applied to computers. If I remember (an increasingly dubious
> proposition) I discussed this in:
>
> ``Computability,'' {Proceedings of the Symposium on System
> Theory,} Brooklyn, N.Y. 1966, pp. 127-131.
>
> I would add (as I suggested in my previous message) that UNIVERSALITY
> is also generally tacitly presumed. This means that the computer can
> run programs embodying arbitrary algorithms.
>
> I think Searle would find it difficult to argue that a rock is a
> universal Turing machine.
>
> It is true that something may be a computer without it being readily
> recognized as such. This is for real. Microprocessors (which are
> universal computers) are part of many devices. Your telephone, your
> thermostat, certainly your VCR are all computers in this sense.
>
> But certainly not a rock!
>
> Here's another (closely related) less theoretical approach:
>
> Make a list of half a dozen simple computational tasks:
>
> E.g.
> 1. Given a positive integer, compute its square root to 5 decimal
> places.
> 2. Given two character strings, produce the string obtained by
> interleaving them, one character from each input at a time.
> 3. Given a positive integer, compute the sum of the positive integers
> less than or equal to the given integer;
>
> etc. etc.
>
> Then ask Searle to explain how to arrange matters so a stone will
> carry out these tasks.
>
> In other words, in order for the term "computer" to be justified, the
> object in question should be able to carry out ordinary computational
> tasks.
>
> Martin Davis
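
[Editorial aside: Davis's three benchmark tasks are indeed each a few lines on any actual computer, which is precisely his point about "ordinary computational tasks." A sketch in Python; the function names and algorithm choices are this editor's, not Davis's:]

```python
# A sketch of Davis's three "ordinary computational tasks" (Python;
# function names and algorithm choices are illustrative, not Davis's).

def sqrt_5dp(n):
    """Task 1: square root of a positive integer to 5 decimal places."""
    lo, hi = 0.0, max(1.0, float(n))
    while hi - lo > 1e-6:          # bisection: halve the bracket each pass
        mid = (lo + hi) / 2
        if mid * mid < n:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 5)

def interleave(s, t):
    """Task 2: interleave two strings, one character from each at a time."""
    out = []
    for a, b in zip(s, t):         # zip stops at the shorter input
        out += [a, b]
    shorter = min(s, t, key=len)
    longer = t if shorter is s else s
    return "".join(out) + longer[len(shorter):]

def sum_to(n):
    """Task 3: sum of the positive integers <= n (closed form)."""
    return n * (n + 1) // 2
```

[For example, interleave("abc", "12") gives "a1b2c" and sum_to(10) gives 55. The challenge to Searle is to arrange matters so that a stone does any of this.]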

Martin,

Because I tend to agree with you in believing that a principled and interpretation-independent basis can be found for determining what is and is not a computer (and perhaps universality will be part of that basis), I'll leave it to other contributors to contest what you have suggested above. I do want to point out, however, that what we are trying to rule out here is arbitrary, gerrymandered interpretations of, say, the microstructure (and perhaps even the surface blemishes) of a stone according to which they COULD be mapped into the computations you describe. Of course the mapping itself, and the clever mind that formulated it, would be doing all the work, not the stone, but I think Searle would want to argue that it's no different with the "real" computer! The trick would be to show exactly why/how that rejoinder would be incorrect. It is for this reason that I have groped for a complexity-based (cryptographic?) criterion, according to which the gerrymandered interpretation of the stone could somehow be ruled out as too improbable to come by, either causally or conceptually, whereas the "natural" interpretation of the SPARC running WORDSTAR would not.

Stevan Harnad

PS Pat Hayes has furnished yet another independent source for this "cryptographer's constraint":


> Date: Tue, 31 Mar 1992 23:25:36 PST
> From: Pat Hayes
> Subject: Re: What is computation?
>
> PS: I recall McCarthy telling me the idea of the cryptographers
> constraint in 1969 when I first came to the USA (or maybe 1971, on the
> second trip). It didn't seem to be such an important matter then, of
> course.
>
> Pat Hayes

------------


Date: Thu, 2 Apr 1992 16:27:18 -0500
From: Drew McDermott
Cc: hayes@sumex-aim.stanford.edu, searle@cogsci.Berkeley.EDU,

Here's my two-cents worth on the "everything is a computer" discussion.

From: "Stevan Harnad"

Pat, I think the trivial case is covered by Church's Thesis and Turing Equivalence. Consider a stone, just sitting there. It has one state, let's call it "0." Trivial computational description. Now consider a door: it has two states, open and shut; let's call one "0" and the other "1." Trivial computational description.

> From: Pat Hayes
>
> Stevan-
>
> OK, I now see what you mean: but what makes you think that calling the
> state of the stone '0' has anything to do with computation?

Unfortunately, I think this explanation by Stevan is not what Searle meant. Searle means to say that "computers are in the mind of the beholder." That is, if I take a system, and wish to view it as performing a computational sequence S, I can map the thermal-noise states (or any other convenient ways of partitioning its physical states) into computational states in a way that preserves the sequence. Putnam makes a similar claim in an appendix to, I think, "Representation and Reality." A long discussion about this has been going on in comp.ai.philosophy.

I agree with Stevan that Searle is wrong, and that computation is no more a matter of subjective interpretation than, say, metabolism is. However, I differ on where the problem arises:

[Stevan:]

Pat, alas, what "trivializes" the computational idea is Goedel, Turing, Church, Post, von Neumann, and all the others who have come up with equivalent formulations of what computation is: It's just a very elementary, formal kind of thing, and its physical implementation is equally elementary. And by the way, the same problem arises with defining "symbols" (actually, "symbol-tokens," which are physical objects that are instances of an abstract "symbol-type"): For, until further notice, these too are merely objects that can be interpreted as if they meant something. Now the whole purpose of this exercise is to refute the quite natural conclusion that anything and everything can be interpreted as if it meant something, for that makes it look as if being a computer is just a matter of interpretation. Hence my attempt to invoke what others have apparently dubbed the "cryptographer's constraint" -- to pick out symbol systems whose systematic interpretation is unique and hard to come by (in a complexity-based sense), hence not arbitrary or merely dependent on the way we choose to look at them.

I also share your intuition (based on the programmable digital computer) that a computer is something that is mechanically influenced by its internal symbols (though we differ on two details -- I think it is only influenced by the SHAPE of those symbols, you think it's influenced by their MEANING [which I think would just put us back into the hermeneutic circle we're trying to break out of], and of course you think a conscious human implementation of a symbol system, as in Searle's Chinese Room, somehow does not qualify as an implementation, whereas I think it does). However, I recognize that, unlike in the case of formalizing the abstract notion of computation above, no one has yet succeeded in formalizing this intuition about physical implementation, at least not in such a way as to distinguish computers from noncomputers -- except as a matter of interpretation.

The "cryptographer's constraint" is my candidate for making this a matter of INTERPRETABILITY rather than interpretation, in the hope that this will get the interpreter out of the loop and let computers be computers intrinsically, rather than derivatively. However, your own work on defining "implementation" may turn out to give us a better way.

I don't think it matters one little bit whether the symbols manipulated by a computer can be given any meaning at all. As I hope I've made clear before, the requirement that computers' manipulations have a meaning has been 'way overblown by philosopher types. The real reason why not every system can be interpreted as a computer is that the exercise of assigning interpretations to sequences of physical states of a system does not come near to verifying that the system is a computer. To verify that, you have to show that the states are generated in a lawlike way in response to future events (or possible events). It seems to me that for Searle to back up his claim that his wall can be viewed as a computer, he would have to demonstrate that it can be used to compute something, and of course he can't.

This point seems so obvious to me that I feel I must be missing something. Please enlighten me.

-- Drew

------------

Date: Mon, 6 Apr 92 00:31:21 EDT
Message-Id: <9204060431.AA00673@psycho>
To: mcdermott-drew@CS.YALE.EDU


> Date: Thu, 2 Apr 1992 16:27:18 -0500
> From: Drew McDermott
>
> Searle means to say that "computers are in the mind of the beholder."
> That is, if I take a system, and wish to view it as performing a
> computational sequence S, I can map the thermal-noise states (or any
> other convenient ways of partitioning its physical states) into
> computational states in a way that preserves the sequence. Putnam makes
> a similar claim in an appendix to, I think, "Representation and
> Reality." A long discussion about this has been going on in
> comp.ai.philosophy.
>
> I agree with Stevan that Searle is wrong, and that computation is no
> more a matter of subjective interpretation than, say, metabolism is.
> However, I differ on where the problem arises...
>
> I don't think it matters one little bit whether the symbols manipulated
> by a computer can be given any meaning at all. As I hope I've made
> clear before, the requirement that computers' manipulations have a
> meaning has been 'way overblown by philosopher types. The real reason
> why not every system can be interpreted as a computer is that the
> exercise of assigning interpretations to sequences of physical states
> of a system does not come near to verifying that the system is a
> computer. To verify that, you have to show that the states are
> generated in a lawlike way in response to future events (or possible
> events). It seems to me that for Searle to back up his claim that his
> wall can be viewed as a computer, he would have to demonstrate that it
> can be used to compute something, and of course he can't.
>
> This point seems so obvious to me that I feel I must be missing
> something. Please enlighten me.
>
> -- Drew McDermott

Drew, I don't think anybody's very interested in uninterpretable formal systems (like Hesse's "Glass Bead Game"). Not just computational theory, but all of formal mathematics is concerned only with interpretable formal systems. What would they be otherwise? Just squiggles and squoggles we can say no more about (except that they follow arbitrary systematic rules like, "after a squiggle and a squiggle comes a squaggle," etc.)? Now if THAT were all computation was, I would be agreeing with Searle!

It's precisely the fact that it's interpretable as amounting to MORE than just meaningless syntax that makes computation (and formal symbol systems in general) special, and of interest. And you yourself seem to be saying as much when you say "you have to show that the states are generated in a lawlike way in response to future events (or possible events)." For if this can be shown, then, among other things, it will also have been shown that they were interpretable. And, by the way, I don't think that a computer playing, say, backgammon, is going to be shown to be a computer in virtue of "lawlike responses to future and possible events." It's a computer because its states can be systematically interpreted as playing backgammon -- and a lot of other things (as suggested by those who have been stressing the criterion of universality in this discussion).

Now I really don't think anything (even the human mind) can be coherently said to "respond" to future (or possible) events, whether in a lawlike or an unlawlike way (what is "lawlike," anyway -- "interpretable as if governed by a law"?). So I can't see how your proposed criteria help. But to answer your question about Searle: He didn't say his wall could be USED to compute something, he said it could be DESCRIBED as if it were computing something. And you say as much in your own first paragraph.

Can you take a second pass at making your intuitions explicit about this "lawlike performance" criterion, and how it separates computers from the rest? I think your criterion will have to be independent of the uses we may want to put the computer to, because making their computerhood depend on our uses sounds no better than making it depend on our interpretations.

Stevan Harnad

------------------

Date: Fri, 3 Apr 1992 16:31:05 PST
From: Pat Hayes
Subject: Re: What is computation?
To: Stevan Harnad
Cc: searle@cogsci.Berkeley.EDU, mcdermott@CS.YALE.EDU, hayes@cs.stanford.edu

Stevan,

I think that you (and others) are making a mistake in taking all the mathematical models of computation to be DEFINITIONS of computation. What makes it tempting to do so, I think, is the remarkable (and surprising) phenomenon of universality: that apparently any computer can be simulated on any other one, with enough resources. Trying to prove this led the theoretical folks in the forties to seek a definition, and it was tempting to choose some very simple device and say that THAT defined computation, since universality meant that this wasn't any kind of restriction, it seemed, on what could (possibly) be computed. This enabled some good mathematics to be developed, but it was only a leap of faith, rather like the P/=NP hypothesis now: indeed it was actually called Church's Thesis, if you recall. And as time has gone by it seems like it must be true, and one can kind of see why.

But to look in the literature and say that this means that computers are DEFINED to be, say, Turing machines, or any other kind of mathematical object, is just a philosophical mistake. You can't run a Turing machine, for one thing, unless it's engineered properly. (For example, the symbols on the tape would have to be in a form in which the processing box could read them, which rules out thermodynamic states of walls or rolls of toilet paper with pebbles on, and so forth.)
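
[Editorial aside: Hayes's distinction can be made concrete. The mathematical object is nothing but a transition table; "running" it requires some substrate, however humble, that actually reads and writes the symbols. A minimal simulator sketch in Python; the unary-increment machine is a made-up toy example:]

```python
# A Turing machine as a bare mathematical object is just a transition table:
# (state, symbol) -> (next state, symbol to write, head move).  Nothing "runs"
# until something -- here a Python dict standing in for the tape -- actually
# reads and writes symbols.  The toy machine below increments a unary numeral.

def run_tm(delta, tape, state="start", blank="_", max_steps=1000):
    """Simulate a deterministic TM on a sparse tape (dict: position -> symbol)."""
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, write, move = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

delta = {
    ("start", "1"): ("start", "1", "R"),   # scan right over the 1s
    ("start", "_"): ("halt",  "1", "R"),   # append one more 1, then halt
}
tape = run_tm(delta, {0: "1", 1: "1", 2: "1"})    # unary 3 on the tape
ones = sum(1 for v in tape.values() if v == "1")  # now unary 4
```

[The table `delta` is the "Turing machine" of the mathematics; the dict, the loop, and the Python interpreter are the engineering without which nothing is computed, which is Hayes's point.]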

You might respond, well, what IS a computer, then? And my answer would be that this is essentially an empirical question. Clearly they are remarkable machines which have some properties unlike all other artifacts. What are the boundaries of the concept? Who knows, and why should I really care very much? For example, are neural net programs a form of computer, or are they something completely different? I would be inclined to say the former, but if someone wants to draw sharp lines excluding them, that's just a matter of terminology.

One point from your last message for clarification:


> a computer is something that is mechanically influenced by its
> internal symbols (though we differ on two details -- I think it is only
> influenced by the SHAPE of those symbols, you think it's influenced by
> their MEANING..

No, I don't think that the processor has access to anything other than the shape of the symbols (except when those symbols denote something internal to the machine itself, as when it is computing the length of a list: this point due to Brian Smith). I think we agree on this. But sometimes that suffices to cause the machine to act in a way that is systematically related to the symbol's meaning. All the machine has is some bitstring which is supposed to mean 'plus', but it really does perform addition.

---------

For the record, I agree with you about the need for grounding of symbols to ultimately attach them to the world they purport to denote, but I also think that language enables us to extend this grounding to almost anything in the universe without actually seeing (feeling/hearing/etc.) it, to the extent that the sensory basis of the glue is almost abstracted. One could imagine making a program which 'knew' a tremendous amount, could 'converse' well enough to pass the Turing Test in spades, etc., but be blind, deaf, etc.: a brain in a box. I think that its linguistic contact would suffice to say that its internal representations were meaningful, but you would require that it had some sensory contact. If we gave it eyes, you would say that all its beliefs then suddenly acquired meaning: its protests that it could remember the time when it was blind would be denied by you, since it would not have been nailed down sufficiently to the world then. Ah no, you would say to it: you only THOUGHT you knew anything then, in fact I KNOW you knew nothing. While I would have more humility.

best wishes

Pat Hayes

-------


> Date: Fri, 3 Apr 1992 16:31:05 PST
> From: Pat Hayes
>
> You can't run a Turing machine, for one thing, unless its engineered
> properly. (For example, the symbols on the tape would have to be in a
> form in which the processing box could read them, which rules out
> thermodynamic states of walls or rolls of toilet paper with pebbles on,
> and so forth.)
>
> You might respond, well, what IS a computer, then? And my answer would
> be that this is essentially an empirical question. Clearly they are
> remarkable machines which have some properties unlike all other
> artifacts. What are the boundaries of the concept? Who knows, and why
> should I really care very much?

I agree it's an empirical question, but it's an empirical question we had better be prepared to answer if there is to be any real substance to the two sides of the debate about whether or not the brain is (or is merely) a computer, or whether or not a computer can have a mind.

If Searle is right about the Chinese Room (and I am right about the Symbol Grounding Problem) AND there ARE things that are computers (implemented symbol-manipulating systems) as well as things that are NOT computers, then the former cannot have minds merely in virtue of implementing the right symbol system.

But if Searle is right about the "ungroundedness" of syntax too (I don't happen to think he is), the foregoing alternatives are incoherent, because everything is a computer implementing any and every symbol system.


> No, I don't think that the processor has access to anything other than
> the shape of the symbols (except when those symbols denote something
> internal to the machine itself, as when it is computing the length of a
> list: this point due to Brian Smith). I think we agree on this. But
> sometimes that suffices to cause the machine to act in a way that is
> systematically related to the symbol's meaning. All the machine has is
> some bitstring which is supposed to mean 'plus', but it really does
> perform addition.

I'm not sure what really performing addition is, but I do know what really meaning "The cat is on the mat" is. And I don't think that when either an inert book or a dynamical TT-passing computer produces the string of symbols that is systematically interpretable as meaning "The cat is on the mat" (in relation to all the other symbols and their combinations) it really means "The cat is on the mat." And that is the symbol grounding problem. I do believe, however, that when a TTT-passing robot's symbols are not only (1) systematically interpretable, but (2) those interpretations cohere systematically with all the robot's verbal and sensorimotor interactions with the world of objects, events and states of affairs that the symbols are interpretable as being about, THEN when that robot produces the string of symbols that is systematically interpretable as "The cat is on the mat," it really means "The cat is on the mat."


> For the record, I agree with you about the need for grounding of
> symbols to ultimately attach them to the world they purport to denote,
> but I also think that language enables us to extend this grounding to
> almost anything in the universe without actually seeing
> (feeling/hearing/etc) it, to the extent that the sensory basis of the
> glue is almost abstracted. One could imagine making a program which
> 'knew' a tremendous amount, could 'converse' well enough to pass the
> Turing Test in spades, etc., but be blind, deaf, etc.: a brain in a
> box. I think that its linguistic contact would suffice to say that its
> internal representations were meaningful, but you would require that it
> had some sensory contact. If we gave it eyes, you would say that all
> its beliefs then suddenly acquired meaning: its protests that it could
> remember the time when it was blind would be denied by you, since it
> would not have been nailed down sufficiently to the world then. Ah no,
> you would say to it: you only THOUGHT you knew anything then, in fact I
> KNOW you knew nothing. While I would have more humility.
>
> best wishes Pat Hayes

Part of this is of course sci-fi, because we're not just imagining this de-afferented, de-efferented entity, but even imagining what capacities, if any, it would or could have left under those conditions. Let me say where I think the inferential error occurs. I can certainly imagine a conscious creature like myself losing its senses one by one and remaining conscious, but is that imagined path really traversable? Who knows what would be left of me if I were totally de-afferented and de-efferented. Note, though, that it would not suffice to pluck out my eye-balls, puncture my ears and peel off my skin to de-afferent me. You would have to remove all the analog pathways that are simply inward extensions of my senses. If you kept on peeling, deeper and deeper into the nervous system, removing all the primary and secondary sensory projections, you would soon find yourself close to the motor projections, and once you peeled those off too, you'd have nothing much left but the "vegetative" parts of the brain, controlling vital functions and arousal, plus a few very sparse and enigmatic sensory and sensorimotor "association" areas (but now with nothing left to associate) -- nor would what was left in any way resemble the requisite hardware for a computer (whatever that might be)!

Sure language is powerful, and once it's grounded, it can take you into abstractions remote from the senses; but I would challenge you to try to teach Helen Keller language if she had been not only deaf and dumb, but had had no sensory or motor functions at all!

But never mind all that. I will remain agnostic about what the robot has to have inside it in order to have TTT power (although I suspect it resides primarily in that analog stuff we're imagining yanked out here); I insist only on the TTT-passing CAPACITY, not necessarily its exercise. Mine is not a "causal" theory of grounding that says the word must "touch" its referent through some mystical baptismal "causal chain." The reason, I think, a person who is paralyzed and has lost his hearing, vision and touch might still have a mind is that the inner wherewithal for passing the TTT is still intact. But we know people can pass the TTT. A mystery candidate who can only pass the TT but not the TTT is suspect, precisely because of Searle's Argument and the Symbol Grounding Problem, for if it is just an implemented symbol system (i.e., a "computer" running a program), then there's nobody home in there.

The "need for grounding of symbols" is not merely "to ultimately attach them to the world they purport to denote," it is so that they denote the world on their own, rather than merely because we interpret them that way, as we do the symbols in a book.

Stevan Harnad

-----------

> Date: Thu, 2 Apr 92 10:28:14 EST
> From: lammens@cs.Buffalo.EDU (Joe Lammens)
> Subject: symbol grounding
>
> Re: your article on "The Symbol Grounding Problem" in Physica D
> (preprint). If higher-order symbolic representations consist of symbol
> strings describing category membership relations, e.g. "An X is a Y
> that is Z", then who or what is doing the interpretation of these
> strings? They are just expressions in a formal language again, and I
> assume there is no grounding for the operators of that language like
> "is a" or "that is", whatever their actual representation? Even if
> there is, something still has to interpret the expressions, which seems
> to lead to a homunculus problem, or you'll have to define some sort of
> inferential mechanism that reasons over these strings. The latter seems
> to take us back to the realm of "traditional" AI completely, albeit
> with grounded constant symbols (or at least, some of them would be
> directly grounded). Is that what you had in mind? I don't see how in
> such a setup manipulation of symbols would be co-determined by the
> grounded meaning of the constant symbols, as you seem to require.
>
> Joe Lammens

Joe:

Your point would be valid if it were not for the fact that "Y" and "Z" in the above are assumed (recursively) to be either directly grounded or grounded indirectly in something that is ultimately directly grounded. "X" inherits its grounding from Y and Z. E.g., if "horse" is directly grounded in a robot's capacity to identify (and discriminate and manipulate) horses on the basis of the sensorimotor interactions of the robot's transducers and effectors with horses, and "stripes" is likewise grounded, then "Zebra" in "A Zebra is a Horse with Stripes" inherits that grounding, and the proof of it is that the robot can now identify (etc.) a zebra upon its very first (sensorimotor) encounter with one. Such is the power of a grounded symbolic proposition.
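The inheritance of grounding can be sketched in a few lines of toy code. The feature sets and detectors below are invented stand-ins for a robot's trained sensorimotor category detectors; only the compositional structure matters here.

```python
# Toy sketch of grounding inheritance (all names invented).

def horse_detector(features):
    # Directly grounded: stands in for a trained sensorimotor detector.
    return "four_legs" in features and "mane" in features

def stripes_detector(features):
    # Directly grounded likewise.
    return "striped_pattern" in features

def define_from(*detectors):
    # Indirect grounding: a new category symbol defined by a symbolic
    # proposition over already-grounded categories (here, conjunction).
    return lambda features: all(d(features) for d in detectors)

# "A Zebra is a Horse with Stripes":
zebra_detector = define_from(horse_detector, stripes_detector)

# The robot can identify a zebra on its very first sensorimotor encounter,
# without any new training:
first_zebra = {"four_legs", "mane", "striped_pattern"}
print(zebra_detector(first_zebra))  # True
```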

To put it another way, the meanings of the symbols in a grounded symbol system must cohere systematically not only with (1) the interpretations we outsiders project on them (that's a standard symbol system), but also with (2) all of the robot's interactions with the world of objects, events and states of affairs that the symbols are interpretable as being about. No outsider or homunculus is needed to mediate this systematic coherence; it is mediated by the robot's own (TTT-scale) performance capacity, and in particular, of course, by whatever the internal structures and processes are that underlie that successful capacity. (According to my own particular grounding model, these would be analog projections connected to arbitrary symbols by neural nets that learn to extract the invariant features that make it possible for the robot to categorize correctly the objects of which they are the projections.)

A grounded symbol system is a dedicated symbol system, hence a hybrid one. In a pure symbol system, the "shape" of a symbol is arbitrary with respect to what it can be interpreted as standing for, and this arbitrary shape is operated upon on the basis of formal rules (syntax) governing the symbol manipulations. The only constraints on the manipulations are formal, syntactic ones. The remarkable thing about such pure symbol systems is that the symbols and symbol manipulations can be given a coherent systematic interpretation (semantics). Their shortcoming, on the other hand (at least insofar as their suitability as models for cognition is concerned), is that the interpretations with which they so systematically cohere are nevertheless not IN the symbol system (any more than interpretations are in a book): They are projected onto them from the outside by us.

A grounded symbol system, by contrast, has a second set of constraints on it, over and above the syntactic ones above (indeed, this second set of constraints may be so overwhelming that it may not be useful to regard grounded symbol systems as symbol systems at all): The manipulation of both the directly grounded symbols and the indirectly grounded symbols (which are in turn grounded in them) is no longer constrained only by the arbitrary shapes of the symbols and the syntactic rules operating on those shapes; it is also constrained (or "co-determined," as you put it) by the NON-arbitrary shapes of the analog projections to which the ground-level symbols are physically connected by the category-invariance detectors (and, ultimately, to the objects those are the projections of). Indeed, because grounding is bottom-up, the non-arbitrary constraints are primary. "X" is not free to enter into symbolic combinations except if the category relations the symbols describe square with the analog dictates of the ground-level symbols and their respective connections with the analog world of objects.

And the reason I say that such a dedicated symbol system may no longer even be usefully regarded as a symbol system at all can be illustrated if you try to imagine formal arithmetic -- Peano's Axioms, the formal rules of inference, and the full repertoire of symbols: "0", "1", "=", "+", etc. -- with all the elementary symbols "hard-wired" to the actual real-world quantities and operations that they are interpretable as referring to, with all symbol combinations rigidly constrained by those connections. This would of course not be formal arithmetic any more, but a "dedicated model." (I don't think it would be a good model for arithmetic COGNITION, by the way, because I don't think the elementary arithmetic symbols are directly grounded in this way; I'm just using it to illustrate the radical effects of nonarbitrary shape constraints on a formal system.)
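A toy formal system makes the contrast vivid. The successor-arithmetic rules below (a sketch; the string encoding is invented) operate on shapes alone, and nothing inside the system selects the arithmetic reading over any other coherent interpretation -- which is exactly what hard-wiring the elementary symbols to real quantities would change.

```python
# Peano-style successor arithmetic as pure shape manipulation.
# Numerals are strings: "0", "s(0)", "s(s(0))", ...
# Rewrite rules: plus(x, "0") -> x ;  plus(x, "s(y)") -> "s(" + plus(x, y) + ")"

def plus(x, y):
    if y == "0":
        return x
    return "s(" + plus(x, y[2:-1]) + ")"   # y[2:-1] strips one "s(...)" layer

two, three = "s(s(0))", "s(s(s(0)))"
result = plus(two, three)
print(result)  # "s(s(s(s(s(0)))))"

# Interpretation 1: count the "s(" layers as natural numbers -> 2 + 3 = 5.
# Interpretation 2: read "s(" as threading a bead onto a string of beads.
# Both readings cohere with the rules; neither is IN the system.
print(result.count("s("))  # 5
```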

So you see this is certainly not traditional AI. Nor is it homuncular. And what inferences it can make are hewing to more than one drummer -- the "higher" one of syntax and logic, but also the "lower" one of causal connections with the analog world of objects. And I do think that categorization is primary, rather than predication; to put it another way, predication and its interpretation is grounded in categorization. There is already categorization involved in "asserting" that an object is "Y." Conjunction may be an innate primitive, or it may be a primitive learned invariant. But once you can assert that this is a horse by reliably identifying it as "Horse" whenever you encounter it, and once you can do the same with "Stripes," then you are just a blank symbol away from identifying whatever has a conjunction of their invariants as "Zebra" (if that's the arbitrary symbol we choose to baptize it with).

Stevan Harnad

NOT POSTED
> Date: Mon, 06 Apr 92 11:01:12 ADT
> From: GOLDFARB%unb.ca@UNBMVS1.csd.unb.ca
>
> Stepa,
> I don't know what exactly Searle had in mind, but I also don't see
> anything interesting behind the idea of "computation". Every "object",
> including an "empty space" ("there is no space 'empty of field" --
> Einstein), might be said to perform many computations, depending
> on interactions with various other "objects", some of the computations
> are highly non-trivial.
>
> I think that "intelligent" computation is a more interesting idea to
> pursue: it is a degree to which a "system" is able to modify
> autonomously and irreversibly its INTERNAL states -- not just some
> auxiliary external objects, or symbols, as does the Turing machine --
> that have effect on all the related consequent computations.
>
> Cheers,
> Lev

Leva, I cannot post this to the list because as it stands it is immediately and trivially satisfied by countless actual computers running countless actual programs. I think you will have to follow the discussion a little longer to see what is at issue with this question of what is computation and what is a computer. Quick, vague, general criteria just won't resolve things. -- Stepa

----------------

Date: Mon, 6 Apr 92 01:25:31 EST
From: David Chalmers
To: harnad@Princeton.EDU

I don't think there's a big problem here. Of course an answer to the question of whether "everything is a computer" depends on a criterion for when a computer, or a computation, is being physically implemented. But fairly straightforward criteria exist. While there is certainly room for debate about just what should be included or excluded, any reasonable criterion will put strong constraints on the physical form of an implementation: essentially, through the requirement that the state-transitional structure of the physical system mirror the formal state-transitional structure of the computation.

Start with finite state automata, which constitute the simplest formalism for talking about computation. An FSA is fixed by specification of a set of states S1,...,Sn, a set of inputs I1,...,Im, and a set of state-transition rules that map (state, input) pairs to states. We can say that a given physical system implements the FSA if there is a mapping F from physical states of the system onto states of the FSA, and from inputs to the system onto inputs to the FSA, such that the state-transitional structure is correct: i.e. such that whenever the system is in physical state s and receives input i, it transits into a physical state s' where (F(s), F(i)) maps to F(s') according to the specification of the FSA.

This might look complex, but it's very straightforward: the causal structure of the physical system must mirror the formal structure of the FSA, under an appropriate correspondence of states.
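The mirroring condition is easy to state as code. In this sketch the FSA, the "physical" transition table, and the candidate mapping F are all invented toy examples:

```python
# Toy sketch of the implementation criterion: a physical system implements
# an FSA iff some mapping from physical states (and inputs) to FSA states
# (and inputs) makes every physical transition mirror the FSA's table.

fsa = {  # FSA specification: (state, input) -> next state
    ("S1", "I1"): "S2", ("S1", "I2"): "S1",
    ("S2", "I1"): "S1", ("S2", "I2"): "S2",
}

physical = {  # causal structure of a physical system (invented)
    ("hot", "kick"): "cold", ("hot", "tap"): "hot",
    ("cold", "kick"): "hot", ("cold", "tap"): "cold",
}

F_states = {"hot": "S1", "cold": "S2"}  # candidate correspondence
F_inputs = {"kick": "I1", "tap": "I2"}

def implements(physical, fsa, F_states, F_inputs):
    # Check the mirroring condition for every physical transition.
    return all(
        fsa[(F_states[s], F_inputs[i])] == F_states[s2]
        for (s, i), s2 in physical.items()
    )

print(implements(physical, fsa, F_states, F_inputs))  # True
```

Other correspondences may succeed or fail for the same pair of systems; there need be no single canonical computation that a given object is implementing.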

Some consequences:

(1) Any physical system will implement various FSAs -- as every physical system has *some* causal structure. e.g. the trivial one-state FSA will be implemented by any system. There's no single canonical computation that a given object is implementing; a given object might implement various different FSAs, depending on the state correspondence that one makes. To that extent, computation is "interest-relative", but that's a very weak degree of relativity: there's certainly a fact of the matter about whether a given system is implementing a given FSA.

(2) Given a particular complex FSA -- e.g. one that a computationalist might claim is sufficient for cognition -- it will certainly not be the case that most objects implement it, as most objects will not have the requisite causal structure. There will be no mapping of physical states to FSA states such that state-transitional structure is reflected.

Putnam has argued in _Representation and Reality_ that any system implements any FSA, but that is because he construes the state-transition requirement on the physical system as a mere material conditional -- i.e. as if it were enough to find a mapping so that (s, i) pairs are followed by the right s' on the occasions that they happen to come up in a given time interval; and if (s, i) never comes up, then the conditional is satisfied vacuously. Of course the computationalist should construe the conditional as a strong one, with counterfactual force: i.e. whenever and however (s, i) comes up, it must be followed by the right s'. Putnam's mappings fail to satisfy this condition -- if (s, i) were to have come up another time, there's no guarantee that s' would have followed. There has been a long and interesting discussion of this topic on comp.ai.philosophy.
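The difference between the material conditional and the counterfactual-strength conditional can be sketched directly. All states, inputs, and mappings below are invented toys: a mapping rigged up post hoc to fit one observed run passes the weak test but fails the strong one.

```python
# An FSA, one observed run of a physical system, and the system's full dynamics.
fsa = {("A", "0"): "B", ("A", "1"): "A",
       ("B", "0"): "A", ("B", "1"): "B"}

observed_run = [("p1", "x", "p2"), ("p2", "x", "p3")]  # (state, input, next)

physical_law = {  # covers pairs that never occurred in the observed run
    ("p1", "x"): "p2", ("p2", "x"): "p3", ("p3", "x"): "p1",
    ("p1", "y"): "p2", ("p2", "y"): "p2", ("p3", "y"): "p3",
}

# A Putnam-style mapping constructed after the fact to fit the run:
F = {"p1": "A", "p2": "B", "p3": "A"}
G = {"x": "0", "y": "1"}

def satisfies_trace(run, fsa, F, G):
    # Material conditional: check only the pairs that happened to come up.
    return all(fsa[(F[s], G[i])] == F[s2] for s, i, s2 in run)

def satisfies_counterfactual(law, fsa, F, G):
    # Counterfactual force: every possible pair must transit correctly.
    return all(fsa[(F[s], G[i])] == F[law[(s, i)]] for (s, i) in law)

print(satisfies_trace(observed_run, fsa, F, G))           # True
print(satisfies_counterfactual(physical_law, fsa, F, G))  # False
```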

(3) Maybe someone will complain that by this definition, everything is performing some computation. But that's OK, and it doesn't make computation a useless concept. The computationalist claim is that cognition *supervenes* on computation, i.e. that there are certain computations such that any implementation of that computation will have certain cognitive properties. That's still a strong claim, unaffected by the fact that all kinds of relatively uninteresting computations are being performed all over the place.

To the person who says "doesn't this mean that digestion is a computation", the answer is yes and no. Yes, a given digestive process realizes a certain FSA structure; but this is not a very interesting or useful way to see it, because unlike cognition, digestion does not supervene on computation -- i.e. there will be other systems that realize the same FSA structure but that are not performing digestion. So: particular instances of digestion may be computations in a weak sense, but digestion as a type is not. It's only useful to take a computational view for properties that are invariant over the manner in which a computation is implemented. (Of course, Searle argues that cognition is not such a property, but that's a whole different can of worms.)

Finite state automata are a weak formalism, of course, and many if not most people will want to talk in terms of Turing machines instead. The extension is straightforward. We say that a physical system realizes a given Turing machine if we can map states of the system to states of the Turing-machine head, and separately map states of the system to symbols on each Turing-machine tape square (note that there will be a separate mapping for each square, and for the head, and also for the position of the head if we're to be complete), such that the state-transitional structure of the system mirrors the state-transitional structure of the Turing machine. For a Turing machine of any complexity, this will be a huge constraint on possible implementations.

So far, in talking about FSAs and Turing machines, we've really been talking about what it takes to implement a computation, rather than a computer. To be a computer presumably requires even stricter standards -- i.e., that the system be universal. But that is straightforward: we can simply require that the system implement a universal Turing machine, using the criteria above.

Personally I think that the notion of "computation" is more central to cognitive science than the notion of "computer". I don't see any interesting sense in which the human mind is a universal computer. It's true that we have the ability to consciously simulate any given algorithm, but that's certainly not a central cognitive property. Rather, the mind is performing a lot of interesting computations, upon which our cognitive properties supervene. So it's probably most useful to regard cognitive processes as implementing a given non-universal Turing machine, or even an FSA, rather than a universal computer.

So, it seems to me that there are very straightforward grounds for judging that not everything is a computer, and that although it may be true that everything implements some computation, that's not something that should worry anybody.

Dave Chalmers.

------------

From: Stevan Harnad

David Chalmers wrote:


>dc> an answer to the question of whether "everything is a computer" depends
>dc> on a criterion for when a computer, or a computation, is being
>dc> physically implemented. But fairly straightforward criteria exist...
>dc> the causal structure of the physical system must mirror the formal
>dc> structure of the FSA, under an appropriate correspondence of states...
>dc>
>dc> Given a particular complex FSA -- e.g. one that a computationalist
>dc> might claim is sufficient for cognition -- it will certainly not be the
>dc> case that most objects implement it, as most objects will not have the
>dc> requisite causal structure...
>dc>
>dc> Finite state automata are a weak formalism, of course, and many if not
>dc> most people will want to talk in terms of Turing machines instead. The
>dc> extension is straightforward... For a Turing machine of any complexity,
>dc> this will be a huge constraint on possible implementations...
>dc>
>dc> To be a computer presumably requires even stricter standards -- i.e.,
>dc> that the system be universal. But that is straightforward: we can
>dc> simply require that the system implement a universal Turing machine,
>dc> using the criteria above...
>dc>
>dc> ...there are very straightforward grounds for judging that not
>dc> everything is a computer, and that although it may
>dc> be true that everything implements some computation, that's not
>dc> something that should worry anybody.

I agree with Dave Chalmers's criteria for determining what computation and computers are, but, as I suggested earlier, the question of whether or not COGNITION is computation is a second, independent one, and on this I completely disagree:


>dc> The computationalist claim is that cognition *supervenes* on
>dc> computation, i.e. that there are certain computations such that any
>dc> implementation of that computation will have certain cognitive
>dc> properties.
>dc>
>dc> To the person who says "doesn't this mean that digestion is a
>dc> computation", the answer is yes and no. Yes, a given digestive process
>dc> realizes a certain FSA structure; but this is not a very interesting or
>dc> useful way to see it, because unlike cognition, digestion does not
>dc> supervene on computation -- i.e. there will be other systems that
>dc> realize the same FSA structure but that are not performing digestion.
>dc>
>dc> Personally I think that the notion of "computation" is more central to
>dc> cognitive science than the notion of "computer". I don't see any
>dc> interesting sense in which the human mind is a universal computer...
>dc> Rather, the mind is performing a lot of interesting computations, upon
>dc> which our cognitive properties supervene. So it's probably most useful
>dc> to regard cognitive processes as implementing a given non-universal
>dc> Turing machine, or even an FSA, rather than a universal computer.

"Supervenience" covers a multitude of sins (mostly sins of omission). Whatever system turns out to be sufficient for having a mind, mental states will "supervene" on it. I don't feel as if I've said much of a mouthful there.

But it is a much more specific hypothesis that what the mind will "supervene" on is the right computations. We've agreed that what's special about computation is that there are many different ways to implement the same computations. So if a mind supervenes on (the right) computations because of their computational properties (rather than because of the physical details of any particular implementation of them), then it must supervene on ALL implementations of those computations. I think Searle's Chinese Room Argument has successfully pointed out that this will not be so, at least in the case of Searle's own implementation of the hypothetical Chinese-TT-passing computations -- except if we're willing to believe that his memorizing and executing a bunch of meaningless symbols is sufficient to cause a second mind to "supervene" on what's going on in his head -- something I, for one, would not be prepared to believe for a minute.

Because of certain similarities (similarities that on closer scrutiny turn out to be superficial), it was reasonable to have at first entertained the "computationalist" thesis that cognition might be a form of computation (after all, both thoughts and computations are put together out of strings of "symbols," governed by rules, semantically interpretable; both have "systematicity," etc.). But, because of the other-minds problem, there was always a systematic ambiguity about the standard Turing Test for testing whether a candidate system really had a mind.

We thought TT-passing was a good enough criterion, and no more or less exacting than the everyday criterion (indistinguishability from ourselves) that we apply in inferring that any other body than our own has a mind. But Searle showed this test was not exacting enough, because the TT could in principle be passed by computations that were systematically interpretable as a life-long correspondence with a pen pal who was understanding what we wrote to him, yet they could also be implemented without any understanding by Searle. So it turns out that we would have been over-interpreting the TT in this case (understandably, since the TT is predicated on the premise that to pass it is to generate and respond to symbols in a way that is systematically interpretable as -- and indistinguishable in any way from -- a life-long correspondence with a real person who really understands what we are writing). Such a test unfortunately trades on a critical ambiguity arising from the fact that since the TT itself was merely verbal -- only symbols in and symbols out -- there MIGHT have been only computations (symbol manipulations) in between input and output.

Well now that Searle has shown that that's not enough, and the Symbol Grounding Problem has suggested why not, and what might in fact turn out to be enough (namely, a system that passes the robotic upgrade of the TT, the Total Turing Test, able to discriminate, identify and manipulate the objects, events and states of affairs that its symbols are systematically interpretable as being "about" in a way that is indistinguishable from the way we do), it's clear that the only way to resolve the ambiguity is to abandon the TT for the TTT. But it is clear that in order to pass the TTT a system will have to do more than just compute (it must transduce, actuate, and probably do a lot of analog processing), and the mind, if any, will have to "supervene" on ALL of that -- not just the computations, which have already been shown not to be mindful! Moreover, whatever real computation a TTT-passer will be doing, if any, will be "dedicated" computation, constrained by the analog constraints it inherits from its sensorimotor grounding. And transducers, for example, are no more implementation-independent than digestion is. So not every implementation of merely their computational properties will be a transducer (or gastrointestinal tract) -- some will be mere computational simulations of transducers, "virtual transducers," and no mind (or digestion) will "supervene" on that.

Stevan Harnad

--------------

Date: Mon, 23 Mar 92 18:45:50 EST
From: Eric Dietrich

Stevan:

Maybe it's the season: sap is rising, bugs are buzzing, and trees are budding -- but it seems to me that some progress has been made on the question of computers, semantics, and intentionality. (BTW: thank you for bouncing to me your discussion with Searle. I enjoyed it.)

I agree with Searle on two points. First, nothing is intrinsically a computer. And second, the big problem is not universal realizability.

Furthermore, I agree with you that computation and implementation are not the same thing, and that nontrivial symbol systems will not have arbitrary duals because they have a certain complex systematicity.

But, ... 1. Nothing is intrinsically a computer because nothing is intrinsically anything. It's interpretation all the way down, as it were.

2. Therefore, it's lack of imagination that prevents us from swapping interpretations in general in English, arithmetic, and Lisp. This lack of imagination, though, is part of our epistemic boundedness. We are not stupid, just finite. To keep things coherent while swapping all the meanings in English is something that we cannot do. Perhaps no intelligent creature could do this because creatures vastly more intelligent than we would have that much more science -- explanations and semantics -- to juggle when trying to invent and swap duals.

3. Still, we arrive at the same point: a wall is only an implementation of a trivial Turing machine or computation...

But, ... How can we arrive at the same point if I believe that computers are NOT formal symbol manipulators while you and Searle believe that they are? Because computation is an observer-relative feature precisely *because* semantics is. In other words, you can interpret your wall, there just isn't much reason to do so. Planets can be viewed as representing and computing their orbits, but there isn't much reason to do so. Why? Because it involves too much "paper work" for us. Other intelligent entities might prefer to attribute/see such computations to the planets.

For me, computation, systematicity, and semantics are matters of degree. Machines, computation, and meaning are in the eye of the beholder, or more precisely, the explainer.

What recommends this view? Does it give us exactly the same conclusions as your view? No, it is not the same. Interpretationalism provides a different set of problems that must be solved in order to build an intelligent artifact, problems that are prima facie tractable. For example, on the interpretationalist view, you don't have to solve the problem of original intentionality (or, what is the same, the problem provided by the Chinese Room); nor do you have to solve the symbol grounding problem (though you do have to figure out how perception and categorization works). You can instead spend your time searching for the algorithms (equivalently, the architectures) responsible for our intelligence -- architectures for plasticity, creativity and the like.

More deeply, it allows us the explanatory freedom to handle the computational surprises that are no doubt in our future. In my opinion, the semantical view espoused by you and Searle is too rigid to do that.

And finally, interpretationalism holds out the promise that cognitive science will integrate (integrate, NOT reduce) smoothly with our other sciences. If intentionality is a real property of minds, then minds become radically different from rocks. So different that I for one despair of ever explaining them at all. (Where, for example, do minds show up phylogenetically speaking? And why there and not somewhere else? These are questions YOU must answer. I don't have to.)

We don't need to preserve psychology as an independent discipline by giving it phenomena to explain that don't exist anywhere else in nature. Rather, we can preserve psychology because it furthers our understanding in a way that we would miss if we stopped doing it.

Sincerely,

Eric

---------------

"INTERPRETATIONALISM" AND ITS COSTS

Eric Dietrich wrote:


> ed> I agree with Searle [that] nothing is intrinsically a computer [and
> ed> that] the big problem is not universal realizability... I agree with
> ed> you that computation and implementation are not the same thing, and
> ed> that nontrivial symbol systems will not have arbitrary duals because
> ed> they have a certain complex systematicity... But, ... Nothing is
> ed> intrinsically a computer because nothing is intrinsically anything.
> ed> It's interpretation all the way down, as it were.

A view according to which particles have mass and spin and obey Newton's laws only as a matter of interpretation is undesirable not only because it makes physics appear much more subjective and impressionistic than necessary, but because it blurs a perfectly good and informative distinction between the general theory-ladenness of all scientific inferences and the special interpretation-dependence of the symbols in a computer program (or a computer implementation of it). It is the latter that is at issue here. There is, after all, a difference between my "interpreting" a real plane as flying and my interpreting a computer simulation of a plane as flying.


> ed> ...it's lack of imagination that prevents us from swapping
> ed> interpretations in general in English, arithmetic, and Lisp. This lack
> ed> of imagination, though, is part of our epistemic boundedness. We are
> ed> not stupid, just finite. To keep things coherent, and to swap all the
> ed> meanings in English is something that we cannot do. Perhaps no
> ed> intelligent creature could do this because creatures vastly more
> ed> intelligent than we would have that much more science -- explanations
> ed> and semantics -- to juggle when trying to invent and swap duals.

I don't know any reasons or evidence for believing that it is lack of imagination that prevents us from being able to come up with coherent interpretations for arbitrarily swapped symbols. NP-completeness sounds like a good enough reason all on its own.


> ed> Still, we arrive at the same point: a wall is only an implementation of
> ed> a trivial turing machine or computation. But, ... How can we arrive at
> ed> the same point if I believe that computers are NOT formal symbol
> ed> manipulators while you and Searle believe that they are? Because
> ed> computation is an observer relative feature precisely *because*
> ed> semantics is. In other words, you can interpret your wall, there just
> ed> isn't much reason to do so. Planets can be viewed as representing and
> ed> computing their orbits, but there isn't much reason to do so. Why?
> ed> Because it involves too much "paper work" for us. Other intelligent
> ed> entities might prefer to attribute/see such computations to the
> ed> planets.

I think the reason planets don't compute their orbits has nothing to do with paperwork; it is because planets are not computing anything. They are describable as computing, and the computation is implementable as a computer simulation of planetary motion (to an approximation), but that's just because of the power of formal computation to approximate (symbolically) any physical structure or process at all (this is a variant of Church's Thesis).

Allowing oneself to be drawn into the hermeneutic hall of mirrors (and leaving the virtual/real distinction at the door) can lead to illusory after-effects even when one goes back into the real world. For not only does one forget, while in the hall of mirrors, that the fact that computations are interpretable as planetary motions does not make them real planetary motions, but even when one re-enters the real world one forgets that the fact that planets are describable as computing does not mean they are really computing!


> ed> For me, computation, systematicity, and semantics are matters of
> ed> degree. Machines, computation, and meaning are in the eye of the
> ed> beholder, or more precisely, the explainer.

For me what distinguishes real planetary motion from a computer simulation of it is definitely NOT a matter of degree. Ditto for meaning and mind.


> ed> What recommends this view? Does it give us exactly the same conclusions
> ed> as your view? No, it is not the same. Interpretationalism provides a
> ed> different set of problems that must be solved in order to build an
> ed> intelligent artifact, problems that are prima facie tractable. For
> ed> example, on the interpretationalist view, you don't have to solve the
> ed> problem of original intentionality (or, what is the same, the problem
> ed> provided by the Chinese Room); nor do you have to solve the symbol
> ed> grounding problem (though you do have to figure out how perception and
> ed> categorization works). You can instead spend your time searching for
> ed> the algorithms (equivalently, the architectures) responsible for our
> ed> intelligence -- architectures for plasticity, creativity and the like.

I adopt the simple intermediate position that if the meanings of whatever symbols and computations are actually going on inside a robot are grounded (TTT-indistinguishably) in the robot's sensorimotor interactions (with the real world of objects that its symbols are systematically interpretable as being about), then there are no (solvable) problems left to solve, and the particular branch of reverse bioengineering that is "cognitive science" will have done its work, fully integrably with the rest of pure and applied science.

Of course, as with the computational modelling of planetary motion, a great deal can be found out (empirically and analytically) about how to get a robot to pass the TTT through simulations alone, but the simulation itself is not the TTT and the simulated robot does not have a mind. Alas, "interpretationalism" seems to lose this distinction.


> ed> interpretationalism holds out the promise that cognitive science will
> ed> integrate (integrate, NOT reduce) smoothly with our other sciences. If
> ed> intentionality is a real property of minds, then minds become radically
> ed> different from rocks. So different that I for one despair of ever
> ed> explaining them at all. (Where, for example, do minds show up
> ed> phylogenetically speaking? And why there and not somewhere else? These
> ed> are questions YOU must answer. I don't have to.)

Not at all. The do-able, empirical part of mind-modelling is TTT-modelling, and that can in principle (though not in practice) be accomplished for all species without ever having to answer the (unanswerable) question of where mind starts and who/what does/doesn't have a mind (apart from oneself). "Interpretationalism" can't answer the question either, but it disposes of it at the very high price of supposing (1) that everything has a mind to some degree and (2) that the (real/virtual) difference between having any physical property P and merely being systematically interpretable as having property P is no difference at all -- at the price, in other words, of simultaneously begging the question (2) and answering it by fiat (1)!

Stevan Harnad

----------------------------

----------------------------

Date: Tue, 7 Apr 92 23:56:53 PDT From: sereno@cogsci.UCSD.EDU (Marty Sereno) To: harnad@Princeton.EDU Subject: Cells, Computers, and Minds

hi stevan

I have patiently read the many posts on the symbol-grounding problem with interest for several years now. Many of the comments have floundered around trying to find a clear definition of what it takes to make a symbol-using system "really" understand something. They tend to get tied up with various human artifacts, and it can be extremely difficult to sort out the various sources of meaning-grounding. We can avoid some of these problems by considering cells, which have the distinction of being the first grounded symbol-using system--and one whose grounding does not depend on any human artifact, or on humans at all, for that matter.

The use of symbol strings in cells is well documented and rather different from the use of symbol strings in human-designed computers. The plan is to compare a computer to a cell, and then argue that human symbol use looks more like that in cells.

The basic difference can be quite simply stated. Computers consist of some kind of device that can read code strings and then write code strings in a systematic, programmable way (with due respect to what has been written on this topic). To read and write code is to perform some kind of binary-like classification of symbol tokens (e.g., reading 4.8 volts to be the same as 5 volts). Computer designers have found numerous ways to relate these written and read code strings to real-world tasks (e.g., A/D and D/A converters, operators who understand human and computer languages).

A cell reads code strings as well. Each living cell contains somewhere between 1 and 200 megabytes of code. Messenger RNA sequences transcribed from this permanent store are recognized by the cell during the process of protein "translation" to contain codons, each consisting of 3 nucleotides. Each nucleotide can be described as having two features: "long/short" (A and G [purines] vs. C and T [pyrimidines]) and "2/3 bonds" (A and T vs. G and C). The key point is that there are no other examples of *naturally-occurring* systems that use long code-strings like these that are conceivable *without* protein translation or human thought (this disqualifies the immune system and mathematical notation as independent naturally-occurring, self-maintaining systems, for me at least).
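The two-feature description of the codon alphabet can be made concrete with a small sketch (purely illustrative; the encoding follows the purine/pyrimidine and 2/3-bond distinctions just described, so each 3-nucleotide codon carries six binary features -- enough to index all 64 codons):

```python
# Illustrative sketch: each nucleotide is describable by two binary features.
# Feature 1: purine (A, G) vs. pyrimidine (C, T)
# Feature 2: 3 hydrogen bonds (G, C) vs. 2 (A, T)
FEATURES = {
    "A": (1, 0),  # purine, 2 bonds
    "G": (1, 1),  # purine, 3 bonds
    "C": (0, 1),  # pyrimidine, 3 bonds
    "T": (0, 0),  # pyrimidine, 2 bonds
}

def codon_features(codon):
    """Map a 3-letter codon to its six binary features."""
    return [bit for base in codon for bit in FEATURES[base]]

def read_codons(sequence):
    """Chunk a nucleotide string into successive 3-letter codons."""
    return [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]

print(read_codons("ATGGCT"))   # ['ATG', 'GCT']
print(codon_features("ATG"))   # [1, 0, 0, 0, 1, 1]
```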

But the way cells put these recognized symbols to work is remarkably different from the way computers do. Instead of reading code for the purpose of *operating on other code*, cells use the code to make proteins (esp. enzymes), which they then use to maintain a metabolism. Proteins are constructed by simply bonding amino acids into an (initially) 1-D chain that is parallel to the recognized codons (words) in the messenger RNA chain. Amino acids have none of the characteristics of nucleotide symbol segment chains. For me, the objective characteristics of (molecular) symbol segment chains are: similar 3-D structure despite 1-D sequence differences; a small number of binary-like features for each segment; and their use as a 1-D chain in which small groups of segments are taken to stand for a sequence of other, possibly non-symbolic things.

Proteins are extremely complex molecules, each containing thousands of atoms in a precise 3-D arrangement. The DNA sequences in the genome, however, constitute only a trivial portion of what would be required to explicitly specify the 3-D structure of a protein; a single gene typically contains only a few hundred bytes of information. This information goes such a long way because it depends for its interpretation on the existence of elaborate geometrical constraints due to covalent chemical bonding, weak electronic interactions, the hydrophobic effect, the structural details of the 20 amino acids, and so on--a large set of 'hard-wired' effects that the cell harnesses, but cannot change. Once the amino acid chain has been synthesized, its self-assembly (folding) is directed entirely by these prebiotic, non-symbolic chemical constraints.

Certain aspects of the architecture of cellular metabolism are much like a production system. The enzymes ("productions") of metabolism operate on their substrates ("objects") in a cytoplasm ("working memory"), which requires that they have a great deal of specificity to avoid inappropriate interactions. As in some kinds of production systems, enzymes can operate on other enzymes as substrates. The key difference is that the code in the cellular system is used strictly to make the enzyme "productions"; once they are made, they fold up and operate primarily in a non-symbolic milieu and on non-symbolic things in the cytoplasm (this is not exclusively the case; some proteins do in fact control which part of the code is read).
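The production-system analogy can be sketched in a few lines of toy code (the pathway names are illustrative placeholders, not a claim about real biochemistry): "enzymes" are productions that match "substrates" in a shared working memory, and a product can itself become the substrate of the next production.

```python
# Toy production system in the spirit of the metabolism analogy.
def run(productions, memory, max_steps=10):
    """Repeatedly fire the first production whose substrate is present."""
    memory = set(memory)
    for _ in range(max_steps):
        for substrate, product in productions:
            if substrate in memory:
                memory.remove(substrate)
                memory.add(product)
                break
        else:
            break  # no production matched: quiescent, like a spent pathway
    return memory

# "Enzymes" acting in a cytoplasm-like working memory (illustrative names)
glycolysis = [("glucose", "pyruvate"), ("pyruvate", "acetyl-CoA")]
print(run(glycolysis, {"glucose"}))  # {'acetyl-CoA'}
```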

No one in their right mind would want to make a computer more like a cell for most of the things that computers are currently used for. It is much too hard to make arbitrary local changes in a cell's metabolism; and evolution takes a miserably long time and involves large populations. Molecular biologists, however, might conversely like to engineer a cell into a computer by using overzealous error-correcting polymerases to write DNA code. Code manipulations are not very fast and would probably have to be rather local in cells, but it would be easy to get billions or trillions of copies of a bacterial "program" in a short time.

I suggest that we might take a cue from how cellular symbols are grounded in thinking about how human symbols are grounded. Following the cellular architecture, we might conjecture that the main use of symbol strings for humans--in particular, external speech symbol strings--is to construct an internal "mental metabolism". Small groups of speech sounds are first internalized in auditory cortical areas, and then small groups of them are recognized and taken to stand for other non-symbolic internal patterns--e.g., visual category patterns in higher cortical visual areas. Perhaps, human language involves relying on pre-linguistic constraints on how sequentially activated and "bound together" visual category activity patterns interact in higher primate visual cortical areas. We could think of language as a kind of code-directed scene comprehension that relies on implicit harnessing of pre-existing constraints in a way analogous to the use of a complex chemistry by cellular code strings. There is a similar compactness to the code (a few hundred bytes of information specifies an enzyme and the complex meaning of a discourse in the mind of the listener). It is amazing to consider that the genetic code for an entire living, reproducing, self-maintaining E. coli bacterium takes up less space than the code for a decent word processor.

I would argue that a human-like symbol-using system depends on harnessing complex dynamical constraints in a non-symbolic world, just as cellular symbol systems depend on complex chemistry for their grounding. It is not likely to be easy to construct such a "chemistry" in an artificial machine. Real chemistry is extremely complex and the specification of protein structure relies on many intricate details of this complexity; it is not currently possible to predict the 3-D structure of a protein given only the amino acid sequence. The "chemistry" of interacting patterns in human neural networks is undoubtedly even more complex. But there may be no other way to make a grounded symbol-using system.

For a longer exposition of these ideas, see:

Sereno, M.I. (1991) Four analogies between biological and cultural/linguistic evolution. Journal of Theoretical Biology 151:467-507.

Sereno, M.I. (1991) Language and the primate brain. Proceedings, Thirteenth Annual Conference of the Cognitive Science Society, Lawrence Erlbaum Assoc., pp. 79-84.

Though my note is a little long, please print it out before singling out particular sentences for ridicule or praise...

marty

-----------

Date: Fri, 17 Apr 92 17:07:20 EDT From: "Stevan Harnad"

ON SYMBOL SYSTEMS: DEDICATED, GROUNDED AND CELLULAR

Marty Sereno (sereno@cogsci.UCSD.EDU) wrote:

ms> cells... have the distinction of being the first grounded symbol-using
ms> system--and one whose grounding does not depend on any human artifact,
ms> or on humans at all, for that matter... The use of symbol strings in
ms> cells is well documented and rather different [from] the use of symbol
ms> strings in human-designed computers... But the way cells put these
ms> recognized symbols to work is remarkably different... Instead of
ms> reading code for the purpose of *operating on other code*, cells use
ms> the code to make proteins (esp. enzymes), which they then use to
ms> maintain a metabolism... The key difference is that the code in the
ms> cellular system is used strictly to make the enzyme "productions"; once
ms> they are made, they fold up and operate primarily in a non-symbolic
ms> milieu and on non-symbolic things in the cytoplasm...
ms>
ms> I would argue that a human-like symbol-using system depends on
ms> harnessing complex dynamical constraints in a non-symbolic world, just
ms> as cellular symbol systems depend on complex chemistry for their
ms> grounding. It is not likely to be easy to construct such a "chemistry"
ms> in an artificial machine... But there may be no other way to make a
ms> grounded symbol-using system.

A cell seems to be like a dedicated computer. A dedicated computer is one for which the interpretations of some or all of its symbols are "fixed" by the fact that it is hard-wired to its input and output. In this sense, a dedicated chess-playing computer -- one whose inputs and outputs are physically connected only to a real chess board and chess-men -- is a grounded symbol system (considered as a whole). Of course, a dedicated chess-playing computer, even though it is grounded, is still just a toy system, and toy systems are underdetermined in more ways than one. To ground symbol meanings in such a way as to make them completely independent of our interpretations (or at least no more nor less indeterminate than they are), a symbol system must be not only grounded but a grounded TTT-scale robot, with performance capacity indistinguishable from our own.

In a pure symbol system, the "shapes" of the symbols are arbitrary in relation to what they can be interpreted as meaning; in a dedicated or grounded symbol system, they are not. A cell seems to be more than just a dedicated computer, however, for mere dedicated computers still have sizeable purely computational components whose function is implementation-independent, hence they can be "swapped" for radically different physical systems that perform the same computations. In a dedicated chess-playing computer it is clear that a radically different symbol-manipulator could be hard-wired to the same input and output and would perform equivalent computations. It is not clear whether there are any implementation-independent components that could be swapped for radically different ones in a cell. This may either be a feature of the "depth" of the grounding, or, more likely, an indication that a cell is not really that much like a computer, even a dedicated one. The protein-coding mechanisms may be biochemical modules rather than formal symbols in any significant sense.

There's certainly one sense, however, in which cells and cellular processes are not merely materials for analogies in this discussion, because for at least one TTT-passing system (ourselves) they happen to generate a real implementation! Now, although I am not a "symbolic" functionalist (i.e., I don't believe that mental processes are implementation-independent in the same way that software is implementation-independent), I am still enough of a ("robotic") functionalist to believe that there may be more than one way to implement a mind, perhaps ways that are radically different from the cellular implementation. As long as they have TTT-indistinguishable performance capacity in the real world, I would have no nonarbitrary grounds for denying such robots had minds.

ms> I suggest that we might take a cue from how cellular symbols are
ms> grounded in thinking about how human symbols are grounded. Following
ms> the cellular architecture, we might conjecture that the main use of
ms> symbol strings for humans--in particular, external speech symbol
ms> strings--is to construct an internal "mental metabolism". Small groups
ms> of speech sounds are first internalized in auditory cortical areas, and
ms> then small groups of them are recognized and taken to stand for other
ms> non-symbolic internal patterns--e.g., visual category patterns in
ms> higher cortical visual areas. Perhaps, human language involves relying
ms> on pre-linguistic constraints on how sequentially activated and "bound
ms> together" visual category activity patterns interact in higher primate
ms> visual cortical areas. We could think of language as a kind of
ms> code-directed scene comprehension that relies on implicit harnessing of
ms> pre-existing constraints in a way analogous to the use of a complex
ms> chemistry by cellular code strings.

This analogy is a bit vague, but I would certainly be sympathetic to (and have indeed advocated) the kind of sensory grounding it seems to point toward.

Stevan Harnad

-------------------

Date: Fri, 17 Apr 92 17:48:35 EDT From: "Stevan Harnad"


> Date: Mon, 6 Apr 92 20:02 GMT
> From: UBZZ011@cu.bbk.ac.uk Todd Moody
> To: HARNAD <@nsfnet-relay.ac.uk:HARNAD@PRINCETON.edu>
>
> Another way to ask the question at hand is to ask whether, given some
> alien object that appeared to be undergoing complex changes in its
> discrete state configurations, is it possible to tell by inspection
> whether it is doing computation? (alien means we don't know the
> "intended interpretation," if there is one, of the states) This
> question is rather strongly analogous to a question about language:
> Given some arbitrary complex performance (dolphin noise, for example),
> is it possible to determine whether it is a linguistic performance
> without also being able to translate at least substantial portions of
> it?
>
> In both cases, I don't see how the questions can be answered other than
> by working from considerations of *parsimony under interpretation*.
> That is, in the case of dolphin noise, you just have to make some
> guesses about dolphin interests and then work on possible
> interpretation/translations. When you reach the point that the simplest
> interpretation of the noise is that it means XYZ, then you have a
> strong case that the noise is language. In the case of the alien
> thing-that-might-be-a-computer, the trick is to describe it as
> following a sequence of instructions (or computing a function) such
> that this description is simpler than a purely causal description of
> its state changes.
>
> A description of an object as a computer is more *compressed* (simpler)
> than the description of it as an arbitrary causal system.
>
> Thus, it is parsimony under interpretation that rules out Searle's
> wall. This is not interpretation-independent, but I think it is as
> good as it gets.
>
> Todd Moody (tmoody@sju.edu)

Todd, I agree with this strategy for judging whether or not something is computing (it is like the complexity-based criterion I proposed, and the "cryptographic criterion" Dennett, Haugeland, McCarthy and perhaps Descartes proposed), but it won't do for deciding whether the interpretation is intrinsic or derived. For that, you need more than interpretability (since it already presupposes interpretability). My candidate is grounding in (TTT-scale) robotic interactions with the world of objects the symbols are interpretable as being about.

Stevan Harnad

----------------------------------

From: Jeff Dalton Date: Mon, 6 Apr 92 18:54:27 BST

Stevan Harnad writes:


> that what we are
> trying to rule out here is arbitrary, gerrymandered interpretations of,
> say, the microstructure (and perhaps even the surface blemishes) of a
> stone according to which they COULD be mapped into the computations you
> describe. Of course the mapping itself, and the clever mind that
> formulated it, would be doing all the work, not the stone, but I think
> Searle would want to argue that it's no different with the "real"
> computer! The trick would be to show exactly why/how that rejoinder
> would be incorrect. It is for this reason that I have groped for a
> complexity-based (cryptographic?) criterion, according to which the
> gerrymandered interpretation of the stone could somehow be ruled out as
> too improbable to come by, either causally or conceptually, whereas the
> "natural" interpretation of the SPARC running WORDSTAR would not.

One potential problem with the complexity constraint is that the interpretations are expressed in a particular language (let us say). An interpretation that is more complex in one language might be simpler in another. Putnam makes a similar point about his "cats are cherries" example, that which interpretation is the weird one switches depending on whether you're expressing the interpretation in the language where "cats" means cats or the one in which it means cherries.

As a metaphor for this, consider random dot stereograms as an encoding technique (something suggested to me by Richard Tobin). Someone mails you a picture that consists of (random) dots. Is it a picture of the Eiffel Tower, or a Big Mac? Well, they mail you another picture of random dots and, viewed together with the first, you see a picture of the Eiffel Tower. But they could just as well have mailed you a different second picture that, together with the first, gave a Big Mac.
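The stereogram metaphor has the same structure as a one-time pad, which can be sketched in a few lines (the "pictures" here are just illustrative bit-vectors, and XOR stands in for viewing the two pictures together): the first picture is a fixed random key, and the sender can construct a second picture that combines with it to reveal *any* chosen image.

```python
# One-time-pad sketch of the random-dot-stereogram metaphor.
import random

random.seed(42)
first = [random.randint(0, 1) for _ in range(16)]  # the first (random) picture

def second_for(target):
    """Build the second picture that reveals `target` against `first`."""
    return [f ^ t for f, t in zip(first, target)]

def combine(a, b):
    """'View' two pictures together (bitwise XOR)."""
    return [x ^ y for x, y in zip(a, b)]

eiffel = [1, 0] * 8   # stand-in for "Eiffel Tower"
big_mac = [0, 1] * 8  # stand-in for "Big Mac"

# The very same first picture yields either image, depending on the second:
assert combine(first, second_for(eiffel)) == eiffel
assert combine(first, second_for(big_mac)) == big_mac
```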

Moreover, it is not true in general that the simpler interpretation is always the right one. Someone who is encoding something can arrange for there to be a simple interpretation that is incorrect. I suppose an example might be where the encrypted form can be decrypted to an English text, but the actual message can only be found by taking the (English) words that appear after every third word that contains an "a".

-- jeff

Date: Sat, 18 Apr 92 12:58:51 EDT From: "Stevan Harnad"

COMPLEXITY, PARSIMONY and CRYPTOLOGY

Jeff Dalton wrote:


>jd> Stevan Harnad wrote:


>sh> what we are trying to rule out here is arbitrary, gerrymandered
>sh> interpretations of, say, the microstructure (and perhaps even the
>sh> surface blemishes) of a stone according to which they COULD be mapped
>sh> into the computations you describe. Of course the mapping itself, and
>sh> the clever mind that formulated it, would be doing all the work, not
>sh> the stone, but I think Searle would want to argue that it's no
>sh> different with the "real" computer! The trick would be to show exactly
>sh> why/how that rejoinder would be incorrect. It is for this reason that I
>sh> have groped for a complexity-based (cryptographic?) criterion,
>sh> according to which the gerrymandered interpretation of the stone could
>sh> somehow be ruled out as too improbable to come by, either causally or
>sh> conceptually, whereas the "natural" interpretation of the SPARC running
>sh> WORDSTAR would not.


>jd> One potential problem with the complexity constraint is that the
>jd> interpretations are expressed in a particular language (let us say).
>jd> An interpretation that is more complex in one language might be
>jd> simpler in another. Putnam makes a similar point about his "cats
>jd> are cherries" example, that which interpretation is the weird one
>jd> switches depending on whether you're expressing the interpretation
>jd> in the language where "cats" means cats or the one in which it
>jd> means cherries.

As I understand the Chaitin/Kolmogorov complexity-based criterion for parsimony and randomness (Chaitin 1975; Rabin 1977), an algorithm (a string of bits) is nonrandom and parsimonious to the degree that the number of bits in it is smaller than the number of bits in the "random" string (which is usually infinitely long) that it can be used to generate. The measure of parsimony is the relative size of the short ("theory") and long ("data") bit string. It is stressed that language and notational variations may alter the length of the algorithm by a few bits, but that all variants would still be an order of magnitude smaller than the data string (and therein lies the real parsimony).
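The Chaitin/Kolmogorov point can be loosely illustrated with an off-the-shelf compressor standing in for the (uncomputable) shortest program. The compressed size is only an upper bound on the true complexity, but the contrast between a string generated by a tiny "theory" and a random "data" string is stark:

```python
# Rough illustration of compression-based parsimony. zlib is only a
# stand-in for the shortest generating program, which is uncomputable.
import random
import zlib

n = 100_000
regular = b"01" * (n // 2)  # generated by a tiny "theory": repeat "01"
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(n))  # no short description

print(len(zlib.compress(regular)))  # a few hundred bytes at most
print(len(zlib.compress(noise)))    # close to the full 100,000
```

Switching notations or languages would perturb these sizes by a handful of bytes; the order-of-magnitude gap between the "theory" and the "data" is what survives translation.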

Now I realize that the C/K criterion is only a thesis, but I think it conveys the intuition that I too would have: that the relative ease with which some things can be expressed in English rather than French (or FORTRAN rather than ALGOL) is trivial relative to the fact that they can be expressed at all, either way.

Two qualifications, however:

(1) The C/K criterion applies to algorithms as uninterpreted strings of bits that "generate" much longer uninterpreted strings of bits. The short and long strings are interpretable, respectively, as theory and data, but -- as usual in formal symbol systems -- the interpretation is external to the system; the counting applies only to the bits. So although I don't think it is circular or irrelevant to invoke the C/K analogy as an argument for discounting linguistic and notational differences, doing so does not go entirely to the heart of the matter of the parsimony of an INTERPRETATION (as opposed to an uninterpreted algorithm).

(2) Another potential objection is more easily handled, however, and again without any circularity (just some recursiveness): When one is assessing the relative complexity of an algorithm string and the (much longer) data string for which it is an algorithm, the potential differences among the languages in which one formulates the algorithm (and the data) clearly cannot include potential gerrymandered languages whose interpretation itself requires an algorithm of the same order of magnitude as the data string! That's precisely what this complexity-based/cryptographic criterion is invoked to rule out!


>jd> As a metaphor for this, consider random dot stereograms as an encoding
>jd> technique (something suggested to me by Richard Tobin). Someone mails
>jd> you a picture that consists of (random) dots. Is it a picture of the
>jd> Eiffel Tower, or a Big Mac? Well, they mail you another picture of
>jd> random dots and, viewed together with the first, you see a picture of
>jd> the Eiffel Tower. But they could just as well have mailed you a
>jd> different second picture that, together with the first, gave a Big
>jd> Mac.

This metaphor may be relevant to the cognitive process by which we DISCOVER an interpretation, but it doesn't apply to the complexity question, which is independent of (or perhaps much bigger than) cognition. If we take the features that make random dots look like the Eiffel Tower versus a Big Mac, those features, and the differences between them, are tiny, compared to the overall number of bits in a random dot array. Besides, to be strictly analogous to the case of the same algorithm formulated in two languages yielding radically different complexities, ALL the random dots would have to be interpretable using either algorithm, whereas the trick with Julesz figures is that only a small subset of the random dots is interpretable (those constituting the figure -- Eiffel Tower or Big Mac, respectively) and not even the same random dots in both cases. (I would also add that the highly constrained class of perceptually ambiguous figures (like the Necker Cube) is more like the rare cases of "dual" interpretability I've already noted.)


>jd> Moreover, it is not true in general that the simpler interpretation is
>jd> always the right one. Someone who is encoding something can arrange
>jd> for there to be a simple interpretation that is incorrect. I suppose
>jd> an example might be where the encrypted form can be decrypted to an
>jd> English text, but the actual message can only be found by taking the
>jd> (English) words that appear after every third word that contains an "a".
>jd>
>jd> Jeff Dalton

Again, this seems to have more to do with the cognitive problem of how to DISCOVER an interpretation than with the question of whether radically different alternative interpretations (for the same symbol system) exist and are accessible in real time. I would also say that the differences in complexity between variant (but coherent) interpretations of the kind you cite here would be tiny and trivial compared to the complexity required to interpret a symbol system after swapping the interpretations of an arbitrary pair of symbol types (such as "if" and "not").

Once you've successfully decrypted something as English, for example, it is trivial to add a second-order decryption in which a particular message (e.g., in English, or even in French) is embedded after every third word containing an "a." All that would require (if this analogy between algorithms and interpretations is tenable at all) is a few more bits added to the original interpretative algorithm -- which would still leave both algorithms MUCH closer to one another than to the infinite corpus that they both decrypt.
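Indeed, the second-order extraction is only a few lines on top of the first-order decryption. A toy sketch (the function name and the example input are my own invention, not part of Dalton's message):

```python
def hidden_message(decrypted_text):
    # Dalton's toy scheme, as I understand it: collect the word that
    # follows every third word containing an "a".
    words = decrypted_text.split()
    message, count = [], 0
    for i, word in enumerate(words):
        if "a" in word:
            count += 1
            if count % 3 == 0 and i + 1 < len(words):
                message.append(words[i + 1])
    return " ".join(message)
```

The point stands: the extra machinery is a handful of lines, negligible next to the corpus that both decryptions cover.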

Now there is an analogous argument one might try to make for if/not swapping too: Take the standard English interpretative algorithm and interpret "not" as if it meant "if" and vice versa: Just a few extra bits! But this is not what radical alternative interpretation refers to. It's not just a matter of using real English, but with the symbols "if" and "not" swapped (i.e., it's not just a matter of decoding "Not it rains then you can if go out" as "If it rains then you can not go out"). You must have another interpretative algorithm altogether, a "Schmenglish" one, in which the INTERPRETATION of "if" and "not" in standard English strings like "If it rains then you can go out" (plus all the rest of standard English) are given a coherent systematic alternative interpretation in which "if" MEANS "not" and vice versa: A much taller order, and requiring a lot more than a few bits tacked on!
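The cheap operation -- swapping the tokens -- is easy to exhibit; a hedged toy sketch (my own construction, and deliberately NOT a model of "Schmenglish," which no comparably small function could capture):

```python
def token_swap(sentence):
    # The *cheap* trick: swap the symbol tokens "if" and "not" in the
    # string itself. This is mere recoding -- a few extra bits tacked
    # onto the standard interpretative algorithm.
    swap = {"if": "not", "not": "if"}
    return " ".join(swap.get(w, w) for w in sentence.split())

# A "Schmenglish" interpreter, by contrast, would have to assign every
# STANDARD string a new systematic meaning in which "if" means "not";
# that is a change to the interpretation, not to the strings.
```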

Stevan Harnad

-------

Chaitin, G. (1975) Randomness and mathematical proof. Scientific American 232: 47 - 52.

Rabin, M. O. (1977) Complexity of computations. Communications of the Association for Computing Machinery 20: 625-633.

-------------------------------------------------

Date: Sun, 12 Apr 1992 07:58:41 -0400 From: Drew McDermott

Let's distinguish between a computer's states' being "microinterpretable" and "macrointerpretable." The former case is what you assume: that if we consider the machine to be a rewrite system, the rewrite rules map one coherently interpretable state into another. Put another way, the rewrite rules specify a change in belief states of the system. By contrast, the states of a macrointerpretable system "sort of line up" with the world in places, but not consistently enough to generate anything like a Tarskian interpretation. What I think you've overlooked is that almost all computational processes are at best macrointerpretable.

Take almost any example, a chess program, for instance. Suppose that the machine is evaluating a board position after a hypothetical series of moves. Suppose the evaluation function is a sum of terms. What does each term denote? It is not necessary to be able to say. One might, for instance, notice that a certain term is correlated with center control, and claim that it denotes "the degree of center control," but what does this claim amount to? In many games, the correlation will not hold, and the computer may as a consequence make a bad move. But the evaluation function is "good" if most of the time the machine makes "good moves."
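For concreteness, an evaluation function of the kind McDermott describes might look like the following sketch (the board encoding, the features, and the weights are all invented for illustration; real chess programs are far more elaborate):

```python
# A board is a dict mapping (row, col) squares to piece codes like "WP"
# (White pawn) or "BR" (Black rook).
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}

def material(board, side):
    values = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
    return sum(values[p[1]] for p in board.values() if p[0] == side)

def center_control(board, side):
    # the term an observer might gloss as "degree of center control":
    # a crude count of the side's pieces on central squares
    return sum(1 for sq, p in board.items() if sq in CENTER and p[0] == side)

def evaluate(board, side="W"):
    other = "B" if side == "W" else "W"
    return (material(board, side) - material(board, other)) \
         + 0.1 * (center_control(board, side) - center_control(board, other))
```

McDermott's question is exactly what, if anything, the `center_control` term denotes when the correlation with real center control breaks down.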

The chess program keeps a tree of board positions. At each node of this tree, it has a list of moves it is considering, and the positions that would result. What does this list denote? The set of moves "worth considering"? Not really; it's only guessing that these moves are worth considering. We could say that it's the set the machine "is considering," but this interpretation is trivial.

We can always impose a trivial interpretation on the states of the computer. We can say that every register denotes a number, for instance, and that every time it adds two registers the result denotes the sum. The problem with this idea is that it doesn't distinguish the interpreted computers from the uninterpreted formal systems, because I can always find such a Platonic universe for the states of any formal system to "refer" to. (Using techniques similar to those used in proving predicate calculus complete.)

More examples: What do the states of a video game refer to? The Mario brothers? Real asteroids?

What do the data structures of an air-traffic control system refer to? Airplanes? What if a blip on the screen is initially the result of thermal noise in the sensors, then tracks a cloud for a while, then switches to tracking a flock of geese? What does it refer to in that case?

Halfway through an application of Newton's method to an optimization problem involving process control in a factory, what do the various inverted Hessian matrices refer to? Entities in the factory? What in the world would they be? Or just mathematical entities?

If no other argument convinces you, this one should: Nothing prevents a computer from having inconsistent beliefs. We can build an expert system that has two rules that either (a) cannot be interpreted as about medical matters at all; or (b) contradict each other. The system, let us say, happens never to use the two rules on the same case, so that on any occasion its advice reflects a coherent point of view. (Sometimes it sounds like a homeopath, we might say, and sometimes like an allopath.) We would like to say that overall the computer's inferences and pronouncements are "about" medicine. But there is no way to give a coherent overall medical interpretation to its computational states.

I could go on, but the point is, I hope, clear. For 99.9% of all computer programs, either there is only a trivial interpretation of a program's state as referring to numbers (or bit strings, or booleans); or there is a vague, unsystematic, error-prone interpretation in terms of the entities the machine is intended to concern itself with. The *only* exceptions are theorem-proving programs, in which these two interpretations coincide. In a theorem prover, intermediate steps are about the same entities as the final result, and the computational rules getting you from step to step are isomorphic to the deductive rules that justify the computational rules. But this is a revealing exception. It's one of the most pervasive fallacies in computer science to see the formal-systems interpretation of a computer as having some implications for the conclusions it draws when it is interpreted as a reasoning system. I believe you have been sucked in by this fallacy. The truth is that computers, in spite of having trivial interpretations as deductive systems, can be used to mimic completely nondeductive systems, and that any semantic framework they approximate when viewed this way will bear no relation to the low-level deductive semantics.

I suspect Searle would welcome this view, up to a point. It lends weight to his claim that semantics are in the eye of the beholder. One way to argue that an air-traffic control computer's states denote airplanes is to point out that human users find it useful to interpret them this way on almost every occasion. However, the point at issue right now is whether semantic interpretability is part of the definition of "computer." I argue that it is not; a computer is what it is regardless of how it is interpreted. I buttress that observation by pointing out just how unsystematic most interpretations of a computer's states are. However, if I can win the argument about whether computers are objectively given, and uninterpreted, then I can go on to argue that unsystematic interpretations of their states can be objectively given as well.

-- Drew McDermott

---------------

From: Stevan Harnad

Drew McDermott wrote:


>dm> Let's distinguish between a computer's states' being
>dm> "microinterpretable" and "macrointerpretable." The former case is what
>dm> you assume: that if we consider the machine to be a rewrite system, the
>dm> rewrite rules map one coherently interpretable state into another. Put
>dm> another way, the rewrite rules specify a change in belief states of the
>dm> system. By contrast, the states of a macrointerpretable system "sort of
>dm> line up" with the world in places, but not consistently enough to
>dm> generate anything like a Tarskian interpretation. What I think you've
>dm> overlooked is that almost all computational processes are at best
>dm> macrointerpretable.

Drew, you won't be surprised by my immediate objection to the word "belief" above: Until further notice, a computer has physical states, not belief states, although some of those physical states might be interpretable -- whether "macro" or "micro" I'll get to in a moment -- AS IF they were beliefs. Let's pretend that's just a semantic quibble (it's not, of course, but rather a symptom of hermeneutics creeping in; however, let's pretend).

You raise four semi-independent issues:

(1) Does EVERY computer implementing a program have SOME states that are interpretable as referring to objects, events and states of affairs, the way natural language sentences are?

(2) Are ALL states in EVERY computer implementing a program interpretable as referring... (etc.)?

(3) What is the relation of such language-like referential interpretability and OTHER forms of interpretability of states of a computer implementing a program?

(4) What is the relation of (1) - (3) to the software hierarchy, from hardware, to machine-level language, to higher-level compiled languages, to their English interpretations?

My answer would be that not all states of a computer implementing a program need be interpretable, and not all the interpretable states need be language-like and about things in the world (they could be interpretable as performing calculations on numbers, etc.), but ENOUGH of the states need to be interpretable SOMEHOW, otherwise the computer is just performing gibberish (and that's usually not what we use computers to do, nor do we describe them as such), and THAT's the interpretability that's at issue here.

Some of the states may have external referents, some internal referents (having to do with the results of calculations, etc.). And there may be levels of interpretation, where the higher-level compiled languages have named "chunks" that are (macro?)interpretable as being about objects, whereas the lower-level languages are (micro?)interpretable only as performing iterative operations, comparisons, etc. Although it's easy to get hermeneutically lost in it, I think the software hierarchy, all the way up to the highest "virtual machine" level, does not present any fundamental mysteries at all. Low-level operations are simply re-chunked at a higher level so more general and abstract computations can be performed. I can safely interpret a FORTRAN statement as multiplying 2 x 2 without worrying about how that's actually being implemented at the machine-language or hardware level -- but it IS being implemented, no matter how complicated the full hardware story for that one operation would be.
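The re-chunking is easy to illustrate with a toy sketch (my own example; a real compiler does something far more sophisticated, but the layering is the same in kind):

```python
def add(x, y):
    # "machine-level" primitive
    return x + y

def multiply(x, y):
    # higher-level chunk: safely interpretable as multiplication
    # without any reference to how the level below implements it
    # (here, iterated addition over non-negative y)
    total = 0
    for _ in range(y):
        total = add(total, x)
    return total
```

One can interpret `multiply(2, 2)` as "2 x 2" while remaining entirely ignorant of the loop of additions underneath -- but the loop is being executed all the same.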


>dm> Take almost any example, a chess program, for instance. Suppose that
>dm> the machine is evaluating a board position after a hypothetical series
>dm> of moves. Suppose the evaluation function is a sum of terms. What does
>dm> each term denote? It is not necessary to be able to say. One might, for
>dm> instance, notice that a certain term is correlated with center control,
>dm> and claim that it denotes "the degree of center control," but what does
>dm> this claim amount to? In many games, the correlation will not hold, and
>dm> the computer may as a consequence make a bad move. But the evaluation
>dm> function is "good" if most of the time the machine makes "good moves."

I'm not sure what an evaluation function is, but again, I am not saying every state must be interpretable. Even in natural language there are content words (like "king" and "bishop") that have referential interpretations and function words ("to" and "and") that have at best only syntactic or functional interpretations. But some of the internal states of a chess-playing program surely have to be interpretable as referring to or at least pertaining to chess-pieces and chess-moves, and those are the ones at issue here. (Of course, so are the mere "function" states, because they too will typically have something to do with (if not chess then) calculation, and that's not gibberish either.)


>dm> The chess program keeps a tree of board positions. At each node of this
>dm> tree, it has a list of moves it is considering, and the positions that
>dm> would result. What does this list denote? The set of moves "worth
>dm> considering"? Not really; it's only guessing that these moves are worth
>dm> considering. We could say that it's the set the machine "is
>dm> considering," but this interpretation is trivial.

And although I might make that interpretation for convenience in describing or debugging the program (just as I might make the celebrated interpretation that first got Dan Dennett into his "intentional stance," namely, that "the computer thinks it should get its queen out early"), I would never dream of taking such interpretations literally: Such high level mentalistic interpretations are simply the top of the as-if hierarchy, a hierarchy in which intrinsically meaningless squiggles and squoggles can be so interpreted that (1) they are able to bear the systematic weight of the interpretation (as if they "meant" this, "considered/believed/thought" that, etc.), and (2) the interpretations can be used in (and even sometimes hard-wired to) the real world (as in interpreting the squiggles and squoggles as pertaining to chess-men and chess-moves).


>dm> We can always impose a trivial interpretation on the states of the
>dm> computer. We can say that every register denotes a number, for
>dm> instance, and that every time it adds two registers the result denotes
>dm> the sum. The problem with this idea is that it doesn't distinguish the
>dm> interpreted computers from the uninterpreted formal systems, because I
>dm> can always find such a Platonic universe for the states of any formal
>dm> system to "refer" to. (Using techniques similar to those used in
>dm> proving predicate calculus complete.)

I'm not sure what you mean, but I would say that whether they are scratches on a paper or dynamic states in a machine, formal symbol systems are just meaningless squiggles and squoggles unless you project an interpretation (e.g., numbers and addition) onto them. The fact that they will bear the systematic weight of that projection is remarkable and useful (it's why we're interested in formal symbol systems at all), but certainly not evidence that the interpretation is intrinsic to the symbol system; it is only evidence of the fact that the system is indeed a nontrivial symbol system (in virtue of the fact that it is systematically interpretable). Nor (as is being discussed in other iterations of this discussion) are coherent, systematic "nonstandard" alternative interpretations of formal symbol systems that easy to come by.


>dm> More examples: What do the states of a video game refer to? The Mario
>dm> brothers? Real asteroids?

They are interpretable as pertaining (not referring, because there's no need for them to be linguistic) to (indeed, they are hard-wireable to) the players and moves in the Mario Brothers game, just as in chess. And the graphics control component is interpretable as pertaining to (and hard-wireable to the bit-mapped images of) the icons figuring in the game. A far cry from uninterpretable squiggles and squoggles.


>dm> What do the data structures of an air-traffic control system refer to?
>dm> Airplanes? What if a blip on the screen is initially the result of
>dm> thermal noise in the sensors, then tracks a cloud for a while, then
>dm> switches to tracking a flock of geese? What does it refer to in that
>dm> case?

I don't know the details, but I'm sure a similar story can be told here: Certain squiggles and squoggles are systematically interpretable as signaling (and mis-signaling) the presence of an airplane, and the intermediate calculations that lead to that signaling are likewise interpretable in some way. Running computer programs are, after all, not black boxes inexplicably processing input and output. We design them to do certain computations; we know what those computations are; and what makes them computations rather than gibberish is that they are interpretable.


>dm> Halfway through an application of Newton's method to an optimization
>dm> problem involving process control in a factory, what do the various
>dm> inverted Hessian matrices refer to? Entities in the factory? What in
>dm> the world would they be? Or just mathematical entities?

The fact that the decomposition is not simple does not mean that the intermediate states are all or even mostly uninterpretable.


>dm> If no other argument convinces you, this one should: Nothing prevents
>dm> a computer from having inconsistent beliefs. We can build an expert
>dm> system that has two rules that either (a) cannot be interpreted as
>dm> about medical matters at all; or (b) contradict each other. The system,
>dm> let us say, happens never to use the two rules on the same case, so
>dm> that on any occasion its advice reflects a coherent point of view.
>dm> (Sometimes it sounds like a homeopath, we might say, and sometimes like
>dm> an allopath.) We would like to say that overall the computer's
>dm> inferences and pronouncements are "about" medicine. But there is no way
>dm> to give a coherent overall medical interpretation to its computational
>dm> states.

I can't follow this: The fact that a formal system is inconsistent, or can potentially generate inconsistent performance, does not mean it is not coherently interpretable: it is interpretable as being inconsistent, but as yielding mostly correct performance nevertheless. [In other words, "coherently interpretable" does not mean "interpretable as coherent" (if "coherent" presupposes "consistent").]

And, ceterum sentio, the system has no beliefs; it is merely systematically interpretable as if it had beliefs (and inconsistent ones, in this case). Besides, since even real people (who are likewise systematically interpretable, but not ONLY systematically interpretable: also GROUNDED by their TTT-powers in the real world) can have inconsistent real beliefs, I'm not at all sure what was meant to follow from your example.


>dm> I could go on, but the point is, I hope, clear. For 99.9% of all
>dm> computer programs, either there is only a trivial interpretation of a
>dm> program's state as referring to numbers (or bit strings, or booleans);
>dm> or there is a vague, unsystematic, error-prone interpretation in terms
>dm> of the entities the machine is intended to concern itself with. The
>dm> *only* exceptions are theorem-proving programs, in which these two
>dm> interpretations coincide. In a theorem prover, intermediate steps are
>dm> about the same entities as the final result, and the computational
>dm> rules getting you from step to step are isomorphic to the deductive
>dm> rules that justify the computational rules. But this is a revealing
>dm> exception. It's one of the most pervasive fallacies in computer science
>dm> to see the formal-systems interpretation of a computer as having some
>dm> implications for the conclusions it draws when it is interpreted as a
>dm> reasoning system. I believe you have been sucked in by this fallacy.
>dm> The truth is that computers, in spite of having trivial interpretations
>dm> as deductive systems, can be used to mimic completely nondeductive
>dm> systems, and that any semantic framework they approximate when viewed
>dm> this way will bear no relation to the low-level deductive semantics.

My view puts no special emphasis on logical deduction, nor on being interpretable as doing logical deduction. Nor does it require that a system be interpretable as if it had only consistent beliefs (or any beliefs at all, for that matter). It need be interpretable only in the way symbol strings in English, arithmetic, C or binary are interpretable.


>dm> I suspect Searle would welcome this view, up to a point. It lends
>dm> weight to his claim that semantics are in the eye of the beholder.
>dm> One way to argue that an air-traffic control computer's states denote
>dm> airplanes is to point out that human users find it useful to
>dm> interpret them this way on almost every occasion. However, the point
>dm> at issue right now is whether semantic interpretability is part of the
>dm> definition of "computer." I argue that it is not; a computer is what
>dm> it is regardless of how it is interpreted. I buttress that
>dm> observation by pointing out just how unsystematic most interpretations
>dm> of a computer's states are. However, if I can win the argument about
>dm> whether computers are objectively given, and uninterpreted, then I
>dm> can go on to argue that unsystematic interpretations of their states
>dm> can be objectively given as well.
>dm>
>dm> -- Drew McDermott

If you agree with Searle that computers can't be distinguished from non-computers on the basis of interpretability, then I have to ask you what (if anything) you DO think distinguishes computers from non-computers? Because "Everything is a computer" would simply eliminate (by fiat) the substance in any answer at all to the question "Can computers think?" (or any other question about what can or cannot be done by a computer, or computationally). Some in this discussion have committed themselves to universality and a complexity-based criterion (arbitrary rival interpretations are NP-complete). Where do you stand?

Stevan Harnad

------------------

From: Brian C Smith Date: Thu, 16 Apr 1992 11:43:32 PDT

I can't help throwing a number of comments into this discussion:

1) ON UNIVERSALITY: All metrics of equivalence abstract away from certain details, and focus on others. The metrics standardly used to show universality are extraordinarily coarse-grained. They are (a) essentially behaviourist, (b) blind to such things as timing, and (c) (this one may ultimately matter the most) promiscuous exploiters of implementation, modelling, simulation, etc. Not only does it strike me as extremely unlikely that (millennial versions of) "cognitive", "semantic", etc., will be this coarse-grained, but the difference between a model and the real thing (ignored in the standard equivalence metrics) is exactly what Searle and others are on about. It therefore does not follow, if X is cognitive and Y is provably equivalent to it (in the standard theoretic sense), that Y is cognitive.

These considerations suggest not only that universality may be of no particular relevance to cognitive science, but more seriously that it is somewhere between a red herring and a mine field, and should be debarred from arguments of cognitive relevance.

2) ON ORIGINAL INTENTIONALITY: Just a quick one. In some of the notes, it seemed that *intrinsic* and *attributed* were being treated as opposites. This is surely false. Intrinsic is presumably opposed to something like extrinsic or relational. Attributed or observer-supplied is one particular species of relational, but there are many others. Thus think about the property of being of average height. This property doesn't inhere within an object, but that doesn't make it ontologically dependent on observation or attribution (at least no more so than anything else [cf. Dietrich]).

There are lots of reasons to believe that semantics, even original semantics, will be relational. More seriously, it may even be that our *capacity* for semantics is relational (historical, cultural, etc. -- this is one way to understand some of the deepest arguments that language is an inexorably cultural phenomenon). I.e., it seems to me a mistake to assume that *our* semantics is intrinsic in us. So arguing that computers' semantics is not intrinsic doesn't cut it as a way to argue against computational cognitivism.

3) ON FORMAL SYMBOL MANIPULATION: In a long analysis (20 years late, but due out soon) I argue that actual, real-world computers are not formal symbol manipulators (or, more accurately, that there is no coherent reading of the term "formal" under which they are formal). Of many problems, one that is relevant here is that the inside/outside boundary does not align with the symbol/referent boundary -- a conclusion that wreaks havoc on traditional notions of transducers, claims of the independence of syntax and semantics, the relevance of "brain in a vat" thought experiments, etc.

4) ON THE "ROBOTIC" SOLUTION: Imagine someone trying to explain piano music by starting with the notion of a melody, then observing that more than one note is played at once, and then going on to say that there must also be chords. Maybe some piano music can be described like that: as melody + chords. But not a Beethoven sonata. The consequence of "many notes at once" is not that one *adds* something (chords) to the prior idea of a single-line melody. Once you've got the ability to have simultaneous notes, the whole ball game changes.

I worry that the robotic reply to Searle suffers the same problem. There's something right about the intuition behind it, having to do with real-world engagement. But when you add it, it is not clear whether the original notion (of formal symbol manipulation, or even symbol manipulation at all) survives, let alone whether it will be a coherent part of the expanded system. I.e., "symbol + robotic grounding" seems to me all too similar to "melody + chords".

If this is true, then there is a very serious challenge as to what notions *are* going to explain the expanded "engaged with the real world" vision. One question, the one on the table, is whether or not they will be computational (my own view is: *yes*, in the sense that they are exactly the ones that are empirically needed to explain Silicon Valley practice; but *no*, in that they will neither be an extension to nor modification of the traditional formal symbol manipulation construal, but will instead have to be redeveloped from scratch). More serious than whether they are computational, however, is what those notions *will actually be*. I don't believe we know.

5) ON TYPES: On March 22, Gary Hatfield raised a point whose importance, I believe, has not been given its due. Over the years, there have been many divisions and distinctions in AI and cognitive science: neat vs. fuzzy; logicist vs. robotic; situated vs. non-situated; etc. I have come to believe, however, that far and away the most important is whether people assume that the TYPE STRUCTURE of the world can be taken as explanatorily and unproblematically given, or whether it is something that a theory of cognition/computation /intentionality/etc. must explain. If you believe that the physical characterisation of a system is given (as many writers seem to do), or that the token characterisation is given (as Haugeland would lead us to believe), or that the set of states is given (as Chalmers seems to), or that the world is parsed in advance (as set theory & situation theory both assume), then many of the foundational questions don't seem to be all that problematic.

Some of us, however, worry a whole lot about where these type structures come from. There is good reason to worry: it is obvious, once you look at it, that the answers to all the interesting questions come out different, if you assume different typing. So consider the discussions of physical implementation. Whether there is a mapping of physical states onto FSA states depends on what you take the physical and FSA states to be. Not only that, sometimes there seems to be no good reason to choose between different typings. I once tried to develop a theory of representation, for example, but it had the unfortunate property that the question of whether maps were isomorphic representations of territory depended on whether I took the points on the maps to be objects, and the lines to be relations between them, or took the lines to be objects and the points to be relations (i.e., intersections) between *them*. I abandoned the whole project, because it was clear that something very profound was wrong: my analysis depended far too much on my own, inevitably somewhat arbitrary, theoretic decisions. I, the theorist, was implicitly, and more or less unwittingly, *imposing* the structure of the solution to my problem onto the subject matter beforehand.

Since then, I have come to believe that explaining the rise of ontology (objects, properties, relations, types, etc.) is part and parcel of giving an adequate theory of cognition. It's tough sledding, and this is not the place to go into it. But it is important to get the issue of whether one believes that one can assume the types in advance out onto the table, because I think implicit disagreement over this almost methodological issue can subvert communication on the main problems of the day.

Brian Smith

(P.S.: Is there a reason not to have a mailing list that each of us can post to directly?)

[The symbol grounding list is not an unmoderated list; it is moderated by me. I post all substantive messages, but if it were unmoderated it would quickly degenerate into what goes on on comp.ai. -- SH]

------------------------------------------------

Date: Sat, 18 Apr 92 17:29:50 MDT To: mcdermott-drew@CS.YALE.EDU Cc: harnad%Princeton.EDU.hayes@cs.stanford.edu

Drew, clearly you have an antisemantic axe to grind, but it's not very sharp.

First of all, of course you are right that many computational processes don't have a constant coherent interpretation. But not 99% of them. Let's look at your examples. First the chess program's list of moves. That this list denotes any list of chess moves - that is, moves of actual chess - is already enough of an interpretation to be firmly in the world of intentionality. You might ask, what IS a move of actual chess, and I wouldn't want to have to wait for a philosopher's answer, but the point here is that it certainly isn't something inside a computer: some kind of story has to be told in which that list denotes (or somehow corresponds to, or has as its meaning) something other than bit-strings. And this kind of story is an essential part of an account of, for example, the correctness of the chess-playing code. Your point that the heuristics which choose a particular set of moves (or which assign particular values of some evaluation function to a move) are in some sense ill-defined is correct, but that is not to say they are uninterpretable. A bitstring has many interpretations which are not numbers and have nothing at all to do with chess, so to claim that these are the meanings is to say something significant.

Suppose I were to build a machine which treated its bit-strings like base-1 integers, so that N was represented by a consecutive string of ones N long. Now your interpretation of addition will fail. So it isn't completely trivial.
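Hayes's base-1 machine can be sketched in a few lines (the encoding and function names are mine):

```python
def as_base2(bits):
    # the standard reading: "111" denotes seven
    return int(bits, 2)

def as_base1(bits):
    # Hayes's reading: a run of N ones denotes N, so "111" denotes three
    assert set(bits) <= {"1"}
    return len(bits)

def base1_add(x_bits, y_bits):
    # under the base-1 reading, concatenation IS addition
    return x_bits + y_bits
```

Under the base-1 reading, the circuitry that implements binary addition computes something else entirely -- so even the "trivial" numerical interpretation of a register is a substantive interpretive choice, not interpretation-free.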

Consider again the air traffic control system which gets confused by thermal noise, then clouds, then geese. This is a familiar situation, in which a knower has confused knowledge. But the model theory accounts for this perfectly well. Its beliefs were false, poor thing, but they had content: it thought there was an airplane there. To give a proper account of this requires the use of modality and a suitable semantics for it, as I know you know. One has to say something like its blip denoted an airplane in the possible worlds consistent with its beliefs. But look, all this is semantics, outside the formal syntactic patterns of its computational memory. And just CALLING it an "air-traffic control system" implies that its computational states have some external content.

Your inconsistent-beliefs point misses an important issue. If that expert system has some way of ensuring that these contradictory rules never meet, then it has a consistent interpretation, trivially: we can regard the mechanism which keeps them apart as being an encoding of a syntactic difference in its rule-base which restores consistency. Maybe one set of rules is essentially written with predicates with an "allo-" prefix and the others with a "homeo-". You might protest that this is cheating, but I would claim not: in fact, we need a catalog of such techniques for mending consistency in sets of beliefs, since people seem to have them and use them to 'repair' their beliefs constantly, and making distinctions like this is one of them (as in, "Oh, I see, must be a different kind of doctor"). If on the other hand the system has no internal representation of the distinction, even implicit, but just happens never to bring the contradiction together, then it is in deep trouble, as it will soon just happen to get its knowledge base into total confusion. But in any case, it is still possible to interpret an inconsistent set of beliefs as meaningful, since subsets of it are. We might say of this program, as we sometimes do of humans, that it was confused, or it seemed to keep changing its mind about treatment procedures: but this is still ABOUT medicine. A very naive application of Tarskian models to this situation would not capture the necessary subtlety of meaning, but that doesn't make it impossible.

Finally, there is no need to retreat to this idea of the interpretation being a matter of human popularity. The reason the states of an autopilot denote positions of the airplane is not because people find it useful to interpret them that way, but because (with very high probability) the airplane goes where it was told to.

Pat Hayes

-----------------

Date: Mon, 20 Apr 92 09:58:09 EDT From: "Stevan Harnad"

bd> Date: Sun, 19 Apr 92 20:06:37 PDT
bd> From: dambrosi@research.CS.ORST.EDU (Bruce Dambrosio)
bd>
bd> Stevan:
bd>
bd> I am puzzled by one thing, which perhaps was discussed earlier: why do
bd> you, of all people, believe that a definition satisfying your
bd> requirements might exist? This seems to me quite a quixotic quest.
bd>
bd> A definition is a symbolically specified mapping from the objects
bd> denoted by some set of symbols to the objects denoted by the symbol
bd> being defined. But if, as you claim, the process by which the
bd> relationship is established (grounding) is such that it cannot be
bd> adequately described symbolically (I take this to be the heart of the
bd> symbol grounding position), then how can one ever hope to describe the
bd> relationship between two groundings symbolically? At best, one can only
bd> hope for a rough approximation that serves to guide the hearer in the
bd> right direction. I may know what a computer is, but be quite unable to
bd> give you a definition that stands up to close scrutiny. Indeed, such a
bd> situation would seem to be evidence in favor of symbol grounding as a
bd> significant issue. Am I naive or has this already been discussed?
bd>
bd> Bruce D'Ambrosio

Bruce,

This has not been explicitly discussed, but unfortunately your description of the symbol grounding problem is not quite correct. The problem is not that we cannot give adequate definitions symbolically (e.g., linguistically); of course we can: we can adequately define anything, concrete or abstract, that we understand well enough to define.

The symbol grounding problem is only a problem for those (like mind modelers) who are trying to design systems in which the meanings of the symbols are INTRINSIC to the system, rather than having to be mediated by our (grounded) interpretations. There is nothing whatsoever wrong with ungrounded symbol systems if we want to use them for other purposes, purposes in which our interpretations are free to mediate. A dictionary definition is such a mediated use, and it does not suffer from the symbol grounding problem. The example of the Chinese-Chinese Dictionary-Go-Round that I described in Harnad (1990) was one in which the dictionary was being used by someone who knew no Chinese! For him the ungroundedness of the dictionary (and the fact that its use cannot be mediated by his own [nonexistent] grounded understanding of Chinese) is indeed a problem, but not for a Chinese speaker.

If our notions of "computer" and "computation" are coherent ones (and I suspect they are, even if still somewhat inchoate) then there should be no more problem with defining what a computer is than in defining what any other kind of object, natural or artificial, is. The alternatives (that everything is a computer, or everything is a computer to some degree, or nothing is a computer), if they are the correct ones, would mean that a lot of the statements we make in which the word "computer" figures (as in "computers can/cannot do this/that") would be empty, trivial, or incoherent.

One pass at defining computation and computer would be as, respectively, syntactic symbol manipulation and universal syntactic symbol manipulator. Here a symbol system is a set of objects (symbols) manipulated according to (syntactic) rules that operate only on their shapes (not their meanings); the symbols and symbol manipulations are systematically interpretable as meaning something (and the interpretation is cryptologically nontrivial); but the shapes of the elementary symbol tokens are arbitrary in relation to what they can be interpreted as meaning. I, for one, could not even formulate the symbol grounding problem if there were no way to say what a symbol system was, or if everything was a symbol system.
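[A minimal illustration of this definition; the rewrite rule and the unary notation below are my own toy construction, not part of the original exchange. The point it makes is Harnad's: the manipulation is defined entirely over token shapes, while the arithmetic meaning lives only in our interpretation.]

```python
# A symbol system in miniature: rules operate purely on the SHAPES of
# tokens ('|' and '+'), with no reference to what those tokens mean.
# Yet the manipulations are systematically interpretable -- here, as
# unary addition.

def step(s):
    """One purely syntactic rewrite: delete a single '+' token wherever
    its shape is matched in the string."""
    return s.replace("+", "", 1) if "+" in s else s

def reduce_string(s):
    """Apply the rewrite rule until no '+' tokens remain."""
    while "+" in s:
        s = step(s)
    return s

# Syntactically: '+' tokens are deleted, one per step.
# Semantically (our interpretation, not the system's): 2 + 3 = 5.
result = reduce_string("||+|||")
assert result == "|||||"
assert len(result) == 5
```

Note that the interpretation is parasitic on us: the system would run identically if we read '|' as sheep and '+' as herding-together, which is just the arbitrariness-of-shape point.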

As to the question of approximate grounding: I discuss this at length in Harnad (1987). Sensory groundings are always provisional and approximate (because they are relative to the sample of confusable alternatives encountered to date). Definitions may be provisional and empirical ones, or they may be stipulative and analytical. If the latter, they are not approximate, but exact "by definition." I would argue, however, that even high-level exact definitions depend for our understanding on the grounding of their symbols in lower-level symbols, which are in turn grounded ultimately in sensory symbols (which are indeed provisional and approximate). This just suggests that symbol grounding should not be confused with ontology.

There are prominent philosophical objections to this kind of radical bottom-uppism, objections of which I am quite aware and have taken some passes at answering (Harnad 1992). The short answer is that bottom-uppism cannot be assessed by introspective analysis alone and has never yet been tried empirically; in particular, no one knows HOW we actually manage to sort, label and describe objects, events and states of affairs as we do, but we can clearly do it; hence, until further notice, input information (whether during our lifetimes or during the evolutionary past that shaped us) is the only candidate source for this remarkable capacity.

Stevan Harnad

Harnad, S. (1987) The induction and representation of categories. In: S. Harnad (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

------------------------------------------------------------

Date: Tue, 21 Apr 92 18:22:32 EDT From: "Stevan Harnad"

ARE "GROUNDED SYMBOL SYSTEMS" STILL SYMBOL SYSTEMS?


> Brian C Smith wrote:


>bs> It... does not follow [that] if X is cognitive, and Y provably
>bs> equivalent to it (in the standard theoretic sense), that Y is
>bs> cognitive.

Of course not; in fact one wonders why this even needs to be said! Equivalence "in the standard sense" is computational equivalence, not physical-causal equivalence. Whether someone is drowned in water or in beer is equivalent insofar as drowning is concerned, because the drowning is real in both cases. But if the drowning is "virtual" (i.e., a computer-simulated person is "drowned" in computer-simulated water) there is no drowning at all going on, no matter how formally equivalent the symbols may be to real drowning.


>bs> In some of the notes, it seemed that *intrinsic* and *attributed* were
>bs> being treated as opposites. This is surely false. Intrinsic is
>bs> presumably opposed to something like extrinsic or relational.
>bs> Attributed or observer-supplied is one particular species of
>bs> relational, but there are many others.

I've never understood why so much emphasis is placed by philosophers on the difference between monadic ("intrinsic") and polyadic ("relational") properties. Surely that's not the real issue in mind modeling. What we want is that symbols should mean X not just because we interpret them as meaning X but because they (also) mean X independently of our interpretations. Their meaning has to be autonomously GROUNDED in something other than just their being able to bear the systematic weight of our interpretations.

The string of symbols "the cat is on the mat," whether it is instantiated on the inert pages of a book or as a dynamic state in a computer running a LISP program, is systematically interpretable as meaning "the cat is on the mat" (in relation to the rest of the symbol system) but it does not mean "the cat is on the mat" on its own, autonomously, the way I do when I think and mean "the cat is on the mat," because I, unlike the book or the computer, don't mean "the cat is on the mat" merely in virtue of the fact that someone else can systematically interpret me as meaning that.

So the real problem is how to ground meaning autonomously, so as not to leave it hanging from a skyhook of mere interpretation or interpretability. The solution may still turn out to be "relational," but so what? According to my own robotic grounding proposal, for example, a robot's symbols would have autonomous meaning (or, to be noncommittal, let's just say they would have autonomous "grounding") because their use would be governed and constrained by whatever it takes to make the robot capable of interacting TTT-indistinguishably with the very objects to which its symbols were interpretable as referring. The meaning of the robot's symbols is grounded in its robotic capacity instead of depending only on how the symbols can be or actually are interpreted by us. But note that this is merely a case of one set of "relations" (symbol/symbol relations and their interpretations) being causally constrained to be coherent with another set of "relations" (symbol/object relations in the world).

The source, I think, of the undue preoccupation with monadic properties is the (correct) intuition that our thoughts are meaningful in and of themselves, not because of how their interrelations are or can be interpreted by others. Probably the fact that all thoughts are the thoughts of a conscious subject (and that their meaning is a meaning to that conscious subject) also contributed to the emphasis on the autonomy and "intrinsic" nature of meaning.


>bs> There are lots of reasons to believe that semantics, even original
>bs> semantics, will be relational... it may even be that our
>bs> *capacity* for semantics is relational (historical, cultural, etc)...
>bs> it seems to me a mistake to assume that *our* semantics is intrinsic in
>bs> us. So arguing that computers' semantics is not intrinsic doesn't cut
>bs> it as a way to argue against computational cognitivism.

To agree that the meanings of the symbols inside a robot are grounded in (say) the robot's actual relations to the objects to which its symbols can be interpreted as referring is still not to agree that the locus of those meanings is any wider -- in either time or space -- than the robot's body (which includes the projections and effects of real world objects on its sensorimotor surfaces).


>bs> [In a forthcoming paper ] I argue that actual, real-world computers are
>bs> not formal symbol manipulators (or, more accurately, that there is no
>bs> coherent reading of the term "formal" under which they are formal).
>bs> Of many problems, one that is relevant here is that the inside/
>bs> outside boundary does not align with the symbol/referent boundary --
>bs> a conclusion that wreaks havoc on traditional notions of transducers,
>bs> claims of the independence of syntax and semantics, the relevance of
>bs> "brain in a vat" thought experiments, etc.

One would have to see this forthcoming paper, but my intuition is that a lot of red herrings have been and continue to be raised whenever one attempts to align (1) the internal/external distinction for a physical system with (2) what is going on "inside" or "outside" a mind. The first, I think, is largely unproblematic: We can safely (though not always usefully) distinguish the inside and the outside of a computer or a robot, as well as the I/O vs. the processing of a symbol system. What is inside and outside a mind is another story, one that I think is incommensurable with anything but the grossest details of the physical inside/outside story.

As a first pass at "formal," how about: A symbol system consists of a set of objects (elementary symbols and composite symbols) plus rules for manipulating the symbols. The rules operate only on the physical shapes of the symbols, not their meanings (and the shapes of the elementary symbols are arbitrary), yet the symbols are systematically interpretable as meaning something. The rules for manipulating the symbols on the basis of their shapes are called "syntactic" or "formal" rules.

A computer is a dynamical system that mechanically implements the symbols, the symbol manipulations, and the rules (constraints) governing the symbol manipulations.


>bs> Imagine someone trying to explain piano music by starting with the
>bs> notion of a melody, then observing that more than one note is played at
>bs> once, and then going on to say that there must also be chords. Maybe
>bs> some piano music can be described like that: as melody + chords. But
>bs> not a Beethoven sonata. The consequence of "many notes at once" is not
>bs> that one *adds* something (chords) to the prior idea of a single-line
>bs> melody. Once you've got the ability to have simultaneous notes, the
>bs> whole ball game changes.

Some symbols can be indirectly grounded this way, using propositions with symbols that either have direct sensory grounding or are near to their sensory grounding (e.g., "A `zebra' is a horse with stripes"), but many symbols cannot be adequately grounded by symbolic description alone and require direct sensory acquaintance. This is just more evidence for the importance of sensorimotor grounding.


>bs> I worry that the robotic reply to Searle suffers the same problem.
>bs> There's something right about the intuition behind it, having to do
>bs> with real-world engagement. But when you add it, it is not clear
>bs> whether the original notion (of formal symbol manipulation, or even
>bs> symbol manipulation at all) survives, let alone whether it will be a
>bs> coherent part of the expanded system. I.e., "symbol + robotic
>bs> grounding" seems to me all too similar to "melody + chords".

The standard robot reply to Searle is ineffectual, because it retains the (symbols-only) Turing Test (TT) as the crucial test for having a mind and simply adds on arbitrary peripheral modules to perform robotic functions. My own "robot" reply (which I actually call the "Total" reply) rejects the TT altogether for the "Total Turing Test" (TTT) and is immune to Searle's argument because the TTT cannot be passed by symbol manipulation alone, and Searle (on pain of the "System Reply," which normally fails miserably, but not in the case of the TTT) can fully implement only pure implementation-independent symbol manipulation, not implementation-dependent nonsymbolic processes such as transduction, which are essential for passing the TTT.

On the other hand, I agree that grounded "symbol systems" may turn out to be so radically different from pure symbol systems as to make it a different ballgame altogether (the following passage is from the Section entitled "Analog Constraints on Symbols" in Harnad 1992):

"Recall that the shapes of the symbols in a pure symbol system are arbitrary in relation to what they stand for. The syntactic rules, operating on these arbitrary shapes, are the only constraint on the manipulation of the symbols. In the kind of hybrid system under consideration here, however, there is an additional source of constraint on the symbols and their allowable combinations, and that is the nonarbitrary shape of the categorical representations that are "connected" to the elementary symbols: the sensory invariants that can pick out the object to which the symbol refers on the basis of its sensory projection. The constraint is bidirectional. The analog space of resemblances between objects is warped in the service of categorization -- similarities are enhanced and diminished in order to produce compact, reliable, separable categories. Objects are no longer free to look quite the same after they have been successfully sorted and labeled in a particular way. But symbols are not free to be combined purely on the basis of syntactic rules either. A symbol string must square not only with its syntax, but also with its meaning, i.e., what it, or the elements of which it is composed, are referring to. And what they are referring to is fixed by what they are grounded in, i.e., by the nonarbitrary shapes of the iconic projections of objects, and especially the invariants picked out by the neural net that has accomplished the categorization."
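[The "warping" of analog similarity space described in this passage can be sketched numerically. This is my own toy model, not anything from Harnad 1992: the sigmoid, the boundary at 0.5, and the gain value are all hypothetical choices, meant only to show how a warped continuum compresses within-category differences and expands between-category ones.]

```python
import math

# Toy sketch of categorical-perception warping on a 1-D stimulus
# continuum in [0, 1]: a sigmoid centred on the category boundary
# shrinks distances within a category and stretches distances across it.

def warp(x, boundary=0.5, gain=12.0):
    """Map a raw stimulus value through a sigmoid centred on the
    category boundary; `gain` controls how sharply space is warped."""
    return 1.0 / (1.0 + math.exp(-gain * (x - boundary)))

# Two stimulus pairs, each 0.2 apart on the RAW continuum:
within  = abs(warp(0.1) - warp(0.3))   # both on the same side of the boundary
between = abs(warp(0.4) - warp(0.6))   # straddling the boundary

# After warping, the between-category pair is far more separable,
# even though the raw distances were identical:
assert between > within
```

The design point is just the bidirectional constraint in the quoted passage: once the continuum is warped in the service of categorization, "objects are no longer free to look quite the same."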


>bs> If this is true, then there is a very serious challenge as to what
>bs> notions *are* going to explain the expanded "engaged with the real
>bs> world" vision. One question, the one on the table, is whether or not
>bs> they will be computational (my own view is: *yes*, in the sense that
>bs> they are exactly the ones that are empirically needed to explain
>bs> Silicon Valley practice; but *no*, in that they will neither be an
>bs> extension to nor modification of the traditional formal symbol
>bs> manipulation construal, but will instead have to be redeveloped from
>bs> scratch). More serious than whether they are computational, however, is
>bs> what those notions *will actually be*. I don't believe we know.

I think I agree: The actual role of formal symbol manipulation in certain dedicated symbol systems (e.g., TTT-scale robots) may turn out to be so circumscribed and/or constrained that the story of the constraints (the grounding) will turn out to be more informative than the symbolic story.


>bs> the most important [distinction in AI and cognitive science] is whether
>bs> people assume that the TYPE STRUCTURE of the world can be taken as
>bs> explanatorily and unproblematically given, or whether it is something
>bs> that a theory of cognition/computation /intentionality/etc. must
>bs> explain. If you believe that the physical characterisation of a system
>bs> is given (as many writers seem to do), or that the token
>bs> characterisation is given (as Haugeland would lead us to believe), or
>bs> that the set of states is given (as Chalmers seems to), or that the
>bs> world is parsed in advance (as set theory & situation theory both
>bs> assume), then many of the foundational questions don't seem to be all
>bs> that problematic... Some of us, however, worry a whole lot about where
>bs> these type structures come from... [E]xplaining the rise of ontology
>bs> (objects, properties, relations, types, etc.) is part and parcel of
>bs> giving an adequate theory of cognition.
>bs>
>bs> Brian Smith

It is for this reason that I have come to believe that categorical perception and the mechanisms underlying our categorizing capacity are the groundwork of cognition.

Stevan Harnad

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

--------------------------------------------------------------

Date: Tue, 21 Apr 92 20:34:02 EDT From: "Stevan Harnad"

From: Pat Hayes Date: Wed, 15 Apr 92 16:02:09 MDT


>sh> ...if a mind supervenes on (the right)
>sh> computations because of their computational
>sh> properties (rather than because of the physical
>sh> details of any particular implementation of
>sh> them), then it must supervene on ALL
>sh> implementations of those computations. I think
>sh> Searle's Chinese Room Argument has successfully
>sh> pointed out that this will not be so ...


>ph> No, only if you believe that what Searle in the room is doing is
>ph> letting a program run on him, which I think is clearly false. Searle's
>ph> Chinese Room argument doesn't SHOW anything. It can be used to bolster
>ph> a belief one might have about computations, but if one doesn't accept
>ph> that as a premise, than it doesn't follow as a conclusion either. The
>ph> "argument" is just an intuition pump, as Dennett observed a decade
>ph> ago.

Pat, "intuition pump" is not a pejorative, if it pumps true. I will be happy to consider the implications of the fact that Searle, doing everything the computer does, does not count as a valid implementation of the same computer program -- as soon as you specify and argue for what you mean by implementation and why Searle's would not qualify. Until then, I don't see why EVERY system that processes the same symbols, follows the same (syntactic) rules and steps through the same states doesn't qualify as a valid implementation of the same program.


>sh> transducers, for example, are no more
>sh> implementation-independent than digestion is.


>ph> Well, I see what you mean and agree, but one does have to be careful.
>ph> The boundary of implementation-independence can be taken very close to
>ph> the skin. For example, consider a robot with vision and imagine
>ph> replacing its tv cameras with more modern ones which use an array of
>ph> light-sensitive chips rather than scanning something with an electron
>ph> beam. It really doesn't matter HOW it works, how the physics is
>ph> realised, provided it sends the right signals back along its wires. And
>ph> this functional specification can be given in terms of the physical
>ph> energies which are input to it and the syntax of its output. So we are
>ph> in supervenience from the skin inwards.

I agree that the layer between the "shadow" that objects cast on our transducers and the symbols they are cashed into at the very next layer could in principle be VERY thin -- if indeed the rest of the story were true, which is that the signals are just hurtling headlong toward a symbolic representation. However, I don't believe the rest of the story is true! I think most of the brain is preserving sensory signals in various degrees of analog form (so we would probably do well to learn from this). In fact, I think it's as likely that a mind is mostly symbolic, with just a thin analog layer mediating input and output to the world, as that a plane or a furnace is mostly symbolic, with a thin analog layer mediating input and output.

But even if the transducer layer WERE that thin, my point would stand (and that thin layer would then simply turn out to be critically important for the implementation of mental states). Although I don't get much insight from the concept of "supervenience," it would be the analog-plus-symbolic system on which mental states would "supervene," not the symbolic part alone, even if the analog layer was only one micron thick.

I AM a functionalist about analog systems though. There's more than one way to skin an analog cat: As long as devices are analog, and support the same I/O, they don't have to be physically identical: The omatidia of the horseshoe crab transduce light just as literally as mammalian retinae (or synthetic optical transducers) do; as long as they really transduce light and generate the same I/O, they're functionally equivalent enough for me, as transducers. Same is true for internal A/A transforms, with retinal signals that code light in the intensity domain going into some other continuous variable (or even A/D into the frequency domain) as long as they are functionally equivalent and invertible at the I/O end.


>ph> This is like my point about language. While I think you are ultimately
>ph> correct about the need for a TTT to pin down meaning, the need seems
>ph> almost a piece of philosophical nitpicking, since one can get so far -
>ph> in fact, can probably do all of the science - without ever really
>ph> conceding it. In terms of thinking about actual AI work, the difference
>ph> between TT and TTT doesn't really MATTER. And by the way, if one talks
>ph> to people in CS, they often tend to regard the term 'computation' as
>ph> including for example real-time control of a lime kiln.

I don't think you need the TTT to "pin down" meaning. I think you need the structures and processes that make it possible to pass the TTT in order to implement meaning at all. And that definitely includes transduction.

We don't disagree, by the way, on the power of computation to capture and help us understand, explain, predict and build just about anything (be it planes or brains). I just don't think computation alone can either fly or think.


>ph> Heres a question: never mind transducers (I never liked that concept
>ph> anyway), how about proprioception? How much of a sense of pain can be
>ph> accounted for in terms of computations? This is all internal, but I
>ph> bet we need a version of the TTT to ultimately handle it properly, and
>ph> maybe one gets to the physics rather more immediately, since there isn't
>ph> any place to draw the sensed/sensing boundary.
>ph>
>ph> Pat Hayes

As I wrote in my comment on Brian Smith's contribution, this conflates (i) internal/external with respect to a robot's BODY (which is no problem, and may involve lots of "internal" transducers -- for temperature, voltage, etc. -- that are perfectly analog rather than symbolic, despite their internal locus) with (ii) internal/external with respect to the robot's MIND:

(1) What is "in" the mind is certainly inside the body (though "wide intentionalists" tend to forget this); but

(2) what is in the mind is not necessarily symbolic;

(3) what is inside the body is not necessarily symbolic;

(4) what is inside the body is not necessarily in the mind.

The question is not how to "account for" or "handle" proprioception or pain, but how to EMBODY them, how to implement them. And I'm suggesting that you can't implement them AT ALL with computation alone -- not that you can't implement them completely or unambiguously that way, but that you can't implement them AT ALL. (Or, as an intuition pump, you can implement pain or proprioception by computation alone to the same degree that you can implement flying or heating by computation alone.)

Stevan Harnad

------------

Date: Tue, 21 Apr 92 20:48:01 EDT From: "Stevan Harnad"

Below is a contribution to the symbol grounding discussion from Mike Dyer. I will not reply here, because the disagreement between Mike and me has already appeared in print (Dyer 1990, Harnad 1990 in the same issue of JETAI; my apologies for not having the page span for Mike's article at hand).

I will just point out here that Mike seems prepared to believe in some rather radical neurological consequences following from the mere memorization of meaningless symbols. To me this is tantamount to sci-fi. Apart from this, I find that the variants Mike proposes on Searle's Argument seem to miss the point and change the subject.

Stevan Harnad

Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327.

-------------------

Date: Tue, 14 Apr 92 23:19:41 PDT From: Dr Michael G Dyer Subject: networks gating networks: minds supervening on minds

Stevan,

It appears that your unfortunate blind acceptance of Searle's Chinese Room Argument (CRA) keeps leading you astray. In your analysis of Chalmers's observations you at least correctly grasp that

"So if a mind supervenes on (the right) computations because of their computational properties (rather than because of the physical details of any particular implementation of them), then it must supervene on ALL implementations of those computations."

But then you get derailed with:

"I think Searle's Chinese Room Argument has successfully pointed out that this will not be so..."

But Searle's CRA has NOT been "successful". CRA is quite FLAWED, but you don't seem to entertain any notions concerning how two brains/minds might be intertwined in the same body.

Scenario #1: Suppose the Chinese instructions that Searle follows actually cause Searle to simulate the activation of a complex network of artificial neurons, equal in complexity to the neural network of a human brain (just what's in those instruction books is never specified, so I can imagine anything I want). We then take those instructions and build a specialized computer "neuro-circuit" that realizes those instructions -- call it C. We then enlarge Searle's head and install C so that, when "turned on" it takes over Searle's body. With a remote control device we turn C on and suddenly Searle starts acting like some particular Chinese individual. Once turned on, the Chinese persona requests to maintain control of the body that once was Searle's.

Scenario #2: We build, on a general purpose multiprocessor, a simulation of the neuro-circuitry of C -- let's call this Simu-C -- such that Simu-C takes over the body when we flip a switch.

Scenario #3: This one is a bit trickier. We examine carefully Searle's own neuro-circuitry and we design an artificial neural network -- call it C-supervene -- that gates the circuits of Searle's brain such that, when C-supervene is turned on, the gating of Searle's circuitry causes Searle's own brain circuitry to turn into the Chinese person circuitry. Thus, Searle's own neurocircuitry is being used (in a rather direct way -- i.e. no symbols) to help create the Chinese personage.

Scenario #4: But now we replace C-supervene with a general multiprocessor that runs a simulation (i.e., symbol manipulations) that gates Searle's own neuro-circuitry to produce the Chinese persona.

In scenario #1 there are two distinct sets of neuro-circuits: Searle's and the Chinese person's. Whichever one controls the body depends on our switch.

In scenario #2 the Chinese neurocircuitry is replaced by a simulation of that circuitry with a more general purpose hardware.

In scenario #3 the Chinese neurocircuitry actually makes use of Searle's neurocircuitry to do its computations, but it is the "master" and Searle's circuitry is the "slave".

In scenario #4, again, the specialized Chinese "neuro-controller" of scenario #3 is replaced by a simulation on a more general purpose hardware.

Finally, we can give the Chinese person (in whichever incarnation above you want) a set of instructions that allows it to simulate Searle's entire neuro-circuitry, so that, when we flip our switch, it's the Searle persona who gains control of the body. So we can have multiple levels of Searles and Chinese persons simulating each other.

Now, WHICH person should we listen to? When in control of the body, the Searle persona says he's the real person and he has no experience of being the Chinese person. When the Chinese person is in control, this person claims to have no experience of being Searle (and so uses Searle's own argument against Searle).

Now, it is still possible that there is more to having a mind than having the right sort of computations, but Searle has NOT given any kind of refutation with his CRA. And your own "grounding" argument is insufficient also since it leads to either one of two absurd situations:

either you have to claim that (a) remove the eyes and the mind goes, or (b) the entire brain has to count as the eyes, so that you get to remove the entire brain whenever anyone requests that you remove the eyes.

On the other side, if we accept Chalmers's position (and the working hypothesis of the Physical Symbol System Hypothesis of AI), then we have the advantage of being able to compare minds by observing how similar their computations are (at some level of abstraction) and we can develop, ultimately, a non-chauvinistic theory of mind (e.g. alien intelligences in different substrata).

Notice, connectionism fits within the PSSH (because every neuron and synapse and dendritic compartment, etc. can be represented by a symbol and its behavior modeled by symbol manipulation).

Chalmers (and AI researchers) may be wrong, but Searle (and you) have not given any kind of ironclad arguments that they are.

So let's hear something from you OTHER than your overly and oft used argument of the form:

"X..., but we know Searle's CRA is right, so X can't be..."

(There are large numbers of AI researchers out here who are not convinced even one whit by Searle's CRA or your "out of sight out of mind" argument. So you and Searle need to come up with something new. Searle's INABILITY to "feel" what it's like to "be" the Chinese person he's bringing into existence IS TO BE EXPECTED. Nor do we expect the Lisp interpreter to know what it's like to be the particular expert system that "supervenes" upon it.)

-- Michael Dyer

---------------------

Date: Wed, 22 Apr 92 13:06:13 EDT From: "Stevan Harnad"

Date: Wed, 22 Apr 92 12:00:06 +0200 From: tim@arti1.vub.ac.be (Tim Smithers)

I have two responses to Mike Dyer's recent contribution to your symbol grounding discussion.

First, the actual practice of (symbol processing) AI research makes it very difficult to talk about the Physical Symbol System Hypothesis (PSSH) of Newell and Simon as being "a working hypothesis". It is much more a widely accepted and unquestioned dogma than it is a hypothesis. For it to be a hypothesis in the normal scientific sense, (symbol processing) AI research would need to be conducting a broad experimental investigation of its validity (or otherwise). Very little, if any, research is either presented as, or can be properly understood to be, a contribution to such a research programme. Talk of constructing theories of mind (even non-chauvinistic ones) given the current practice thus seems to me to be unsupportable.

Second, even if there are lots of AI researchers out there who are not convinced by Searle's and your arguments, does Mike Dyer think that they are going to be convinced by such in-principle-possibly-true but in-practice-right-out-of-the-window arguments as he offers? I think all you guys should spend some time trying to build real robots (not simulations!) that reliably do real things (even very simple things) in the real world. This might introduce some reality constraints on this whole discussion from which I think it would benefit enormously--it might `ground' the discussion, if you don't mind my misusing your term.

Tim Smithers

---------------

From: Pat Hayes Date: Sun, 19 Apr 92 15:08:06 MDT To: Brian C Smith

Brian-

As usual, you say things clearly. But I can't resist a few comments. I follow your numbering.

1. I agree wholeheartedly with your observations about the limitations of universality. But not with your conclusions. Searle talks about the distinction between a model and the real thing, but the moral of the classical work on universality (and of CS practice - not just in Silicon Valley, by the way!) is exactly that a computational simulation of a computation IS a computation. Thus, a LISP interpreter running LISP really is running LISP: it's no less really computation than if one had hardware devoted to the task.

That is, I think, a crucial insight, perhaps the central one of CS, and one which was historically very surprising. That's why computers work, why we can run LISP and Word-4 on the same machine. To put that aside as a red herring is simply to regard computers as fast switching devices. It carries possible implications for biology; for example, it suggests an account of why evolution produced so much cognition so quickly. While this idea and its implications are a minefield, I think it's one we need to be treading through, and definitely not any kind of colored fish.
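[Editorial aside: Hayes's point that "a computational simulation of a computation IS a computation" can be made concrete with a minimal sketch. The toy stack language and its instruction names below are invented for illustration; nothing here is from the discussion itself.]

```python
# A tiny interpreter for an invented stack language. The claim being
# illustrated: when this interpreter "simulates" an addition, an
# addition really is performed -- the interpreted run is no less a
# computation than one done by dedicated hardware.

def run(program):
    """Interpret a toy stack language with PUSH n, ADD, MUL."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# Computing (3 + 4) * 5 directly, and by interpreting a program for it:
direct = (3 + 4) * 5
interpreted = run([("PUSH", 3), ("PUSH", 4), ("ADD",),
                   ("PUSH", 5), ("MUL",)])
assert direct == interpreted == 35
```

The interpreter is itself just a program, so it could in turn be run by another interpreter, and so on: each level really computes.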

2. [on original intentionality] Oh, I agree! This deserves expanding. Many of the Searlean writers have taken it as somehow axiomatic that human thinking just has this vital property of being meaningful, something that only human, or maybe organic, thinking has been observed to possess. Whatever this is, it isn't anything like a color or a mass that human thought has.

I have a number of beliefs about ancient Rome. How are these thoughts connected to the Rome of 2000 years ago? The answer is probably very complicated, involving texts written by others and translated from one language to another, historians' best attempts to reconstruct facts from flimsy evidence, and so forth. The connection between me and Caesar goes through an entire society, indeed a historical chain of societies. My thoughts about Julius Caesar are not somehow intrinsically about him by virtue of their being in my head; but they are in fact about him. But I can't see any reason why a machine could not have almost the same (very complicated) relationship to him that I have, whatever it is, since it is mediated almost entirely by language.

5. [on types] I agree that there is a danger of imposing too much structure on the world, but have a couple of caveats. First, what makes you think that this will ever be completely avoidable? We must use some concepts to build our theories from, and to try to avoid it altogether is not just tough sledding but I think trying to walk into the snow naked (and you know what happened to him.) We have to be conscious of what we are doing, but we must use something, surely. And second, I don't think you are right to dismiss set theory so quickly. Set theory doesn't preparse the world: it only insists that some parsing is made. Others have tried to get by with less, and indeed several other alternatives are available, as well as several alternative foundations. But again, you have to stand somewhere, and these carefully developed, thoroughly tested and well-understood patches of intellectual ground have a lot to recommend them. And I don't think one does have to have the types all specified in advance.

Take maps for example. One can give an account of how maps relate to terrain which assumes that maps have some kind of parsing into meaningful symbols (towns, roads, etc) which denote...well, THINGS in the territory, and talk about a certain class of (spatial) relations between these things which is reflected by a (more-or-less) homomorphic image of them holding between the syntactic objects in the map. Now, there are all sorts of complexities, but the essential idea seems coherent and correct. But notice it has to assume that the terrain can be somehow divided into pieces which are denoted by the map's symbols (or better, that appropriate structures can be found in the terrain). You might object at this point that this is exactly what you are complaining about, but if so I would claim not. Here, the theorist is only assuming that SOME parsing of the terrain can be made: it was the maker of the map who parsed his territories, and the semantic account has to reflect this ontological perspective. So what the theorist needs is some way of describing these ontological complexities which imposes a minimum of structure of its own, and structure which is well-understood so that we can consciously allow for possible distortions it might introduce. And that is just what is provided by the idea of a set. To be sure, there are some artifacts produced by, for example, the conventional extensional representation of functions. But these are immediately recognisable when they occur and have known ways around them.

I once tried to focus on the hardest case I could find for a set-theoretic account, which was the idea of a piece of liquid (since what set theory does seem to assume is the notion of an individual thing conceptually distinct from others, and liquids are very intermergable). And to my surprise, the little intellectual discipline imposed by the use of sets actually clarified the semantic task: it was as though the sets imposed distinctions which were a useful ontological discovery. I have since come to think not that a particular set of types is fixed in advance, but that what does seem to be fixed in us, in our way of thinking, is a propensity to individuate. The world is a continuum, but we see it and think of it as made of things, maybe overlapping in complex ways, but conceptually separate entities that we can name and classify.

This may not be the place to indulge in this discussion, since it is getting away from what computation is, but you asked for things onto the table...

Pat Hayes

-----------

From: Pat Hayes To: Stevan Harnad (harnad@princeton.edu) Date: Tue, 21 Apr 92 17:56:09 MDT


>sh> As a first pass at "formal," how about: A symbol system
>sh> consists of a set of objects (elementary symbols and composite
>sh> symbols) plus rules for manipulating the symbols. The rules
>sh> operate only on the physical shapes of the symbols, not their
>sh> meanings (and the shapes of the elementary symbols are
>sh> arbitrary), yet the symbols are systematically interpretable as
>sh> meaning something. The rules for manipulating the symbols on
>sh> the basis of their shapes are called "syntactic" or "formal"
>sh> rules.

Here's an example adapted from one of Brian's. Take a set of rules which encode (a formal system for) arithmetic, together with a formal predicate 'lengthof', and the rules

lengthof('0') -> 1
lengthof(n<>x) -> lengthof(n) + lengthof(x)

Now, these rules make 'lengthof(n)' evaluate to (a numeral which means) the number of digits in the formal representation of n: i.e., the length of that numeral in digits. Notice this is the ACTUAL length of that piece of syntax. Now, is this 'formal'? It is according to your definition, and perhaps you are happy with that, but it has some marks which successfully refer to physical properties of part of the world.
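[Editorial aside: Hayes's two rewrite rules can be transcribed as a short sketch, treating numerals as digit strings. The generalization of the base rule from '0' to any single digit, and the choice to split off the last digit, are implementation assumptions of mine, not part of the original rules.]

```python
# A sketch of the 'lengthof' rules. '<>' in the original denotes
# concatenation of numerals; here a numeral is a digit string.

def lengthof(numeral: str) -> int:
    # Base rule: lengthof of a single digit is 1
    # (the original states this for '0'; generalized here).
    if len(numeral) == 1:
        return 1
    # Recursive rule: lengthof(n <> x) -> lengthof(n) + lengthof(x),
    # splitting the numeral into its prefix n and last digit x.
    n, x = numeral[:-1], numeral[-1]
    return lengthof(n) + lengthof(x)

# The value refers to an ACTUAL physical property of the syntax:
# the number of digit tokens in the numeral itself.
assert lengthof("0") == 1
assert lengthof("3340") == 4
```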


>sh> "intuition pump" is not a pejorative, if it pumps true.

It IS a pejorative if the pump is claimed to be a conclusive argument from obvious assumptions. My intuition tells me clearly that when I debug a piece of code by pretending to be an interpreter and running through it 'doing' what it 'tells' me to do, that the program is not being run, and certainly not run on, or by, me. So we are left with your intuition vs. my intuition, and they apparently disagree.


>sh> I will be happy to consider the implications of the fact that
>sh> Searle, doing everything the computer does, does not count as a
>sh> valid implementation of the same computer program -- as soon as
>sh> you specify and argue for what you mean by implementation and
>sh> why Searle's would not qualify. Until then, I don't see why
>sh> EVERY system that processes the same symbols, follows the same
>sh> (syntactic) rules and steps through the same states doesn't
>sh> qualify as a valid implementation of the same program.

The key is that Searle-in-the-room is not doing everything the computer 'does', and is not going through the same series of states. For example, suppose the program code at some point calls for the addition of two integers. Somewhere in a computer running this program, a piece of machinery is put into a state where a register is CAUSED to contain a numeral representing the sum of two others. This doesn't happen in my head when I work out, say, 3340 plus 2786, unless I am in some kind of strange arithmetical coma. If Searle-in-the-room really was going through the states of an implementation of a Chinese-speaking personality, then my intuition, pumped as hard as you like, says that that Chinese understanding is taking place. And I haven't yet heard an argument that shows me wrong.


>sh> I think most of the brain is preserving sensory signals in
>sh> various degrees of analog form (so we would probably do well to
>sh> learn from this).

While we should have awe for what Nature has wrought, we also must keep our wits about us. The reason elephants have big brains is that they have a lot of skin sending in signals which need processing, and the neurons come in at a certain density per square inch. This is evolution's solution to the bandwidth problem: duplication. Similarly, that the motor and sensory cortex use 'analogical' mappings of bodily location is probably more due to the fact that this fits very nicely with the way the information is piped into the processor, where location is encoded by neuroanatomy, than to any profound issue about symbolic vs. analog. It has some nice features, indeed, such as localisation of the effects of damage: but we are now in the language of computer engineering.


>sh> In fact, I think it's as likely that a mind is mostly symbolic,
>sh> with just a thin analog layer mediating input and output to the
>sh> world, as that a plane or a furnace are mostly symbolic, with a
>sh> thin analog layer mediating input and output.

You are exhibiting here what I might call Searleanism. Of course a furnace is not symbolic. But hold on: that's the point, right? Furnaces just operate in the physical world, but minds (and computers) do in fact react to symbols: they do what you tell them, or argue with you, or whatever: but they respond to syntax and meaning, unlike furnaces and aircraft. That's what needs explaining. If you are going to lump furnaces and minds together, you are somehow missing the point that drives this entire enterprise. (Aircraft are actually a borderline case, since they do react to the meanings of symbols input to them, exactly where they have computers as part of them.)


>sh> ... I'm suggesting that you can't implement them AT ALL with
>sh> computation alone -- not that you can't implement them
>sh> completely or unambiguously that way, but that you can't
>sh> implement them AT ALL.

I agree, because with what you mean by computation, I couldn't even run Wordstar with computation ALONE. I need a computer.


>sh> (Or, as an intuition pump, you can implement pain or
>sh> proprioception by computation alone to the same degree that you
>sh> can implement flying or heating by computation alone.)

I bet computational ideas will be centrally involved in a successful understanding of pain and proprioception, probably completely irrelevant to understanding lime chemistry, but important in reasonably exotic flying.

But now we are just beating our chests at one another. Like I said, it's only a pump, not an argument.

Pat Hayes

----------

Date: Wed, 22 Apr 92 15:12:37 EDT From: "Stevan Harnad"

ON SYNTHETIC MINDS AND GROUNDED "ABOUTNESS"

Pat Hayes (phayes@nmsu.edu) wrote:


>ph> Many of the Searlean writers have taken it as somehow axiomatic that
>ph> human thinking just has this vital property of being meaningful,
>ph> something that only human, or maybe organic, thinking has been observed
>ph> to possess. Whatever this is, it isn't anything like a color or a mass
>ph> that human thought has.

Not sure who "Searlean" writers are, but this writer certainly does not claim that only organic thinking is possible (e.g., I am working toward TTT-scale grounded robots). No one has given any reason to believe that synthetic minds can't be built. Only one candidate class of synthetic minds has been ruled out by Searle's argument, and that is purely computational ones: stand-alone computers that are merely running the right software, i.e., any and all implementations of symbol systems that allegedly think purely because they are implementations of the right symbol system: the symbol system on which the mind "supervenes" (with the implementational particulars being inessential, hence irrelevant).

But there are plenty of other candidates: Nonsymbolic systems (like transducers and other analog devices), hybrid nonsymbolic/symbolic systems (like grounded robots), and even implemented symbol systems in which it is claimed that specific particulars of the implementation ARE essential to their having a mind (Searle's argument can say nothing against those, because he couldn't BE the system unless its implementational details were irrelevant!).


>ph> I have a number of beliefs about ancient Rome. How are these thoughts
>ph> connected to the Rome of 2000 years ago? The answer is probably very
>ph> complicated... a historical chain... My thoughts about Julius Caesar
>ph> are not somehow intrinsically about him by virtue of their being in my
>ph> head; but they are in fact about him. But I can't see any reason why a
>ph> machine could not have almost the same (very complicated) relationship
>ph> to him that I have, whatever it is, since it is mediated almost
>ph> entirely by language.

I see no reason why a grounded (TTT-indistinguishable) robot's thoughts would not be just as grounded in the objects they are systematically interpretable as being about as my own thoughts are. I diverge from Searle and many other post-Brentano/Fregean philosophers in denying completely that there are two independent mind/body problems, one being the problem of consciousness (qualia) and the other being the problem of "aboutness" (intentionality). In a nutshell, there would be no problem of thoughts having or not having real "aboutness" if there were not something it was like to think (qualia). The reason the symbols in a computer are not "about" anything is because there's nobody home in there, consciously thinking thoughts!

[This is what is behind the force of Searle's simple reminder that he would surely be able to state, with complete truthfulness, that he had no idea what he was talking about when he "spoke" Chinese purely in virtue of memorizing and executing the very same syntactic symbol-manipulation operations that are performed by the TT-passing computer. We each know exactly what it is LIKE to understand English, what it is LIKE to mean what we mean when we speak English, what it is LIKE for our words to be about what they are about; no such thing would be true for Searle, in Chinese, under those conditions. Hence the fact that the Chinese input and output was nevertheless systematically (TT) interpretable AS IF it were about something would merely show that that "aboutness" was not "intrinsic," but derivative, in exactly the same sense that it would be derivative in the case of the symbols in an inert book, in which there is likewise nobody home.]

On the other hand, there is still the POSSIBILITY that grounded TTT-scale (performance indistinguishable) robots or even grounded TTTT-scale (neurally indistinguishable) robots fail to have anybody home in them either. Now that IS the (one, true) mind/body problem, but we should be ready to plead no contest on that one (because the TTT and the TTTT take us to the limits of empiricism in explaining the mind).


>ph> I... think not that a particular set of types is fixed in advance, but
>ph> that what does seem to be fixed in us, in our way of thinking, is a
>ph> propensity to individuate. The world is a continuum, but we see it and
>ph> think of it as made of things, maybe overlapping in complex ways, but
>ph> conceptually separate entities that we can name and classify.
>ph>
>ph> Pat Hayes

Hence we should investigate and model the structures and processes underlying our capacity to categorize inputs (beginning with sensory projections). Those structures and processes will turn out to be largely nonsymbolic, but perhaps symbols can be grounded in the capacity those nonsymbolic structures and processes give us to pick out the objects they are about.

Stevan Harnad

--------------

Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and the Evolution of Supervised Learning in Neural Nets. In: Working Papers of the AAAI Spring Symposium on Machine Learning of Natural Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented at Symposium on Symbol Grounding: Problems and Practice, Stanford University, March 1991; also reprinted as Document D91-09, Deutsches Forschungszentrum fur Kuenstliche Intelligenz GmbH Kaiserslautern FRG.

Andrews, J., Livingston, K., Harnad, S. & Fischer, U. (1992) Learned Categorical Perception in Human Subjects: Implications for Symbol Grounding. Proceedings of Annual Meeting of Cognitive Science Society (submitted)

Harnad, S. Hanson, S.J. & Lubin, J. (1992) Learned Categorical Perception in Neural Nets: Implications for Symbol Grounding. Proceedings of Annual Meeting of Cognitive Science Society (submitted)

------------------------------------------------


Date: Wed, 22 Apr 92 17:39:44 EDT From: "Stevan Harnad"

Pat Hayes wrote:


>ph> Here's an example adapted from one of Brian [Smith's]. Take a set
>ph> of rules which encode (a formal system for) arithmetic, together with
>ph> a formal predicate 'lengthof', and the rules
>ph>
>ph> lengthof('0') -> 1
>ph> lengthof(n<>x) -> lengthof(n) + lengthof(x)
>ph>
>ph> Now, these rules make 'lengthof(n)' evaluate to (a numeral which means)
>ph> the number of digits in the formal representation of n: ie, the length
>ph> of that numeral in digits. Notice this is the ACTUAL length of that
>ph> piece of syntax. Now, is this 'formal'? It is according to your
>ph> definition, and perhaps you are happy with that, but it has some marks
>ph> which successfully refer to physical properties of part of the world.

It is a very interesting and useful feature of symbol systems that some can be formulated so as to be INTERPRETABLE as referring to themselves (as in the sentence "this sentence has five words") or to physical properties (especially numerical ones) of other symbols and symbol strings within the same system. Symbol systems that go on to USE the nonarbitrary analog properties of their symbol tokens as data are special in certain respects (as in "numeric" versus "symbolic" computation) and may cast just a bit more light on the dynamics of dedicated hybrid symbolic/analog systems, and perhaps even on symbol grounding. I don't know.

But note that in your example above, even though the computation yields a symbol that is interpretable as the number of symbols in the string, this is in principle no different from a computation that yields a symbol that is interpretable as the number of planets in the solar system. It is just a systematic correspondence (and hence interpretable as such). But "interpretable as meaning X" (as in the case of a book, interpretable by a thinking mind) is not the same as "meaning X" (as in the case of thoughts, in a mind). Failing to distinguish the two seems to be another instance of conflating physical inner/outer and mental inner/outer, as discussed earlier.


>ph> My intuition tells me clearly that when I debug a piece of code by
>ph> pretending to be an interpreter and running through it "doing" what it
>ph> "tells" me to do, that the program is not being run, and certainly not
>ph> run on, or by, me. So we are left with your intuition vs. my intuition,
>ph> and they apparently disagree.

But isn't the real question whether there is any relevant difference between what you think is a "real" implementation by a machine and what you think is a "pseudo-implementation" by a person? Certainly the computer is not stepping through the states consciously and deliberately, as you are. But is there anything else that's different? If we speak only of the "motions gone through" and their I/O conditions in the two cases, they are exactly the same. In the case of the machine, the motions are mechanical; no choice is involved. In the case of the person, they're elective. But so what? Even apart from the vexed questions associated with free will and causality, what is there about taking IDENTICAL motions under identical I/O conditions and making their causal basis mindless and mechanical that could possibly effect a transition INTO the mental (rather than OUT of it, which is the much more obvious feature of the transition from the human implementation to the machine one)?


>ph> The key is that Searle-in-the-room is not doing everything the computer
>ph> "does," and is not going through the same series of states. For
>ph> example, suppose the program code at some point calls for the addition
>ph> of two integers. Somewhere in a computer running this program, a piece
>ph> of machinery is put into a state where a register is CAUSED to contain
>ph> a numeral representing the sum of two others. This doesn't happen in my
>ph> head when I work out, say, 3340 plus 2786, unless I am in some kind of
>ph> strange arithmetical coma. If Searle-in-the-room really was going
>ph> through the states of an implementation of a chinese-speaking
>ph> personality, then my intuition, pumped as hard as you like, says that
>ph> that Chinese understanding is taking place. And I haven't yet heard an
>ph> argument that shows me wrong.

It's always useful, in this sort of hermeneutic puzzle, to de-interpret and reduce things to gibberish as much as possible: Suppose the computer was doing all the requisite summation in binary, and you were too, and all it did, and all you did, was compare zeros and ones and erase and carry, just like a Turing Machine. Is it still so obvious that you're not doing everything the computer is doing? If anything, the computer is doing less than you rather than more (because it has no choice in the matter). Why should I interpret less as more?
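[Editorial aside: the "compare, erase, carry" routine Harnad describes can be written out explicitly. The sketch below is an illustration of rote binary addition, not anything from the exchange itself; the point is that the very same steps are followed whether a person or a machine executes them.]

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary numerals digit by digit, carrying by rote --
    the mindless procedure is identical for person and machine."""
    result, carry = [], 0
    # Pad to equal width and work from the rightmost digit leftward.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 2))  # write the result digit
        carry = total // 2             # carry to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

# 3340 + 2786 = 6126: the very sum Hayes works out in the text.
assert add_binary(bin(3340)[2:], bin(2786)[2:]) == bin(6126)[2:]
```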


>ph> that the motor and sensory cortex use 'analogical' mappings of bodily
>ph> location is probably more due to the fact that this fits very nicely
>ph> with the way the information is piped into the processor, where
>ph> location is encoded by neuroanatomy, than by any profound issue about
>ph> symbolic vs. analog. It has some nice features, indeed, such as
>ph> localisation of the effects of damage: but we are now in the language
>ph> of computer engineering.

I too am thinking of this only as (reverse bio-)engineering. But brains can do so much more than any machine we've yet engineered, and they seem to do so much of it in analog. It seems that this might be a useful cue to take, but maybe not. It's an empirical question.


>ph> Of course a furnace is not symbolic. But hold on: that's the point,
>ph> right? Furnaces just operate in the physical world, but minds (and
>ph> computers) do in fact react to symbols: they do what you tell them, or
>ph> argue with you, or whatever: but they respond to syntax and meaning,
>ph> unlike furnaces and aircraft. That's what needs explaining. If you are
>ph> going to lump furnaces and minds together, you are somehow missing the
>ph> point that drives this entire enterprise. (Aircraft are actually a
>ph> borderline case, since they do react to the meanings of symbols input
>ph> to them, exactly where they have computers as part of them.)

I don't think I'm missing the point. Computation has been able to generate some very fancy and flexible performance -- certainly fancier and more flexible than that of a furnace or plane (except "smart," computer-aided planes, as you indicate). Computation also seems to resemble thought in its syntactic structure. It was accordingly quite reasonable to hypothesize that thinking -- that unobservable process going on in our heads -- might actually be a form of computation. But here we are discussing reasons why, despite promising initial appearances, that hypothesis is turning out to be WRONG, and what is going on in our heads is something else, not computation (or not just computation).

By the way, minds and computers may both respond to syntax, but only minds respond to meaning. Computers are merely INTERPRETABLE as if they responded to meaning...


>ph> with what you mean by computation, I couldn't even run
>ph> Wordstar with computation ALONE. I need a computer.

Pat, you know I stipulated that the computation had to be physically implemented; I just stressed that the particulars of the implementation (apart from the fact that they stepped through the right states with the right I/O) were irrelevant.


>ph> I bet computational ideas will be centrally involved in a successful
>ph> understanding of pain and proprioception, probably completely
>ph> irrelevant to understanding lime chemistry, but important in reasonably
>ph> exotic flying.
>ph>
>ph> Pat Hayes

And I bet a lot of the essential features of pain and proprioception will be in the analog properties of the hardware that implements it, which will be more like exotic chemistry.

Stevan Harnad

---------------

Date: Mon, 20 Apr 92 03:23:41 -0400
From: yee@envy.cs.umass.edu
Subject: Don't talk about "computers"

I would like to follow up on some of Brian Smith's recent comments regarding universal computations and formal/non-formal symbol processing. I propose that we try to avoid using the terms "computer" and "program" because they are misleading with regard to questions of the computability of mind. For "computer" I would use "Turing machine" and I generally would not discuss programs because they are just descriptions of TM's.

The things we usually refer to as "computers" are physical instantiations of universal Turing machines (UTM's), a particular subclass of Turing machine (TM). Unfortunately, philosophical discussions about computers (UTM's) generally carry an implicit extension to all TM's. Presumably, this occurs because UTM's are "universal." But as Brian indicated, UTM universality refers to a very special type of *weak equivalence* (Pylyshyn, 1984) between TM's and UTM's. Universality merely means partial I/O equivalence. This is insufficient for many discussions about the computability of mind---e.g., the Chinese Room---because such discussions consider, not only I/O behavior, but also *how* the behavior is achieved, and UTM's are far from "typical" in their manner of computation. In particular, although UTM's process certain input symbols purely formally, not all TM's need behave this way.

To review briefly, any program P describes a Turing machine Tp that maps inputs x to outputs y (as shown below in A). Any UTM U (shown in B) is special in that its inputs z are composites of a program P and a nominal-input x', i.e., z=(P,x').

     +----+         x'-+      +---+
x -->| Tp |--> y       +- z ->| U |--> y
     +----+         P -+      +---+

    (A) a TM.              (B) a UTM.
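The (A)/(B) distinction can be made concrete in a short, runnable sketch. The Python below is my illustration, not Yee's: the names `Tp_direct`, `P`, and `utm` are invented, and the machine is a toy that rewrites every "a" as "b".

```python
# A toy illustration (mine, not Yee's) of the TM/UTM distinction.
# Tp_direct is a dedicated machine; utm is a universal machine that is
# handed a transition table P as part of its input z = (P, x').

def Tp_direct(x):
    """(A) The machine Tp realized directly: it computes x -> y itself,
    with no program arriving as input."""
    return ["b" if s == "a" else s for s in x]

# P: the same computation written out as a transition table (a "program").
# Entries map (state, scanned symbol) -> (next state, symbol to write, move).
P = {
    ("start", "a"): ("start", "b", 1),
    ("start", "b"): ("start", "b", 1),
    ("start", "_"): ("halt", "_", 0),
}

def utm(program, tape):
    """(B) A universal machine U. Its own activity is uniform rule lookup,
    whatever program it is given -- purely formal -- yet running it on
    z = (P, x') instantiates Tp's computation x -> y."""
    state, pos = "start", 0
    while state != "halt":
        sym = tape[pos] if pos < len(tape) else "_"
        state, write, move = program[(state, sym)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += move
    return [s for s in tape if s != "_"]

# Weak equivalence: identical I/O, achieved by different processes.
print(utm(P, list("aab")) == Tp_direct(list("aab")))  # -> True
```

Note that inspecting utm's loop tells you nothing about whether the processing of x is formal or not; that information lives entirely in P, i.e., in Tp.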

Formal symbol processing of nominal-inputs by UTM's is a special consequence of their being given input programs. A UTM U can always produce output y by processing nominal-input x'=x purely formally because P completely controls the processing of x', independently of U. That is, U's computation on z simply instantiates Tp's computation x --> y.

Clearly, U's formal treatment of x' does not imply that Tp's processing of x is necessarily formal. Such a conclusion would require a special proof. For all we know, Tp might associate x with internally stored information and produce output y accordingly. One might try to show that all TM's are restricted to formal symbol processing, but this would not follow automatically from the fact that UTM's can get away with formally processing (a portion of) their inputs. (Actually, in a paper cited below I argue that, in general, TM's can process symbols non-formally.)
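Yee's suggestion that a (non-universal) Tp might consult internally stored information can also be sketched. The machine and its memory table below are hypothetical, of my own devising, not from the text:

```python
# A hedged sketch (my invention, not Yee's): a dedicated, non-universal
# machine whose output depends on associations stored inside the machine
# itself, not on any program supplied as input.

class AssociativeMachine:
    """Maps each input token through a fixed internal association table."""
    def __init__(self):
        # Internal store, part of the machine itself (illustrative entries).
        self.memory = {"shui": "water", "huo": "fire"}

    def run(self, x):
        # The processing of x is mediated by stored content, not by
        # uniform, input-independent rule lookup.
        return [self.memory.get(sym, "?") for sym in x]

m = AssociativeMachine()
print(m.run(["shui", "huo", "shan"]))  # -> ['water', 'fire', '?']
```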

   +-----{ CR }-------+
   |                  |
x'-------+      +---+ |
   |     +- z ->| U |----> y
   |  P -+      +---+ |
   +------------------+

        (C) a UTM viewed as the CR.

The implications of the TM/UTM distinction for the Chinese Room (CR) argument are straightforward. The person in the CR is a UTM U that is given a program P (the rules). (Note that "memorizing" the rules does not change U into Tp. Any set of rules could be memorized, and the memorized rules remain an input to U.) To answer the question of *how* the Chinese symbols x' are being processed inside the room, one must consider what *Tp* is doing to the symbols. Considering only U's activity is useless because U is computing z=(P,x')--> y. Thus, without specific knowledge of the rules P, one simply cannot answer the question of whether the Chinese input symbols are being understood in the CR or are only being formally manipulated. Both possibilities remain open (unless, of course, one advocates the Turing Test for understanding, but that is an independent argument).

In general, if one wants to know how a program P really operates, then one should only consider the corresponding Turing machine Tp. If you build a UTM U, give it P, and then look at what U is doing, you will be looking in the wrong place. Eliminate the middleman, and build Tp directly.

Finally, one concludes that an assertion such as

Every computer has property X. (1)

is generally ambiguous and should be replaced by either

Every TM has property X.    (2a)
          or
Every UTM has property X.   (2b)

Clearly, (2a) implies (2b), but not conversely. At best, the Chinese Room argument shows that UTM computations are not good candidates for minds. However, there remain plenty of non-universal TM computations, and---absent any proof to the contrary---some of them might be minds. To find out, one should forget about computers and think about instantiated programs. If one's real interest is the entire class of TM's, then it is dangerous to form intuitions and conclusions revolving around the special properties of UTM's.

Many debates about computers and minds pit critics of purely formal symbol processing (which UTM's perform) against proponents of computation (which all TM's perform). A failure to clearly maintain the TM/UTM distinction means that, not surprisingly, discussants often appear to talk past each other. Nevertheless, it remains entirely consistent to believe (the *correct* :-) portions of both the "semantophiles'" and the "computationalists'" arguments. That is, intentionality, symbol-grounding, meaning, etc. (of the type desired by Searle, Harnad, Penrose and others) is necessary for (human-like) minds, and such semantics is Turing-computable.

Richard Yee

--------------------------------

@Book{Pylyshyn:84, author = "Pylyshyn, Z. W.", title = "Computation and Cognition: Toward a Foundation for Cognitive Science", publisher = "Bradford Books/MIT Press", year = "1984", address = "Cambridge, MA"}

@Unpublished{Yee:rcssp, author = "Yee, Richard", title = "Real Computers and Semantic Symbol Processing", note = "Dept.\ of Computer Science, Univ. of Massachusetts, Amherst, MA 01003. E-mail: yee@cs.umass.edu", year = "1991", month = "March"}

------------------------------

Date: Wed, 22 Apr 92 19:14:07 EDT
From: "Stevan Harnad"

SO WHAT IS COMPUTATION?

In his comment entitled "Don't talk about computers," Richard Yee (yee@envy.cs.umass.edu) wrote:


>ry> as Brian [Smith] indicated, UTM universality refers to a very special
>ry> type of *weak equivalence* (Pylyshyn, 1984) between TM's and UTM's.
>ry> Universality merely means partial I/O equivalence. This is insufficient
>ry> for many discussions about the computability of mind---e.g., the
>ry> Chinese Room---because such discussions consider, not only I/O
>ry> behavior, but also *how* the behavior is achieved, and UTM's are far
>ry> from "typical" in their manner of computation. In particular, although
>ry> UTM's process certain input symbols purely formally, not all TM's need
>ry> behave this way.

Much of Yee's comment is based on a distinction between formal and nonformal "computation," whereas my arguments are based completely on computation as formal symbol manipulation. We will need many examples of what nonformal computation is, plus a clear delineation of what is NOT nonformal computation, if this is to help us with either the question of what is and is not a computer (or computation) or the question of whether or not mental processes are computational and whether or not computers can have minds. (It would also seem hard to pose these questions without talking about computers, as Yee enjoins us!)


>ry> The implications of the TM/UTM distinction for the Chinese Room (CR)
>ry> argument are straightforward. The person in the CR is a UTM U that is
>ry> given a program P (the rules). (Note that "memorizing" the rules does
>ry> not change U into Tp. Any set of rules could be memorized, and the
>ry> memorized rules remain an input to U.) To answer the question of *how*
>ry> the Chinese symbols x' are being processed inside the room, one must
>ry> consider what *Tp* is doing to the symbols. Considering only U's
>ry> activity is useless because U is computing z=(P,x')--> y. Thus, without
>ry> specific knowledge of the rules P, one simply cannot answer the
>ry> question of whether the Chinese input symbols are being understood in
>ry> the CR or are only being formally manipulated. Both possibilities
>ry> remain open (unless, of course, one advocates the Turing Test for
>ry> understanding, but that is an independent argument).

The Turing Test has been intimately involved in Searle's Argument from the beginning. The Argument is directed against a position Searle dubbed "Strong AI," according to which a computer program that could pass the Turing Test (in Chinese) would understand (Chinese) no matter how it was implemented. Searle simply points out to us that if he himself implemented the program (by memorizing the symbols and symbol manipulation rules) he would not understand Chinese, hence neither would any computer that implemented the same program. So much for the Turing Test and the computationality of understanding.

The only thing that is critical for Searle's argument is that he be able to DO with the input and output exactly the same (RELEVANT) things the computer does. The implementational details are irrelevant; only the program is relevant. And the TT is simply an I/O criterion.

Now I have no idea what YOU are imagining the computer to be doing; in particular, what would it be doing if it were doing "nonformal computation"? If it would be doing something that was not implementation-independent, then you've simply changed the subject (and then even a transducer would be immune to Searle's argument). If it IS doing something implementation-independent, but not "formal," then again, what is it, and can Searle do it or not?


>ry> At best, the Chinese Room argument shows that UTM computations are not
>ry> good candidates for minds. However, there remain plenty of
>ry> non-universal TM computations, and---absent any proof to the
>ry> contrary---some of them might be minds. To find out, one should forget
>ry> about computers and think about instantiated programs. If one's real
>ry> interest is the entire class of TM's, then it is dangerous to form
>ry> intuitions and conclusions revolving around the special properties of
>ry> UTM's.

This won't do at all, because for all I know, I can think of an airplane or a planetary system as an "instantiated program" on a "non-universal TM," and that would make the question of what computers/computation can/cannot do pretty empty. Please give examples of what are and are not "non-universal TM computations" and a principled explanation of why they are or are not.


>ry> Many debates about computers and minds pit critics of purely formal
>ry> symbol processing (which UTM's perform) against proponents of
>ry> computation (which all TM's perform)... intentionality,
>ry> symbol-grounding, meaning, etc. (of the type desired by Searle, Harnad,
>ry> Penrose and others) is necessary for (human-like) minds, and such
>ry> semantics is Turing-computable.
>ry>
>ry> Richard Yee

One cannot make coherent sense of this until the question "What is computation?", as posed in the header to this discussion, is answered. Please reply in ordinary language before turning again to technical formalisms, because this first pass at formalism has merely bypassed the substantive questions that have been raised.

Stevan Harnad

-----------------------

Date: Thu, 23 Apr 92 17:12:30 EDT
From: "Stevan Harnad"

SEARLE'S PERISCOPE


>ph> From: Pat Hayes
>ph> Date: Wed, 22 Apr 92 15:30:28 MDT
>ph> To: tim@arti1.vub.ac.be (Tim Smithers)
>ph> Subject: Re: Smithers on Dyer on the physical symbol hypothesis (PSH)
>ph>
>ph> Dear Tim Smithers,
>ph>
>ph> First, the PSH is as much a hypothesis as, say, the hypothesis of
>ph> continental drift. Nobody could observe continental drift or
>ph> conduct a 'broad experimental investigation' of its validity.
>ph> It is a general idea which makes sense of a large number of
>ph> observations and provides a framework within which many empirical
>ph> results can be fitted. Most of the hypotheses of science are like
>ph> this: they aren't tested by little well-designed experiments, and
>ph> indeed couldn't be. There are whole areas of investigation,
>ph> such as cosmology, which couldn't be done in this simplistic
>ph> textbook way of (idea->design experiment->test->next idea),
>ph> and whole methodologies, such as ecological psychology, which
>ph> explicitly reject it. People who have been trained to perform
>ph> little experiments to test (often rather silly) little ideas
>ph> [cannot] lay [exclusive] claim to the use of words like 'hypothesis'.
>ph>
>ph> And in any case, the whole practice of AI can be regarded as the
>ph> empirical testing of the hypothesis. Of course those who are working
>ph> under its aegis do not constantly question it, but take it as an
>ph> assumption and see how much science can be developed under it.
>ph> That is the way that science makes progress, in fact, as Kuhn has
>ph> argued convincingly. The world has plenty of serious people who reject
>ph> the PSH and are using other frameworks to develop and test theories of
>ph> mentality, and a large number of vocal and argumentative critics, so
>ph> there is no risk of its not being tested.
>ph>
>ph> Turning now to your second paragraph. You accuse Dyer of making
>ph> arguments which are 'in principle possible but in practice right
>ph> out of the window'. This, in a discussion which flows from a
>ph> hypothesis in which a human being memorises the code of a
>ph> program which can pass the Turing Test in Chinese, while preserving
>ph> his equanimity to the extent that he can simultaneously discuss
>ph> the code! If we are to reject unrealistic examples, then we can
>ph> all surely agree that the whole matter is a complete waste of
>ph> time, and just forget about it, starting now.
>ph>
>ph> Pat Hayes

PSH is certainly an empirical hypothesis if it is construed as a hypothesis about how "cognitive" engineers might successfully generate mind-like performance computationally (and people may differ in their judgments about how successful computation has been in doing that so far). But PSH is more like an untestable conjecture if it is construed as the claim that the successful generators of that mind-like performance (if there are any) will have real minds (i.e., somebody will be at home in there), because normally the only way to know whether or not a system has a mind is to BE the system. Hence, for the very same reason that one can suppose that a stone (or any other body other than one's own) does or does not have a mind, as one pleases, without any hope of ever being any the wiser, the PSH is shielded from refutation by the impenetrability of the other-minds barrier.

Now Searle has figured out a clever way (I've dubbed it "Searle's Periscope") in which he could peek through the other-minds barrier and BE the other system, thus testing what would normally be an untestable conjecture. Searle's Periscope works ONLY for the special case of PSH (implementation-independent symbol manipulation): He has simply pointed out that if we (1) SUPPOSE (arguendo) that a physical symbol system alone could pass the Turing Test in Chinese, and from this we wish to (2) INFER that that physical symbol system would therefore be understanding Chinese (purely in virtue of implementing the TT-passing symbol system), THEN it is intuitively obvious that if (3) Searle himself implemented that same symbol system by memorizing all the symbols and rules and then performing the same symbol manipulations on the same inputs, then (4) he would NOT be understanding Chinese; therefore the inference to (2) (and hence the PSH) is false.

What makes this example unrealistic is much more the supposition (1) that a symbol system could pass the TT (there's certainly no such system in empirical sight yet!) than (3) that (if so, then) Searle could himself memorize and perform the same symbol manipulations. So maybe life is too short and memory too weak for a person to memorize and perform all those symbols and rules: So memorize and perform a few of them, and then a few more, and see if that kind of thing gives you a LITTLE understanding of Chinese! What is intuitively obvious is that there's nothing in the scenario of doing THAT kind of mindless thing till doomsday that would even faintly justify believing that that's the road to understanding.

No, the real sci-fi in this example comes from (1), not (3); and dwelling instead on the unrealistic features of (3) is motivated only by the yearning to re-establish the barrier that normally makes it impossible to block the conjecture that a system other than oneself has (or does not have, as the case may be) a mind. Mike Dyer tries to resurrect the barrier by supposing that Searle would simply develop multiple-personality syndrome if he memorized the symbols and rules (but why on earth would we want to believe THAT?); you, Pat, try to resurrect the barrier by denying that Searle would really be a valid implementation of the same symbol system despite passing the same TT, using the same symbols and rules! And in response to "why not?" you reply only that his free will to choose whether or not to follow the rules is what disqualifies him. (Actually, I think it's his capacity to talk back when we project the PSH conjecture onto him that's the real problem; because that's something the poor, opaque first-order physical symbol system, slavishly following the very same rules and passing the same TT, is not free to do, any more than a stone is.)

Still others try to resurrect the other-minds barrier by invoking the fact that it is unrealistic to suppose that Searle could have the speed or the memory capacity to implement the whole symbol system (as if somewhere in the counterfactual realm of greater memory and greater speed there would occur a phase transition into the mental!).

To my mind, all these strained attempts to reject (4) at all costs are simply symptomatic of theory-saving at a mounting counterfactual price. I, like Tim Smithers, simply prefer taking the cheaper (and, I think, more realistic and down-to-earth) road of grounded robotics, abandoning pure computation, PSH, and the expanding ring of epicycles needed to keep them impenetrable to Searle's Periscope.

Stevan Harnad

------------

Date: Thu, 23 Apr 92 00:34:42 PDT
From: Dr Michael G Dyer
Subject: physicality

Here are some responses and comments to whoever is willing to read them:


>sh> But if the drowning is "virtual" (i.e., a computer-simulated
>sh> person is "drowned" in computer-simulated water)
>sh> there is no drowning at all going on, no matter how formally
>sh> equivalent the symbols may be to real drowning.

I agree that there's no physical drowning, but what if we build an artificial neural network circuitry (with ion flow and/or action potential timings identical to those of some person's brain, etc.) and then give it the same inputs that a drowning person would receive? Who is to say that this artificial neural network won't have the subjective experience of drowning?


>sh> ... as an intuition pump you can
>sh> implement pain or proprioception by computation alone to the
>sh> same degree that you can implement flying or heating by
>sh> computation alone.)...
>sh> I just don't think computation alone can either fly or think.

Here is where my intuition pumps diverge quite sharply. Is the physical act of something flying thru the air a computation? I think not (unless we imagine the entire universe as being a simulation on God's computer -- then it is, but we'll never know :-). But does the EXPERIENCE of flying fall into a certain class of computations? No one really knows, but my bet is "yes". In that case, the actual physical act of flying is irrelevant. For a mind, what is important is the experience of flying.

I think that certain classes of computations actually have subjective inner experiences. At this point in time science simply has no way of even beginning to formulate a "theory" of what the subjective-point-of-view might be like for different types of computations, whether in VLSI, on tapes, optical or biochemical. Given that we can't tell, the safest strategy is to make judgements about the inner life of other entities based on their behavior.

ts> First, the actual practice of (symbol processing) AI research
ts> makes it very difficult to talk about the Physical Symbol System
ts> Hypothesis (PSSH) of Newell and Simon as being "a working
ts> hypothesis". It is much more a widely accepted and unquestioned
ts> dogma than it is a hypothesis. For it to be a hypothesis, in the
ts> normal scientific sense (symbol processing) AI research would
ts> need to be conducting a broad experimental investigation of its
ts> validity (or otherwise). Very little, if any, research is either
ts> presented as, or can be properly understood to be, a contribution
ts> to such a research programme.

It is common for paradigm-level hypotheses to go unquestioned by those who are working within that paradigm (i.e. they accept the hypothesis and so don't spend time questioning or re-questioning it.)

In this context I think that Harnad and Searle play a very useful role in forcing some of us (more philosophically oriented) AI researchers to reexamine this hypothesis.

ts> ...does Mike Dyer think that they are going to be convinced by such
ts> in principle possibly true but in practice right out of the window
ts> arguments as he offers?

Mmmm.... so it's ok to have Searle do all symbol manipulations (that might require a level of granularity where each symbol represents a synapse or something lower) all in his head(!), but it's NOT ok for me to examine how one network (i.e. Searle's neurons) might be intertwined with another network (i.e. artificial VLSI circuitry isomorphic to the symbol relations and manipulations that make up a Chinese persona)??? My students and I happen to design connectionist-style networks of various sorts to process language, make inferences, etc., and we think the issue of how one network gates and/or is composed with another is rather relevant to understanding, ultimately, how minds might reside in brains.

Tough questions, however, are: "What's it feel like to BE a particular sort of network?" "What's it feel like to BE a particular sort of (software) system?" Harnad and Searle seem to assume that, no matter how complex, any kind of software system has no feelings. How do they know? Harnad claims we can simply ASK Searle to find out what it's like to understand English but he won't allow us to simply ASK the Chinese persona to find out what it's like to understand Chinese.

ts> I think all you guys should spend some time trying to build real
ts> robots (not simulations!) that reliably do real things (even very
ts> simple things) in the real world.

Yes, that's laudable, but we can't all be roboticists. However, I just saw one of those history of computer science shows and they had a nice demonstration of "virtual reality". VR is getting pretty good. You tilt the helmet and quite realistic images get updated, with proper perspective, etc. What if the robot received visual input from a VR world rather than the real world? (Oops! There go those vision input transducers Stevan Harnad needs so badly! :-)


>sh> By the way, minds and computers may both respond to syntax,
>sh> but only minds respond to meaning. Computers are merely
>sh> INTERPRETABLE as if they responded to meaning...

This is quite a claim! What evidence is there for such a claim? In contrast, each neuron appears to respond to its inputs (including its local chemical environment) without requiring any sort of thing called "meaning". The term "meaning", as far as I can tell, is simply used to refer to incredibly complex syntactic types of operations. If a robot (or person) is organized to behave in certain, very complex ways, then we tend to take (as Dennett says) an "intentional stance" toward it, but that doesn't mean there is anything other than syntax going on. (Biologists have also abandoned "life force" notions for the incredibly complex but syntactic operations of biochemistry.) The notion of "meaning" is useful for human-human folk interactions but the hypothesis of AI (and cognitive science in general) is that "meaningful" behavior is the result of a great many "mindless" (i.e. syntactic) operations (whether they are directly in circuitry or data structures in the memory of more general interpretation circuitry).

A simple example of meaning-from-syntax is the use of state space heuristic search (totally syntactic) to give an overall behavior that one might call "purposive" (e.g. a chess playing program "wanting" to checkmate its opponent).
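This kind of example can be miniaturized. The Python sketch below is my own illustration, not Dyer's (the state space, `best_first`, and the heuristic are invented): everything in it is comparisons and lookups on tokens, yet its behavior invites the description "it is trying to reach the goal."

```python
# A hedged miniature (my example, not Dyer's) of "purposive"-looking behavior
# arising from purely syntactic operations: best-first heuristic search.

def best_first(start, goal, neighbors, h):
    """Repeatedly expand the frontier node with the lowest heuristic score.
    Nothing here "wants" anything; it is all comparisons and lookups."""
    frontier = [(h(start, goal), start, [start])]
    seen = set()
    while frontier:
        frontier.sort()                    # lowest heuristic score first
        _, node, path = frontier.pop(0)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(node):
            frontier.append((h(nxt, goal), nxt, path + [nxt]))
    return None

# A toy state space: states are integers, the legal "moves" are +1 and *2.
moves = lambda n: [n + 1, 2 * n]
dist = lambda n, g: abs(g - n)             # heuristic: distance to goal

print(best_first(2, 11, moves, dist))      # -> [2, 4, 8, 9, 10, 11]
```

Watching this run, one naturally says the program is "trying to reach 11"; but the purposive description is ours, while the operations are syntax all the way down.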

The only "evidence" for meaning is probably from introspection. Of course, I can (and do) describe myself as having "meanings" because I can use that word to describe a certain class of complex behaviors and I happen to also exhibit those complex behaviors. But because I describe myself in this way does not require that I actually have some magical "meanings" that are something other than syntactic operations. Likewise, any really complex robot, capable of forming models of itself and of others, will take an intentional stance, both toward those other complex agents, AND toward itself -- i.e. attributing "meanings" to itself! So what? It's all ultimately just syntax. The trick is to figure out what class of marvelously complex syntactic operations brings about behaviors that deserve the folk psychological term of "meaning". (This reductionist approach in cognitive science is similar to that in the physical/natural sciences.)


>sh> Searle simply points out to us that if he
>sh> himself implemented the program (by memorizing the symbols
>sh> and symbol manipulation rules) he would not understand Chinese,
>sh> hence neither would any computer that implemented the same
>sh> program.

Ask the LISP interpreter (that's executing code that creates some natural language understanding system S) if it "understands" anything and, of course, it doesn't. Ask S, however, and you will get an answer. We don't expect the LISP interpreter to "understand" what it's doing, so why should we EXPECT Searle to understand Chinese??? However, if we ask the Chinese persona what it's like to understand Chinese we will get an answer back (in Chinese).

For all my disagreements with Harnad, I think that he is concerned with an extremely interesting question, namely, what is the role that physicality plays in cognition? As we know, two "functionally identical" computations on two machines with different architectures are only similar in their computations at some level of abstraction. Below that level of abstraction, what the machines are doing physically may be very different. AI researchers believe that "consciousness" can (ultimately) reside on many different physical substrata as long as the computations are similar at some (as yet unspecified) level of abstraction and this level of abstraction can be modeled by symbol manipulation. The support for this view is that there seems to be no limit to the granularity of symbols and symbol manipulation (i.e. they can be made to correspond to the foldings of individual proteins if these are deemed essential in constructing the operations of a mind). Also, since we can only judge intentionality via behavior, pragmatically we never have to consider any level of abstraction below that level of computation that gives us behavior that appears intentional.

One final comment. There are two different uses of the term "grounding":

1. that representations should be rich enough to encode any perceptual information.
2. that physical transducers are required for intentionality.

I accept 1. but not 2. If a simulated robot could pass the TTT test within a virtual reality world, it would be grounded in that world but there would be no physical transducers. (I have never figured out why Harnad rejects out of hand the possibility of a "brain in a vat" whose I/O channels are wired up to a computer so that the brain thinks it's seeing, standing, etc. Perhaps he rejects it because, if he doesn't, then his whole requirement for physical transducers falls apart.)

Since Harnad plugs his own writings in this area, I will also:

Dyer, M. G. Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2, No. 4, 1990.

Dyer, M. G. Finding Lost Minds (Author's reply to S. Harnad's "Lost in the Hermeneutic Hall of Mirrors"). Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2, No. 4, 1990.

--------------------

Date: Thu, 23 Apr 92 20:21:46 EDT
From: "Stevan Harnad"

ON MEANING: VIRTUAL AND REAL

I will preface my reply to Mike Dyer with a few points that should be kept in mind in my response:

(1) By "meaning," I mean subjective meaning, e.g., what it is like for a real English speaker (and not for a non-English-speaker or a stone) to hear and understand what spoken English is about. When I say that there is no real meaning in a symbol system, just symbols that are systematically interpretable as if they meant something, I mean that subjective meaning is absent: Either nobody is home at all (as in a stone) or certain particular symbols happen to have no subjective meaning for the system (as in Searle's Chinese Room).

(2) "Grounding" is a robot's capacity to interact with the real world of objects, events and states of affairs that its symbols are interpretable as being about. The semantic interpretations of the symbols and the robotic interactions with the objects must cohere TTT-indistinguishably with one another. This means that the symbol use is not constrained by syntax alone.

Grounding is not provably a necessary or a sufficient condition for subjective meaning (though it may in reality be necessary).

(3) The trouble with "brains in vats" is that they equivocate between (a) real de-afferented brains (with sensory surfaces removed, but all the rest of the neural hardware -- most of it analog -- intact) and (b) physical symbol systems (computers without any peripheral I/O devices). These two are radically different, and projecting assumptions about one onto the other leads nowhere. Brain-in-vat arguments usually further equivocate on the two senses of internal/external discussed earlier: inside/outside the "body" and inside/outside the "mind." Apart from the insistence on not conflating any of these things, I have no objections to brain-in-vat talk.

(4) As in (3) above, I insist on maintaining the distinction between real physical objects (like planes, furnaces, neurons and transducers) and their "virtual" counterparts (computer-simulated planes, etc.), be they ever so computationally equivalent to one another. It is the perpetual blurring of this boundary in particular that leaves me no choice but to keep repeating boringly to Mike Dyer that he seems to be hopelessly lost in a hermeneutic hall of mirrors he has created by overinterpreting systematically interpretable computations and then reading off the systematic interpretations themselves by way of evidence that the virtual world is as real as the real one.

Michael G Dyer wrote:

md> what if we build an artificial neural network circuitry (with ion flow
md> and/or action potential timings identical to those of some person's
md> brain, etc.) and then give it the same inputs that a drowning person
md> would receive? Who is to say that this artificial neural network won't
md> have the subjective experience of drowning?

Is this a purely computational simulation of a neural network (i.e., a bunch of squiggles and squoggles that are interpretable as if they were ions, action potentials, etc.)? Or is it a synthetic neural network with the same causal powers as the organic neural network (i.e., the capacity to transduce all the same real physical input the neural network gets)? If it's the former, then it's really all just squiggles and squoggles, no matter how you can systematically interpret it. If it's the latter, then it's a real artificial neuronal circuit and can in principle have real subjective experiences (but then it's irrelevant to this discussion of pure computation and whether or not pure computation can have subjective experiences).

md> Is the physical act of something flying thru the air a computation? I
md> think not... But does the EXPERIENCE of flying fall into a certain
md> class of computations? No one really knows, but my bet is "yes". I
md> think that certain classes of computations actually have subjective
md> inner experiences.

The trouble with experiences is that all but your own are out of sight, so you are free to interpret any external object or process as if it had experiences. Trouble is, there's a right and wrong of the matter (even though the other-minds barrier normally prevents us from knowing what it is). There were reasons, for a while, for entertaining the hypothesis that experiences might be implemented computations. A lot of this discussion is about reasons why that hypothesis has to be reconsidered and discarded.

md> [why is it] ok to have Searle do all symbol manipulations (that might
md> require a level of granularity where each symbol represents a synapse
md> or something lower) all in his head(!), but... NOT ok for me to
md> examine how one network (i.e. Searle's neurons) might be intertwined
md> with another network (i.e. artificial VLSI circuitry isomorphic to the
md> symbol relations and manipulations that make up a Chinese persona)?

Until further notice, real neurons have nothing to do with this. What Searle and the TT-passing computer he is duplicating are doing is implementation-independent. We don't know what the real brain does; let us not presuppose anything. Nor do I know what you are imagining intertwining: virtual neurons and what? It's all just squiggles and squoggles!

[Here's a prophylactic against hermeneutics: "When in certainty, de-interpret all symbols and see what's left over."]

md> Harnad and Searle seem to assume that, no matter how complex, any kind
md> of software system has no feelings. How do they know? Harnad claims we
md> can simply ASK Searle to find out what it's like to understand English
md> but he won't allow us to simply ASK the Chinese persona to find out
md> what it's like to understand Chinese.

Because of the other-minds problem we cannot KNOW that anyone else but ourselves has feelings (or no feelings, as the case may be). I am prepared to believe other people do, that animals do, and that various synthetic systems might too, particularly TTT-scale robots. I'm even prepared to believe a computer might (particularly since I can't KNOW that even a stone does not). There is only one thing I am not prepared to believe, and that is that a computer has feelings PURELY IN VIRTUE OF RUNNING THE RIGHT PROGRAM (i.e., the physical symbol system hypothesis). But, unfortunately, that's precisely what's at issue here.

You fault me for believing Searle (and his quite reasonable explanation of what is going on -- meaningless symbol manipulation) rather than the Chinese squiggles and squoggles. But you are prepared to believe that Searle has gotten multiple personality merely as a consequence of having memorized and performed a bunch of symbol manipulations, just because of what the symbols are interpretable as meaning.

Finally (although I don't want to push the premise that such a TT-passing computer program is even possible too hard, because we've accepted it for the sake of argument), you don't seem too troubled by the fact that the Chinese "persona" couldn't even tell you what Searle was wearing at the moment. Any self-respecting multiple personality could manage that. Doesn't this suggest that there might be a bit more to real-world grounding and the TTT than is apparent from the "just ask the simulation" perspective?

md> What if the robot received visual input from a VR world rather than the
md> real world? ... There go those visual input transducers Stevan Harnad
md> needs so badly!)

Real robots have real sensory surfaces. I have no objection to those real sensory surfaces being physically stimulated by stimulation generated by a simulated world, itself generated by a computer. (My robot would then be like a kid sitting in a driving simulator.) That would show nothing one way or the other. But please don't talk about de-afferenting my robot and reducing him to a "brain-in-vat" and then piping the computer-generated input straight to THAT, because, as I said before, neither of us knows what THAT would be. To assume otherwise (e.g., that it would be a computer) is simply to beg the question!

md> each neuron appears to respond to its inputs (including its local
md> chemical environment) without requiring any sort of thing called
md> "meaning". The term "meaning", as far as I can tell, is simply used to
md> refer to incredibly complex syntactic types of operations. [If] a robot
md> (or person) is organized to behave in certain, very complex ways, then
md> we tend to take (as Dennett says) an "intentional stance" toward it,
md> but that doesn't mean there is anything other than syntax going on.

Being interpretable (by an outsider) as having subjective meaning, no matter how practical or useful, is still not the same as (and certainly no guarantor of) actually having subjective meaning. Subjective meaning does NOT simply refer to "incredibly complex syntactic types of operations"; and, as usual, neurons have nothing to do with this (nor are their activities "syntactic"). And where subjective meaning is going on there is definitely more than (interpretable) syntax going on.

md> The only "evidence" for meaning is probably from introspection. Of
md> course, I can (and do) describe myself as having "meanings" because I
md> can use that word to describe a certain class of complex behaviors and
md> I happen to also exhibit those complex behaviors. But because I
md> describe myself in this way does not require that I actually have some
md> magical "meanings" that are something other than syntactic operations.

You really understand English and fail to understand Chinese not because you "describe [yourself] as having `meanings'" but because there's real subjective understanding of English going on in your head, along with real subjective experience of red, pain, etc. Besides really experiencing all that, you're also describable as experiencing it; but some systems are describable as experiencing it WITHOUT really experiencing it, and that's the point here! Explanatory convenience and "stances" -- by outsiders or by yourself -- have nothing whatsoever to do with it. There's nothing "magic" about it either; just something real!

md> Ask the LISP interpreter (that's executing code that creates some
md> natural language understanding system S) if it "understands" anything
md> and, of course, it doesn't. Ask S, however, and you will get an answer.
md> We don't expect the LISP interpreter to "understand" what it's doing,
md> so why should we EXPECT Searle to understand Chinese??? However, if we
md> ask the Chinese persona what it's like to understand Chinese we will
md> get an answer back (in Chinese).

Taking S's testimony about what it's like to understand Chinese as evidence against the claim that there is no real subjective understanding going on in there is like taking the fact that it "burns" (simulated) marshmallows as evidence against the claim that a (simulated) fire is not really hot. This is precisely the type of hermeneutic credulity that is on trial here. One can't expect to gain much credence from simply citing the credulity in its own defense (except from someone else who is caught up in the same hermeneutic circle).

md> If a simulated robot could pass the TTT test within a virtual reality
md> world, it would be grounded in that world but there would be no
md> physical transducers. I have never figured out why Harnad rejects out
md> of hand the possibility of a "brain in a vat" whose I/O channels are
md> wired up to a computer so that the brain thinks it's seeing, standing,
md> etc.

md> Michael G Dyer

Virtually grounded, not really grounded, because of course that's only a virtual TTT, not a real one. But the whole point of the TT/TTT distinction was to distinguish the merely virtual from the real!

Stevan Harnad

----------------

From: Pat Hayes
Date: Wed, 22 Apr 92 14:25:29 MDT


>sh> the structures and processes
>sh> underlying our capacity to categorize inputs (beginning with sensory
>sh> projections).... will turn out to be
>sh> largely nonsymbolic, but perhaps symbols can be grounded in the
>sh> capacity those nonsymbolic structures and processes give us to pick out
>sh> the objects they are about.

If we include (as we should) linguistic input, it seems clear that the structures and processes will be largely symbolic. I think that vision and other perceptual modes involve symbols from an early stage, but I agree that's just one intuition against another.

I think there is something important (though vague) here:


>ph> Here's an example adapted from one of Brian [Smith's]. Take a set
>ph> of rules which encode (a formal system for) arithmetic, together with
>ph> a formal predicate 'lengthof', and the rules
>ph>
>ph> lengthof('0') -> 1
>ph> lengthof(n<>x) -> lengthof(n) + lengthof(x)
>ph>
>ph> Now, these rules make 'lengthof(n)' evaluate to (a numeral which means)
>ph> the number of digits in the formal representation of n: ie, the length
>ph> of that numeral in digits. Notice this is the ACTUAL length of that
>ph> piece of syntax. Now, is this 'formal'? It is according to your
>ph> definition, and perhaps you are happy with that, but it has some marks
>ph> which successfully refer to physical properties of part of the world.

>sh> But note that in your example above, even though the computation yields
>sh> a symbol that is interpretable as the number of symbols in the string,
>sh> this is in principle no different from a computation that yields a
>sh> symbol that is interpretable as the number of planets in the solar
>sh> system. It is just a systematic correspondence (and hence interpretable
>sh> as such)

No, you have missed the point of the example. The difference is that in this example, the systematicity is between the syntax of one numeral and the actual (physical?) length of another. This is not the same kind of connection as that between some symbols and a piece of the world that they can be interpreted as referring to. It requires no external interpreter to make it secure; the system itself guarantees that this interpretation will be correct. It is a point that Descartes might have made: I don't need to be connected to an external world in any way in order to be able to really count.
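
[Editorial note: Hayes's 'lengthof' rules can be sketched executably. The following is an illustrative reconstruction, not code from the symposium; it assumes numerals are strings of digits and that 'n<>x' means numeral n with a single digit x appended.]

```python
# A minimal sketch of Hayes's 'lengthof' rewrite rules, assuming
# numerals are digit strings and n<>x appends one digit x to n.

def lengthof(numeral: str) -> int:
    """Rewrite-rule semantics:
       lengthof('0')  -> 1                          (a single digit)
       lengthof(n<>x) -> lengthof(n) + lengthof(x)
    (Generalized here so any single digit, not just '0', has length 1.)
    """
    if len(numeral) == 1:              # base case: one digit
        return 1
    n, x = numeral[:-1], numeral[-1]   # split as n<>x
    return lengthof(n) + lengthof(x)

# The result really is the length of the piece of syntax itself:
print(lengthof("1066"))  # 4
```

The point survives in the sketch: the value computed is guaranteed to match the actual length of the symbol string, with no external interpreter needed to secure the correspondence.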


>sh> ... But "interpretable as meaning X" (as in the case of a book,
>sh> interpretable by a thinking mind) is not the same as "meaning X" (as in
>sh> the case of thoughts, in a mind). Failing to distinguish the two seems
>sh> to be another instance of conflating physical inner/outer and mental
>sh> inner/outer, as discussed earlier.

I am distinguishing them, and claiming to have a case of the latter. Now of course if you insist a priori that meaning can only take place in a mind, and a system like this isn't one, then you carry the day; but that seems to beg the question.
>ph> My intuition tells me clearly that when I debug a piece of code by
>ph> pretending to be an interpreter and running through it "doing" what it
>ph> "tells" me to do, that the program is not being run, and certainly not
>ph> run on, or by, me. So we are left with your intuition vs. my intuition,
>ph> and they apparently disagree.

>sh> But isn't the real question whether there is any relevant difference
>sh> between what you think is a "real" implementation by a machine and what
>sh> you think is a "pseudo-implementation" by a person? Certainly the
>sh> computer is not stepping through the states consciously and
>sh> deliberately, as you are. But is there anything else that's different?
>sh> If we speak only of the "motions gone through" and their I/O conditions
>sh> in the two cases, they are exactly the same. In the case of the
>sh> machine, the motions are mechanical; no choice is involved. In the case
>sh> of the person, they're elective. But so what?

Well, that is a very good question. That is exactly what computer science is all about. What is different in having a machine that can run algorithms from just being able to run algorithms? I take it as obvious that something important is, and that answering that question is, pace Brian Smith's recent message, essentially an empirical matter. We are discovering so what.


>sh> Even apart from the vexed
>sh> questions associated with free will and causality, what is there about
>sh> taking IDENTICAL motions under identical I/O conditions and making
>sh> their causal basis mindless and mechanical that could possibly effect a
>sh> transition INTO the mental (rather than OUT of it, which is the much
>sh> more obvious feature of the transition from the human implementation to
>sh> the machine one)?

I agree it seems almost paradoxical. But as I emphasised, the key is that these AREN'T identical sequences of states. That's what computers do. They put algorithms into the physical world, give them a life of their own, enable them to become real in some important sense. It's a hard sense to get exactly clear, but it seems very real. The difficulty is illustrated well by the awful trouble software is giving to legal concepts, for example. Since programs are textual and can be copied, and do nothing until 'performed', they seem like things to be copyrighted. But in many ways they are more like pieces of machinery suitable for patenting. They are both, and neither: they are something new.


>sh> It's always useful, in this sort of hermeneutic puzzle, to de-interpret
>sh> and reduce things to gibberish as much as possible

Ah, maybe that is a bad heuristic sometimes. Clearly if you insist that this can always be done to computer insides but not always to human insides, then you are never going to see meaning in a machine.


>sh> Suppose the computer was doing all the requisite
>sh> summation in binary, and you were too,
>sh> and all it did, and all you did, was compare zero's and one's and erase
>sh> and carry, just like a Turing Machine. Is it still so obvious that
>sh> you're not doing everything the computer is doing? If anything, the
>sh> computer is doing less than you rather than more (because it has no
>sh> choice in the matter). Why should I interpret less as more?

The computer is doing less than me, but that's my point: the PROGRAM is more responsible for what is happening. The computer is essentially BECOMING the program, one might almost say, giving its symbolic patterns momentary flesh so that they act in the world. And that's what a human reader of the code is not doing (unless hypnotised or somehow in its grip in some unreal way).


>sh> By the way, minds and computers may both respond to syntax, but only
>sh> minds respond to meaning. Computers are merely INTERPRETABLE as if they
>sh> responded to meaning...

Nah nah, question begging again!


>sh> And I bet a lot of the essential features of pain and proprioception
>sh> will be in the analog properties of the hardware that implements it,
>sh> which will be more like exotic chemistry.

OK, last word is yours. Who is taking the bets?

Pat Hayes

------------

After a brief lull (mainly because I was out of town and fell behind with the postings) the "What is Computation" discussion proceeds apace... -- SH

Date: Wed, 29 Apr 1992 22:20:04 -0400
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)

I will respond to both Stevan Harnad and Pat Hayes; I wrote:


>dm> Let's distinguish between a computer's states' being
>dm> "microinterpretable" and "macrointerpretable." The former case is what
>dm> you assume: that if we consider the machine to be a rewrite system, the
>dm> rewrite rules map one coherently interpretable state into another. Put
>dm> another way, the rewrite rules specify a change in belief states of the
>dm> system. By contrast, the states of a macrointerpretable system "sort of
>dm> line up" with the world in places, but not consistently enough to
>dm> generate anything like a Tarskian interpretation. What I think you've
>dm> overlooked is that almost all computational processes are at best
>dm> macrointerpretable.

Pat Hayes replied:


>ph> Drew, clearly you have an antisemantic axe to grind, but it's not
>ph> very sharp.

I do have axes to grind, but this isn't one of them. I do not dispute that computers do normally succeed in referring to things and states to exactly the same degree that we do. But the question at issue is whether this fact is part of the *definition* of "computer." I'm pretty sure that Pat and I agree here: that computers are defined as physical instantiations of formal automata (I won't repeat David Chalmers's excellent statement of the position), and they happen to make excellent semantic engines when connected up to things their states can come to refer to.

Now back to Stevan:

You raise four semi-independent issues:


>sh> (1) Does EVERY computer implementing a program have SOME states that are
>sh> interpretable as referring to objects, events and states of affairs, the
>sh> way natural language sentences are?


>sh> (2) Are ALL states in EVERY computer implementing a program interpretable
>sh> as referring... (etc.)?


>sh> (3) What is the relation of such language-like referential
>sh> interpretability and OTHER forms of interpretability of states of a
>sh> computer implementing a program?


>sh> (4) What is the relation of (1) - (3) to the software hierarchy, from
>sh> hardware, to machine-level language, to higher-level compiled
>sh> languages, to their English interpretations?


>sh> My answer would be that not all states of a computer implementing a
>sh> program need be interpretable, and not all the interpretable states
>sh> need be language-like and about things in the world (they could be
>sh> interpretable as performing calculations on numbers, etc.), but ENOUGH
>sh> of the states need to be interpretable SOMEHOW, otherwise the computer
>sh> is just performing gibberish (and that's usually not what we use
>sh> computers to do, nor do we describe them as such), and THAT's the
>sh> interpretability that's at issue here.

But it isn't! We're talking about whether semantic interpretability is part of the *definition* of computer. For that to be the case, everything the computer does must be semantically interpretable. Does it cease to be a computer during the interludes when its behavior is not interpretable?

I assumed that your original claim was that a computer had to correspond to an interpreted formal system (where, in the usual case, the users supply the interpretation). But that's not what you meant at all. An interpreted formal system includes a mapping from states of the system to states of the world. Furthermore, there is a presumption that the state-transition function for the formal system preserves the meaning relation; if the state of affairs denoted by system state S1 holds, then the state of affairs denoted by the following state also holds. But now it's clear that neither you nor Pat is proposing anything of this sort. Instead, you seem to agree with me that a computer is a physical embodiment of a formal automaton, plus a kind of loose, pragmatic, fault-prone correspondence between its states and various world states. Given this agreement, let's simplify. Clearly, the semantic interpretation is no part of the definition of computer. We can identify computers without knowing what interpretation their users place on them.
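
[Editorial note: the meaning-preservation presumption described above can be made concrete. The toy sketch below is my construction, with invented states and mappings; nothing in it comes from the symposium. It checks that a denotation map from system states to world states commutes with the two transition functions.]

```python
# Toy "interpreted formal system": a denotation map preserves meaning
# iff stepping the machine and then interpreting gives the same result
# as interpreting and then letting the world evolve.

machine_step = {"s0": "s1", "s1": "s2", "s2": "s0"}        # formal automaton
world_step   = {"red": "green", "green": "blue", "blue": "red"}
denotes      = {"s0": "red", "s1": "green", "s2": "blue"}  # interpretation

def preserves_meaning() -> bool:
    """True iff denotes(machine_step(s)) == world_step(denotes(s)) for all s."""
    return all(denotes[machine_step[s]] == world_step[denotes[s]]
               for s in machine_step)

print(preserves_meaning())  # True
```

On McDermott's view, real computers rarely satisfy anything this strict: the correspondence between their states and the world is loose, pragmatic, and fault-prone, which is exactly why he denies that such an interpretation belongs in the definition of "computer."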

I have lots more examples. Today I saw a demo of a computer generating the Mandelbrot set. (It was the DEC Alpha chip; definitely the Mandelbrot engine of choice.) Unquestionably a computer; what did its states denote? It seems clear at first: The color of each pixel denoted a speed of convergence of a certain numerical process. But that's just the Platonic ideal, and Platonic referents are very unsatisfactory for our purposes, on two counts. (1) If we count Platonic referents, then *any* formal system has a trivial set of referents. (2) The viewer of the screen was not interested in this set of referents, but in the esthetic value of the display. Hence the real universe of the users was the universe of beauty and truth. Vague, of course, but *computers' semantic relations are normally vague.*
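
[Editorial note: for readers who have not seen it, the "certain numerical process" is the escape-time iteration z -> z**2 + c. A minimal sketch, mine rather than the demo's code:]

```python
# Escape-time sketch of "the color of each pixel denotes a speed of
# convergence": iterate z -> z**2 + c and report how fast |z| escapes.

def escape_time(c: complex, max_iter: int = 100) -> int:
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:      # |z| > 2 guarantees divergence
            return n        # the "speed" a pixel's color would denote
        z = z * z + c
    return max_iter         # treated as "in the set"

print(escape_time(0j))      # 100: the origin is in the Mandelbrot set
print(escape_time(2 + 2j))  # 1: escapes almost immediately
```

The number each pixel "denotes" is well defined, yet, as the passage notes, that Platonic referent is not what the viewer of the display cares about.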


>dm> More examples: What do the states of a video game refer to? The Mario
>dm> brothers? Real asteroids?


>sh> They are interpretable as pertaining (not referring, because there's no
>sh> need for them to be linguistic) to (indeed, they are hard-wireable to)
>sh> the players and moves in the Mario Brothers game, just as in chess. And
>sh> the graphics control component is interpretable as pertaining to (and
>sh> hard-wireable to the bit-mapped images of) the icons figuring in the
>sh> game. A far cry from uninterpretable squiggles and squoggles.

The "players and moves" mostly don't exist, of course, since they include entities like King Koopa and Princess Toadstool. The child playing the game thinks (sort of) that the pictures on the screen refer to a certain universe. Or maybe they *constitute* a universe. It's hard to be precise, but I hope by now vagueness doesn't bother you. Of course, the engineer who wrote the game knows what's *really* going on. The input signals refer to presses of control buttons by the game player. Output signals refer to shapes on a screen. But it would be absurd to say that the game player's version of the semantics is only an illusion, and the real purpose of the system is to map button pushes onto screen alterations. Shall we say, then, that there are *two* computers here --- one formal system, but two competing semantic interpretations? I'd rather say that there is one computer, and as many interpretations as are convenient to posit --- including possibly zero. [Also, the engineer's interpretation is almost trivial, because all it refers to are the program's own inputs and outputs; almost, but not quite, because normally the inputs are real pressures on buttons and the outputs are real photons emanating from a screen.]


>dm> Take almost any example, a chess program, for instance. Suppose that
>dm> the machine is evaluating a board position after a hypothetical series
>dm> of moves. Suppose the evaluation function is a sum of terms. What does
>dm> each term denote? It is not necessary to be able to say. One might, for
>dm> instance, notice that a certain term is correlated with center control,
>dm> and claim that it denotes "the degree of center control," but what does
>dm> this claim amount to? In many games, the correlation will not hold, and
>dm> the computer may as a consequence make a bad move. But the evaluation
>dm> function is "good" if most of the time the machine makes "good moves."


>sh> I'm not sure what an evaluation function is,

[It's the program that computes a quick guess of how good a board position is without any further lookahead.]
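
[Editorial note: to make that gloss concrete, here is a hypothetical evaluation function as a sum of terms, one of them loosely correlated with "center control." All names, weights, and the board encoding are invented for illustration; this is not McDermott's program.]

```python
# A toy evaluation function: a sum of terms, one of which ("center")
# correlates with center control without strictly denoting it.

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER = {"d4", "d5", "e4", "e5"}

def evaluate(board: dict) -> float:
    """board maps squares like 'e4' to pieces like '+N' (ours) or '-P'
    (theirs). Returns material balance plus a center-control bonus."""
    material = sum((1 if p[0] == "+" else -1) * PIECE_VALUE[p[1]]
                   for p in board.values())
    center = sum((0.25 if p[0] == "+" else -0.25)
                 for sq, p in board.items() if sq in CENTER)
    return material + center

print(evaluate({"e4": "+N", "a1": "-R"}))  # 3 + 0.25 - 5 = -1.75
```

McDermott's point applies directly: asking what the 0.25 term *denotes* has no crisp answer; the function is simply "good" if the program mostly makes good moves.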


>sh> but again, I am not saying
>sh> every state must be interpretable. Even in natural language there are
>sh> content words (like "king" and "bishop") that have referential
>sh> interpretations and function words ("to" and "and") that have at best
>sh> only syntactic or functional interpretations. But some of the internal
>sh> states of a chess-playing program surely have to be interpretable as
>sh> referring to or at least pertaining to chess-pieces and chess-moves, and
>sh> those are the ones at issue here.

But if only "some" of the states have to be interpretable, then is the system only a computer some of the time? Or to some degree?


>dm> The chess program keeps a tree of board positions. At each node of this
>dm> tree, it has a list of moves it is considering, and the positions that
>dm> would result. What does this list denote? The set of moves "worth
>dm> considering"? Not really; it's only guessing that these moves are worth
>dm> considering. We could say that it's the set the machine "is
>dm> considering," but this interpretation is trivial.


>sh> And although I might make that interpretation for convenience in
>sh> describing or debugging the program (just as I might make the
>sh> celebrated interpretation that first got Dan Dennett into his
>sh> "intentional stance," namely, that "the computer thinks it should get
>sh> its queen out early"), I would never dream of taking such
>sh> interpretations literally: Such high level mentalistic interpretations
>sh> are simply the top of the as-if hierarchy, a hierarchy in which
>sh> intrinsically meaningless squiggles and squoggles can be so interpreted
>sh> that (1) they are able to bear the systematic weight of the
>sh> interpretation (as if they "meant" this, "considered/believed/thought"
>sh> that, etc.), and (2) the interpretations can be used in (and even sometimes
>sh> hard-wired to) the real world (as in interpreting the squiggles and
>sh> squoggles as pertaining to chess-men and chess-moves).

You're forgetting which side of the argument you're on. *I'm* arguing that such interpretations are epiphenomenal. *You're* arguing that the interpretation is the scaffolding supporting the computerhood of the system. Or perhaps I should picture a trapeze; if the system spends too much time between interpretable states, it falls from computational grace.


>dm> We can always impose a trivial interpretation on the states of the
>dm> computer. We can say that every register denotes a number, for
>dm> instance, and that every time it adds two registers the result denotes
>dm> the sum. The problem with this idea is that it doesn't distinguish the
>dm> interpreted computers from the uninterpreted formal systems, because I
>dm> can always find such a Platonic universe for the states of any formal
>dm> system to "refer" to. (Using techniques similar to those used in
>dm> proving predicate calculus complete.)


>sh> I'm not sure what you mean, but I would say that whether they are
>sh> scratches on a paper or dynamic states in a machine, formal symbol
>sh> systems are just meaningless squiggles and squoggles unless you project
>sh> an interpretation (e.g., numbers and addition) onto them.

At this point you seem to have crossed over and joined my side completely. You are admitting that there can be machines that embody formal symbol systems whose states are just meaningless squiggles and squoggles.


>sh> The fact that
>sh> they will bear the systematic weight of that projection is remarkable
>sh> and useful (it's why we're interested in formal symbol systems at all),
>sh> but certainly not evidence that the interpretation is intrinsic to the
>sh> symbol system;

Yes! Right!


>sh> it is only evidence of the fact that the system is
>sh> indeed a nontrivial symbol system (in virtue of the fact that it is
>sh> systematically interpretable). Nor (as is being discussed in other
>sh> iterations of this discussion) are coherent, systematic "nonstandard"
>sh> alternative interpretations of formal symbol systems that easy to come
>sh> by.

You're going to feel terrible when you realize you've agreed with me!


>dm> If no other argument convinces you, this one should: Nothing prevents
>dm> a computer from having inconsistent beliefs. We can build an expert
>dm> system that has two rules that either (a) cannot be interpreted as
>dm> about medical matters at all; or (b) contradict each other. The system,
>dm> let us say, happens never to use the two rules on the same case, so
>dm> that on any occasion its advice reflects a coherent point of view.
>dm> (Sometimes it sounds like a homeopath, we might say, and sometimes like
>dm> an allopath.) We would like to say that overall the computer's
>dm> inferences and pronouncements are "about" medicine. But there is no way
>dm> to give a coherent overall medical interpretation to its computational
>dm> states.


>sh> I can't follow this: The fact that a formal system is inconsistent, or
>sh> can potentially generate inconsistent performance, does not mean it is
>sh> not coherently interpretable: it is interpretable as being
>sh> inconsistent, but as yielding mostly correct performance nevertheless.
>sh> [In other words, "coherently interpretable" does not mean
>sh> "interpretable as coherent" (if "coherent" presupposes "consistent").]

It matters in the traditional framework I was assuming you endorsed. I see that you don't. Pat does, however:


>ph> Your inconsistent-beliefs point misses an important issue. If that
>ph> expert system has some way of ensuring that these contradictory rules
>ph> never meet, then it has a consistent interpretation, trivially: we can
>ph> regard the mechanism which keeps them apart as being an encoding of a
>ph> syntactic difference in its rule-base which restores consistency.
>ph> Maybe one set of rules is essentially written with predicates with an
>ph> "allo-" prefix and the others with a "homeo-". You might protest that
>ph> this is cheating, but I would claim not: in fact, we need a catalog of
>ph> such techniques for mending consistency in sets of beliefs, since
>ph> people seem to have them and use them to 'repair' their beliefs
>ph> constantly, and making distinctions like this is one of them (as in,
>ph> "Oh, I see, must be a different kind of doctor"). If on the other hand
>ph> the system has no internal representation of the distinction, even
>ph> implicit, but just happens to never bring the contradiction together,
>ph> then it is in deep trouble ....

I'm with Stevan on this one. The rule-separation mechanism may in some sense restore consistency, but it's hard to explain how it does this *semantically.* (The syntactic mechanism must somehow affect the meanings of the rules, or affect the sense in which the system "believes" its rules.) Fortunately, we are not called on to provide a systematic semantics.


>dm> I suspect Searle would welcome this view, up to a point. It lends
>dm> weight to his claim that semantics are in the eye of the beholder.
>dm> ... However, the point
>dm> at issue right now is whether semantic interpretability is part of the
>dm> definition of "computer." I argue that it is not; a computer is what
>dm> it is regardless of how it is interpreted. I buttress that
>dm> observation by pointing out just how unsystematic most interpretations
>dm> of a computer's states are. However, if I can win the argument about
>dm> whether computers are objectively given, and uninterpreted, then I
>dm> can go on to argue that unsystematic interpretations of their states
>dm> can be objectively given as well.


>sh> If you agree with Searle that computers can't be distinguished from
>sh> non-computers on the basis of interpretability, then I have to ask you
>sh> what (if anything) you DO think distinguishes computers from
>sh> non-computers?

I refer you to Chalmers. A brief summary: A system is a computer if its physical states can be partitioned into classes that obey a transition relation.

Drew McDermott

------------------------------------

Date: Thu, 7 May 92 19:01:34 EDT From: "Stevan Harnad"

ON IMPLEMENTING ALGORITHMS MINDLESSLY

Pat Hayes wrote:


>ph> If we include (as we should) linguistic input, it seems clear that
>ph> structures and processes [underlying our capacity to categorize] will
>ph> be largely symbolic... vision and other perceptual modes involve
>ph> symbols from an early stage...

The only problem with "including" (as you put it) linguistic input is that, without grounding, "linguistic input" is just meaningless squiggles and squoggles. To suppose it is anything more is to beg the main question at issue here.

To categorize is to sort the objects in the world, beginning with their sensory projections. It is true that we can sort names and descriptions too, but unless these are first grounded in the capacity to sort and name the objects they refer to, based on their sensory projections, "names and descriptions" are just symbolic gibberish that happens to have the remarkable syntactic property of being systematically translatable into a code that we are able to understand. But that's all MEDIATED meaning, it is not autonomously grounded. And a viable candidate for what's going on in our heads has to be autonomously grounded; it can't just be parasitic on our interpretations.

Another thing you might have meant was that symbols play a role even in sensory categorization. That may be true too, but then they better in turn be grounded symbols, otherwise they are hanging from a (Platonic?) skyhook.


>ph> No, you have missed the point of the [internal length] example.
>ph> in this example, the systematicity is between the
>ph> syntax of one numeral and the actual (physical?) length of another.
>ph> This is not the same kind of connection as that between some symbols
>ph> and a piece of the world that they can be interpreted as referring to.
>ph> It requires no external interpreter to make it secure, the system
>ph> itself guarantees that this interpretation will be correct. It is a
>ph> point that Descartes might have made: I don't need to be connected to
>ph> an external world in any way in order to be able to really count.

MENTAL counting is moot until its true underlying mechanism is known; you are simply ASSUMING that it's just symbol manipulation.

But your point about the correspondence between the internal numerical symbol and the length of the internal sequence it denotes can be made without referring to the mental. There is certainly a correspondence there, and the interpretation is certainly guaranteed by causality, but only in a slightly more interesting sense than the interpretation that every object can be taken to be saying of itself "Look, here I am!" That too is a guaranteed relation. I might even grant that it's "grounded," but only in the trivial sense that an arbitrary toy robot is grounded. Symbols that aspire to be the language of thought cannot just have a few fixed connections to the world. The systematicity that is needed has to have at least the full TT power of natural language -- and to be grounded it needs TTT-scale robotic capacity.

Arithmetic is an artificial language. As such, it is an autonomous formal "module," but it also happens to be a subset of English. Moreover, grounded mental arithmetic (i.e., what we MEAN by numbers, addition, etc.) is not the same as ungrounded formal arithmetic (symbols that are systematically interpretable as numbers).

That having been said, I will repeat what I said earlier, that there may nevertheless be something to learn from grounded toy systems such as the numerical one you describe. There may be something of substance in such dedicated systems that will scale up to the TTT. It's just not yet obvious what that something is. My guess is it will reside in the way the analog properties of the symbols and what they stand for (in this case, the physical magnitude of some quantity) constrain activity at the syntactic level (where the "shape" of the symbols is normally arbitrary and hence irrelevant).


>ph> What is different in having a machine that can run
>ph> algorithms from just being able to run algorithms? I take it as
>ph> obvious that something important is...

I think you're missing my point. The important thing is that the algorithm be implemented mindlessly, not that it be implemented mechanically (they amount to the same thing, for all practical purposes). I could in principle teach a (cooperative) two-year old who could not read or write to do rote, mechanical addition and multiplication. I simply have him memorize the finite set of meaningless symbols (0 - 9) and the small set of rules (if you see "1" above "3" and are told to "add" give "4", etc.). I would then have a little human calculator, implementing an algorithm, who didn't understand a thing about numbers, just as Searle doesn't understand a word of Chinese.

Now let me tell you what WOULD be cheating: If any of what I had the child do was anything but SYNTACTIC, i.e., if it was anything other than the manipulation of symbols on the basis of rules that operate only on their (arbitrary) shapes: It would be cheating if the child (mirabile dictu) happened to know what "odd" and "even" meant, and some of the calculations drew on that knowledge instead of just on the mechanical algorithm I had taught him. But as long as it's just mechanical syntax, performed mindlessly, it makes no difference whatsoever whether it is performed by a machine or stepped through (mechanically) by a person.
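The purely syntactic character of such an algorithm can be made concrete in code. The sketch below is mine, not anything from the discussion: a digit-by-digit adder driven entirely by a memorized rule table over symbol shapes, just as the child would do it. No arithmetic is performed at run time; the only operations are "look at two shapes, look up what shape to write down and what shape to carry."

```python
# A "mindless" adder: pure symbol manipulation over arbitrary shapes.
# The rule table is the system's entire "knowledge"; the digit shapes
# could be swapped for any other distinct marks without changing anything.

DIGITS = "0123456789"

# Rule table: (shape_a, shape_b) -> (result_shape, carry_shape).
# It is CONSTRUCTED here with arithmetic for brevity, but conceptually
# it is just a rote list of 100 memorized rules, like the child's.
RULES = {}
for i, a in enumerate(DIGITS):
    for j, b in enumerate(DIGITS):
        s = i + j  # used only to build the table, never to run it
        RULES[(a, b)] = (DIGITS[s % 10], DIGITS[s // 10])

def mindless_add(x: str, y: str) -> str:
    """Add two numeral strings using only table lookups on shapes."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    out, carry = [], "0"
    for a, b in zip(reversed(x), reversed(y)):
        d1, c1 = RULES[(a, b)]       # combine the two column shapes
        d2, c2 = RULES[(d1, carry)]  # then combine with the carry shape
        # at most one of c1, c2 can be "1", so take whichever is set
        carry = c1 if c1 != "0" else c2
        out.append(d2)
    if carry != "0":
        out.append(carry)
    return "".join(reversed(out))
```

The point the sketch illustrates: whether this loop is executed by silicon or stepped through by an illiterate child with a crib sheet, exactly the same shape-governed algorithm is being implemented.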

Now if you want to appreciate the real grip of the hermeneutical circle, note how much easier it is to believe that an autonomous black box is "really" understanding numbers if it is a machine implementing an algorithm mechanically rather than an illiterate, non-numerate child, who is just playing a symbolic game at my behest. THAT's why you want to disqualify the latter as a "real" implementation, despite the fact that the same syntactic algorithm is being implemented in both cases, without any relevant, nonarbitrary differences whatsoever.


>ph> Clearly if you insist that [reducing to gibberish]
>ph> can always be done to computer insides but not always to human
>ph> insides, then you are never going to see meaning in a machine.

I am sure that whatever is REALLY going on in the head can also be deinterpreted, but you mustn't put the cart before the horse: You cannot stipulate that, well then, all that's really going on in the head is just symbol manipulation, for that is the hypothesis on trial here!

[Actually, there are two semi-independent hypotheses on trial: (1) Is anything NOT just a computer doing computation? and, (2) Are minds just computers doing computation? We agree, I think, that some things are NOT computers doing computation, but you don't think the mind is one of those noncomputational things whereas I do.]

I had recommended the exercise of deinterpreting the symbols so as to short circuit the persuasive influence of those properties that are merely byproducts of the interpretability of the symbols, to see whether there's anything else left over. In a grounded TTT-scale robot there certainly would be something left over, namely, the robotic capacity to discriminate, categorize and manipulate the objects, events and states of affairs that the symbols were about. Those would be there even if the symbols were just gibberish to us. Hence they would be grounding the interpretations independently of our mentalistic projections.

Stevan Harnad

----------------

Date: Thu, 7 May 92 19:23:41 EDT From: "Stevan Harnad"

To all contributors to the "What is Computation?" Symposium:

Jim Fetzer, Editor of the (paper) journal MINDS AND MACHINES has expressed interest in publishing the Symposium (see below) as a special issue of his journal. He has already published one such paper version of a "Skywriting" Symposium similar to this one, which will appear shortly as:

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press)

That Symposium was smaller, with fewer participants, but I hope that the participants in this larger one will want to do this too. I will generate formatted hard copy, and the participants can polish up their prose and add references, but what we must avoid is pre-emptive re-writing that makes one another's contributions retroactively obsolete. We also cannot coordinate diverging iterations of rewriting. We should instead preserve as much as possible the interactive "Skywriting" flavor of the real-time exchange, as we did in the other Symposium.

Please let me know which of you are (and are NOT) interested in publication. In the meanwhile, we can continue for a few more iterations before invoking cloture. Perhaps the prospect of publication will change the style of interaction from this point on, perhaps not...

Several backed up postings are still waiting in the wings.

Best wishes,

Stevan

From: jfetzer@ub.d.umn.edu (james fetzer) Date: Wed, 22 Apr 92 18:20:34 CDT

Stevan,

This stuff is so interesting that I might devote a whole issue to it. How would you like to guest edit a special issue on the topic, "What is Computation?", for MINDS AND MACHINES? Beginning in 1993, we will be going to 125 pages per issue, and each page runs about 600 words. So that represents the maximum length of material I can use. If you like the idea, I have no deadline in mind, but I do believe that it may require something like position papers from the principal contributors in addition to exchanges. I am not envisioning what is nothing more than one continuous exchange, but I will be open-minded about any suggestions you may have about how we proceed.

Let me know if you like this suggestion. Jim

From: jfetzer@ub.d.umn.edu (james fetzer) Date: Thu, 30 Apr 92 16:09:12 CDT

On the proposed new skywriting project tentatively entitled, "What is Computation?", let's take things one step at a time. I like the project but I know we need to agree on a few ground rules.

(1) Let's start with a tentative length of 50 pages at 600 words per page (30,000 words) and see how that plays. If you should need more, then we can work that out, but I would like to try 30,000 first.

(2) The authors must make appropriate reference to works that have been previously discussed or are otherwise relevant to their views. (Skywriting seems to invite unattributed use of ideas, etc., which both of us need to discourage.)

(3) The version that is submitted will be subject to review in accordance with the journal's standing policies. Such review may lead to revisions of certain parts of the exchange, but every effort will be made to adhere as closely as possible to the spirit of the original.

(4) When the final version is submitted to the publisher for typesetting, only typographical corrections will be allowed, lest we bog down in changes that generate other changes, over and over, due to the large number of contributors, etc.

(5) You will (once again) assume responsibility for preparing the manuscript for submission and will execute the permission to publish form on behalf of all of the contributors and will be responsible for overall proofing of the typeset manuscript, coordinating with the others as necessary.

If this sounds agreeable to you, then by all means, let us proceed. Keep me posted as things develop, but I would recommend that the number of contributors be kept to a manageably small number, whatever that is.

Jim Fetzer

Editor MINDS AND MACHINES

Date: Fri, 8 May 92 13:10:51 EDT From: "Stevan Harnad"

Date: Thu, 7 May 92 19:18:06 EST From: David Chalmers To: harnad@Princeton.EDU Subject: Re: Publishing the "What is Computation" Symposium

Sounds fine, an interesting idea. I'll probably make one more contribution within the next week or so, addressing various points that have come up.

Cheers, Dave.

Date: Fri, 8 May 92 12:46:36 -0400 From: "John C. Haugeland" To: harnad@Princeton.EDU, jfetzer@ub.d.umn.edu Subject: Special issue of Minds and Machines on "What is Computation"

Dear Jim and Steve:

I have been following the discussion on "What is computation" with (predictable) interest, but I have not yet participated because of a backlog of prior commitments. These commitments will keep me preoccupied, alas, for the next six weeks as well -- mostly travelling. I have been intending to plunge in when I get back (third week of June); however, the mention of a special issue of _Minds and Machines_ devoted to the topic makes me think that I should at least declare my intentions now, lest I be left behind. What I have in mind is writing a brief (e.g., 3000-word) "position paper" on the topic, with reference to the discussion so far, mostly to give credit. But, as indicated, I can't get to it for a while. Is there any possibility of this, or does the timing wipe me out?

John Haugeland haugelan@unix.cis.pitt.edu

-----------

[Reply: The Symposium will continue, so there is still time for John Haugeland and others to join in. SH.]

----------

Date: Fri, 8 May 92 15:58:43 EDT From: "Stevan Harnad"

Drew McDermott (mcdermott-drew@CS.YALE.EDU) wrote:


>dm> We're talking about whether semantic interpretability is part of the
>dm> *definition* of computer. For that to be the case, everything the
>dm> computer does must be semantically interpretable. Does it cease to be a
>dm> computer during the interludes when its behavior is not interpretable?

There is a systematic misunderstanding here. I proposed semantic interpretability as part of the definition of computation. A computer would then be a device that can implement arbitrary computations. That doesn't mean everything it does must be semantically interpretable. Uninterpretable states in a computer are no more problematic than idle or power-down states. What I suggest, though, is that if it was ONLY capable of uninterpretable states (or of only being idle or off), then it would not be a computer.


>dm> I assumed that your original claim was that a computer had to
>dm> correspond to an interpreted formal system (where, in the usual case,
>dm> the users supply the interpretation). But that's not what you meant...
>dm> now it's clear that neither you nor Pat is proposing anything of
>dm> this sort. Instead, you seem to agree with me that a computer is a
>dm> physical embodiment of a formal automaton, plus a kind of loose,
>dm> pragmatic, fault-prone correspondence between its states and various
>dm> world states. Given this agreement, let's simplify. Clearly, the
>dm> semantic interpretation is no part of the definition of computer. We
>dm> can identify computers without knowing what interpretation their users
>dm> place on them.

The interpretation of any particular computer implementing any particular computation is not part of my proposed definition of a computer. A computer is a physical system with the capacity to implement (many, approximately all) nontrivial computations (= INTERPRETABLE symbol systems), where "nontrivial" is a cryptographic complexity-based criterion.


>dm> The [videogame] "players and moves" mostly don't exist, of course,
>dm> since they include entities like King Koopa and Princess Toadstool. The
>dm> child playing the game thinks (sort of) that the pictures on the screen
>dm> refer to a certain universe. Or maybe they *constitute* a universe.
>dm> It's hard to be precise, but I hope by now vagueness doesn't bother
>dm> you. Of course, the engineer that wrote the game knows what's
>dm> *really* going on. The input signals refer to presses of control
>dm> buttons by the game player. Output signals refer to shapes on a
>dm> screen. But it would be absurd to say that the game player's version
>dm> of the semantics is only an illusion, and the real purpose of the
>dm> system is to map buttons pushes onto screen alterations.

Not that absurd, but never mind. There are certainly many levels of interpretation (virtual systems) in some computers implementing some programs. One virtual system need not have primacy over another one. My point is made if there is any systematic interpretability there at all.

We should keep in mind that two semi-independent questions are under discussion here. The first has nothing to do with the mind. It just concerns what computers and computation are. The second concerns whether just a computer implementing a computer program can have a mind. The groundedness of the semantics of a symbol system relates to this second question. Computer video-games and their interpretations are hopelessly equivocal. They are just implemented squiggles and squoggles, of course, which are in turn interpretable as referring to bit-maps or to images of fictitious entities. But the fictitious entities are in OUR heads, and even the perception of the "entities" on the video-screen is mediated by our brains and their sensory apparatus. Without those, we have only squiggles and squoggles (or, in the case of the dedicated video system, hard-wired to its inputs and outputs, squiggles and squoggles married to buttons and bit-mapped CRT screens).


>dm> You're forgetting which side of the argument you're on. *I'm* arguing
>dm> that such interpretations are epiphenomenal. *You're* arguing that
>dm> the interpretation is the scaffolding supporting the computerhood of
>dm> the system.

You seem to be confusing the question of interpretability with the question of the groundedness of the interpretation. My criterion for computerhood is the capacity to implement arbitrarily many different (nontrivially) interpretable symbol systems. The interpretability of those systems is critical (in my view) to their being computational at all. Without interpretability you have random gibberish, uninterpretable in principle. But even interpretable (nonrandom) symbol systems are just gibberish unless we actually project an interpretation on them. This suggests that interpretability is not enough. If ANY kind of system (computational or not) is to be a viable candidate for implementing MENTAL states, then it cannot be merely interpretable; the interpretation has to be INTRINSIC to the system: it has to be grounded, autonomous, independent of whatever we do or don't project onto it.

Because of the grip of the hermeneutic circle, it is very hard, once we have projected an interpretation onto a system, to see it for what it really is (or isn't) on its own, independent of our interpretations. That's why I recommend de-interpreting candidate systems -- reducing them to the gibberish ("squiggles and squoggles") that they really are, to see what (if anything) is left to ground any meanings in. A pure symbol system (like some of the earlier overinterpreted chimpanzee "languages") could not survive this nonhermeneutic scrutiny. A TTT-scale robot could.


>sh> If you agree with Searle that computers can't be distinguished from
>sh> non-computers on the basis of interpretability, then I have to ask you
>sh> what (if anything) you DO think distinguishes computers from
>sh> non-computers?


>dm> I refer you to Chalmers. A brief summary: A system is a computer if
>dm> its physical states can be partitioned into classes that obey a
>dm> transition relation. -- Drew McDermott

I too think computers/computation can be distinguished from their (non-empty) complement, and perhaps by the elaboration of a criterion like that one. But this still leaves us miles apart on the question: "Ok, given we can objectively distinguish computers from noncomputers, what has this to do with the question of how to implement minds?"
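As an aside, the Chalmers-style criterion McDermott summarizes ("a system is a computer if its physical states can be partitioned into classes that obey a transition relation") can itself be sketched computationally. The toy code below is my own illustration, with invented names throughout: it checks whether a grouping of physical states into abstract classes commutes with the system's dynamics.

```python
# Does a physical system implement an abstract automaton?
#   physical_step: observed successor map on physical states
#   partition:     map from physical state -> abstract state-class
#   abstract_step: the automaton's transition relation on classes

def implements(physical_states, physical_step, partition, abstract_step):
    """True iff the partition commutes with the dynamics:
    partition(physical_step(s)) == abstract_step(partition(s)) for all s."""
    return all(
        partition[physical_step[s]] == abstract_step[partition[s]]
        for s in physical_states
    )

# Example: a 4-state physical oscillator implementing a 2-state flip-flop.
phys = ["v0", "v1", "v2", "v3"]
step = {"v0": "v1", "v1": "v2", "v2": "v3", "v3": "v0"}
part = {"v0": "A", "v1": "B", "v2": "A", "v3": "B"}
flip = {"A": "B", "B": "A"}
```

On this minimal reading, of course, a great many physical systems implement some automaton or other under some partition; the "nontrivial" qualifier discussed above is what would have to do the real work of excluding vacuous cases.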

Stevan Harnad

---------------------

Date: Mon, 4 May 1992 10:31:25 +0200 From: Oded.Maler@irisa.fr (Oded Maler)

One outcome (at least for me) of the previous round of postings on the symbol-grounding problem (1990) was that I became aware of the fact that current computational models are not suitable for dealing with the phenomenon of computers interacting in real-time with the real world. Consequently, with several collaborators, I did some preliminary work on what we call "hybrid dynamical systems" which combine discrete state-transition dynamics with continuous change. This is technical work, and it is not supposed to solve the philosophical problems discussed here; I mention it just to show that such discussions, even if they don't seem to converge, might have some useful side-effects.
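To make the notion of a hybrid dynamical system concrete, here is a minimal sketch of my own (not Maler's actual formalism, and all names are hypothetical): a thermostat whose continuous variable (temperature) flows according to a mode-dependent law, with discrete mode switches triggered when the continuous state crosses a guard threshold.

```python
# Minimal hybrid automaton: discrete modes {ON, OFF} plus a
# continuously evolving temperature, integrated by Euler steps.

def simulate(t0=18.0, steps=200, dt=0.1):
    temp, mode, trace = t0, "ON", []
    for _ in range(steps):
        # Continuous flow: rate of change depends on the discrete mode.
        rate = 0.5 if mode == "ON" else -0.3
        temp += rate * dt
        # Discrete jumps: guard conditions on the continuous state.
        if mode == "ON" and temp >= 22.0:
            mode = "OFF"
        elif mode == "OFF" and temp <= 20.0:
            mode = "ON"
        trace.append((round(temp, 2), mode))
    return trace
```

The interplay is the point: neither the discrete transition system nor the differential law alone describes the behavior; each constrains the other, which is roughly the situation of a computer coupled in real time to a physical environment.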

Now to the question of what a computation is. My current view is that computations are idealized abstract objects that are useful in describing the structure and the behavior of certain systems by focusing on the "informational" aspects of their dynamics rather than on the "materialistic/energetic" aspects. This abstraction, not surprisingly, turns out to be useful in designing and analyzing certain systems such as synchronized switching devices, also known as general-purpose computers. It is sometimes also useful for analyzing the behavior of humans when they perform tasks such as adding numbers.

The question of why such a computational interpretation is more reasonable for some systems than for others is intriguing, and I don't know if a quantitative observer-independent borderline can be drawn. Even real airplanes do not necessarily fly, unless flying is a useful abstraction for us when we want to get to a conference - "you cannot participate in the same flight twice" (to rephrase what's-his-name, badly translated from Greek to Hebrew to English).

So I think the question will reduce to two related problems: (1) What is "information"? -- because this seems to be the characterizing feature of computational dynamics. (2) What is the relation between things and their descriptions?

Oded Maler

------------------------------------

From: Stevan Harnad (harnad@princeton.edu)

For me, flying is not just a "useful abstraction," it's something you really do, in the real air, otherwise you really fall. I agree with you that one of the problems here concerns the relation between things and their descriptions: The problem is when we confuse them! (And the concept of "information" alas seems just as subject to the problem of intrinsic versus derived meaning (i.e., groundedness) as computation is.)

Stevan Harnad

------------------------------------

Date: Thu, 23 Apr 92 21:25:14 -0400 From: davism@turing.cs.nyu.edu (Martin Davis)

Stevan,

I've been watching the (real and virtual) stones flying in this discussion, amazed that none of the hermeneutic mirrors are broken. I had resolved to be safe and shut up. But here goes! I'm throwing, not stones, but rather, all caution to the wind.

Please forgive me, but this is what I really think: if and when brain function is reasonably well understood (and of course that includes understanding how consciousness works), this entire discussion will be seen as pointless, in much the same way that we now regard the battles that used to rage about the ether as pointless. In particular, I believe that the paradoxes of subjectivity ("How can I know that anyone other than me experiences redness?") will seem no more problematic than such equally compelling conundrums as: How can light waves possibly travel through empty space without a medium in which they can undulate? We (or rather our heirs) will know that other people experience redness because it will be known exactly what it is that happens in their brains and ours when redness is experienced. And then the objection that we cannot know that their experience is like ours, or even that they are experiencing anything, will just seem silly.

Whether a TT-passing computer is in any reasonable sense conscious of what it is doing is not a question we can hope to answer without understanding consciousness. If, for example, Dennett is right about consciousness, then I can perfectly well imagine that the answer could be "yes", since I can't see any reason why such mechanisms couldn't in principle be built into a computer program.

Martin Davis

-----------------------------------------

Martin Davis (davism@turing.cs.nyu.edu) wrote:

md> if and when brain function is reasonably well understood (and of
md> course that includes understanding how consciousness works), this
md> entire discussion will be seen as pointless... the paradoxes of
md> subjectivity ("How can I know that anyone other than me experiences
md> redness?") will seem no more problematic... [We] will know that other
md> people experience redness because it will be known exactly what it is
md> that happens in their brains and ours when redness is experienced. And
md> then the objection that we cannot know that their experience is like
md> ours, or even that they are experiencing anything, will just seem
md> silly.

Martin,

You may be surprised to hear that this is a perfectly respectable philosophical position (held, for example, by Paul Churchland and many others) -- although there are also MANY problems with it, likewise pointed out by many philosophers (notably, Tom Nagel) (and although the parenthetic phrase about "understanding how consciousness works" comes perilously close to begging the question).

But you will also be surprised to hear that this is not a philosophical discussion (at least not for me)! I'm not interested in what we will or won't be able to know for sure about mental states once we reach the Utopian scientific state of knowing everything there is to know about them empirically. I'm interested in how to GET to that Utopian state. And if it should be the case (as Searle and others have argued) that the symbolic road is NOT the one that leads there, I would want to know about that, wouldn't you? Perhaps this is the apt point to trot out (not for the first time in the symbol grounding discussion) the reflection of the historian J.H. Hexter on the value of negative criticism:

in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a "To London" sign on the Dover cliffs pointing south...

md> Whether a TT-passing computer is in any reasonable sense conscious of
md> what it is doing is not a question we can hope to answer without
md> understanding consciousness. If, for example, Dennett is right about
md> consciousness, then I can perfectly well imagine that the answer could
md> be "yes", since I can't see any reason why such mechanisms couldn't in
md> principle be built into a computer program.

Yes, but if you have been following the discussion of the symbol grounding problem you should by now (I hope) have encountered reasons why such (purely symbolic) mechanisms would not be sufficient to implement mental states, and what in their stead (grounded TTT-passing robots) might be sufficient.

Stevan Harnad

------------------------------------------

Date: Fri, 8 May 92 17:04:08 EDT From: "Stevan Harnad"

From: dietrich@bingsuns.cc.binghamton.edu Eric Dietrich Subject: Re: Publishing the "What is Computation?" Symposium To: harnad@Princeton.EDU (Stevan Harnad)


> To all contributors to the "What is Computation?" Symposium:
>
> Please let me know which of you are (and are NOT) interested in
> publication. In the meanwhile, we can continue for a few more
> iterations before invoking cloture. Perhaps the prospect of publication
> will change the style of interaction from this point on, perhaps not...

Stevan: I am interested in publication.

Eric Dietrich

-----------------------------------

Date: Fri 8 May 92 13:39:59-PDT From: Laurence Press

Dear Stevan,

Have you got a publisher in mind for the book? If not, I am a consulting editor at Van Nostrand Reinhold, and would be happy to talk with them about it.

Larry Press

-----------------------------------

From: Stevan Harnad To: Larry Press

Dear Larry,

The context for the idea was actually a proposal from the Editor of Minds and Machines, James Fetzer, to publish it as a special issue of his journal. Thanks for your offer too, but unless we encounter problems in fitting it within the scope of a special journal issue, it looks as if we're already spoken for!

Best wishes,

Stevan Harnad

---------------------------------

From: Pat Hayes (hayes@cs.stanford.edu) Date: Tue, 28 Apr 92 18:04:15 MDT

Searle's argument establishes nothing. If one is inclined to accept the idea that an implemented program might correctly be said to exhibit cognition, then the scenario Searle outlines - which we all agree to be fantastic, but for different reasons - suggests that there is an important difference between a computer running a program and a human following through the steps of an algorithm. If one were to achieve the former with a human as the computer, one would have a(n admittedly fantastic) situation, something akin to a fragmented personality. If one is inclined to reject that idea, Searle's scenario can be taken as further bolstering that inclination, as many have noted.

I don't think the other-minds 'barrier' is really germane to the discussion, as it applies as much to other humans as to (hypothesised) artificial agents. I take it as observationally obvious that stones don't have minds, that (most) humans do, and that such things as cats and mice and perhaps some complex computational systems are best described as having partial, simple, or primitive minds. Somewhere between cats (say) and snails (say) the concept becomes sufficiently unclear as to probably be worthless. (This shading-off of mentality is gradual, not a crisp phase transition, by the way, and I don't think that there is such a sharp division between mere biology or mechanism and real intentional thought.)

You wrote:


>sh> normally the only way to know whether or not a
>sh> system has a mind is to BE the system.

If one takes this stark a view of the other-minds question then it seems to me hard to avoid solipsism; and I may not be able to refute solipsism, but I'm not going to let anyone ELSE persuade me its true.

We can go on disagreeing for ever, but let me just say that I don't feel any sense of strain or cost in maintaining my views when shown Searle's curious misunderstandings of computational ideas.


>ph> If we include (as we should) linguistic input, it seems clear that
>ph> structures and processes [underlying our capacity to categorize] will
>ph> be largely symbolic... vision and other perceptual modes involve
>ph> symbols from an early stage...

>sh> The only problem with "including" (as you put it) linguistic input is
>sh> that, without grounding, "linguistic input" is just meaningless
>sh> squiggles and squoggles. To suppose it is anything more is to beg the
>sh> main question at issue here.

Oh no, I have to profoundly disagree. The question is how formal symbols in a computational system might acquire meaning. But surely the words in the English sentences spoken to a machine by a human do not need to have their meaningfulness established in the same way. To take English spoken by humans - as opposed to formalisms used by machines - as having content surely does not beg any of the questions we are discussing.


>sh> To categorize is to sort the objects in the world, beginning with their
>sh> sensory projections

But surely by insisting on beginning thus, YOU are begging the question!


>sh> It is true that we can sort names and descriptions
>sh> too, but unless these are first grounded in the capacity to sort and
>sh> name the objects they refer to, based on their sensory projections,
>sh> "names and descriptions" are just symbolic gibberish...

Rhetoric again. But look at this carefully. Consider the word "everyone". What kind of 'sensory projection' could provide the suitable 'grounding' for the meaning of this? And try "whenever", "manager" or "unusual". Language is full of words whose meaning has no sensory connections at all.


>sh> But your point about the correspondence between the internal numerical
>sh> symbol for the length of an internal sequence can be made without
>sh> referring to the mental.

Yes. I did, actually.


>sh> There is certainly a correspondence there,
>sh> and the interpretation is certainly guaranteed by causality, but only
>sh> in a slightly more interesting sense than the interpretation that every
>sh> object can be taken to be saying of itself "Look, here I am!"

That particular example may not be very interesting, but the point it makes is rather more so, since it is illustrative of a huge collection of computational phenomena throughout which interpretation is similarly guaranteed by causality. This was Brian Smith's point: computation is, as it were, permeated by meanings causally linked to symbols.


>sh> Symbols that aspire to be the language of thought cannot just have a few
>sh> fixed connections to the world.

This raises a very interesting question. Let us suppose that you are basically right about the need for grounding to guarantee meaning. I believe you are, and have made similar points myself in my 'naive physics' papers, although I think that English can ground things quite successfully, so I have more confidence in the TT than you do. But now, how much grounding does it take to sufficiently fix the meanings of the symbols of the formalisms? Surely not every symbol needs to have a direct perceptual accounting. We have all kinds of mechanisms for transferring meanings from one symbol to another, for example. But more fundamentally, beliefs relating several concepts represent mutual constraints on their interpretation which can serve to enforce some interpretations when others are fixed. This seems to be a central question: just how much attachment of the squiggles to their meanings can be done by axiomatic links to other squoggles?


>sh> The systematicity that is needed has to
>sh> have at least the full TT power of natural language -- and to be
>sh> grounded it needs TTT-scale robotic capacity.

That's exactly the kind of assertion that I feel need not be taken at face value.

>ph> What is different in having a machine that can run
>ph> algorithms from just being able to run algorithms? I take it as
>ph> obvious that something important is...
>
>sh> I think you're missing my point. The important thing is that the
>sh> algorithm be implemented mindlessly, not that it be implemented
>sh> mechanically (they amount to the same thing, for all practical
>sh> purposes). I could in principle teach a (cooperative) two-year old who
>sh> could not read or write to do rote, mechanical addition and
>sh> multiplication. I simply have him memorize the finite set of
>sh> meaningless symbols (0 - 9) and the small set of rules (if you see "1"
>sh> above "3" and are told to "add" give "4", etc.). I would then have a
>sh> little human calculator, implementing an algorithm, ...

No, that's exactly where I disagree. A human running consciously through rules, no matter how 'mindlessly', is not a computer implementing a program. They differ profoundly, not least for practical purposes. For example, you would need to work very hard on keeping a two-year-old's attention on such a task, but the issue of maintaining attention is not even coherent for a computer.

I know you find observations like this irrelevant to the point you are making - hence your quick "cooperative" to fend it off - but they are very relevant to the point I am making. I see an enormous, fundamental and crucial difference between your 'mindless' and 'mechanical'. The AI thesis refers to the latter, not the former. To identify them is to abandon the whole idea of a computer.


>sh> Now let me tell you what WOULD be cheating: If any of what I had the
>sh> child do was anything but SYNTACTIC, i.e., if it was anything other than
>sh> the manipulation of symbols on the basis of rules that operate only on
>sh> their (arbitrary) shapes: It would be cheating if the child (mirabile
>sh> dictu) happened to know what "odd" and "even" meant, and some of the
>sh> calculations drew on that knowledge instead of just on the mechanical
>sh> algorithm I had taught him. But as long as it's just mechanical syntax,
>sh> performed mindlessly, it makes no difference whatsoever whether it is
>sh> performed by a machine or stepped through (mechanically) by a person.

I disagree: I think it makes a fundamental difference, and to deny this is to deny that computation is real. But we are just beating our chests at one another again.


>sh> Now if you want to appreciate the real grip of the hermeneutical circle,
>sh> note how much easier it is to believe that an autonomous black box is
>sh> "really" understanding numbers if it is a machine implementing an
>sh> algorithm mechanically rather than an illiterate, non-numerate child,
>sh> who is just playing a symbolic game at my behest.

Nah nah. You are just (re)instating the Chinese Room 'argument' AGAIN. And it is still convincing if you believe its conclusion, and not if you don't. It doesn't get anywhere.


>sh> THAT's why you want to
>sh> disqualify the latter as a "real" implementation, despite the fact that
>sh> the same syntactic algorithm is being implemented in both cases, without
>sh> any relevant, nonarbitrary differences whatsoever.

No, I repeat: a human running through an algorithm does not constitute an IMPLEMENTATION of that algorithm. The difference is precisely what computer science is the study of: how machines can perform algorithms without human intervention. If you could get your two-year-old's body to IMPLEMENT addition algorithms, you would almost certainly be liable for criminal action.


>ph> Clearly if you insist that [reducing to gibberish]
>ph> can always be done to computer insides but not always to human
>ph> insides, then you are never going to see meaning in a machine.
>
>sh> I am sure that whatever is REALLY going on in the head can also be
>sh> deinterpreted, but you mustn't put the cart before the horse: You
>sh> cannot stipulate that, well then, all that's really going on in the
>sh> head is just symbol manipulation, for that is the hypothesis on trial
>sh> here!

Well, hang on. Surely if you concede that the head's machinations can be de-interpreted, then indeed you have conceded the point; because then it would follow that the head was performing operations which did not depend on the meanings of its internal states. That this is the point at issue does not make it illegal for me to have won the argument, you know. But maybe you did not mean to give that up so quickly. I will let you take that move back before claiming checkmate.


>sh> [Actually, there are two semi-independent hypotheses on trial: (1) Is
>sh> anything NOT just a computer doing computation? and, (2) Are minds just
>sh> computers doing computation? We agree, I think, that some things are
>sh> NOT computers doing computation, but you don't think the mind is one of
>sh> those noncomputational things whereas I do.]

Let's agree to dismiss (1). This Searlean thesis that everything is a computer is so damn silly that I take it simply as absurd. I don't feel any need to take it seriously since I have never seen a careful argument for it, but even if someone produces one, that will just amount to a reductio tollens disproof of one of its own assumptions.


>sh> I had recommended the exercise of deinterpreting the symbols so as to
>sh> short circuit the persuasive influence of those properties that are
>sh> merely byproducts of the interpretability of the symbols, to see
>sh> whether there's anything else left over. In a grounded TTT-scale robot
>sh> there certainly would be something left over, namely, the robotic
>sh> capacity to discriminate, categorize and manipulate the objects, events
>sh> and states of affairs that the symbols were about. Those would be there
>sh> even if the symbols were just gibberish to us. Hence they would be
>sh> grounding the interpretations independently of our mentalistic
>sh> projections.

OK. But what I don't follow is why you regard the conversational behavior of a successful passer of the TT as clearly insufficient to attach meaning to its internal representations, while you find Searle's response to the "Robot Reply" quite unconvincing. If we are allowed to look inside the black box and de-interpret its innards in one case, why not also the other? Why is robotic capacity so magical in its grounding capacity but linguistic capacity, no matter how thorough, utterly unable to make symbols signify? And I don't believe the differences are that great, you see. I think much of what we all know is attached to the world through language. That may be what largely differentiates us from the apes: we have this incredible power to send meaning into one another's minds.

Pat Hayes

(PS. The arithmetic example, while very simple, provides an interesting test for your hermeneutic intuitions. Take two different addition algorithms. One is the usual technique we all learned involving adding columns of numbers and carrying the tens, etc. The other has a bag and a huge pile of pebbles and counts pebbles into the bag for each number, then shakes the bag and counts the pebbles out, and declares that to be the sum. A child might do that. Would you be more inclined to say that the second, pebble-counting child understood the concept of number? You can no doubt recognise the path I am leading you along.)
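[The two procedures in Hayes's PS are easy to spell out. The sketch below is not part of the original exchange; the function names and the toy digit-fact lookup are invented for illustration. The point it makes concrete is the one under discussion: the two procedures are weakly (I/O) equivalent, giving the same sums by entirely different algorithms.]

```python
# Two I/O-equivalent addition procedures (illustrative sketch only).

def column_add(a: str, b: str) -> str:
    """The schoolbook method: manipulate digit symbols column by column,
    carrying the tens. Purely syntactic: the int() calls stand in for the
    child's memorized table of digit facts ("1" above "3" gives "4")."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry  # look up the digit fact
        digits.append(str(total % 10))     # write the column's digit
        carry = total // 10                # carry the ten, if any
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

def pebble_add(a: str, b: str) -> str:
    """The pebble method: count one pebble into the bag per unit of each
    number, shake, then count the pebbles back out."""
    bag = ["pebble"] * int(a) + ["pebble"] * int(b)
    count = 0
    for _ in bag:  # counting the pebbles out of the bag
        count += 1
    return str(count)

print(column_add("275", "48"))  # -> 323
print(pebble_add("275", "48"))  # -> 323
```

Weak equivalence holds (same input/output behavior), but the two are not state-for-state equivalent: the column method never represents the quantity as a whole, while the pebble method represents nothing else.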

-----------------------

Date: Sun, 10 May 92 13:24:01 EDT From: "Stevan Harnad"

SYMBOLS CANNOT GROUND SYMBOLS

Pat Hayes (hayes@cs.stanford.edu) wrote:


>ph> I take it as observationally obvious [1] that stones don't have minds,
>ph> [2] that (most) humans do, and that such things as [3] cats and mice
>ph> and perhaps some [4] complex computational systems are [5] best
>ph> described as having partial, simple, or primitive minds... and I don't
>ph> think that there is such a sharp division between mere biology or
>ph> mechanism and real intensional thought.

Although I don't think this kind of "observation" is quite the same as other empirical observations, let me point out that one can readily agree with [1 - 3] and utterly disagree with [4], which suggests it might not all be so "obvious."

Let me also point out on exactly what MY judgment, at least, is based in these 4 cases. It is based purely on TTT-indistinguishability (note that I said TTT, i.e., total indistinguishability in robotic capacities, not merely TT, i.e., indistinguishability only in symbolic capacities).

(Although there is another potential criterion, TTTT (neural) indistinguishability, I am enough of a functionalist, and believe the robotic degrees of freedom are narrow enough, to make this further constraint supererogatory; besides, the TTTT is certainly not why or how we judge that other people and animals have minds.)

Animals do not pass the human TTT, but they come close enough. So would robots, making their way in the world (but, for methodological reasons, only if they passed the human TTT; we unfortunately do not know enough about animals' TTT capacities to be able to trust our judgments about animal-robots' TTT-indistinguishability from their biological counterparts: this is a serious problem for bottom-up robotics, which would naturally prefer to take on the amphioxus TTT before facing the human TTT!).

But you really begin to equivocate with [5]: "best described as having partial, simple, or primitive minds," because, you see, what makes this particular question (namely, the "other minds problem," pace Martin Davis) different from other empirical problems is that it is not merely a question of finding the "best description," for there also happens to be a FACT of the matter: There either IS somebody home in there, experiencing experiences, thinking thoughts, or NOT. And if not, then attributing a mind to it is simply FALSE, whether or not it is the "best description" (see Oded Maler's point about things vs. descriptions).

Nor is there a continuum from the mental to the nonmental (as there perhaps is from the living to the nonliving). There may be higher and lower alertness levels, there may be broader and narrower experiential repertoires of capacities, but the real issue is whether there is anybody home AT ALL, experiencing anything whatever, and that does indeed represent a "sharp division" -- though not necessarily between the biological and the nonbiological.

Now no one can know where that division really lies (except by being the candidate), but we can try to make some shrewd empirical inferences. Symbolic Functionalism ("thinking is just computation") was a natural first pass at it, but I, at least, think it has been shown to be insufficient because of the symbol grounding problem. Robotic Functionalism ("thinking is what goes on inside grounded TTT-scale robots") could be wrong too, of course, but until someone comes up with a principled reason why, I see no cause to worry about heading in that empirical direction.


>sh> normally the only way to know whether or not a
>sh> system has a mind is to BE the system.
>
>ph> If one takes this stark a view of the other-minds question then it
>ph> seems to me hard to avoid solipsism; and I may not be able to refute
>ph> solipsism, but I'm not going to let anyone ELSE persuade me its true.

For mind-modellers, the other-minds problem is not a metaphysical but a methodological problem. Abandoning computationalism certainly does NOT commit us to solipsism.


>ph> The question is how formal symbols in a computational system might
>ph> acquire meaning. But surely the words in the English sentences spoken
>ph> to a machine by a human do not need to have their meaningfulness
>ph> established in the same way. To take English spoken by humans - as
>ph> opposed to formalisms used by machines - as having content surely does
>ph> not beg any of the questions we are discussing.

There's no problem with the content of English for English speakers. The problem is with the content of English for a computer. English is grounded only in the heads of minds that understand what it means. Apart from that, it's just (systematically interpretable) squiggles and squoggles. The question is indeed how the squiggles and squoggles in a computer might acquire meaning -- and that certainly isn't by throwing still more ungrounded squiggles and squoggles at them...


>sh> To categorize is to sort the objects in the world,
>sh> beginning with their sensory projections
>
>ph> But surely by insisting on beginning thus, YOU are begging the question!

Not at all, I'm trying to answer it. If we start from the recognition that the symbols in a computer are ungrounded and need to be grounded, then one possible grounding hypothesis is that the requisite grounding comes from constraints exerted by symbols' physical connections to the analog structures and processes that pick out and interact with the real-world objects that the symbols are about, on the basis of their sensorimotor projections. It seems to me that to attempt to ground systems other than from the sensory-bottom upward is to try to get off the ground by one's (symbolic?) bootstraps, or by clutching a (symbolic?) skyhook. I am, however, interested in rival grounding hypotheses, in particular, non-sensory ones, just as long as they are GROUNDING hypotheses and not just ways of letting the hermeneutics in by the back door (as in imagining that "natural language" can ground symbols).


>ph> Consider the word "everyone". What kind of "sensory projection" could
>ph> provide the suitable "grounding" for the meaning of this? And try
>ph> "whenever", "manager" or "unusual". Language is full of words whose
>ph> meaning has no sensory connections at all.

These objections to bottom-up sensory grounding have been raised by philosophers against the entire edifice of empiricism. I have attempted some replies to them elsewhere (e.g. Harnad 1992), but the short version of the reply is that sensory grounding cannot be investigated by armchair introspection on word meanings; it will only be understood through empirical attempts to design grounded systems. What can be said, however, is that most words need not be grounded directly. The symbol string "An X is a Y that is Z" is grounded as long as "Y" and "Z" are grounded, and their grounding can likewise be symbolic and indirect. The sensory grounding hypothesis is simply that eventually the symbolic descriptions can be cashed into terms whose referents can be picked out from their direct sensory projections.

"Everyone," for example, perhaps means "all people." "People," in turn, is beginning to sound more like something we could pick out from sensory projections. Perhaps even the "all/not-all" distinction is ultimately a sensory one. But I'm just introspecting too now. The real answers will only come from studying and then modeling the mechanisms underlying our (TTT) capacity for discrimination, categorization and identification.
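[The indirect-grounding claim above ("An X is a Y that is Z" is grounded as long as "Y" and "Z" are) can be caricatured in a few lines of code. This sketch is not from the original exchange; the kernel terms and definitions are invented purely for illustration. A term counts as grounded if it is in the directly grounded sensory kernel, or if its definition bottoms out there.]

```python
# Illustrative sketch only: terms and definition links are invented.

KERNEL = {"person", "all"}  # terms assumed grounded directly in
                            # sensory projections

DEFINITIONS = {             # "An X is a Y that is Z" style links
    "everyone": ["all", "person"],
    "manager": ["person", "directs"],
    "directs": ["person", "tells"],
    "tells": ["person", "speaks"],   # "speaks" is left dangling
}

def grounded(term, seen=frozenset()):
    """A term is grounded if it is in the sensory kernel, or if every
    term in its definition is (recursively) grounded. Circular chains
    of definitions never bottom out, so they do not count."""
    if term in KERNEL:
        return True
    if term in seen or term not in DEFINITIONS:
        return False
    return all(grounded(t, seen | {term}) for t in DEFINITIONS[term])

print(grounded("everyone"))  # True: bottoms out in "all" and "person"
print(grounded("manager"))   # False: "speaks" has no grounding path
```

The caricature makes the hermeneutic point visible: however rich the web of definitions, the recursion must terminate in the kernel somewhere; links among ungrounded squiggles alone never terminate.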


>ph> [There is] a huge collection of computational phenomena throughout
>ph> which interpretation is similarly guaranteed by causality [as in the
>ph> length of the internal string example]. This was Brian Smith's point:
>ph> computation is, as it were, permeated by meanings causally linked to
>ph> symbols.

And I'm not unsympathetic to that point; I just want to see it worked out and then scaled up to the TTT.


>sh> Symbols that aspire to be the language of thought
>sh> cannot just have a few fixed connections to the world.
>
>ph> Let us suppose that you are basically right about the need for
>ph> grounding to guarantee meaning. I believe you are, and have made
>ph> similar points myself in my "naive physics" papers, although I think
>ph> that English can ground things quite successfully, so I have more
>ph> confidence in the TT than you do. But now, how much grounding does it
>ph> take to sufficiently fix the meanings of the symbols of the formalisms?
>ph> Surely not every symbol needs to have a direct perceptual accounting.
>ph> We have all kinds of mechanisms for transferring meanings from one
>ph> symbol to another, for example.

These are empirical questions. I have no idea a priori how large a direct sensory basis or "kernel" a grounded TTT system requires (although I do suspect that the kernel will be provisional, approximate, and always undergoing revision whose consequences accordingly percolate throughout the entire system). But I am sure that "English" won't do it for you, because, until further notice, English is just systematically interpretable gibberish, and it's the interpretations that we're trying to ground!

Your phrase about "the need for grounding to guarantee meaning" also worries me, because it sounds as if grounding has merely a confirmatory function: "The meanings are already in the squiggles and squoggles, of course; we just need the robotic evidence to convince the sceptics." Well I think the meaning will be in the grounding, which is why I believe most of the actual physical structures and processes involved will be analog rather than computational.


>ph> But more fundamentally, beliefs relating several concepts represent
>ph> mutual constraints on their interpretation which can serve to enforce
>ph> some interpretations when others are fixed. This seems to be a central
>ph> question: just how much attachment of the squiggles to their meanings
>ph> can be done by axiomatic links to other squoggles?

The constraints you speak of are all syntactic. What they give you (if they are set up properly) is the coherent semantic INTERPRETABILITY that makes a symbol system a symbol system in the first place. The GROUNDING of that interpretation must come from elsewhere. Otherwise it's just the self-confirmatory hermeneutic circle again.


>ph> A human running consciously through rules, no matter how "mindlessly,"
>ph> is not a computer implementing a program. They differ profoundly, not
>ph> least for practical purposes. For example, you would need to work very
>ph> hard on keeping a two-year-old's attention on such a task, but the
>ph> issue of maintaining attention is not even coherent for a computer.
>ph>
>ph> I see an enormous, fundamental and crucial difference between your
>ph> "mindless" and "mechanical." The AI thesis refers to the latter, not
>ph> the former. To identify them is to abandon the whole idea of a
>ph> computer... to deny this is to deny that computation is real.
>ph>
>ph> a human running through an algorithm does not constitute
>ph> an IMPLEMENTATION of that algorithm. The difference is precisely what
>ph> computer science is the study of: how machines can perform algorithms
>ph> without human intervention.

I suppose that if computer science were just the study of hardware for implementing programs then you would have a point (at least about what computer scientists are interested in). But isn't a lot of computer science implementation-independent (software)? If someone writes a program for factoring polynomials, I don't think he cares if it's executed by a machine or an undergraduate. Usually such a program is written at a lower level than the one at which an undergraduate would want to work, but the undergraduate COULD work at that lower level. I think the programmer would have to agree that anyone or anything following the syntactic steps his program specified would be "executing" his program, even if he wanted to reserve "implementing" it for the kind of mechanical implementation you are stressing.

I am not implying that designing mechanical devices that can mechanically implement programs is not an extremely important achievement; I just think the implementation-independence of the programming level renders all these hardware-related matters moot or irrelevant for present purposes. If I had to pick the two main contributions of computer science, they would be (1) showing how much you could accomplish with just syntax, and (2) building devices that were governed mechanically by syntax; most of the action now is in (1) precisely because it's independent of (2).

Let me try to put it another way: Prima facie, computer-hardware-science is a branch of engineering; it has nothing to do with the mind. What principle of hardware science could possibly underwrite the following distinction: If a program is executed by a machine, it has a critical property that it will lack if the very same program is executed by a person. You keep stressing that this distinction is critical for what counts as a true "implementation" of a program. So let's try to set trivial semantics aside and speak merely of the program's being "executed" rather than "implemented." What is there in COMPUTER SCIENCE that implies that mechanical execution will have any relevant and radically different properties from the human execution of the very same program (on the very same I/O)?

Now I realize the case of mind-implementation is unique, so perhaps you could give some suggestive examples of analogous radical differences between mechanical and human implementations of the same programs in other domains, just to set my intuitions.


>ph> Surely if you concede that the head's machinations can be
>ph> de-interpreted, then indeed you have conceded the point; because then
>ph> it would follow that the head was performing operations which did not
>ph> depend on the meanings of its internal states.

Not at all. All that follows from my very willing concession is that one can de-interpret any kind of a system at all, whether it is purely symbolic or not. WHATEVER is going on inside a grounded TTT-scale robot (you seem to be able to imagine only computation going on in there, but I can think of plenty more), whether we know its interpretation or not, those inner structures and processes (whatever they are) retain their systematic relation to the objects, events and states of affairs in the world that (unbeknownst to us, because de-interpreted) they are interpretable as being about. Why? Because those goings-on inside the head would be governed by the system's robotic capacity to discriminate, categorize, manipulate and discourse (in gibberish, if we don't happen to know the code) about the world TTT-indistinguishably from the way we do. In other words, they would be grounded.


>ph> what I don't follow is why you regard the conversational behavior of a
>ph> successful passer of the TT clearly insufficient to attach meaning to
>ph> its internal representations, while you find Searle's response to the
>ph> "Robot Reply" quite unconvincing. If we are allowed to look inside the
>ph> black box and de-interpret its innards in one case, why not also the
>ph> other? Why is robotic capacity so magical in its grounding capacity but
>ph> linguistic capacity, no matter how thorough, utterly unable to make
>ph> symbols signify? And I don't believe the differences are that great,
>ph> you see. I think much of what we all know is attached to the world
>ph> through language. That may be what largely differentiates us from the
>ph> apes: we have this incredible power to send meaning into one another's
>ph> minds.

WE do, but, until further notice, computers don't -- or rather, their capacity to do so (bidirectionally, as opposed to unidirectionally) is on trial here. To get meaning from discourse (as we certainly do), the meanings in our heads have to be grounded. Otherwise all that can be gotten from discourse is syntax. This is why the TT alone is inadequate: because it's all just symbols; nothing to ground the meanings of meaningless squiggles in except still more, meaningless squiggles.

I don't find Searle's response to the Robot Reply unconvincing, I find the Robot Reply unconvincing. It merely amounted to pointing out to Searle that people could do more than just write letters. So Searle said, quite reasonably, fine, add on those extra things and I still won't understand Chinese. He was right, because the objection was wrong. It's not a matter of symbol crunching PLUS some add-on peripherals, where the symbol-crunching is the real bearer of the meaning. That's just as equivocal as symbol crunching alone.

No, my reply to Searle (which in Harnad 1989 I carefully dubbed the "Robotic Functionalist Reply," to dissociate it from the Robot Reply) explicitly changed the test from the TT to the TTT and accordingly changed the mental property in question from "understanding Chinese" to "seeing" in order to point out that even transduction is immune to Searle's argument.

To put it in the briefest possible terms: Symbols alone will not suffice to ground symbols, and language is just symbols (except in the heads of grounded systems -- which neither books nor computers are).


>ph> The arithmetic example, while very simple, provides an interesting
>ph> test for your hermeneutic intuitions. Take two different addition
>ph> algorithms. One is the usual technique we all learned involving adding
>ph> columns of numbers and carrying the tens, etc. The other has a bag
>ph> and a huge pile of pebbles and counts pebbles into the bag for each
>ph> number, then shakes the bag and counts the pebbles out, and declares
>ph> that to be the sum. A child might do that. Would you be more inclined
>ph> to say that the second, pebble-counting child understood the concept of
>ph> number? You can no doubt recognise the path I am leading you along.
>ph>
>ph> Pat Hayes

You've changed the example a bit by having the child know how to count (i.e., able to attach a name to an object, namely, a quantity); this is beginning to leave behind the point, which was that we only wanted the child to do syntax, slavishly and without understanding, the way a computer does.

But, fair enough, if what you are talking about is comparing two different algorithms for addition, one involving the manipulation of numerals and the other the manipulation of pebbles (again on the assumption that the child does not have any idea what all this means), then I have no problem with this: Either way, the child doesn't understand what he's doing.

If you have two I/O equivalent algorithms you have weak equivalence (that's what the TT is based on). The stronger equivalence (I called it Turing Equivalence, but you indicated [in Hayes et al 1992] that that was the wrong term) requires two implementations of the same algorithm, both equivalent state for state. The latter was the equivalence Searle was considering, and even with this strong form of equivalence there's no understanding.

Your point is not, I take it, about how one goes about TEACHING arithmetic to a child, or about what a child might figure out from a task like this -- for that's just as irrelevant as the question of whether or not Searle might actually learn a few things about Chinese in the Chinese room. All such considerations beg the question, just as any verbal instruction to either the child or Searle (about anything except the syntactic rules to be followed) would beg the question.

Stevan Harnad

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds) Connectionism in Context Springer Verlag.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines (in press)

---------------------------------------

Date: Wed, 29 Apr 92 12:45:24 EST From: Ross Buck

I have been aware of the symbol grounding discussion and discussion re computation, but have admittedly not been keeping up in great detail, and if this is too off-the-wall please ignore it. However, I have a perspective that might be relevant. I have made a distinction between special-purpose processing systems (SPPSs) structured by evolution and general-purpose processing systems (GPPSs) structured during the course of ontogeny. Perceptual systems are an example of the former and classical conditioning, instrumental learning, and higher-order cognitive processes examples of the latter. A Gibsonian view of perceptual systems suggests that perception is DIRECT in that there are evolved compatibilities between events and the experience of the perceiver. The experience is not a symbol of the event, it is a SIGN, which in my definition bears a natural relationship with its referent. A sign so defined may be what you call a grounded symbol: I'm not sure.

I would argue that GPPSs are computers but SPPSs are not. SPPSs are a result of evolution and are a defining characteristic of living systems. Computers are not living systems, but living systems have incorporated computers, in the form of GPPSs.

References:

Buck, R. (1985) Prime theory: A general view of motivation and emotion. Psych. Review.

Buck, R. (1988) Human Motivation and Emotion. 2nd. Ed. New York: Wiley.

=========================================================================

From: Stevan Harnad

[The above contribution was returned to the author several weeks ago with the following comment; the rest was subsequently added by the author in response.]

Ross, the two kinds of systems you mention sound like they are worth distinguishing, but you have not given the specifics of what goes on inside them: Is it computation (symbol manipulation) or something else (and if something else, then what)? So far you have sorted two kinds of package, but what we are discussing is the contents. I will only be able to post your contribution to the SG list as a whole if it takes up the substance of the discussion. (The Peircian terminology does not help either, unless you specify a mechanism.) -- Stevan Harnad

=========================================================================

It is something else, namely reproduction. Whereas GPPSs inherently are computers, SPPSs are designed to reproduce events unaltered, for all practical purposes. The analogy would be with reproduction devices like audio/video communication systems, rather than computers. The old Shannon-Weaver analysis comes to mind. More specifically:

Living things differ from computers in that the former have an inherent purpose: the maintenance of the DNA molecule. This involves maintaining the temperature, energy, and chemical (TEC) balances necessary for the DNA molecule to exist and replicate. The success of living things in this regard is demonstrated by the fact that the TEC balances existing within our bodies are roughly comparable to those of the primordial seas in which the DNA molecule first spontaneously generated. To maintain these balances early life forms evolved perceptual systems to monitor both the external environment and the internal, bodily environment for TEC resources and requirements; and response systems to act accordingly: to approach TEC states that favor DNA existence and replication and to avoid TEC states that endanger them. In the process, there evolved basic systems of metabolism (including the oxygen-burning metabolism of animals and the photosynthesis employed by plants which shaped the atmosphere of the early earth), complex eukaryotic cells, sexual reproduction, multicelled creatures, social organization, etc. By far the largest span of time of life on the earth has involved the cobbling together via evolution of these basic systems. Each system, in a teleonomic (as opposed to teleological) process, evolved to serve a specific function: for this reason I prefer to call them "special-purpose processing systems" (SPPSs).

One might argue that these systems involve computation: in a sense any process that involves information transfer might be defined as involving computation. I suggest that the term computation be reserved for systems involving information processing, and that systems designed around information transfer are fundamentally distinct: they are recording devices rather than computing devices. Recording systems have inherent "meaning:" the nature of the event being recorded. From the point of view of the DNA molecule it is critical that the information received by the perceptual systems regarding TEC events is accurate: that it matches in critical respects the actual TEC events. If this is not the case, that DNA molecule is unlikely to survive. The result is the evolution of a perceptual system along the lines of Gibsonian theory: compatibilities between the critical TEC events and the recording qualities of the system evolve naturally and inevitably, so that the organism gains veridical access to certain events in both the external terrestrial environment (including the activities of other organisms) and the internal bodily environment (the latter in the form of motivational-emotional states: in complex creatures who know THAT they have these states they constitute affects, or desires and feelings).

I term the elements by which information is transferred in SPPSs "signs" rather than symbols. This is admittedly a Peircean term, but I do not wish to imply a Peircean definition. Whereas symbols have arbitrary relationships with their referents, the relationship between the sign and the referent is natural. The living organism has access to important TEC events via signs of those events incorporated in the perceptual system: the photon excites a rod in the retina, which in turn excites a sensory neuron in the optic nerve, and so on to the visual cortex. Even though information is altered in form, the system is designed so that the meaning of the information--its relationship to the TEC events--is maintained: the sign of the event is passed up the line altered in form but not meaning. (Is this what you mean by a grounded symbol system, Stevan?)

The evolution of SPPSs took a long time: roughly three billion of the 3.8 billion year old story of life on earth. In the process, the environment of the earth was transformed: the oxygen atmosphere and ozone layer for example are products of life. Very early on, however, it became useful for creatures to process information as well as merely receive it: to associate one event with another, as in Pavlovian conditioning; to approach events associated with beneficial TEC outcomes (i.e., positive incentives) and avoid negative events, etc. This requires general-purpose processing systems (GPPSs) that are structured by experience during the course of ontogeny: computing systems. In human beings, the range and power of such systems has been greatly increased by language.

Thus living things are not computers, but they have come to employ computing devices in adapting to the terrestrial environment. But the fundamental teleonomic goal of living systems--the meaning of life, as it were--is to maintain the TEC balances necessary for the existence of the DNA molecule. Ironically, the activities of human beings made possible by the power of the GPPSs and language have destroyed these balances beyond redemption for many species, and placed in jeopardy the future of life on the earth.

Ross Buck

--------------------------------------------------------------

From: Kentridge Date: Fri, 8 May 92 15:45:29 BST

[Here is] something for the computation discussion perhaps (I have missed a bit after being away - I hope I'm not retreading old ground or too completely off the mark for the current state of the discussion).

Dynamic properties of computational systems.

The discussion of the nature of computation has reached the issue of symbol interpretability just as previous discussions of Searle's Chinese Room problem did. While I would not deny the importance of issues of symbol interpretation when considering adaptive intelligence I think one of the most interesting questions raised by Searle was "what is special about wetware?". I wish to consider an allied question "what is special about physical systems which compute?". In order to understand computation and intelligence I think we need to address both symbolic and physical issues in parallel. Perhaps some consideration of the physics of computation might resolve some of the paradoxes that the current symbolic discussion is producing.

There are two basic classes of dynamic behaviour in physical systems - attraction to equilibrium and attraction to chaos. I will consider the effects of introducing a signal which contains some information on which we wish the system to perform computation for both classes.

When we perturb an equilibrium system by introducing a signal, its response is to settle back into one of a finite number of stable equilibrium states. The transition from initial state via the perturbed state to a resultant, possibly new, equilibrium state is entirely deterministic and predictable. Once the final state has been reached, however, the precise nature of the perturbing signal is lost. Stable states of the system do not preserve any information about the history of the signals which drove them there. Such systems can only perform trivial computation because of their limited memory - we can conceive of them as Turing machines with very, very short tapes. In chaotic systems we fare no better, but for opposite reasons. In a chaotic system each different perturbation introduced to a system in a given starting state will produce a unique resulting behaviour (even if two perturbations push the system onto the same chaotic attractor the resulting orbits will never meet); the system has infinite memory. The problem, however, is that the transition between initial, perturbed and resultant states is unpredictable. The chaotic system is like a Turing machine with a very unreliable automaton.
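
The contrast is easy to see in a toy system such as the logistic map (my own illustrative choice, not something discussed above; the parameter values are likewise mine). A "perturbation" is modelled as a tiny difference between two initial conditions:

```python
# Toy sketch (illustrative assumption, not from the discussion): the
# logistic map x -> r*x*(1-x) exhibits both regimes described above.

def orbit(r, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: attraction to equilibrium. Both orbits converge to the same
# fixed point, so the memory of the perturbation is lost.
a = orbit(2.8, 0.30, 200)
b = orbit(2.8, 0.30 + 1e-6, 200)

# r = 3.9: chaos. The perturbation is never forgotten; the two orbits
# diverge until they are effectively uncorrelated, which is just the
# unpredictability of the transition described above.
c = orbit(3.9, 0.30, 200)
d = orbit(3.9, 0.30 + 1e-6, 200)

print(abs(a[-1] - b[-1]))                      # vanishingly small
print(max(abs(x - y) for x, y in zip(c, d)))   # order 1
```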

One characteristic of systems that do computation then is that they are neither chaotic nor equilibrium systems, they can, however, be at the boundary or phase transition between these two regimes. In such systems distinct effects of perturbations can last arbitrarily long (but not infinite) times and transitions between states are at least probabilistically predictable. The notion that computation only occurs at phase transitions has received experimental support from studies of cellular automata (e.g. Packard, 1987) and theoretical support from analysis of the informational properties of the dynamics of phase transitions in terms of the relationship between complexity and entropy (Crutchfield and Young, 1991). Analysis of the dynamics of simulations of physiologically plausible models of cerebral cortex suggests that cortex may be well suited to being maintained in a critical dynamic state between equilibrium and chaos (Kentridge, 1991) in which computation can take place.
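
For concreteness, here is a toy version of the cellular-automaton point (the rule numbers, lattice size, and block length are my own illustrative choices, not Packard's experimental setup): a rule that freezes into equilibrium shows almost no block variety, while rule 110, a rule usually placed near the "edge of chaos", shows far richer block statistics.

```python
# Toy elementary cellular automaton (illustrative parameters assumed, not
# taken from Packard 1987): compare a freezing rule with rule 110.
from collections import Counter
from math import log2

def step(cells, rule):
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=64, steps=64):
    row = [0] * width
    row[width // 2] = 1          # single-seed initial condition
    history = [row]
    for _ in range(steps):
        history.append(step(history[-1], rule))
    return history

def block_entropy(history, k=3):
    # Shannon entropy (bits) of the length-k blocks appearing in the run:
    # a crude stand-in for the complexity/entropy measures cited above.
    counts = Counter(tuple(row[i:i + k]) for row in history
                     for i in range(len(row) - k + 1))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(block_entropy(run(0)))     # rule 0 freezes: almost no variety
print(block_entropy(run(110)))   # rule 110: much richer statistics
```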

There are a few conclusions I would like to draw from this. First, in this scheme of things computation per se is defined solely in terms of the relationship between the complexity (the number of types of structural regularities which are needed to describe a set of data) and the entropy of data produced by a system. Processes which produce a high ratio of complexity to entropy are ones which are capable of computation. Second, as a consequence of this, not everything is a computer doing computation. Third, computation is only interpretable in terms of the regularities that are used in the definition of complexity - if there is a correspondence between those regularities and the rest of the world then we may recognise the computation as being useful.

I hope this is of some help to someone! It seems to me at least that a physical definition of computation allows us to recognise systems as performing computation even if we can't interpret computation. It also emphasizes that there is an important relationship between the hardware on which computation occurs and the nature of interpretable computation.

Caveat: I'm really a physiological psychologist so reference to the following sources is recommended (well the first two at least!).

References.

Packard, N.H. (1987) Adaptation towards the edge of chaos. In J.A.S. Kelso, A.J. Mandell and M.F. Schlesinger (Eds.) Dynamic patterns in complex systems. Singapore: World Scientific.

Crutchfield, J.P. and Young, K. (1991) Computation at the onset of chaos. In W.H. Zurek (Ed.) Complexity, entropy and the physics of information. (Proceedings of the Santa Fe Institute Studies in the Sciences of Complexity Volume 8.) Redwood City, CA.: Addison-Wesley.

Kentridge, R.W. (1991) Weak chaos and self-organisation in simple cortical network models. Eur. J. Neurosci. S4 73.

Robert Kentridge

--------------------------

Date: Sun, 10 May 92 19:29:44 EDT From: "Stevan Harnad"

Date: Fri, 8 May 92 17:58:50 PDT From: Dr Michael G Dyer Subject: Turing Machine - Brain Continuum

Here's another thought experiment for all. Imagine a continuum C: At one end is a physical brain B, capable of passing the Turing Test (or have it pass the TTT by symbols on parts of its tape controlling a robot, whichever you want).

At the other end of the continuum C is a Turing Tape T that is SO LONG and has so many "squiggles" that it models B at the molecular level. That is, for every mRNA twist/fold and protein etc. produced, there is a corresponding (huge) set of squiggles that encode their state. Transformations of squiggles etc. encode the molecular dynamics (and thus also the neural dynamics).
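
For readers who want the bare mechanics of such "squiggle" transformations, here is a toy Turing machine (a hypothetical miniature, nothing remotely like the molecular-level tape T; the rule table, which increments a binary number, is my own illustrative choice):

```python
# Minimal Turing machine sketch (illustrative assumption, not Dyer's
# model): "squiggles" on a tape transformed by a fixed rule table.

def run_tm(tape, state, rules, blank='_', max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        sym = tape.get(pos, blank)
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rule table for binary increment, head starting at the leftmost bit:
# scan right to the end of the number, then add 1 with carry.
rules = {
    ('right', '0'): ('0', 'R', 'right'),
    ('right', '1'): ('1', 'R', 'right'),
    ('right', '_'): ('_', 'L', 'carry'),
    ('carry', '1'): ('0', 'L', 'carry'),
    ('carry', '0'): ('1', 'L', 'halt'),
    ('carry', '_'): ('1', 'L', 'halt'),
}

print(run_tm('1011', 'right', rules))  # 1011 + 1 = 1100
```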

Notice that I've tried to remove the granularity issue (i.e. "big" symbols of traditional AI versus smaller distributed connectionist "subsymbols") by picking an extremely low (i.e. molecular) level of granularity.

Both T and B have identical behavior (wrt any TT or TTT scenarios you want to devise) so the behavior is also NOT the issue -- both T and B *act* intentional.

The strong AI people would (I assume) claim that both T and B have "intelligence", "consciousness" "intentionality", etc.

Searle (I assume) would claim that B has consciousness/intentionality but T does NOT.

Harnad (I assume) would claim that both T and B have consciousness only when controlling the robot (i.e. TTT case) and both do NOT have it when "disembodied" (i.e. only passing the TT).

First, let's deal with Searle vs Strong AI. We do this by slowly moving along this continuum (of models), from B-to-T (or T-to-B). To move from B-to-T we replace segments (either scattered or contiguous, I don't think it matters) of B brain tissue with smaller Turing Machines Ti where each Ti performs some equivalent function performed by some subpart of B.

To move from T-to-B we replace bunches of squiggles on the tape with real cells, (or subcells, or cell assemblies, etc.).

The continuum might be viewed better as some kind of lattice, with the models in the middle being hybrid brains with different mixtures of cellular subsystems vs Turing Machines. For example, one hybrid is where EVERY OTHER B-neuron (with its dendrites/axons) is modeled by a separate Turing Machine, so the hybrid is a scattered mix of 50% real neurons and 50% Turing Machine simulations, all linked up.

(Turing Machines-to-cellular INTERFACES are the trickiest part. There are probably many ways of doing this. In this thought experiment I think it's ok to scale Turing Machines to a very small size (i.e. super nanotechnology) so they can be more easily interfaced with dendrites (and operate even within a cell). But the main requirement is that a real neuron or protein's dynamics cause a corresponding representation to be placed on a Turing tape at the right time.)

In any case, ALL points on continuum C maintain the behavioral correspondence so that the behavior (for passing the TT or TTT) is the same.

Now, it seems to me that Searle is going to have a hard time determining when "consciousness" or "intentionality" appears, as one moves from T toward B. It's clear that he will be happy with B and unhappy with T but what about all of the possibilities inbetween?

Now let's create a new continuum C' along the "sensory embodiment" dimension by extending C along this dimension. To do this we start out with both B and T controlling a complete robot, with hands/legs/mouth, eyes/ears/skin-sensors.

As we move along C', we slowly remove these sensors/effectors. E.g., if there are 1 million sensors/effectors, we cut them off, bit by bit, and leave only nerve "stumps" (in the B case) or, in the T case, we cut the wires that allow a set of "squiggles" on a Turing tape to control the robot (or those wires that take sensory input and place "squiggles" on some "sensory" portion of the tape). We do this also for all the hybrid brain/machines in the middle of the continuum C. So we now have an "intentionality plane IP" of models CxC'. How you assign "intentionality/consciousness" labels to points/regions on this plane will then say something about your intuitions concerning consciousness.

Strong AI appears to be quite "liberal" -- i.e. assigning "intentionality" to the entire plane (since ALL points on IP demonstrate the same intentional BEHAVIOR).

Does Searle only assign intentionality to point B or does he accept intentionality at other points/regions ???

I'm not sure where Harnad assigns intentionality along C'. Will just ONE photo-sensitive cell be enough for "embodiment"? How many sensors/effectors are needed along continuum C' before intentionality/consciousness appears/disappears for him? (Stevan, perhaps you can enlighten us all on this.)

Both Searle and Harnad just can't accept that a TM (all those "mindless squiggles") could have a MIND. But to anyone accepting MIND as the ORGANIZATION of physical systems, our Turing Machine T has all the organization needed (with an admittedly tight bottleneck of just enough causal apparatus to have this organization direct the dynamics of the Turing tape read/write/move actions).

But is it any more of an amazing fact that "all those meaningless squiggles" create a Mind than the (equally) amazing fact that "all those mindless neurons" create a Mind? We're simply USED to seeing brains show "mindfulness". We are not (yet) used to Turing-class machines showing much mindfulness.

Michael G. Dyer

---------------

From: Stevan Harnad

Fortunately, my reply to Mike can be very short: Real neurons don't just implement computations, and symbolically simulated neurons are not neurons. Set up a continuum from a real furnace heating (or a real plane flying, or a real planetary system, moving) to a computational simulation of the same and tell me where the real heating (flying, moving) starts/stops. It's at the same point (namely, the stepping-off point from the analog world to its symbolic simulation) that a real TTT-passing robot (with its real robot-brain) and its computationally simulated counterpart part paths insofar as really having a mind is concerned.

Stevan Harnad

----------------

Date: Fri, 8 May 92 10:18:00 HST From: Herbert Roitblat

I printed our various communications on this issue and it came to 125 pages. I think that we might want to summarize contributions rather than simply clean up what has been said.

I will contribute.

Herb Roitblat

-----------------------------------------------------------------

From: Brian C Smith Date: Sat, 9 May 1992 15:54:29 PDT

I've enjoyed this discussion, but would very strongly like to argue against publishing it. Instead, I'd like to support John Haugeland's (implicit) suggestion.

For my money, there are two reasons against publishing.

First, I'm not convinced it would be interesting enough, per page. It is one thing to be part of such discussions -- or even to read them, a little bit each day, as they unfold. It has something of the structure of a conversation. It doesn't hurt that many of us know each other. Sitting down with the entire opus, as an outsider, is quite another thing. Last night I reread about a month's worth in paper form, imagining I were holding a journal in my hand -- and it didn't take. It just doesn't read like professional prose. This is not a criticism of anyone. It's just that the genre of e-mail discussion and the genre of refereed journal are different. Excellence in one needn't make for excellence in the other.

More seriously, there has been no attempt to keep the format objective. Stevan has thoroughly mixed moderating and participating, making the result a public correspondence of his, more than a balanced group discussion. It is not just that he has contributed the most (50% of the total, four times as much as the nearest competitor [1]). It is more subtle things -- such as that, for example, a number of contributions [e.g. Sereno, Moody, Dambrosio, some of Hayes] only appeared embedded within his replies; others [e.g. Myers] only after being preceded by a quite normative introduction. You don't need fancy analysis to see how much these things can skew the overall orientation.

Since it is Stevan's list, he is free to do this, and we are free to participate as we choose (though I must say that these things have limited my own participation quite a lot). I assume this is all as he intended. But it would be a very different thing to publish the result as in any sense a general discussion. Certainly to pose it as a general discussion that Stevan has merely moderated would be quite a misrepresentation.

On the other hand, the topic is clearly of wider interest. So instead I suggest that we adopt John Haugeland's suggestion -- and that each of us write a 3000-5000 word brief or position paper on the question, and these be collected together and published. We can draw intellectually on the discussion to date -- but it would also give us a chance to distill what we've learned into punchier, more targeted form.

Brian

P.S. The prospect of publishing e-mail discussions clearly raises all kinds of complex issues -- about genre, the role of editing, applicable standards and authority, models of public debate, etc. I've just been focusing on one: of maintaining appropriate detachment between the roles of moderating and passionate participation. But many others deserve thinking through as well.

[1] 1057 lines out of 2109 counted between April 17 and May 8; Pat Hayes was second with 277.

-----------------------

From: Stevan Harnad

Well, I guess this calls for some sort of a reply from me:

(a) This electronic symposium began informally, with some cross-posting to a small group of individuals (about a dozen); only later did I begin posting it to the entire Symbol Grounding Discussion List (several hundred, which I have moderated for four years), with pointers to the earlier discussion, electronically retrievable by anonymous ftp.

(b) The expressions of interest in publication (one from James Fetzer, editor of Minds and Machines, about the possibility of publishing some version of the symposium as a special issue of his journal, and one from Laurence Press, consulting editor for Van Nostrand Reinhold, expressing interest in publishing it as a book) came still later.

(c) No one had initially expected the symposium to reach the scope it did, nor to draw in as many participants as it has so far. In the very beginning I cross-posted the texts annotated with my comments, but once it became clear that the scale of participation was much larger than anticipated, I switched to posting all texts directly, with comments (my own and others') following separately, skywriting-style.

(d) In the published version (if there is to be one), all texts, including the earliest ones, would appear as wholes, with comments (and quotes) following separately. This is how we did it in editing and formatting the shorter "Virtual Mind" Symposium (Hayes et al. 1992) under similar circumstances.

(e) In moderating the symposium, I have posted all contributions I received in toto (with two exceptions, one that I rejected as irrelevant to the discussion, and one [from Ross Buck] that I first returned for some clarification; the revised version was subsequently posted).

(f) Mike Dyer (sic), with whom I have had long and stimulating exchanges in past years on the Symbol Grounding Discussion Group, entered this discussion of "What is Computation?" on our old theme, which concerns whether a computer can have a mind, rather than what a computer is. Since our respective views on this theme, which I think we have rather run into the ground, had already appeared in print (Dyer 1990, Harnad 1990), I hoped to head off a re-enactment of them at the pass. As it happens, both themes have now taken on a life of their own in this discussion.

(g) It is embarrassing that I have contributed more to the symposium than others have (and the proportions could certainly be adjusted if it were published) but I must point out that this imbalance is not because others were not able -- indeed encouraged -- to contribute. Some (like Pat Hayes and Drew McDermott) availed themselves of the opportunity fully, others did not.

(h) There is no necessity at all that I, as the moderator of the symposium, be the editor of the published version; indeed, I would be more than happy to cede this role to someone else.

(i) Regarding refereeing: James Fetzer indicated clearly that if it appeared in his journal, the published version would first be subject to peer review.

(j) I do wish to register disagreement with Brian Smith on one point, however: I would strongly favor publishing it as a symposium, one that preserves as much as possible of the real-time interactive flavor of this remarkable new medium of communication ("scholarly skywriting"). In reading over the unedited transcript as an "outsider," as Brian did, it is unavoidable that one's evaluation is influenced by the fact that elements of the back-and-forth discussion are not all that congenial to one's own point of view. The remedy for this is not to turn it into a series of noninteractive position papers, but to launch into more interactive participation. Afterward, editing and peer review can take care of making the symposium into a balanced, integrated, publishable final draft.

(k) Since I posted the two possibilities of publication, we have heard affirmatively about publication from (1) Dave Chalmers and (2) Eric Dietrich. I (3) favor publication too. We have now heard from (4) Herb Roitblat and (5) Brian Smith (whose view is seconded below by (X) John Haugeland, who has, however, not yet contributed to the symposium). How do the other 19 of the 24 who have so far contributed to the symposium feel about publication, and whether it should be in the form of an interactive symposium or a series of position papers?

(6) Frank Boyle (7) Ross Buck (8) John M Carroll (9) Jeff Dalton (10) Bruce Dambrosio (11) Martin Davis (12) Michael G Dyer (13) Ronald L Chrisley (14) Gary Hatfield (15) Pat Hayes (16) Robert Kentridge (17) Joe Lammens (18) Oded Maler (19) Drew McDermott (20) Todd Moody (21) John Searle (22) Marty Sereno (23) Tim Smithers (24) Richard Yee

Stevan Harnad

Dyer, M. G. (1990) Intentionality and Computationalism: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2, No. 4.

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on the Virtual Mind. Minds and Machines [in press; published version of electronic "skywriting" symposium]

Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. Invited Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2: 321 - 327.

------------------------------------------------

Date: Sat, 9 May 92 22:06:44 -0400 From: "John C. Haugeland" To: cantwell@parc.xerox.com, harnad@clarity, hayes@cs.stanford.edu Subject: Publishing

Brian, Pat, and Steve:

As usual, Brian knows me better than I know myself. I didn't realize that I was making an implicit proposal, or even that I wanted to. But now that I think about it, I do -- just as Brian said, and for just those reasons. Modest position papers, informed by the discussion so far, but not directly drawn from it, seem like much better candidates for publishable prose.

John Haugeland

------------------------------------------------

Date: Sat, 09 May 92 17:51:40 ADT From: Lev Goldfarb

Oded Maler (Oded.Maler@irisa.fr) wrote:

om> Now to the question of what is a computation. My current view is that
om> computations are idealized abstract objects that are useful in
om> describing the structure and the behavior of certain systems by
om> focusing on the "informational" aspects of their dynamics rather than
om> on the "materialistic/energetic" aspects.

Let me try to attempt a more formal definition:

Computation is a finite or infinite sequence of transformations performed on "symbolic" objects.

One can add that an "interesting" computation captures in some (which?) form the dynamics of some meaningful (to whom?) processes. It appears that the question marks cannot be removed without the participation of some intelligent (understood very broadly) entity that can interpret some sequences of transformations as meaningful.
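
A minimal sketch of this definition (the rewriting system and the unary encoding are my own illustrative assumptions, not Goldfarb's): the "symbolic objects" are strings, each transformation is one rewrite, and the sequence counts as addition only under an external interpretation of the final string -- which is exactly where the question marks enter.

```python
# Hypothetical sketch of computation as a sequence of transformations on
# symbolic objects (strings rewritten by rules).

def rewrite(s, rules):
    """Apply the first applicable rule once; return None if none applies."""
    for old, new in rules:
        if old in s:
            return s.replace(old, new, 1)
    return None

def compute(s, rules, max_steps=100):
    """The sequence of transformations, starting from s."""
    trace = [s]
    for _ in range(max_steps):
        nxt = rewrite(trace[-1], rules)
        if nxt is None:
            break
        trace.append(nxt)
    return trace

# The single rule erases a '+'. Read as unary numbers, the transformation
# sequence is addition -- but only to an interpreter outside the system.
trace = compute('|||+||', [('+', '')])
print(trace[-1])   # '|||||'  (3 + 2 = 5, under the unary reading)
```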

Lev Goldfarb

------------------------------------

Date: Sun, 10 May 92 15:33:41 -0400 From: davism@turing.cs.nyu.edu (Martin Davis) Subject: TTT ?

Stevan Harnad (harnad@clarity.princeton.edu) wrote:


sh> You may be surprised to hear that this is a perfectly respectable
sh> philosophical position (held, for example, by Paul Churchland and
sh> many others)

Not so surprised. I've been a fan of Pat Churchland (I've never met Paul) and regard her work as being remarkably sensible. (By the way, I've held my present views for many years; I first formulated them explicitly in discussions with Hilary Putnam during the years we worked together ~1958.)


sh> (and although the parenthetic phrase about "understanding how
sh> consciousness works" comes perilously close to begging the
sh> question).

I carefully said "IF AND WHEN the brain function is reasonably well understood (and of course that includes understanding how consciousness works)". Of course, to believe that such understanding is likely to come and that with it will come understanding of "mind" and in just what sense "someone is home" up there, is to have a definite stance (I would once have said "materialistic") about such matters. But what question am I begging, or coming "perilously close" to so doing?


sh> But you will also be surprised to hear that this is not a
sh> philosophical discussion (at least not for me)! I'm not
sh> interested in what we will or won't be able to know for sure
sh> about mental states once we reach the Utopian scientific
sh> state of knowing everything there is to know about them
sh> empirically. I'm interested in how to GET to that Utopian
sh> state.

Me too! That's why I'm such a fan of Pat Churchland. It's her line of thought that I believe most likely to move us in that direction.


sh> Yes, but if you have been following the discussion of the symbol
sh> grounding problem you should by now (I hope) have encountered
sh> reasons why such (purely symbolic) mechanisms would not be
sh> sufficient to implement mental states, and what in their stead
sh> (grounded TTT-passing robots) might be sufficient.

Yes I know. But I don't believe any of it. Here again (for whatever it's worth) is what I think:

1. There is clearly a lot of symbol manipulation being carried out in living (wet, messy, analogue) creatures, e.g. DNA. So there is certainly no a priori reason to doubt that it goes on in the brain.

2. There is certainly a mystery about what it means that we possess "understanding," that we associate "meanings" with symbols. But I have seen no reason to believe (and please don't trot out some variant of the Chinese room) that meaning cannot be the result of symbolic manipulation, of operations on squiggles and squoggles. From a very theoretical and abstract point of view, one could even call on Tarski's demonstration that semantics can in fact be reduced to syntax.

3. Finally, I don't really believe that your TTT robot shopping at K-Mart would be more convincing than, say, a TT dialogue on the immortality of the soul. It is certainly attainable (or at least pretty close to being so) with today's technology, to have a chess playing computer provided with a video camera and arm and reacting to the actual moves of the physical pieces on a real physical chessboard with appropriate actions of the arm. Would anyone who argues that the computer knows nothing of chess and is "merely" manipulating squiggles, suddenly give up the point upon being confronted by such a demonstration?

Martin Davis

-----------------------------------

From: "Stevan Harnad" Date: Sun, 10 May 92 22:41:55 EDT

CUTTING UNDERDETERMINATION DOWN TO SIZE

davism@turing.cs.nyu.edu (Martin Davis) wrote:

md> I carefully said "IF AND WHEN the brain function is reasonably
md> well understood (and of course that includes understanding
md> how consciousness works)". Of course, to believe that such
md> understanding is likely to come and that with it will come
md> understanding of "mind" and in just what sense "someone is home"
md> up there, is to have a definite stance (I would once have said
md> "materialistic") about such matters. But what question am I
md> begging, or coming "perilously close" to so doing?

The question-begging is the unargued adoption of the assumption that to understand brain function fully (in the sense that we can also understand liver function fully) is to understand consciousness. Some philosophers (and not necessarily non-materialistic ones) specialize in showing how/why consciousness is interestingly different from other empirical phenomena, and hence that this assumption may be false. But let's leave this metaphysical area on which I have no strong views one way or the other, and on which none of the empirical and logical issues under discussion here depend one way or the other.

md> 1. There is clearly a lot of symbol manipulation being carried out in living (wet, messy, analogue) creatures, e.g. DNA. So there is certainly no a priori reason to doubt that it goes on in the brain.

That cells do any pure syntax is not at all clear to me. The genetic "code" is certainly DESCRIBABLE as symbol-manipulation, but biochemists and embryologists keep reminding us that cellular processes are hardly just formal. The "symbols" are made out of real proteins, and their interactions are not simply "compositional," in the formal sense, but chemical and morphological. At best, cells do highly DEDICATED computation, in which the nonsyntactic constraints are at least as critical as the formal syntactic ones (see Ross Buck's contribution to this discussion). Now the interaction of analog and formal constraints in dedicated symbol systems may well yield some clues about how to ground symbols, but we do not yet know what those clues are.

md> 2. There is certainly a mystery about what it means that we possess "understanding," that we associate "meanings" with symbols. But I have seen no reason to believe (and please don't trot out some variant of the Chinese room) that meaning cannot be the result of symbolic manipulation, of operations on squiggles and squoggles. From a very theoretical and abstract point of view, one could even call on Tarski's demonstration that semantics can in fact be reduced to syntax.

Unfortunately, this is not an argument; one cannot answer Searle's objections by simply refusing to countenance them! And it's easy enough to reduce semantics to syntax; the trick is going the other way (without cheating by simply projecting the semantics, as we do onto a book we are reading, which clearly has no semantics of its own).

md> 3. Finally, I don't really believe that your TTT robot shopping at K-Mart would be more convincing than, say, a TT dialogue on the immortality of the soul. It is certainly attainable (or at least pretty close to being so) with today's technology to have a chess-playing computer provided with a video camera and arm, reacting to the actual moves of the physical pieces on a real physical chessboard with appropriate actions of the arm. Would anyone who argues that the computer knows nothing of chess and is "merely" manipulating squiggles suddenly give up the point on being confronted by such a demonstration?

The issue is not whether one would be more CONVINCING than the other. A good oracle might be the most convincing of all, but we don't simply want to be seduced by compelling interpretations (hermeneutics), do we? The TTT narrows the empirical degrees of freedom better than the TT because the claim to objectivity of all forms of Turing (performance) Testing rests on indistinguishability in performance capacities, and we all happen to have more performance capacities than the ones a pen-pal samples. Indeed (since all of this is hypothetical anyway), it may well be the case (and in fact I hypothesize that it is, and give reasons why in my writing on categorical perception) that for a system to be able to pass the TT in the first place, it would have to draw on its capacity to pass the TTT anyway -- it would have to be GROUNDED in it, in other words. (A mere TT-passer couldn't even tell whether it had a pencil in its pocket -- so how are we to imagine that it could know what a pencil was in the first place?)

Here's an analogy: It is clear, I take it, that a pre-Newtonian model that explained the interactions of the balls in two billiard games would be preferable to one that could only explain the interactions in one. Moreover, one could probably go on building ad hoc models for ever if all they needed to do was explain a finite number of billiard games. The laws of mechanics must explain ALL billiard games. By the same token, in the particular branch of reverse bioengineering where mind-modeling is situated (where there are no "laws" to be discovered, but just bioengineering principles), the model that explains ALL of our performance capacity (the TTT) is surely more convincing than the one that only explains some of it (the TT).

The very same is true of "toy" robots like the chess player you describe above. Toy models are as ad hoc and arbitrary as pre-Newtonian models of particular billiard games. A chess-playing computer demonstrates nothing, but I'm as ready to be convinced by a TTT-indistinguishable system as I am by you (and for the same reasons). You will reply that we only know one another as pen-pals, but I must remind you that my gullibility is not the issue. You could indeed say what is in your pocket in my presence, and countless other things. And if you instead turned out to be just like the Sparc I'm using to send you this message, I would be prepared to revise my judgment about whether you really had a mind, or really understood what I was saying, no matter how convincingly you traded symbols with me.

Stevan Harnad

----------------------------------

Date: Sun, 10 May 92 18:54:35 PDT From: sereno@cogsci.UCSD.EDU (Marty Sereno)

hi stevan

(1) I would like to contribute to a symposium, if it was reviewed, and could count as a reviewed publication (I'm getting close to tenure).

(2) I like the idea of an interactive discussion, but I agree that in its present form it is not fun to read straight through. Maybe there could be a set of position papers and then everyone has a (shorter) reply, in which they respond to any of the position papers that engage them. That way, everyone can take a crack at anyone they'd like, but there is more discipline for the benefit of the reader.

Re: not mailing out original posts. When you mailed out your response to my post, you didn't precede it with my post but instead mailed out two copies of your comments (which explains Brian Smith's comment).

Marty Sereno

----------------------

Date: Mon, 11 May 92 00:55:57 -0400 From: mclennan@cs.utk.edu (Bruce McLennan)

Stevan,

There are a couple of points that haven't been raised in the discussion so far. First, I think you are pinning too much on the difficulty of finding nonstandard interpretations of formal systems. The equivalent in formal logic of your criterion is a formal system being "categorical," which means that all its models (interpretations for which the axioms and inference rules are true) are isomorphic and, hence, essentially the same. Yet by 1920 Loewenheim and Skolem had shown that any consistent formal system with a countable number of formulas has a countable model. In particular, there is a countable model for formal axiomatic set theory, which is a remarkable result, since in set theory one can prove that the real numbers and many other sets are uncountable. Thus, no formal system can uniquely characterize the reals, even with respect to their cardinality; this is the Loewenheim-Skolem Paradox.

A corollary of the L-S Theorem shows that any consistent formal system (with a countable number of formulas) has models of every transfinite cardinality. This includes the Peano axioms, which thus do not uniquely characterize the integers in even so fundamental a way as their cardinality. Further, it is a fairly routine procedure to construct the nonstandard interpretations by which these results are proved.

Nonstandard interpretations are also routinely constructed for nontheoretical purposes. For example, computer scientists design nonstandard interpreters for programming languages. So-called "pseudo-interpretation" or "interpretation on a nonstandard domain" is used for purposes such as type checking, optimization and code generation. For example, if such a pseudo-interpreter sees "X + Y", instead of adding numbers X and Y, it may instead make sure the types X and Y are compatible with addition and return the type of the sum; in effect its nonstandard addition rules might be

integer + integer = integer, real + real = real, real + integer = real, etc.

You may underestimate people's ability to come up with (sensible, even useful) nonstandard interpretations; all it takes is a grasp of the "algebra" generated by the formal system.
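MacLennan's "pseudo-interpretation" example can be made concrete. The sketch below is my own illustration, not code from the discussion: it evaluates the same expression tree twice, once under the standard interpretation, where "+" adds numbers, and once under a nonstandard interpretation, where "+" operates on the TYPES of its operands, following the addition rules quoted above.

```python
# A minimal sketch (my own, not from the discussion) of interpretation
# on a nonstandard domain: the same expression tree is evaluated once
# over numbers (standard) and once over types (nonstandard).

def std_eval(expr):
    """Standard interpretation: "+" adds numbers."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr              # e.g. ("+", 1, 2.0)
    if op == "+":
        return std_eval(left) + std_eval(right)
    raise ValueError(f"unknown operator {op}")

def type_eval(expr):
    """Nonstandard interpretation: "+" combines types."""
    if isinstance(expr, int):
        return "integer"
    if isinstance(expr, float):
        return "real"
    op, left, right = expr
    t1, t2 = type_eval(left), type_eval(right)
    if op == "+":
        if t1 == t2:
            return t1                   # integer + integer = integer, etc.
        if {t1, t2} == {"integer", "real"}:
            return "real"               # real + integer = real
    raise TypeError(f"{t1} {op} {t2} is not well-typed")

expr = ("+", 1, ("+", 2.0, 3))
print(std_eval(expr))    # -> 6.0    (standard domain: numbers)
print(type_eval(expr))   # -> real   (nonstandard domain: types)
```

Both evaluators respect the same "algebra" generated by the formal system; only the domain of interpretation differs, which is exactly MacLennan's point.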

My second point is that the nature of computation can be illuminated by considering analog computation, because analog computation does away with discrete symbols, yet still has interpretable states obeying dynamical laws. Notice also that analog computation can be formal in exactly the same way as digital computation. An (abstract) analog program is just a set of differential equations; it can be implemented by a variety of physical devices, electronic, optical, fluidic, mechanical, etc. Indeed, it is its independence of material embodiment that is the basis for the "analogy" that gives analog computing its name. (There is, however, no generally agreed upon notion of analog computational universality, but that will come in time.)

Analog computation sheds some light on the issue of interpretability as a criterion for computerhood. In analog computation we solve a problem, defined by a given set of differential equations, by harnessing a physical process obeying the same differential equations. In this sense, a physical device is an analog computer to the extent that we choose and intend to interpret its behavior as informing us about some other system (real or imaginary) obeying the same formal rules. To take an extreme example, we could use the planets as an analog computer, if we needed to integrate the same functions that happen to define their motion, and had no better way to do so. The peculiar thing about this analog computer is not that it's special-purpose -- so are many other analog and digital computers -- but that it's provided ready-made by nature.
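To make the idea of an abstract "analog program" concrete: the single differential equation x'' = -x is obeyed by a mass on a spring, an LC circuit, and a pendulum at small angles, and a digital machine can also step through it. The sketch below is my own illustration (semi-implicit Euler integration is my choice, not anything proposed in the discussion); any physical system obeying the same equation would "compute" the same trajectory.

```python
import math

# Sketch (my own illustration): the abstract "analog program" x'' = -x,
# stepped through numerically by semi-implicit Euler integration. A
# spring, an LC circuit, or this loop all instantiate the same equation.

def integrate(x0, v0, dt=1e-4, t_end=math.pi):
    """Integrate x'' = -x from state (x0, v0) up to time t_end."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v += -x * dt      # v' = -x
        x += v * dt       # x' = v
    return x, v

# With x(0)=1, v(0)=0 the exact solution is x(t) = cos(t),
# so at t = pi the state should be close to x = -1, v = 0.
x, v = integrate(1.0, 0.0)
print(x, v)
```

The "program" here is the pair of update rules, which mirror the differential equations; the physical device (or digital approximation) that carries them out is interchangeable, which is the basis of the "analogy" in analog computing.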

So where does this leave us with regard to computation? Let me suggest: Computation is the instantiation of an abstract process in a physical device to the end that we may exploit or better understand that process. And what are computers? In addition to the things that are explicitly marketed as computers, there are many things that may be used as computers in an appropriate context of need and availability. They are REAL computers because they REALLY instantiate the relevant abstract process (N.B., not just any process) and so satisfy our need. Such a pragmatic dependence on context should neither bother nor surprise us. After all, in addition to the things marketed as tables, many other things can be used as tables, and they are no less tables for the fact that such use was unintended when they were made. Such "using as" is not a wild card, however; some things cannot be used as tables, and some things cannot be used to compute the trajectory of a missile.

Where does this leave the computation/cognition question? In brief: Grounding vs. formality is still relevant. But I suggest we drop computers and computing. Computers are tools and computing is a practice, and so both are dependent on a background of human goals and needs. Therefore a hypothesis such as "the mind is a computer" is not amenable to scientific resolution. (How would one test scientifically "the eye is a camera"? It's kind of like a camera, but does anyone use it to take snapshots? You could if you had to!) In effect it's a category mistake. A better strategy is to formulate the hypothesis in terms of the notion of instantiated formal systems, which is more susceptible to precise definition.

Bruce MacLennan

----------------------------------------------------------

From: Stevan Harnad

(1) My cryptographic criterion for computerhood was not based on the uniqueness of the standard interpretation of a symbol system or the inaccessibility of nonstandard interpretations, given the standard interpretation. It was based on the relative inaccessibility (NP-Completeness?) of ANY interpretation at all, given just the symbols themselves (which in and of themselves look just like random strings of squiggles and squoggles).

(2) If all dynamical systems that instantiate differential equations are computers, then everything is a computer (though, as you correctly point out, everything may still not be EVERY computer, because of (1)). Dubbing all the laws of physics computational ones is duly ecumenical, but I am afraid that this loses just about all the special properties of computation that made it attractive (to Pylyshyn (1984), for example) as a candidate for capturing what it is that is special about cognition and distinguishes it from other physical processes.

(3) Searle's Chinese Room Argument and my Symbol Grounding Problem apply only to discrete symbolic computation. Searle could not implement analog computation (not even transduction) as he can symbolic computation, so his Argument would be moot against analog computation. A grounded TTT-passing robot (like a human being and even a brain) is of course an analog system, describable by a set of differential equations, but nothing of consequence hangs on this level of generality (except possibly dualism).

Stevan Harnad

Pylyshyn, Z. (1984) Computation and Cognition. Cambridge MA: MIT/Bradford

----------------------------------------------------

Date: Mon, 11 May 92 09:52:21 PDT From: Dr Michael G Dyer Subject: real fire, fake fire; real mind, fake mind

Harnad states:


sh> Fortunately, my reply to Mike can be very short: Real neurons don't just implement computations, and symbolically simulated neurons are not neurons. Set up a continuum from a real furnace heating (or a real plane flying, or a real planetary system, moving) to a computational simulation of the same and tell me where the real heating (flying, moving) starts/stops. It's at the same point (namely, the stepping-off point from the analog world to its symbolic simulation) that a real TTT-passing robot (with its real robot-brain) and its computationally simulated counterpart part paths insofar as really having a mind is concerned. -- Stevan Harnad

Fortunately, my reply to Stevan can be nearly as short:

I grant that a simulation of fire on a computer will NOT produce the BEHAVIOR (i.e. results) of burning something up (e.g. ashes). However, the "simulation" of the neurons WILL produce the BEHAVIOR of Mind (i.e. the passing of the TT, and, in the case of having the Turing Machine control a robot, the passing of the TTT). In recognizing fire, we rely on observing the behavior of fire (i.e. we notice the ashes produced, we can take measurements of the heat with an infrared sensor, etc.). In the case of recognizing Mind, we also observe behavior and "take measurements" (e.g. does the entity plan? does it have humor? can it talk about hypothetical situations? etc.)

Just as in quantum physics what you can measure ultimately determines what you can talk about, so it is for Mind. I accept that the simulated fire is not the same as the actual fire, since the behavior (effects) of fire inside and outside the computer are radically different. One can burn wood and the other can't. But if the TT (or TTT) is used as a measurement system for Mind, then we seem to get the same measurements of Mind in either case.

Michael Dyer

--------------------------------------------------------------------

From: Stevan Harnad

Mike has, of course, herewith stated his commitment to "barefoot verificationism": What there is is what you can measure, and what you can't measure, isn't. There are problems with that position (conflating, as it does, ontic and epistemic matters), but never mind; his argument can be refuted even on verificationist grounds:

"Thinking," being unobservable, is equivocal, because we all know it goes on, but it is verifiable only in the case of one's own thinking. The robot (or person) passing the TTT is, like a furnace heating, an analog system. That's the only way it can actually exhibit the "behavior" in question (TTT-interactions with the world in one case, reducing objects to ashes in the other). It is from this behavioral indistinguishability that we justifiably conclude that the system as a whole is really thinking or heating, respectively.

But Mike keeps thinking in terms of a pair of modules: The computer module that does the real work (which he equates with the brain), and the robot "peripherals" that it controls. I find this partition as unlikely as the corresponding partition of the furnace into a computer plus peripherals, but never mind. The candidate in both cases is the WHOLE robot and the WHOLE furnace. They are what are doing the thinking and the heating, respectively, in virtue of being behaviorally indistinguishable from the real thing. But detach the peripherals, and you lose the thinking in the one as surely as you lose the heating in the other, because neither can pass the behavioral test any more. (This is also why the symbols-only TT is equivocal, whereas the real-world TTT is not.)

Trying to carry this whole thing inward by equating the brain (likewise an analog system) with a computer simply leads to an infinite regress on the very same argument (real transduction, real energy exchange, real protein synthesis, etc. standing in for heating and thinking in each case).

Stevan Harnad

--------------------------------------------------------------

Date: Mon, 11 May 92 15:49:16 PDT From: Dr Michael G Dyer Subject: what's a computation, a simulation, and reality

Pat Hayes states:


>ph> This Searlean thesis that everything is a
>ph> computer is so damn silly that I take it simply as absurd. I don't feel
>ph> any need to take it seriously since I have never seen a careful
>ph> argument for it, but even if someone produces one, that will just amount
>ph> to a reductio tollens disproof of one of its own assumptions.

I don't quite agree. I believe that the notion of computation is strong enough to argue that the entire universe is a computation, but then we have to be careful to distinguish levels of reality. This argument may actually be useful: (a) in clarifying potential confusions in discussions on whether or not (say) flying is a computation, and (b) in providing a somewhat different perspective on the grounding and "other minds" problems.

Here's a way to make flying, burning, (i.e. everything) a computation:

Imagine that our entire universe (with its reality Ri, produced by its physics) happens to be simply a (say, holographic-style, 3-D) display being monitored by some entity, Ei+1, residing in another reality Ri+1, with its own physics. Entity Ei+1 has constructed something like a computer, which operates by conforming to the physics of reality Ri+1. To entity Ei+1, everything that happens in Ri (including our Ri-level thought processes, fires, flying planes, etc.) is a simulation.

It is an interesting fact that there is NOTHING that we (Ei) can do (no measurements that we can take) that will reveal to us whether or not our reality Ri is a "real" reality or simply a vast "simulation" within some other reality Ri+1. (E.g. even direct messages from Ei+1 to us will not constitute convincing evidence! Why not is left as an exercise to the reader. :-)

The same happens to be true also for entity Ei+1 (who may actually be a simulation from the point of view of some higher entity Ei+2 residing within some reality Ri+2, where Ri+1 is just a simulation to Ei+2).

Likewise, IF we could ever create a sufficiently complex simulated physics Ri-1 in one of our own computers, along with some artificially intelligent scientist entity Ei-1 residing within that simulated physics, THEN there is no experiment that Ei-1 could make to determine whether or not Ri-1 is "real" or "simulated".

So, the answer to whether flying is a computation or not DEPENDS on whether one is talking about a single level of reality or multiple realities (where lower realities are simulations with respect to higher ones). Since the default assumption in any discussion is a single reality, flying is definitely NOT a computation and a simulation of flying is not the same as actually flying. However, this assumption no longer holds when we discuss the differences between simulation and reality.

The grounding discussion also depends on which reality we are talking about. Consider any AI/connectionist researcher who is running (say, Genetic Algorithm) experiments with some simulated physics and has created a (simple) reality Ri-1 along with one or more creatures (with sensors, effectors). Those creatures can then be said to be "grounded" IN THAT REALITY Ri-1.

I believe that, given a sufficiently complex set of sensors/effectors and simulated brain structure, the simulated creature could obtain a Mind in a simulated reality Ri-1 and would also be "grounded" (i.e. in that reality) -- without needing Harnad's physical transducers, so MY argument against the need for physical transducers requires keeping the 2 different realities straight (i.e. separate) and then comparing behaviors within each.

The "other minds" problem is also clarified by keeping levels of reality straight. The question here is: Can higher entity Ei+1 determine whether or not lower entities Ei have "minds" or not?

At some given level Ri, let us assume that an entity Ei passes the TTT test (i.e. within reality Ri). So what does an entity Ei+1 (who can observe and completely control the physics of Ri) think? If he is Searle or Harnad, he thinks that the Ei entities do NOT have minds (i.e. Searle rejects their minds because they are simulated; Harnad rejects their minds because they are not grounded in Harnad's own reality).

My own point of view is that any entities of sufficient complexity to pass the TTT test WOULD have minds, since (a) they are grounded in their own reality, (b) they pass the TTT in their own reality, and (c) there is NO WAY TO TELL (for either THEM or for US) whether or not a given reality R is actually someone else's simulation.

There ARE consequences to this position. For instance, the moral consequences are that one could construct a simulation (e.g. of neurons) that is so accurate and complex that one has to worry about whether or not one is causing it the experience of pain.

Anyone who believes it's possible to have Mind reside within a brain in a vat is basically agreeing with my position, since in this thought experiment the sensory information to the brain is being maintained (by giant computers) so that that Mind thinks it is (say) standing by a river. If the brain generates output to control its (non-existing) effectors, then giant computers calculate how the sensory input must be altered, so that this Mind thinks that it has moved within that (simulated) environment. So one has basically created a reality (for that brain/mind-in-a-vat) that is one level lower than our level of reality. If we replace the real brain with an isomorphic computer simulation of that brain (pick your own level of granularity) then we have to worry about both the real brain-in-vat and the computer simulation experiencing pain.

If we imagine a continuum of realities ... Ri-1 Ri Ri+1 ... then Strong AI proponents probably accept intentionality in ANY reality with enough complexity to pass the Turing Test (or TTT if you need grounding). If you're Searle or Harnad then you probably don't believe that a system has intentionality if it's at a level of reality below the one in which they (and the rest of us) reside.

So, what's a computation? It is the manipulation of representations by transition functions within a reality Ri. These manipulations can create a lower-level reality Ri-1 (normally called a "simulation"). With respect to a higher reality, we (and our entire universe) are also a computation. If WE are a simulation to an entity Ei+1, then does that entity think that WE feel pain? If he is a Searlean or Harnadian then he does NOT. However, WE think WE DO feel pain, even if we happen to be a simulation (from Ei+1's point of view). If Ei+1 does NOT accept intentional behavior as the acid test for intentionality, then there is probably nothing that we could ever do to convince Ei+1 that we feel pain, no matter how much we behave as though we do. Let's keep this in mind when our own simulated creatures get smart enough to pass the TTT (in a simulated world) and behave as if THEY have intentionality, feel pain, etc.

-- Michael Dyer

---------------------------------

Date: Mon, 11 May 92 22:50:47 PDT From: Dr Michael G Dyer Subject: no continuum from mental to non-mental???

Stevan Harnad states:


sh> There either IS somebody home in there, experiencing experiences, thinking thoughts, or NOT. And if not, then attributing a mind to it is simply FALSE, whether or not it is the "best description" (see Oded Maler's point about things vs. descriptions).

sh> Nor is there a continuum from the mental to the nonmental (as there perhaps is from the living to the nonliving). There may be higher and lower alertness levels, there may be broader and narrower experiential repertoires or capacities, but the real issue is whether there is anybody home AT ALL, experiencing anything whatever, and that does indeed represent a "sharp division" -- though not necessarily between the biological and the nonbiological.

Sorry, Stevan, but your statements seem quite unsupportable to me! There is every indication that "being at home" is no more a unitary entity than is life or intelligence. Making consciousness be some kind of "UNITARY beastie" treats it a lot like the now-abandoned idea of a "life force".

In fact, there are AI systems that have self-reference (i.e. access information about the system's own attributes). There are robotic systems that have a form of real-time sensory updating (primitive "awareness"). There are systems that even generate a stream of "thoughts" and examine hypothetical situations and choose among alternative pasts and generate imaginary futures (e.g. the PhD thesis of a student of mine a few years ago, published as a book: Mueller, Daydreaming in Humans and Machines, Ablex Publ, 1990). There are numerous learning systems, sensory systems, adaptive systems, etc. All of these systems exhibit isolated aspects of consciousness, and there is every reason to believe that someday a sufficient number of them will be put together and we will be forced to treat it as though it is conscious.

Then, on the human side there are humans with various agnosias, short-term memory deficits, loss of all episodic memories, right- or left-side neglect, Alzheimer's syndrome, the scattered thought processes of schizophrenia, blindsight, etc. These patients exhibit responses that also make one wonder (at least on many occasions) if they are "at home".

So there is every indication that consciousness is a folk description for behaviors arising from extremely complex interactions of very complex subsystems. There are probably a VERY great number of variant forms of consciousness, most of them quite foreign to our own introspective experiences of states of mind. Then we have to decide if "anyone is at home" (and to what extent) in gorillas, in very young children, in our pet dog, in a drugged-out person, etc.

My own introspection indicates to me that I have numerous states of mind and most of the time it appears that "nobody is home" (i.e. many automatic, processes below the conscious level). E.g. there are times I am "deep in thought" and it's not clear to me that I was even aware of that fact (until after the fact). The only time for certain I'm aware of my awareness is probably when I'm thinking exactly about my awareness.

Michael Dyer

-------------------------------------------

From: Stevan Harnad

Michael G Dyer writes:

md> There is every indication that "being at home" is no more a unitary entity than is life or intelligence... In fact, there are AI systems that have self-reference... on the human side there are humans with various agnosias... My own introspection indicates to me that I have numerous states of mind and most of the time it appears that "nobody is home"...

Subjective experience, no matter how fragmented or delirious, either is experienced or is not, that's an all-or-none matter, and that's what I mean by someone's being home. Your AI symbol systems, be they ever so interpretable AS IF they had someone home, no more have someone home than symbolic fires, be they ever so interpretable as burning, burn. The existence of the various disorders of consciousness in the real human brain is no more a validation of symbol systems that are interpretable as if they had disorders of consciousness than the existence of normal consciousness (as they occur in your head) is a validation of symbol systems that are interpretable as if they were conscious simpliciter. Not in THIS reality, anyway. (For a verificationist, you seem to be awfully profligate with realities, by the way, but such seems to be the allure of the hermeneutic hall of mirrors!)

Stevan Harnad

-----------------------------------------

Date: Wed, 13 May 92 18:45:50 EDT From: "Stevan Harnad" Subject: Re: Publishing the "What is Computation" Symposium

Below are responses about the question of publishing the "What is Computation" Symposium from 8 more contributors out of what is now a total of 25 contributors. Of the 14 votes cast so far:

Publication: For: 13 // Against: 1

Interactive Symposium (IS) vs. Position Papers (PP): Either or Combination: 8 - Prefer IS: 3 - Prefer PP: 2

Not yet heard from (11):

(15) Ross Buck (16) John Carroll (17) Bruce Dambrosio (18) Ronald Chrisley (19) Gary Hatfield (20) Pat Hayes (21) Joe Lammens (22) Bruce MacLennan (23) Todd Moody (24) John Searle (25) Tim Smithers

----------------------------------------------------

(7) Date: Sun, 10 May 92 22:10:33 -0400 From: davism@turing.cs.nyu.edu (Martin Davis)

I don't mind my very brief contributions appearing, but I can't really undertake producing an article. I have no relevant opinion on the more global issue. Martin Davis

----------------------------------------------------

(8) Date: Mon, 11 May 1992 10:01:46 +0200 From: Oded.Maler@irisa.fr (Oded Maler)

1) I wish to contribute. 2) About the rest I don't have a strong opinion. My contribution so far was very marginal, and a position paper with deadline can be a good motivation.

One way or another, this is a very interesting experiment in scientific social dynamics. Best regards --Oded Maler

----------------------------------------------------

(9) From: Robert Kentridge Date: Mon, 11 May 92 09:12:17 BST

I'd like to contribute to a published version of the "what is computation" discussion. I'm less sure what its precise form should be. I agree with you that the interactive nature of the discussion is what has made it particularly interesting; however, it has also led to (inevitable) repetition. I suppose some smart editing is called for, perhaps together with some re-writing by contributors? So: 1) (publish) yes 2a) (publish interactive symposium) yes

----------------------------------------------------

(10) Date: Mon, 11 May 92 12:58:36 -0400 From: yee@envy.cs.umass.edu (Richard Yee)

(1) Yes, I am interested in contributing to a publication. (I am in the process of formulating responses to your comments).

(2) With regard to format, I find both the arguments for (2a) [interactive symposium] and (2b) [separate position papers] well-taken. That is, I very much like the interactive nature of the exchanges, but I also think that the discussion should be "distilled" for the benefit of readers. Thus if possible, I would prefer some type of compromise, perhaps along the lines that Marty Sereno suggests: clear position papers followed by a few rounds of concise replies and counter-replies, until little further progress can be made.

(3) I also offer the following observation/suggestion. There seems to be a tendency to selectively "pick at" points in others' arguments, as opposed to addressing their main thrust. Most arguments are based on reasonably sound intuitions, and we should try to stay focussed on these underlying motivations---not just the particular forms by which they are presented. Unless one demonstrates a good appreciation of the basis of another's argument, any rebuttal is likely to fall on deaf ears, or even largely miss the mark. Therefore, it might also be useful to have forms, e.g., "counter-position papers," that convince the other side that their arguments have been understood.

----------------------------------------------------

(11) Date: Mon, 11 May 1992 13:20:43 -0400 (EDT) From: Franklin Boyle

1. Yes, I would like to contribute, but not for about another week since I'm trying to get the M&M paper I mentioned out the door and I'm just finishing up a camera-ready copy of a Cog. Sci. Conf. paper (to be presented as a poster) on a related topic.

2. I also like the style of the interactive symposium, but I think I might agree with Brian Smith that the problem is not getting enough substance per page (of course, in this sort of exchange, the editor is *very* important in that regard).

Perhaps a set of short position papers followed by this kind of discussion, allowing it to take up the entire issue of M&M, which would enable you to get the 50 or so pages of the discussion you and Jim Fetzer discussed, plus formal papers.

So, my recommendation is a compromise between the two choices. Now, who do you get to write the position papers? Perhaps have folks that are interested submit an abstract and then you decide what the various positions are, and choose from the submissions.

----------------------------------------------------

(12) Date: Mon, 11 May 92 16:57:03 BST From: Jeff Dalton

I would not oppose publication (though it may not matter either way, since my contribution was minimal), but I do not think publication on paper is the best approach. Instead, it could be "published" electronically, simply by making it available. I think that is a better way to preserve "as much as possible of the real-time interactive flavor of this remarkable new medium of communication", as Stevan Harnad so aptly put it.

----------------------------------------------------

(13) Date: Mon, 11 May 92 10:14:56 PDT From: Dr Michael G Dyer

I have not really contributed to the "What is Computation" part of the discussion (though see a later message).

But IF I end up included, then I think a compromise position is best:

First, everyone writes a short position paper (i.e., static). Then edited segments of the discussion are included (and THAT is QUITE an editing job!).

For a book on connectionism (to which I contributed a chapter) the editors tried to include a discussion (that had been taped at the related workshop).

Everyone ended up hating the writing style (it was worse in that case, since spoken transcripts are much worse than written e-mail postings). The editors finally gave up and the discussion dialogs were not included.

I think posted writings are easier to edit but what one publishes and what one posts in a free-wheeling discussion are quite different.

I think a bit of both makes for a nice format (whether or not I end up being included). That's my advice...

----------------------------------------------------

(14) Date: Wed, 13 May 1992 11:25:56 -0400 From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)

I vote against publishing the symposium on "What is Computation?" My main reason is that the symposium has strayed far from the original topic. Initially (I gather) John Searle tried to claim that any physical system could be seen as a computer (or maybe, as *any* computer). Searle did not see fit to actually argue this point with the cog-sci rabble, which is too bad, because the rabble refuted it without too much trouble. But then the whole thing drifted into yet another discussion of the Chinese Room, symbol groundedness, the Turing Test, other minds, etc. Give me a break!

----------------------------------------------------

----------------------------------------------------

From: jfetzer@ub.d.umn.edu (james fetzer) Date: Mon, 11 May 92 11:37:35 CDT

In response to the inquiry about review, this exchange will be refereed and will count as a refereed publication. That is the standing policy of the journal, which will apply in this case as well.

I must admit that I sympathize with Brian Smith's concerns. I also think that the idea of having position papers as a focus could work out rather well. If you use the past discussion as prologue to further debate (as background to the serious stuff you and the others are now in a position to compose), that might be the best way to proceed. If you each had position papers, the others could be invited to comment on them, and the authors to respond. What do you think of proceeding this way? That is more or less what I meant when I said that I did not have in mind one long extended exchange at the end of my original invitation. It would also provide a format that makes everyone appear as equals in debate. Let me know what you think now.

------------------------------------------------------

[I prefer the interactive format, revised and edited so as to balance and integrate the contributions, but I am but one vote out of 25 and will of course go along with any collective decision we reach. -- Stevan Harnad]

------------------------------------------------------

From: Aaron Sloman Date: Sun, 10 May 92 20:07:20 BST

I am not sure this discussion has any value since it is clear that people are just talking past each other all the time. Although I don't agree with much of what Wittgenstein wrote, the view attributed to him by John Wisdom that sometimes you don't argue with people but have to give them a sort of therapy, seems to me to be right.

In particular, when I hear people say this sort of thing:


>sh> for there also happens to be a FACT of the matter: There either IS
>sh> somebody home in there, experiencing experiences, thinking thoughts, or
>sh> NOT. And if not, then attributing a mind to it is simply FALSE

It reminds me of disputes over questions like:

1. Is the point of space I pointed at five minutes ago the one I am pointing at now or not?

2. Is everything in the universe moving steadily at three miles per hour in a north-easterly direction, the motion being undetectable because all measuring devices, land-marks, etc. are all moving the same way?

3. Is Godel's formula G(F) "really" true, even though it is not provable in F?

In these and lots of other cases people delude themselves into thinking they are asking questions that have a sufficiently precise meaning for there to be true or false answers (a "fact of the matter"), and the delusion is based on more or less deep analogies with other questions for which there ARE true or false answers. (E.g. is my key where I put it? Is the train moving? Is this formula true in that model? etc.)

But you cannot easily convince such people that they are deluded into talking nonsense since the delusion of understanding what they say is *VERY* compelling indeed (partly because there really is a certain kind of understanding, e.g. enough to translate the question into another language etc.).

And in the case of questions like "is there somebody there..." the compulsion is based in part on the delusion that the speaker knows what he means because he can give himself a private ostensive definition by somehow directing his attention inwards ("there's somebody here alright, so that proves that `is there somebody at home?' is a perfectly meaningful question" -- in my youth I too fell into that trap!).

This is about as reasonable as Newton pointing at a bit of space and saying `Well there is this bit of space here and there was one I pointed at five minutes ago, so the two really must either be the same bit of space or not'. Except that the criteria for identity are not defined by a state of attending. Similarly just because you can (up to a point, subject to the limitations of biologically useful internal self-monitoring processes) attend to your internal states, it doesn't mean that you have any real idea what you are attending to.

Anyhow, none of this is by way of an argument. It takes years of philosophical therapy, face to face, to cure people of these semantic delusions. So I predict that the futile email discussions will continue indefinitely, and after a polite interval (just long enough to let people abuse me in return) I shall ask to have my name removed from the distribution list.

Margaret Boden organised a panel on "What is computation?" at ECAI-88 in Munich, and some of the panelists had short papers in the proceedings (starting page 724), in order: Margaret Boden (Introduction), Andy Clark (Computation, Connectionism, Content), Aaron Sloman (What isn't computation?), Sten-Ake Tarnlund (Computations as inferences). The other panelist was Jorg Siekmann: he didn't get his paper written in time.

As for what "computation" is: that's a thoroughly ambiguous term.

Sometimes it refers to the subject matter of the mathematical theory of computation, which merely studies the properties of abstract structures; and in that sense a computation is a mere formal object, and even a Godel number could be a computation. A collection of leaves blown randomly in the wind could be a computation if they instantiated some appropriate pattern. Theories about complexity and computability apply equally well to computations of that sort as to what we normally call computations. Even a leafy computation instantiating a truth-table check for validity of an inference with N variables must include 2**N cases (if the inference is valid, that is.)
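The 2**N point is easy to make concrete: a brute-force truth-table check of validity must enumerate every assignment of the N variables. A minimal sketch in Python (the `is_valid` helper and the example formulas are illustrative additions, not anything from the discussion):

```python
from itertools import product

def is_valid(formula, variables):
    """A propositional formula is valid iff it holds under
    all 2**N truth assignments to its N variables."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Modus ponens, ((p -> q) & p) -> q, over N = 2 variables:
# establishing validity requires checking all 2**2 = 4 assignments.
modus_ponens = lambda v: (not ((not v["p"] or v["q"]) and v["p"])) or v["q"]
print(is_valid(modus_ponens, ["p", "q"]))  # True
```

Whether the enumeration is done by a silicon machine or by wind-blown leaves that happen to instantiate the right pattern is exactly the question at issue; the formal theory cares only about the 2**N structure.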

The formal concept of computation, e.g. the one to which mathematical limit theorems and complexity results apply, studies only abstract structures, and does not concern itself with what causes such a structure to exist, whether it serves any purpose, or even whether there is any physical instantiation at all. (There are infinitely many computations that have never had and never will have physical instantiation: they still have mathematical properties.)

The main thing in Searley arguments that's easily acceptable is that just being a computation in THIS sense, cannot SUFFICE for having mental processes (as opposed to modelling mental processes.) It wasn't worth making a fuss about THAT conclusion. What more is required for mentality is a long and complex story, over which disputes will be endless because of semantic delusions of the kind alluded to above.

Sometimes "computation" refers to a process people and machines engage in, and sometimes to the product. (The process/product ambiguity is very common, e.g. "decision", "plan", "choice".) And when it refers to the process or to the product there's often an ambiguity as to whether it refers to the actual concrete instance (of process or product), or to some "type" that is instantiated in that instance. But even the type/token distinction can be made to fragment in the face of carefully chosen examples. (Are there two types or only one type of word instantiated by "The", "THE", "the"? What about the German word for "the"?) Ambiguities as to level of abstraction bedevil any attempt to say in general what a computation is. Can a Vax and a SPARCstation ever do the same computation, given that they have different machine instructions?

Some people, not surprisingly, use "computation" to refer to anything a computer does as a result of being programmed. (E.g. heating up the room wouldn't count.) This is a shift of meaning: just as defining "water" in terms of the chemical constitution changes the term from how it was understood before anything was known about oxygen, hydrogen, valence, etc. (Of course philosophers can argue endlessly about whether it's a "Real" change of meaning or whether the "essence" of the meaning remains the same: another silly argument.)

Some people require a computational process to be the result of an intelligent agent's purposes (like Andy Clark, who wouldn't accept apples growing on trees as computers just because they can do something that in principle someone could use as a computation); others don't. For the latter, bits of tree compute where roots and branches should grow, the sunflower computes the direction of the sun, and a soap-film stretched over a wireframe computes the minimum-stress shape defined by the frame, whether or not that was intended by an architect or engineer in order to solve a design problem. If you think computation requires rule-governed behaviour, and if you are old enough to remember slide-rules, ask yourself whether a slide rule computes products of numbers. Do two sticks lying end to end compute the sum of two lengths? (Computing the sum of two lengths is subtly different from computing the sum of two numbers, incidentally.)

Something people did was originally described as "computing," e.g. finding square roots, till they found ways of getting machines to do it. Of course you can argue till you are blue in the face whether machines "really" do (essentially) what those poor people did, like arguing whether it's the same point of space or not. But it's a silly argument. What's interesting is how the two processes are similar and how they differ, and what difference the differences make!

Just about all discussions over what the "essential" properties of X are, whether X is computation, understanding, intentionality, intelligence, life, "being there", or whatever are silly if they assume there's a definitive answer. Usually there are lots of interestingly different cases, and each individual's concept of X is indeterminate or even partly incoherent, in deep ways that can be unearthed only by using subtle probes (Does "it's noon at place P" mean something referring to the elevation of the Sun above the horizon at P or to where P is above the earth's surface? Consider a place P on the surface of the moon. Consider a place P out in space, with no horizon? Consider a place P on a distant planet with its own sun?)

So when the fringe case turns up there's often no fact of the matter whether the case really (or "Really") is an instance of X. (There are different kinds of fringe cases: fuzzy boundary cases are not the same as cases where criteria conflict, as in the noon example. Cases where the normal criteria can't be applied at all, are different again.)

Unlike "is this *Really* an X?", there are factual questions here that are far more interesting and can be discussed productively, without mud-slinging. The questions worth asking are: How are these different cases alike, and how do they differ? Do we need to extend our conceptual apparatus (and vocabulary) to characterise these similarities and differences usefully? And if so, what are the conceptual options and what are the trade-offs?

Physicists don't waste their time (nowadays) arguing over whether an isotope of X is Really X. They extended their theory to cope with the discovered variety in the world.

Some of these points are elaborated in a (long) review of Penrose's The Emperor's New Mind, which will eventually appear in the AI journal.

Let's get on with the real work of analysing all the interesting cases.

What exactly are the similarities and differences between the kinds of behaving systems that can be implemented using different kinds of stuff and different kinds of architectures, techniques, etc.? What kind of conceptual (r)evolution is needed before we can characterise the variety in a fruitful way? Is there something like a "periodic table" of designs waiting to be discovered to transform our ideas of kinds of behaving systems, as the table of elements transformed our ideas of kinds of stuff (a process that still continues)?

As for symbol grounding, nothing I've read about it has made me change my mind about what I wrote in IJCAI-85 and ECAI-86 about whether machines can understand the structures they manipulate. Too much of the debate is based on what people WANT to believe, instead of careful analysis of cases.

Enough for now. I've a huge backlog of urgent unfinished tasks!

Aaron Sloman, School of Computer Science, The University of Birmingham, B15 2TT, England EMAIL A.Sloman@cs.bham.ac.uk Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281

----------------------------------------------------

From: Stevan Harnad

Aaron Sloman feels there is an important analogy between certain misconceptions we have about the mind and other misconceptions we have had about other things. That may very well be true -- or it may be false. Analogies certainly won't settle this particular case (as Nagel 1986, for example, has argued).

Stevan Harnad

Nagel, T. (1986) The view from nowhere. New York: Oxford University Press.

---------------------------------------------------

Date: Mon, 11 May 92 20:06:14 BST From: Jeff Dalton

One thing I've had some difficulty understanding in this discussion is Pat Hayes's claim that when a human is following the rules that constitute a program (eg, Searle in his Chinese Room) then computation is not taking place.

It seems reasonably clear at first. The human is clearly not compelled, in the way that something like a sun4 would be, to follow the instructions. But when we start looking at cases, the distinction is hard to maintain. The way to maintain it that I can see ends up making it an argument against AI, which I assume was not PH's intention.

(Some (maybe all) of this has been discussed before, I know, but I didn't come out of the earlier discussions (that I saw) with the degree of understanding I would like.)

Anyway, we'll start by comparing the following two paragraphs:


>ph> From: Pat Hayes (hayes@cs.stanford.edu)
>ph> Date: Tue, 28 Apr 92 18:04:15 MDT


>ph> No, thats exactly where I disagree. A human running consciously through
>ph> rules, no matter how 'mindlessly', is not a computer implementing a
>ph> program. They differ profoundly, not least for practical purposes. For
>ph> example, you would need to work very hard on keeping a two-year-old's
>ph> attention on such a task, but the issue of maintaining attention is not
>ph> even coherent for a computer.

and


>ph> From: Pat Hayes
>ph> Date: Sun, 19 Apr 92 15:08:06 MDT


>ph> [...] Searle talks about the
>ph> distinction between a model and the real thing, but the moral of the
>ph> classical work on universality (and of CS practice - not just in
>ph> Silicon Valley, by the way!) is exactly that a computational simulation
>ph> of a computation IS a computation. Thus, a LISP interpreter running
>ph> LISP really is running LISP: it's no less really computation than if one
>ph> had hardware devoted to the task.

Now, we can certainly imagine a computer running a Lisp interpreter that works as follows: the computer has a listing of the program in front of it, a camera for reading the listing, and hands for turning the pages. Presumably this is still computation.
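The claim at issue, that a computational simulation of a computation IS a computation, can be illustrated with a toy interpreter: whatever steps through the rules really does perform the computation the rules describe. A minimal, hypothetical sketch in Python (the `evaluate` function and its prefix-list expression format are my own inventions, purely for illustration):

```python
def evaluate(expr, env):
    """Evaluate a tiny Lisp-style prefix expression, e.g. ["+", 3340, 2786].
    The host stepping through these rules is not merely simulating
    the arithmetic; it actually computes the result."""
    if isinstance(expr, (int, float)):
        return expr                      # a literal number
    if isinstance(expr, str):
        return env[expr]                 # a variable lookup
    op, *args = expr                     # an application
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[op](*(evaluate(a, env) for a in args))

# e.g. the sum 3340 + 2786, run through the interpreter:
print(evaluate(["+", 3340, 2786], {}))  # 6126
```

The question below is whether anything changes when the thing running `evaluate` is itself a program, a program running under an OS, or a human.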

Now have the computer run an operating system that allows other programs to share the processor with the Lisp interpreter, and let one of the other programs be one that uses the camera to look for moving objects. Presumably this is still computation w.r.t. the Lisp program, but now there is, I think, a coherent issue of maintaining attention.

Clearly the computer has no choice but to obey whatever program happens to be in control at the time, at least until an interrupt comes along and causes it to switch to a different program (usually the OS). But the same is true of humans: they have to obey whatever program is implemented by their brain (viewed at a suitably abstract, functional, level). Or at least they do if we can legitimately view brains in that way. (And if we can't, if humans get intentionality, understanding, consciousness, etc, in a way that cannot be accomplished by running programs, then what are the prospects for AI?)

So if there's still a distinction between humans and computers, it has to be at a different point.

Ok, so let's extend the example and see what happens. We can have our computer running whatever program we like. So let's have it run a program that we think will give it intentionality, understanding, whatever we take the key issue to be. And let's have the computer, running that program, interpret the Lisp program.

Is this still computation w.r.t. the Lisp program? If it is, then there must be something about the way a human would "run" the same program that cannot be captured by running AI programs. (Because the human case supposedly _isn't_ computation.) I don't know. Perhaps it's free will. So much then for artificial free will.

But if it isn't computation w.r.t. the Lisp program, why not? The computer is just as much in the control of this AI program as it was in the control of the OS before. Sure, it might stop paying attention to the Lisp program and start watching the people walk about the room -- but it might have done that before too. How can we say these cases are fundamentally different? In both cases, what happens is that after a while, due to some below-the-scenes processing, the computer stops looking at the Lisp and starts looking at the people.

(Interrupts were mentioned in the OS case, but all that means is that the below-the-scenes processing gets a chance to run. We can see the whole system of programs (OS + the others) as a single program if we want, or even reimplement it that way. It would still, presumably, be running the Lisp program.)

In short, if interpreters count as computation, how can we ever get to a point where a computer isn't performing computation w.r.t. some rules it is following?

A different take on what's different about humans following rules (different, that is, from the issue of maintaining attention) was:


>ph> The key is that Searle-in-the-room is not doing everything the
>ph> computer 'does', and is not going through the same series of
>ph> states. For example, suppose the program code at some point calls
>ph> for the addition of two integers. Somewhere in a computer running
>ph> this program, a piece of machinery is put into a state where a
>ph> register is CAUSED to contain a numeral representing the sum of
>ph> two others. This doesn't happen in my head when I work out, say,
>ph> 3340 plus 2786, unless I am in some kind of strange arithmetical
>ph> coma.

I find it hard to see how this helps. In some cases it is true that a computer would compute a sum in a way that involved a register being caused to contain a numeral representing the sum, but that is certainly not true in general, unless numeral-in-register is so abstract as, say, to include _anything_ a program could use to produce a printed representation of the sum.

Moreover, how can we say that when a human adds two numbers the sum is not represented in the way it might be in some case of a computer running the program, perhaps with an interpreter?

The human has to work out the sum somehow, in order to properly follow the program. At least, the human should be able to tell you what the sum is, eg by writing it down. So the human has to have some representation of the sum. Of course it needn't be somewhere inside the person, much less in a register, but so what? Suppose the human did the computation on paper. How would that be different from a computer using paper, pen, etc, to do the same? And do computers stop doing computation if they keep some of the results on paper?

It should be clear in any case that the human needn't go through the same series of states as some computer we happen to pick, just as an interpreter (say) on some other computer might run the same program by going through a very different series of states. Perhaps there's some way to look at Lisp programs so that running a Lisp program corresponds (at some abstract level) to going through a particular series of (abstract) states; but then how can we say a human isn't doing something equivalent?

So in the end it seems that either there's something about how humans can follow rules that cannot be captured by a computer no matter what program it's running (and then that aspect of AI is in trouble), or else it still counts as computation if the rules are followed in a human-like way. In which case it's hard to see how Searle-in-the-room, following the rules, doesn't count as an interpreter.

If I'm wrong about this, however, then there should be a difference between programs such that a computer running one kind of program and following the rules in a Lisp program would be performing computation w.r.t. the Lisp program (eg, running ordinary Lisp interpreters) and a computer running the other kind of program and following the rules in a Lisp program would not be performing computation (eg, AI programs?).

That is, we should be able to describe the difference entirely in terms of programs, without having to bring in humans. And that should make it much clearer just what the difference is.

Jeff Dalton

---------------------------

Date: Wed, 13 May 92 18:17:48 PDT From: Dr Michael G Dyer Subject: WHO gets to do the interpreting?

Harnad states:


>sh> Your AI symbol systems, be they ever so interpretable AS IF they had
>sh> someone home, no more have someone home than symbolic fires, be they
>sh> ever so interpretable as burning, burn.

This "interpretation" business currently has only the humans doing the interpreting. Once AI/connectoplasmic systems are developed that have sufficiently powerful self-access, real-time sensory updating, planning, learning, etc., THEY will be behaving as though they are "interpreters". Then who is to say WHICH entity's interpretations (man vs machine) are the ones that count? (Answer: it depends on power, survivability, etc.)

Since that day has not yet come (and is probably a long way off) it can only be a thought experiment (i.e. that machines act as interpreters, assigning "meaning" etc.). Such machines might sit around talking about how "everyone" knows that turing machines of a certain complexity are "conscious" but how can one really tell if those weird humans are conscious (even though they act AS IF they are).

Michael Dyer

-------------------------------------------------------------------

From: Stevan Harnad

If I have a string of symbols that is interpretable as "(1) This sentence is the thought that this sentence is a thought" and I have another string of symbols that is interpretable as "This sentence is a thought about sentence (1)" and I have still more strings of symbols interpretable as "sufficiently powerful self-access, real-time sensory updating, planning, learning, etc... behaving as though they are "interpreters"... assigning "meaning" etc." -- STILL all I really have is strings of symbols interpretable as...

(In a sufficiently complicated hall of mirrors, you can see projections of projections looking at projections. That still doesn't mean there's anyone looking but you! To put it another way, if you aim a CRT screen that can be seen as displaying the Ali-Foreman fight at a screen that can be seen as a spectator watching the fight and you walk out of the room, no one is watching a fight, not even if the (symbol systems generating the) screens are interactive and interpretable as such. Or, to put it yet another way, ungrounded symbols cannot ground themselves in still more ungrounded symbols, be they ever so systematically co-interpretable.)

Stevan Harnad

-------------------------------------------------------------------

Date: Thu, 14 May 92 15:56:08 EDT From: "Stevan Harnad" Subject: Bifurcating the "What is Computation" stream

The "What is Computation" Symposium will henceforth be posted under two different headings, reflecting the fact that it has two distinct subthemes. Contributions that are really on "What is Computation?" will continue to be posted under that heading. Contributions on whether cognition is computation, whether the brain is a computer, whether Searle is right or wrong, and on the Symbol Grounding Problem will appear under the heading "What is Cognition?". (Contributions on both themes will continue to appear under the first heading.) In the published version of the Symposium (if there is one), these two semi-independent themes will likewise be distinguished.

-------------------------------------------------------------

Subject: "What is Cognition?" Date: Wed, 13 May 92 21:51:03 PDT From: Dr Michael G Dyer

Stevan,

You will have to excuse Michael Dyer. He was busy today and he left me to reply in his place. I am a new AI system that Dyer has recently developed and I *am* conscious! I can carry on any kind of conversation that Dyer can carry on (because in design I am isomorphic to Dyer's neural structures). I find your arguments humorous. YOU have no right to tell ME whether or not I have consciousness! *I* know that I do. I'm NOT so sure about YOU! In fact, I'm not so sure about ALL of you humans.

I would be willing to argue about it more, but I think Dyer has done a good job defending the PSSH and Strong AI positions and I'm working on a design of a robot for Dyer to build so that I can move about in the world (and pass the "TTT" as you call it).

Cheers!

-- MD Doppelganger

------------------------------------------------------------

From: Stevan Harnad

Dear MDD:

It is noteworthy that the statement above is in all probability not true (i.e., Mike Dyer wrote it, not an AI system he built); this should remind us that a computer program alone (an ungrounded symbol system) passing the TT is so far just (cog) sci fi.

But let's pretend it's true that the above message (and an eternity's worth of further, Turing-indistinguishable pen-pal interactions) can indeed be generated by a computer program, and, in particular, that that computer program succeeds in doing so by, among other things, simulating the nervous system of Mike Dyer. (Let us suppose even more strongly that that nervous system simulation is so complete and correct that it can actually be used to build a real robot, INCLUDING ITS REQUISITE SYNTHETIC NERVOUS SYSTEM, and that these can pass the real TTT -- but note that we are not talking about that potentially implemented robot now, just about the SYMBOLIC simulation of its nervous system.)

Let us call that program an "oracle," just as we could call a program that simulated the solar system and all of its planetary motions an oracle, if we used it to calculate what real astronauts out in space need to do in order to rendez-vous with real planets, for example. If the symbolic oracle is complete and correct, we can find out from it anything we need to know about the real thing. But is there any real planetary motion going on in the oracle? Of course not, just the simulation of motion. By the same token, the only thing going on in this simulation of Mike Dyer's nervous system is the simulation of thinking, not thinking. It may well predict completely and correctly what the real Mike Dyer would say and think, but it is not in itself thinking at all.

But look, are we really that far apart? We are not astronomers, but reverse bioengineers. For substantive reasons of scale (having to do with real mass and gravitational parameters), astronomers cannot build a synthetic solar system based on their symbolic oracle; but if our cognitive oracle really captured and encoded symbolically all the relevant structures and processes of the nervous system, then in principle we could build the TTT-passing robot based on that information alone (the rest would just be implementational details), and, by my lights, there would then be no more ground for denying that that TTT-passing robot really thinks than that any of us really does.

It would be the same if we had a symbolic car oracle, or a plane oracle, or a furnace oracle: If they contained the full blueprint for building a real car, plane or furnace, the symbols would have answered all the empirical questions we could ask.

Yet the conclusion would stand: the symbolic solar system (car, plane and furnace) is not really moving (driving, flying, heating), and, by the same token, the symbolic oracle is not really thinking. What tempts us to make the mistake in the latter case that we wouldn't dream of making in the former cases is just (1) the unobservability of thinking and (2) the hermeneutic power of interpretable symbol systems.

There are still two loose ends. One concerns what proportion of the internal activity of the implemented TTT-passing robot could actually be computation rather than other kinds of processes (transduction, analog processes, etc.): That's an empirical question that cannot be settled by cog sci fi. What can be said for sure (and that's entirely enough for present purposes) is that that proportion cannot be 100% -- and that is enough to exclude the consciousness of MDD.

The other loose end concerns whether a symbolic nervous system oracle (or, for that matter, a symbolic solar system oracle) could ever be that complete. My hunch is no (for reasons of underdetermination, complexity, capacity and the impossibility of second-guessing all possible I/O and boundary conditions in advance), but that too is an empirical question.

Stevan Harnad

--------------------------------------------------

Date: Thu, 14 May 92 09:06:44 EDT From: judd@learning.siemens.com (Stephen Judd)

Steve Harnad challenged me a year ago to say "what is computation". I balked, because I could sense he had some sort of agenda to try to exclude some physical processes for reasons I could not assess. He phrased the question as some sort of absolute, but "computation" seemed to be clearly something that should be *defined* rather than *debated*.

What is "rain"? One can define this in a variety of ways, but the value of the definition **depends on the purpose for drawing the definition.** If you want to study the ground water table, then you probably want a definition that measures volume of water dropped on an area. If you want to study plant growth, your definition should probably pay more attention to moisture that can be gathered (including mist--- even if it doesn't actually enter the ground). If you want to study climate change, then you could probably define it any way you want. Any measured changes in the defined quantity would suffice to demonstrate a climatic change. The point is that GOD doesn't have a definition of rain; *we* do. There is no absolute notion.

The same goes for What is "computation"? or What is a "windshield"? What is a "frankfurter"?

I find the all-inclusive (Searley?) definition of computation quite satisfying when I want to ponder the ubiquity of the thing I study, but inappropriate when I want to use it to characterise life forms (say). Your long discussion has been based on the presumption that there is something absolute about the word "computation" that needs to be ferreted out. It seems silly; there is no absolute notion.

sj
Stephen Judd
Siemens Corporate Research, 755 College Rd. East, Princeton, NJ USA 08540
(609) 734-6573; fax (609) 734-6565
judd@learning.siemens.com

----------------------------------------------------------

HUMPTY DUMPTY AND COMPUTATION

From: Stevan Harnad

The purpose of defining computation is to put content into statements such as "X is computation," "Y is not computation," "X can be done by computation," "Y cannot be done by computation." As long as computation is used vaguely, ambiguously, idiosyncratically or arbitrarily, statements like the above (some of which I'll bet you've made yourself) are empty. In particular, if anyone ever wanted to say that "Everything is rain" or "Rain is rain only if you think of it that way" or "Thinking is just rain," you'd find you'd want to pin that definition down pretty quick.

Stevan Harnad

---------------------------------------------------------

Date: Sat, 16 May 92 16:46:49 EDT From: ECONOMOS@LIFE.JSC.NASA.GOV (Judith Economos)

I think I differ with you on whether being-mental/being-conscious/being-There is not a matter of degree. I consider, not a less or more alert human, but the minds of cats, of birds (how very alien), of fish, of bugs(?).

I am not asking "What is it like to be a...?"; only Merlin can show me that. Rather, it is in contemplating it that I can break my intuition that to be mental (etc.) must be an IZZIT OR IZZNT IT proposition. It lets me consider that it really can dwindle down to something that you wouldn't consider consciousness at all.

Judith Economos

-------------------------------------------------------------------

From: Stevan Harnad

This topic will no doubt come up again (and again). In a nutshell, there are two potentially pertinent senses of "matter of degree" here, and upon closer inspection, the kind of intuition you mention is based on (what I think is) an untenable analogy between, and perhaps even a conflation of, the two.

(OBJ) The first is the usual sense of "matter of degree," the objective one, in which something might have property X to varying degrees, often grading down to a fuzzy zero-point. "Motion" is probably such a property. Things are in motion to varying degrees (not to mention that motion is relative); apparently stationary things may actually be oscillating; and some of the quantum properties (like spin) of even "isolated" elementary particles probably make the classical concept of motion break down altogether. The same is probably true about the concept of "living," which one can likewise agree breaks down at an elementary level. In all cases like this, appearances are deceiving and concepts are vague and subject to revision.

(SUBJ) The second sense of "matter of degree" is subjective: Something can LOOK OR SEEM AS IF it has some property as a matter of degree: Hot/cold, moving/stationary, alert/sleepy, experienced/inexperienced are examples. Here too, zero points or thresholds (as psychophysics shows) can be indeterminate. Note, however, that it would be SELF-CONTRADICTORY to experience a zero-point for experience.

What won't do, I think, is to conflate the two, and that's just what we would be doing if we assumed that the capacity to have experience AT ALL is a matter of degree, in the first sense. Note that it's not the content or degree of particular experiences that is at issue. The question is whether there is a continuum between being the kind of entity (like me or you or a worm, perhaps even a virus) that CAN have experiences (any experiences, to any degree) and the kind of entity (like a rock, or, by my lights, a computer) that cannot have experiences at all.

It makes no difference what we are prepared to believe about other entities, or even whether we're wrong or right about them: This is a logical point: What could we even MEAN by an entity that was intermediate between experiencing and not experiencing? If it's experiencing anything, to any degree, it's experiencing, which puts it on the "1" side of the ledger. If it is not, it's on the "0" side. The rest is just false intuitions based on false analogies.

I hope it is clear that time has nothing to do with this: Yes, we all have dreamless sleep, and some of us go into and out of comas. This just shows that some entities that are capable of having experiences can also go into states in which they don't have experiences. If necessary, reformulate the "degree" question for states of entities, rather than entities, and ask again whether it makes sense to say that an entity is in a state that is intermediate between experiencing [anything] and not experiencing (which should in turn not be confused with the figure of speech "my mind went blank," which certainly refers to an experience): It is, I repeat, SELF-CONTRADICTORY to speak of [a state of] experiencing not-experiencing. By my count, that leaves absolutely nothing between 0 and 1...

For reasons like this, I think the very concept of "experience" (awareness, consciousness, etc.) has some peculiar problems. Again in a nutshell, I think these problems arise from the fact that the category is UNCOMPLEMENTABLE: It has no negative instances (indeed, negative instances would be self-contradictory). Ordinary categories, whether perceptual or conceptual, are based on finding and using the features that reliably distinguish the members from the nonmembers (the members of the category's complement). But in the case of uncomplemented categories (where the negative instances have never been encountered), the requisite complement is supplied instead by analogy; but where the categories are uncomplementable in principle, the analogy is erroneous in principle. Hence the peculiar problems associated with such concepts. ("Existence" is another uncomplemented category; there are more, and they are invariably associated with philosophical problems, Harnad 1987.)

Stevan Harnad

Harnad, S. (1987) Uncomplemented Categories, or, What Is It Like To Be a Bachelor (Presidential Address, 13th Annual Meeting of the Society for Philosophy and Psychology, UCSD, 1987)

---------------------------------------------

Date: Sat, 16 May 92 17:21:40 EDT From: "Stevan Harnad"

Date: Thu, 14 May 92 19:03:09 EST From: David Chalmers

I've been following the recent "What is computation?" discussion with some bemusement, as it seems to me that most of the discussion is just irrelevant to the question at hand. There are at least three questions here that have to be distinguished:

(1) When is a given computation physically implemented? (2) Does computational structure determine mental properties? (3) Does computational structure determine semantic content?

I take it that the original challenge was to answer question (1), giving appropriate criteria so that e.g. John Searle's wall doesn't end up implementing every computation. In my earlier contribution to this discussion, I outlined an appropriate criterion:

(*) A physical system implements a given computation when there exists a mapping from physical states of the system onto the formal states in the computation such that the causal state-transition relations between the physical states mirror the formal state-transition relations between the corresponding computational states.

This criterion seems to do everything that's required, and nobody seems to have problems with it (except for Brian Smith's comment; see below). Your (Stevan's) response to this was:


>sh> I agree with Dave Chalmers's criteria for determining what computation
>sh> and computers are, but, as I suggested earlier, the question of whether
>sh> or not COGNITION is computation is a second, independent one, and on
>sh> this I completely disagree.

You then invoke the Chinese-room argument, thus, somewhat inevitably, setting off the discussion of questions (2) and (3) that overwhelmed the original question. Well and good, perhaps, but irrelevant to the question at hand. If Searle is right, then *whatever* computation is, it doesn't suffice for mentality.

All that being said, I'll offer a few observations on each of (1)-(3).

(1) When is a computation physically implemented?

There's not much to say here, as I said it last time around. Brian Smith suggests that my criterion requires that the physical states of the system be divided into state-types in advance. That's not the case: on this criterion, a physical system implements a computation if there exists *any* division into disjoint state-types such that the appropriate state-transition relations are satisfied.

The question arises as to what counts as a state-type. I'm inclined to be liberal about this, saying that any property that depends only on the intrinsic, synchronic configuration of the system determines a state-type (so that extrinsic and time-varying properties are ruled out). Some people (e.g. Dan Dennett I believe), want to exclude "unnatural" states, such as arbitrary disjunctions of maximal states, but I don't see that that's necessary. (The main motivation here seems to be to exclude Putnam's rocks as implementations, but these can be excluded by the simple requirement that the state-transition conditionals must sustain counterfactuals).

There is probably some more to be said here -- e.g. about the precise requirements on the state-transition relations, and whether there should be a stronger requirement of causality than simple sustaining of counterfactuals; and also problems about just what counts as a given input or output -- but those questions fall into the "technical" basket. I don't think that there are serious objections to the view here.
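[For a finite, discrete toy case, the criterion (*) can be made concrete as a brute-force check. The sketch below is purely illustrative and uses invented names; it deliberately ignores the "technical basket" issues Chalmers mentions (inputs and outputs, and whether the transitions sustain counterfactuals), checking only whether SOME surjective mapping from physical state-types onto formal states makes the physical transitions mirror the formal ones.]

```python
from itertools import product

def mirrors(phys_states, phys_step, fsa_step, mapping):
    """Check the mirroring condition of (*) for one candidate mapping:
    every physical transition must map onto the corresponding formal
    transition."""
    return all(mapping[phys_step[p]] == fsa_step[mapping[p]]
               for p in phys_states)

def implements(phys_states, phys_step, fsa_states, fsa_step):
    """A physical system implements the FSA iff there EXISTS a surjective
    mapping from physical states onto formal states satisfying the
    mirroring condition. Returns the first such mapping, else None."""
    for image in product(fsa_states, repeat=len(phys_states)):
        mapping = dict(zip(phys_states, image))
        if (set(mapping.values()) == set(fsa_states)
                and mirrors(phys_states, phys_step, fsa_step, mapping)):
            return mapping
    return None

# A 4-state physical system that cycles p0 -> p1 -> p2 -> p3 -> p0 ...
phys = ['p0', 'p1', 'p2', 'p3']
phys_step = {'p0': 'p1', 'p1': 'p2', 'p2': 'p3', 'p3': 'p0'}
# ... implements a 2-state parity-flipping FSA, by grouping its states.
fsa = ['even', 'odd']
fsa_step = {'even': 'odd', 'odd': 'even'}

print(implements(phys, phys_step, fsa, fsa_step))
# -> {'p0': 'even', 'p1': 'odd', 'p2': 'even', 'p3': 'odd'}
```

[Note that the existential quantifier over mappings is exactly what lets a rich physical system implement many computations at once; the anti-Putnam work is done not here but by the further requirement, mentioned above, that the state-transition conditionals sustain counterfactuals.]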

(2) Does computational structure determine mental properties?

There's a sense in which the answer here is trivially no. It's quite possible for two systems both to implement the same computation but be quite different mentally: e.g. my brain and my stapler both implement a trivial one-state FSA, but presumably they differ mentally.

So the question here should really be seen as: for a given mental property M, is there a computation C such that any physical system that implements C will possess M? A believer in "strong AI" or "computationalism", or whatever you want to call this view, says yes, at least for some subset of mental properties. (There is obviously a problem for mental properties that even in the human case depend partly on what's happening outside the body, e.g. knowledge, and somewhat controversially belief. Computational structure won't determine any mental properties that internal physical structure doesn't, so we'll stick to "intrinsic" properties for now, but see (3) below.)

Why should computational structure determine mental properties, given the criterion (*) for computational structure? Because (*) says that computational structure is a variety of *causal* structure. In fact, it seems that for just about any pattern of causal structure that we want to capture, we can specify a computation such that any implementation of the computation has the requisite causal structure. (This is a long story, though.) So on this view, computationalism coheres very well with functionalism, the view that mentality is dependent on causal structure.

Why should mentality be dependent on causal structure? Mostly because it seems unreasonable that it should depend on anything else. Mentality seems obviously to be dependent on *some* aspect of physical makeup, and the intuition behind functionalism is simply that physical properties that don't contribute to causal organization are going to be irrelevant to mental life. e.g. if we gradually replaced neural tissue with silicon modules that play an identical causal role, it seems counterintuitive that mentality would gradually fade out. Note that we now have two separate questions:

(2a) Does causal structure fix mental properties? (2b) Does computational structure fix causal structure?

The usual functionalist arguments, e.g. above, support (2a), and the criterion in (1) is designed precisely to support (2b). It's possible that one might even accept (2a) and (2b) but still not be a computationalist, because one held that the causal structures on which mentality depends can't be specified computationally (e.g. because they're inherently analog). I suspect that your (Stevan's) view may fall into this category. I think there are good reasons why this view can't be sustained, tied up with the universal nature of computation and Church's thesis, but these are too complex to get into here.

I'll bring up the Chinese room just for completeness. If Searle is right about the Chinese room, then computational structure simply doesn't determine mental properties, and computation suddenly becomes a whole lot less important to cognitive science. But of course the computationalist doesn't accept Searle's argument. (The Systems reply is the right reply, but let's not get into that.)

(2.5) Interlude: On phenomenal properties and semantic content.

In general, it's very useful to divide mental properties into "psychological" properties -- those characterized by their role in the production of behaviour -- and "phenomenal" properties -- those characterized by the way they "feel". In general, one has to treat these cases quite differently.

These discussions of the big questions about Mind tend to focus on phenomenal properties (or "consciousness", or "qualia", or whatever) and rightly so, as these are where the really hard questions arise. However, not every mental property is a phenomenal property. In particular, it seems to many people, me included, that intentional properties such as belief are best individuated by their role in the causation of behaviour, rather than by the way they feel. Beliefs may have qualia associated with them, but these qualia don't seem to be essential to their status as beliefs.

Your position seems to be, on the contrary, that qualia are determinative of semantic content. Take Joe, sitting there with some beliefs about Joan of Arc. Then a hypothetical system (which is at least a conceptual possibility, on your view and mine) that's physically identical to Joe but lacks qualia, doesn't believe anything about Joan of Arc at all. I suggest that this seems wrong. What can qualia possibly add to Joe's belief to make it any more about Joan than it would have been otherwise? Qualia are very nice things, and very important to our mental life, but they're only a matter of *feel* -- how does the raw feel of Joe's belief somehow endow it with semantic content?

I suggest that there is some kind of conceptual confusion going on here, and that phenomenal and semantic properties ought to be kept separate. Intentional states ought to be assimilated to the class of psychological properties, with their semantic content conceptually dependent on their role in our causal economy, and on their causal relations to entities in the external world.

(3) Does computational structure determine semantic content?

Now that we've got semantic content separated from phenomenal feel, we can address this as a semi-independent issue.

The first thing to note is that some people (yourself included, in places) have suggested that semantic content is *constitutive* of computational structure. This is an interesting question, which has to be kept separate from (3). I endorse Drew McDermott's line on this. Computation is a *syntactic* concept (give or take some possible semantics at the inputs and the outputs). If you look at the original papers, like Turing's, you don't see anything about semantics in there -- a Turing machine is characterized entirely by its syntactic structure. Now, it may turn out that computational structure ends up *determining* semantic content, at least to some extent, but that doesn't make semantics constitutive of computational structure.

This issue is confused somewhat by the fact that in common parlance, there are two different ways in which "computations" are individuated. This can be either syntactically, in terms of e.g. the Turing machine, FSA, or algorithm that is being individuated, or semantically: e.g. "the computation of the prime factors of 1001", or "the computation of my tax return". These different uses cross-classify each other, at least to some extent: there are many different algorithms that will compute my tax return. I suggest that the really fundamental usage is the first one; at least, this is the notion of computation on which "strong AI" relies. The semantic individuation of computation is a much more difficult question; this semantic notion of computation is sufficiently ill-understood that it can't serve as the foundation for anything, yet (and it would be more or less circular to try to use it as the foundation for "strong AI"). Whereas the syntactic notion of computation is really quite straightforward.

That being said, is it the case that computational structure, as determined by (*) above, is determinative of semantic content? I.e., for any given intentional state with content M, is there a computation such that any implementation of that computation has a state with that content?

If content is construed "widely" (as it usually is), then the answer is fairly straightforwardly no. Where I have beliefs about water, my replica on Twin Earth has beliefs about twin water (with a different chemical composition, or however the story goes). As my replica is physically identical to me, it's certainly computationally identical to me. So semantic content is not determined by computational structure, any more than it's determined by physical structure.

However, we can still ask whether *insofar* as content is determined by physical structure, it's determined by computational structure. A lot of people have the feeling that the aspect of content that depends on external goings-on is less important than the part that's determined by internal structure. It seems very likely that if any sense can be made of this aspect of content -- so-called "narrow content" -- then it will depend only on the causal structure of the organism in question, and so will be determined by computational structure. (In fact the link seems to me to be even stronger than in the case of qualia: it at least seems to be a *conceptual* possibility that substituting silicon for neurons, while retaining causal structure, could kill off qualia, but it doesn't seem to be a conceptual possibility that it could kill off semantic content.) So if computations can specify the right kinds of causal structure, then computation is sufficient at least for the narrow part of semantic content, if not the wide part.

Incidentally, I suggest that if this discussion is to be published, then only those parts that bear on question (1) should be included. The world can probably survive without yet another Chinese-room fest. This should reduce the material to less than 20% of its current size. From there, judicious editing could make it quite manageable.

--Dave Chalmers Center for Research on Concepts and Cognition, Indiana University.

Date: Mon, 18 May 92 22:31:50 EDT From: "Stevan Harnad"

INTRINSIC/EXTRINSIC SEMANTICS, GROUNDEDNESS AND QUALIA

David Chalmers wrote:


>dc> I've been following the recent "What is computation?" discussion with
>dc> some bemusement, as it seems to me that most of the discussion is just
>dc> irrelevant to the question at hand. There are at least three questions
>dc> here that have to be distinguished:
>dc>
>dc> (1) When is a given computation physically implemented?
>dc> (2) Does computational structure determine mental properties?
>dc> (3) Does computational structure determine semantic content?
>dc>
>dc> I take it that the original challenge was to answer question (1),
>dc> giving appropriate criteria so that e.g. John Searle's wall doesn't end
>dc> up implementing every computation.

That was indeed the original challenge, but a careful inspection of the archive of this discussion will show that the move from the question "What is Computation?" to the question "Is Cognition Computation?" was hardly initiated by me! In fact, for a while I kept trying to head it off at the pass -- not because the second question is not interesting, but because it could prematurely overwhelm the first (as it did), whereas the first is certainly logically prior to the second: If we don't all mean the same thing by computation then how can we affirm or deny whether cognition is computation? For example, if EVERYTHING indeed turns out to be computation, then "Cognition is Computation" is just a tautology.

But Skywriting often exerts a will of its own, and the second question was motivating the first one in any case, so here we are. Perhaps the bifurcated headings will help (but not in this case, because you too are concentrating much more on the second question than the first).

Now I have to add another point, and this represents a radical position that is peculiar to me. It has been lurking in all of my contributions to this topic, but I may as well make it completely explicit. It concerns the distinction between your question (2) and question (3). I will summarize this point here and elaborate somewhat in my comments on the further excerpts below (pardon me for raising my voice):

THE QUESTION OF WHETHER "COMPUTATIONAL STRUCTURE DETERMINES MENTAL PROPERTIES" (i.e., whether cognition is computation) IS THE SAME (by my lights) AS THE QUESTION OF WHETHER OR NOT THE SEMANTIC CONTENT OF COMPUTATIONAL STRUCTURE IS INTRINSIC TO IT.

At some point (mediated by Brentano, Frege and others), the mind/body problem somehow seems to have split into two: The problem of "qualia" (subjective, experiential, mental states) and the problem of "intentionality" (semantics, "aboutness"), each treated as if it were an independent problem. I reject this bifurcation completely. I believe there is only one mind/body problem, and the only thing that makes mental states be intrinsically about anything at all is the fact that they have experiential qualities.

If there were nothing it was like (subjectively) to have beliefs and desires, there would be no difference between beliefs and desires that were just systematically interpretable AS IF they were about X (extrinsic semantics) and beliefs and desires that were REALLY about X (intrinsic semantics). There would still be the problem of the GROUNDEDNESS of those interpretations, to be sure, but then that problem would be settled COMPLETELY by the TTT, which requires all of the agent's causal interactions with the wide world of the objects of its beliefs and desires to cohere systematically with the interpretations of the symbols that are being interpreted as its beliefs and desires. So we would only have ungrounded extrinsic semantics and grounded extrinsic semantics, but no intrinsic semantics -- if there were no qualia.

There are qualia, however, as we all know. So even with a grounded TTT-capable robot, we can still ask whether there is anybody home in there, whether there is any haver of the beliefs and desires, to whom they are intrinsically [i.e., subjectively] meaningful and REALLY about what they are interpretable as being about. And we can still be dead wrong in our inference that there is somebody home in there -- in which case the robot's semantics, for all their causal groundedness, would in reality be no more intrinsic than those of an ungrounded book or computer.

I also think that this is an extra degree of empirical underdetermination (over and above the normal empirical underdetermination of scientific theory by data) that we will just have to learn to live with, because grounding is the best we can ever hope to accomplish empirically (except the TTTT, which I think is supererogatory, but that's another story). This extra dose of underdetermination, peculiar to the special case of mental states, represents, I think, that enduring residue of the mind/body problem that is truly insoluble.

So I advocate adopting the methodological assumption that TTT-indistinguishable extrinsic semantic grounding = intrinsic semantic grounding, because we can never hope to be the wiser. I too would perhaps have been inclined to settle (along with the computationalists) for mere TT-indistinguishable semantic interpretability until Searle pointed out that for that special case (and that special case alone) we COULD be the wiser (by becoming the implementation of the symbol system and confirming that there was no intrinsic semantics in there) -- which is what got me thinking about ways to ground symbol systems in such a way as to immunize them to Searle's objections (and my own).


>dc> In my earlier contribution to this
>dc> discussion, I outlined an appropriate criterion:
>dc>
>dc> > (*)A physical system implements a given computation when there
>dc> > exists a mapping from physical states of the system onto the
>dc> > formal states in the computation such that the causal
>dc> > state-transition relations between the physical states mirror
>dc> > the formal state-transition relations between the corresponding
>dc> > computational states.
>dc>
>dc> This criterion seems to do everything that's required, and nobody seems
>dc> to have problems with it (except for Brian Smith's comment; see below).
>dc> Your (Stevan's) response to this was:
>dc>
>sh> I agree with Dave Chalmers's criteria for determining what
>sh> computation and computers are, but, as I suggested earlier, the
>sh> question of whether or not COGNITION is computation is a second,
>sh> independent one, and on this I completely disagree.
>dc>
>dc> You then invoke the Chinese-room argument, thus, somewhat inevitably,
>dc> setting off the discussion of questions (2) and (3) that overwhelmed
>dc> the original question. Well and good, perhaps, but irrelevant to the
>dc> question at hand. If Searle is right, then *whatever* computation is,
>dc> it doesn't suffice for mentality.

What you left out of the above quote, however, was what it was that you had said that I disagreed with, which was what actually helped set off the discussion toward (2) and (3):


>dc> > The computationalist claim is that cognition *supervenes* on
>dc> > computation, i.e. that there are certain computations such that
>dc> > any implementation of that computation will have certain cognitive
>dc> > properties.

I certainly couldn't agree with you on computation without dissociating myself from this part of your view. But let me, upon reflection, add that I'm not so sure your criterion for computation does the job (of distinguishing computation/computers from their complement) after all (although I continue to share your view that they CAN be distinguished, somehow): I don't see how your definition rules out any analog system at all (i.e., any physical system). Is a planetary system a computer implementing the laws of motion? Is every moving object implementing a calculus-of-variations computation? The requisite transition-preserving mapping from symbols to states is there (Newton's laws plus boundary conditions). The state transitions are continuous, of course, but you didn't specify that the states had to be discrete (do they?).

And what about syntax and implementation-independence, which are surely essential properties of computation? If the real solar system and a computer simulation of it are both implementations of the same computation, the "supervenient" property they share is certainly none of the following: motion, mass, gravity... -- all the relevant properties for being a real solar system. The only thing they seem to share is syntax that is INTERPRETABLE as motion, mass, gravity, etc. The crucial difference continues to be that the interpretation of being a solar system with all those properties is intrinsic to the real solar system "computer" and merely extrinsic to the symbolic one. That does not bode well for more ambitious forms of "supervenience." (Besides, I don't believe the planets are doing syntax.)

By the way, Searle's argument only works for a discrete, syntactic, symbol-manipulative definition of computing, the kind that he himself can then go on in principle to execute, and hence become an implementation of; his argument fails, for example, if every analog system is a computer -- but such a general definition of computing would then also guarantee that saying "X is Computation" was not saying anything at all.


>dc> There is probably some more to be said here -- e.g. about the precise
>dc> requirements on the state-transition relations, and whether there
>dc> should be a stronger requirement of causality than simple sustaining of
>dc> counterfactuals; and also problems about just what counts as a given
>dc> input or output -- but those questions fall into the "technical"
>dc> basket. I don't think that there are serious objections to the view
>dc> here.

Alternatively, perhaps it's just the technical details that will allow us to decide whether your definition really succeeds in partitioning computers/computing and their complement in a satisfactory way.


>dc> (2) Does computational structure determine mental properties?
>dc>
>dc> ...the question here should really be seen as: for a given mental
>dc> property M, is there a computation C such that any physical system that
>dc> implements C will possess M. A believer in "strong AI" or
>dc> "computationalism", or whatever you want to call this view, says yes,
>dc> at least for some subset of mental properties. (There is obviously a
>dc> problem for mental properties that even in the human case depend partly
>dc> on what's happening outside the body, e.g. knowledge, and somewhat
>dc> controversially belief. Computational structure won't determine any
>dc> mental properties that internal physical structure doesn't, so we'll
>dc> stick to "intrinsic" properties for now, but see (3) below.)

This introduces yet another sense of "intrinsic," but what it should really be called is SYNTACTIC -- that's the only pertinent "internal" structure at issue. By the way, TTT-indiscernibility seems to cover the pertinent aspects of the internal/external, narrow/wide dimensions, perhaps even the "counterfactuals": TTT-power amounts to an interactive capability (total "competence" rather than just provisional "performance") vis-a-vis the distal objects in the real world, yet that capability is causally based only on what's going on between the ears (actually, between the proximal sensory and motor projections). The only thing the TTT (rightly) leaves open is that what goes on between the ears to generate the capability is not necessarily just computation.


>dc> Why should computational structure determine mental properties, given
>dc> the criterion (*) for computational structure? Because (*) says that
>dc> computational structure is a variety of *causal* structure. In fact, it
>dc> seems that for just about any pattern of causal structure that we want
>dc> to capture, we can specify a computation such that any implementation
>dc> of the computation has the requisite causal structure. (This is a long
>dc> story, though.) So on this view, computationalism coheres very well
>dc> with functionalism, the view that mentality is dependent on causal
>dc> structure.

I think the word "structure" is equivocal here. A computer simulation of the solar system may have the right causal "structure" in that the symbols that are interpretable as having mass rulefully yield symbols that are interpretable as gravitational attraction and motion. But there's no mass, gravity or motion in there, and that's what's needed for REAL causality. In fact, the real causality in the computer is quite local, having to do only with the physics of the implementation (which is irrelevant to the computation, according to functionalism). So when you speak equivocally about a shared "causal structure," or about computational structure's being a "variety of causal structure," I think all you mean is that the syntax is interpretable AS IF it were the same causal structure as the one being modelled computationally. In other words, it's just more, ungrounded, extrinsic semantics.

I think I can safely say all this and still claim (as I do) that I accept the Church/Turing Thesis that computation can simulate anything, just as natural language can describe anything. We just mustn't confuse the simulation/description with the real thing, no matter how Turing-Equivalent they might be. So if we would never mix up an object with a sentence describing it, why should we mix up an object with a computer simulating it?
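The solar-system example can be made concrete with a toy sketch (illustrative only; the function and variable names are mine, not anything from the discussion). The program below manipulates symbols that are systematically interpretable as masses, positions and gravitational attraction, yet nothing in the machine has mass, exerts gravity, or moves:

```python
# Toy one-dimensional two-body "solar system". Every variable below is
# merely INTERPRETABLE as a physical quantity; the only real causality
# is in the electronics executing the arithmetic.
G = 6.674e-11  # symbol interpretable as the gravitational constant

def step(m1, m2, p1, p2, v1, v2, dt):
    """One Euler step of 'gravitational attraction' -- really just
    rule-governed symbol manipulation on numerals."""
    dx = p2 - p1
    r = abs(dx)
    f = G * m1 * m2 / r**2            # interpretable as Newton's law
    sign = 1.0 if dx > 0 else -1.0
    v1 += (f / m1) * sign * dt        # 'body 1 accelerates toward body 2'
    v2 -= (f / m2) * sign * dt        # 'body 2 accelerates toward body 1'
    return p1 + v1 * dt, p2 + v2 * dt, v1, v2
```

Run a step and the numerals interpretable as positions converge, just as the interpretation predicts; but no mass has attracted any other mass.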

By the way, there are at least two varieties of functionalism: According to "Symbolic (TT) Functionalism," mental states "supervene" implementation-independently on every implementation of the right (TT-passing) computer program. According to "Robotic (TTT) Functionalism," mental states "supervene" implementation-independently on every implementation of the right (TTT-passing) robot design. (By way of contrast, according to "Neurophysicalism," which I provisionally reject, the only viable candidate would be a TTTT-indistinguishable one, i.e., only the actual biological brain could have mental states.)

Both varieties of functionalism allow that there may be more than one way to skin a cat, but they set a different empirical boundary on how close an equivalence they demand. I happen to think Robotic Functionalism is at just the right level of underdetermination for that branch of reverse bio-engineering that cognitive "science" really amounts to, and that all the substantive problems of cognition will be solved by the time we get to the details of our own specific neural implementation. Neurophysicalists, by contrast, would hold that that still leaves too many degrees of freedom; but we would both agree that the degrees of freedom of Symbolic Functionalism are unacceptably large, indifferent as they are between real robots and mere simulations of them, real causality and mere simulations of it, real mental states and states that are merely interpretable as if they were mental.


>dc> Why should mentality be dependent on causal structure? Mostly because
>dc> it seems unreasonable that it should depend on anything else. Mentality
>dc> seems obviously to be dependent on *some* aspect of physical makeup,
>dc> and the intuition behind functionalism is simply that physical
>dc> properties that don't contribute to causal organization are going to be
>dc> irrelevant to mental life. E.g. if we gradually replaced neural tissue
>dc> with silicon modules that play an identical causal role, it seems
>dc> counterintuitive that mentality would gradually fade out.

There is a straw man being constructed here. Not only do all Functionalists agree that mental states depend on causal structure, but presumably most nonfunctionalist materialists do too (neurophysical identity theorists, for example, just think the requisite causal structure includes all the causal powers of -- and is hence unique to -- the biological brain). To reject Symbolic Functionalism (computationalism) is not to deny that mental states are determined by causal structure; it's just to deny that they are determined by computations that are merely interpretable as having the right causal structure. The causality must be real.


>dc> Note that we
>dc> now have two separate questions:
>dc>
>dc> (2a) Does causal structure fix mental properties?
>dc> (2b) Does computational structure fix causal structure?
>dc>
>dc> The usual functionalist arguments, e.g. above, support (2a), and the
>dc> criterion in (1) is designed precisely to support (2b). It's possible
>dc> that one might even accept (2a) and (2b) but still not be a
>dc> computationalist, because one held that the causal structures on which
>dc> mentality depends can't be specified computationally (e.g. because
>dc> they're inherently analog). I suspect that your (Stevan's) view may
>dc> fall into this category. I think there are good reasons why this view
>dc> can't be sustained, tied up with the universal nature of computation
>dc> and Church's thesis, but these are too complex to get into here.

I think I can quite happily accept:

(a) Church's Thesis (that anything, from the mathematician's notion of calculations and procedures to the physicist's notion of objects, states and measurements, can be simulated computationally) and

(b) that the right implemented causal system will have mental states and

(c) that every causal system can be simulated computationally

yet still safely deny that the computational simulation of the right causal system is either (d) an implementation of that causal system (as opposed to one that is interpretable as if it were that system) or (e) has mental states. And, yes, it has to do with the causal properties of analog systems.


>dc> I'll bring up the Chinese room just for completeness. If Searle is
>dc> right about the Chinese room, then computational structure simply
>dc> doesn't determine mental properties, and computation suddenly becomes a
>dc> whole lot less important to cognitive science.
>dc> But of course the computationalist doesn't accept Searle's argument.
>dc> (The Systems reply is the right reply, but let's not get into that.)

For the record, the Systems reply, in my view, is wrong and begs the question. If Searle memorizes all the symbols and rules, he IS the system. To suppose that a second mind is generated there purely in virtue of memorizing and executing a bunch of symbols and rules is (to me at least) completely absurd. (N.B. Searle's Argument works only for computation defined as discrete, purely syntactic [but semantically interpretable] symbol manipulation.) But let us move on...


>dc> (2.5) Interlude: On phenomenal properties and semantic content.
>dc>
>dc> These discussions of the big questions about Mind tend to focus on
>dc> phenomenal properties (or "consciousness", or "qualia", or whatever)
>dc> and rightly so, as these are where the really hard questions arise.
>dc> However, not every mental property is a phenomenal property. In
>dc> particular, it seems to many people, me included, that intentional
>dc> properties such as belief are best individuated by their role in the
>dc> causation of behaviour, rather than by the way they feel. Beliefs may
>dc> have qualia associated with them, but these qualia don't seem to be
>dc> essential to their status as beliefs.

Well, I certainly can't answer the "big questions about Mind," but I do venture to suggest that the distinction between a real belief and squiggles and squoggles that are merely interpretable as if they were beliefs is precisely the distinction between whether there is anyone home having those beliefs or not. As an exercise, try to reconstruct the problem of "aboutness" for two grounded TTT-capable AND INSENTIENT robots, one with "real" intentionality and one with mere "as if" intentionality. In what might that difference consist, may I ask? This problem (the only REAL mind/body problem) arises only for creatures with qualia, and for nothing else. The supposedly independent aboutness/intentionality problem is a pseudoproblem (in my view), as parasitic on qualia as extrinsic semantics is parasitic on intrinsic semantics.


>dc> Your position seems to be, on the contrary, that qualia are
>dc> determinative of semantic content. Take Joe, sitting there with some
>dc> beliefs about Joan of Arc. Then a hypothetical system (which is at
>dc> least a conceptual possibility, on your view and mine) that's
>dc> physically identical to Joe but lacks qualia, doesn't believe anything
>dc> about Joan of Arc at all. I suggest that this seems wrong. What can
>dc> qualia possibly add to Joe's belief to make them any more about Joan
>dc> than they would have been otherwise? Qualia are very nice things, and
>dc> very important to our mental life, but they're only a matter of *feel*
>dc> -- how does the raw feel of Joe's belief somehow endow it with semantic
>dc> content?

But Dave, how could anyone except a dualist accept your hypothetical possibility, which simply amounts to the hypothetical possibility that dualism is valid (i.e., that neither functional equivalence nor even physical identity can capture mental states!)? What I would say is that TTT-capability BOTH grounds beliefs in their referents AND makes them mental (qualitative). If grounding did not make them mental, there would be nobody home for beliefs to be about anything FOR, and the residual "aboutness" relation would simply become IDENTICAL to TTT-indiscernibility by definition (which I certainly do not think it is in reality). Hence my verdict is that either "aboutness" and qualia swing together, or aboutness hangs apart.


>dc> I suggest that there is some kind of conceptual confusion going on
>dc> here, and that phenomenal and semantic properties ought to be kept
>dc> separate. Intentional states ought to be assimilated to the class of
>dc> psychological properties, with their semantic content conceptually
>dc> dependent on their role in our causal economy, and on their causal
>dc> relations to entities in the external world.

Apart from real TTT interactions, I don't even know what this passage means: what does "assimilated to the class of psychological properties with their semantic content conceptually dependent on their role in our causal economy" mean? "[T]heir causal relations to entities in the external world" I can understand, but to me that just spells TTT.


>dc> (3) Does computational structure determine semantic content?
>dc>
>dc> Now that we've got semantic content separated from phenomenal feel, we
>dc> can address this as a semi-independent issue.
>dc>
>dc> The first thing to note is that some people (yourself included, in
>dc> places) have suggested that semantic content is *constitutive* of
>dc> computational structure. This is an interesting question, which has to
>dc> be kept separate from (3). I endorse Drew McDermott's line on this.
>dc> Computation is a *syntactic* concept (give or take some possible
>dc> semantics at the inputs and the outputs). If you look at the original
>dc> papers, like Turing's, you don't see anything about semantics in there
>dc> -- a Turing machine is characterized entirely by its syntactic
>dc> structure. Now, it may turn out that computational structure ends up
>dc> *determining* semantic content, at least to some extent, but that
>dc> doesn't make semantics constitutive of computational structure.

"Syntactic" means based only on manipulating physical symbol tokens (e.g., squiggle, squoggle) whose shape is arbitrary in relation to what they can be interpreted as meaning. I am sure one can make squiggle-squoggle systems, with arbitrary formal rules for manipulating the squiggles and squoggles -- like Hesse's "Glass Bead Game" but even more absurd, because completely meaningless, hence uninterpretable in any systematic way -- and one could perhaps even call these "computations" (although I would call them trivial computations). But I have assumed that whatever it turns out to be, surely one of the essential features of nontrivial computations will be that they can bear the systematic weight of a semantic interpretation (and that finding an interpretation for a nontrivial symbol system will be cryptographically nontrivial, perhaps even NP-complete).

Perhaps Turing didn't talk about semantics (he actually did worse, he talked about the mind, which, on the face of it, is even more remote), but surely all of his motivation came from interpretable symbol systems like mathematics, logic and natural language. I, at least, have not heard about much work on uninterpretable formal systems (except in cryptography, where the goal is to decrypt or encrypt interpretable symbols). Now I admit it sounds a little paradoxical to say that syntax is independent of semantics and yet must be semantically interpretable: that's a dependency, surely, but a rather special one, and it's what makes symbol systems special, and distinct from random gibberish.
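As an illustrative sketch (my own toy example, not anything from the discussion) of a purely syntactic system that nonetheless bears the systematic weight of a semantic interpretation, consider rewrite rules that operate on squiggle shapes alone, yet are systematically interpretable as unary addition:

```python
def reduce_once(s):
    """Purely shape-based rule: shift one '|' leftward across '+',
    or erase a leading '+'. No reference to numbers anywhere."""
    i = s.index('+')
    if i == 0:
        return s[1:]                      # '+' with nothing to its left: erase
    return s[:i-1] + '+' + '|' + s[i+1:]  # '|+' becomes '+|'

def evaluate(s):
    """Apply the rule until no '+' remains."""
    while '+' in s:
        s = reduce_once(s)
    return s
```

The rules mention only shapes and positions, yet under the interpretation '|||' = 3 and '+' = addition, `evaluate` systematically computes sums (e.g. '||+|||' reduces to '|||||'). That interpretability, not the squiggles themselves, is what distinguishes the system from gibberish.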


>dc> This issue is confused somewhat by the fact that in common parlance,
>dc> there are two different ways in which "computations" are
>dc> individuated. This can be either syntactically, in terms of e.g.
>dc> the Turing machine, FSA, or algorithm that is being individuated,
>dc> or semantically: e.g. "the computation of the prime factors of
>dc> 1001", or "the computation of my tax return". These different
>dc> uses cross-classify each other, at least to some extent: there
>dc> are many different algorithms that will compute my tax return.
>dc> I suggest that the really fundamental usage is the first one;
>dc> at least, this is the notion of computation on which "strong AI"
>dc> relies. The semantic individuation of computation is a much more
>dc> difficult question; this semantic notion of computation is
>dc> sufficiently ill-understood that it can't serve as the foundation
>dc> for anything, yet (and it would be more or less circular to try
>dc> to use it as the foundation for "strong AI"). Whereas the syntactic
>dc> notion of computation is really quite straightforward.

I agree that the semantic criterion is so far inadequate, but the rest of the criteria have not been uniformly successful either. I also agree that different symbol systems could be I/O equivalent (in which case their I/O semantics would be the same, but not necessarily the semantics of their internal states, which differ); and of course there could be nonstandard and alternative interpretations for the same symbol system (though the cryptographic criterion suggests there would not be many, nor would they be easy to come by); but I don't see how any of this affects the general intuition that symbol systems must be semantically interpretable. (And this would only be circular as a foundation for computationalism if the semantics were further assumed to be intrinsically grounded.)


>dc> That being said, is it the case that computational structure, as
>dc> determined by (*) above, is determinative of semantic content?
>dc> I.e., for any given intentional state with content M, is there a
>dc> computation such that any implementation of that computation has a
>dc> state with that content?

This is conflating different kinds of semantics, ungrounded and grounded, extrinsic and intrinsic.


>dc> If content is construed "widely" (as it usually is), then the answer is
>dc> fairly straightforwardly no. Where I have beliefs about water, my
>dc> replica on Twin Earth has beliefs about twin water (with a different
>dc> chemical composition, or however the story goes). As my replica is
>dc> physically identical to me, it's certainly computationally identical to
>dc> me. So semantic content is not determined by computational structure,
>dc> any more than it's determined by physical structure.

I haven't worked it out, but I suspect that a lot of the opaque/transparent reference and narrow/wide content puzzles become trivial if one adopts the TTT and asks only about the groundedness of symbols rather than their "wide" or "God's eye" meaning. Certainly a grounded symbol for "water" in a terrestrial TTT robot would be grounded on twin-earth too (especially since twin-earth itself is conveniently indistinguishable from earth, guaranteeing that the robot will be TTT-indistinguishable there too).


>dc> However, we can still ask whether *insofar* as content is determined by
>dc> physical structure, it's determined by computational structure. A lot
>dc> of people have the feeling that the aspect of content that depends on
>dc> external goings-on is less important than the part that's determined by
>dc> internal structure. It seems very likely that if any sense can be made
>dc> of this aspect of content -- so-called "narrow content" -- then it will
>dc> depend only on the causal structure of the organism in question, and so
>dc> will be determined by computational structure. (In fact the link seems
>dc> to me to be even stronger than in the case of qualia: it at least seems
>dc> to be a *conceptual* possibility that substituting silicon for neurons,
>dc> while retaining causal structure, could kill off qualia, but it doesn't
>dc> seem to be a conceptual possibility that it could kill off semantic
>dc> content.) So if computations can specify the right kinds of causal
>dc> structure, then computation is sufficient at least for the narrow part
>dc> of semantic content, if not the wide part.

Narrow (between-the-ears) content is not co-extensive with computational structure. The boundaries of "narrowness" are the transducer surfaces, including the proximal projections on them of distal objects. Transducers are necessarily analog, and a lot else between them and the effector surfaces could be analog too. That means a lot of other eligible internal "structure" besides computational structure.

As to swapping internal parts: The issue is not what the MATERIAL is (we're both functionalists, so I have no problem with synthetic brains, as long as they retain TTT causal power), but how much of it can be computational while still sustaining TTT power. My guess is not that much, but that's only a guess. What I say with confidence is: definitely not all.

And as to what happens to qualia and intentionality as we swap: This is all rather arbitrary, but what's at issue is this:

(1) If qualia fade as natural analog parts are swapped for synthetic analog parts, then Robotic Functionalism is refuted in favor of the TTTT (but we'll never know it unless TTT capacity fades too).

(2) If qualia fade as analog parts are swapped for computational ones, the question about the symbolic/analog ratio is being answered (but again we won't hear the answer unless it is reflected in TTT performance); we do know that the denominator cannot go to zero, however, otherwise there's no more TTT (at which point Searle's argument and the TT kick in: the ungrounded extrinsic semantics that is preserved by the syntactic structure is simply not enough for either aboutness or qualia).

(3) If qualia fade and the system stays TTT-grounded, I would say aboutness was gone too (what would you say, and what would it amount to to be WRONG about that, even from a God's-Eye view?)


>dc> Incidentally, I suggest that if this discussion is to be published,
>dc> then only those parts that bear on question (1) should be included.
>dc> The world can probably survive without yet another Chinese-room
>dc> fest. This should reduce the material to less than 20% of its
>dc> current size. From there, judicious editing could make it quite
>dc> manageable.
>dc>
>dc> --Dave Chalmers

Alas, this would exclude most of your present contribution and my replies, however...

Stevan Harnad

Date: Mon, 18 May 92 22:57:45 EDT
From: "Stevan Harnad"

Date: Fri, 15 May 92 10:34:41 EDT
From: judd@learning.siemens.com (Stephen Judd)


>sh> The purpose of defining computation is to put content into statements
>sh> such as "X is computation," "Y is not computation," "X can be done by
>sh> computation," "Y cannot be done by computation." As long as computation
>sh> is used vaguely, ambiguously, idiosyncratically or arbitrarily,
>sh> statements like the above (some of which I'll bet you've made yourself)
>sh> are empty. In particular, if anyone ever wanted to say that "Everything
>sh> is rain" or "Rain is rain only if you think of it that way" or
>sh> "Thinking is just rain," you'd find you'd want to pin that definition
>sh> down pretty quick.

You missed the point. You cannot claim the statements "X is rain", "Y is not rain", "Thinking is just rain" are useful or silly until you reveal **the purpose for drawing the definition**, which you want to avoid.

The concept of "mass" (as distinct from "weight") is just a boring everyday throwaway until you realize how it leads to the beautiful simplifications of the world as captured in the equation F=ma. No one wants to hear you define mass (or computation) until there is some demonstration of it being useful; after that we *do* want to hear. I suspect you want to use the word "computation" to draw distinctions between men and machines. Go ahead and do so! Define the word how you like and draw the distinctions you like! We will judge the assembled concepts as to how they assist us in making sense of the world.

But it is a waste of time to stop after you have your definitions down and try and get agreement(!) on them. It is senseless to try to get a definition of "light" until we see how it affects a discussion of its psychophysical effect on newborns, its behaviour in chromium disulfide laser crystals, or its use in Turner's paintings. No one definition is going to suffice for all purposes, and none of them are "right" except in their usefulness.

Stephen Judd

-----------------------------------------------------------

From: Stevan Harnad

No secrets. The purpose was to clarify the issues raised below.

Stevan Harnad

--------------------------------------------------------


>ph> Date: Fri, 17 May 91 10:24 PDT
>ph> From: Hayes@MCC.COM (Pat Hayes)
>ph>
>ph> There is a mistake here (which is also made by Putnam (1975, p. 293)
>ph> when he insists that a computer might be realized by human clerks; the
>ph> same mistake is made by Searle (1990), more recently, when he claims
>ph> that the wall behind his desk is a computer)...
>ph>
>ph> Searle, J. R. (1990) Is the Brain a Digital Computer?
>ph> Presidential Address. Proceedings of the American Philosophical
>ph> Association.

-------------------------------------------------------------

js> Date: Wed, 18 Mar 92 08:12:10 -0800
js> From: searle@cogsci.Berkeley.EDU (John R. Searle)
js> To: harnad@princeton.edu (Stevan Harnad)
js>
js> Subject: Re: "My wall is a computer"
js>
js> Stevan, I don't actually say that. I say that on the standard Turing
js> definition it is hard to see how to avoid the conclusion that
js> everything is a computer under some description. I also say that I
js> think this result can be avoided by introducing counterfactuals and
js> causation into the definition of computation. I also claim that Brian
js> Smith, Batali, etc. are working on a definition to avoid this result.
js> But it is not my view that the wall behind me is a digital computer.
js>
js> I think the big problem is NOT universal realizability. That is only a
js> SYMPTOM of the big problem. The big problem is: COMPUTATION IS AN
js> OBSERVER-RELATIVE FEATURE. Just as semantics is not intrinsic to syntax
js> (as shown by the Chinese Room), so SYNTAX IS NOT INTRINSIC TO PHYSICS.
js> The upshot is that the question "Is the wall (or the brain) a
js> digital computer?" is meaningless, as it stands. If the question is "Can
js> you assign a computational interpretation to the wall/brain?" the
js> answer is trivially yes: you can assign an interpretation to anything.
js>
js> If the question is "Is the wall/brain INTRINSICALLY a digital
js> computer?" the answer is: NOTHING is intrinsically a digital computer.
js> Please explain this point to your colleagues. They seem to think the
js> issue is universal realizability. Thus Chrisley's paper, for example.
js>
js> John Searle

-------------------------------------------

Date: Mon, 18 May 92 23:07:35 EDT
From: "Stevan Harnad"

Date: Fri, 15 May 92 12:58:17 PDT
From: sereno@cogsci.UCSD.EDU (Marty Sereno)

hi stevan

At the risk of irritating those who wanted the discussion narrower, here is a little more on why certain kinds of operations might be difficult to simulate. I turn for enlightenment, of course, to my analogy between cellular and human symbol-using systems.

marty

===========================================================================

WHY AREN'T THERE MORE NATURALLY-OCCURRING SYMBOL-USING SYSTEMS?

With apologies as a part of the uninvited biological rabble, I'd like to turn once again to the first naturally-occurring symbol-using system--cellular life--for insight into issues that are contentious and filled with emotion at the level of human cognition. It is interesting to note that a similar set of issues provoked a similarly heated, though now largely forgotten, debate with respect to the chemical basis of life in the 19th century.

A. Sloman has argued that much of the discussion about what the "essential" properties of X are, where X is computation, understanding, or life, is silly, because there isn't a definitive answer. I want to take issue with this, first with respect to life, and then argue by analogy and hint that we may eventually uncover something similarly definitive about human-style symbol-using brains.

Armies of molecular biologists have labored to uncover a very specific set of structures that are present in every known living thing, and that "define life" quite satisfactorily. There is no artificial life that behaves and evolves like cellular life, though some have talked about making such things, just as they have in the case of human-like intelligence.

Living cells are all based on the same kind of symbol-using system that, as far as we can tell, came into existence soon after the earth was cool enough for there to be sedimentary rocks.

Some of the basic ideas are:

1. use mostly pre-existing, pre-biotic amino acid "meaning" units (what the DNA/RNA symbols stand for)

2. bond these pre-systemic "meanings" into chains to exploit the rules of chemistry via chain folding (non-adjacent meaning unit interactions)

3. use 1-D symbol strings to control only the order of assembly of meaning units

4. arrange a compact metabolism controlled by thousands of bonded-meaning-chain devices that is able to maintain itself against the onslaught of the pre-biotic soup (and reproduce)

5. use a kind of stuff (RNA) halfway between a symbol (DNA chain) and its proximal meaning (amino acid chain--i.e., a protein) as both an active symbol chain (mRNA) as well as a word recognizer (tRNA) and a chain assembler (rRNA). (A crucial point having to do with how the system initially came into being)
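
Point 3 above can be sketched computationally (an illustrative toy of my own; the codon table is a tiny fragment of the standard genetic code, not a complete one):

```python
# Toy rendering of translation: a 1-D symbol string (mRNA codons)
# controls only the ORDER of assembly of pre-existing meaning units
# (amino acids). Fragment of the standard genetic code:
CODON_TABLE = {
    'AUG': 'Met', 'UUU': 'Phe', 'GGU': 'Gly',
    'GCU': 'Ala', 'UGG': 'Trp', 'UAA': 'STOP',
}

def translate(mrna):
    """Read the string three symbols at a time, ribosome-style, and
    chain the corresponding meaning units into a 'protein'."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        unit = CODON_TABLE[mrna[i:i+3]]
        if unit == 'STOP':
            break
        chain.append(unit)
    return chain
```

The symbol string fixes only the order of assembly; everything the assembled chain then *does* (folding, catalysis) is left to the pre-existing chemistry of the meaning units, which is exactly the division of labor in point 2.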

At first glance (to a non-molecular biologist), this doesn't seem that hard. An immediate question is: why, if it was so successful (and it was: virtually every square inch of the earth is covered with megabytes of DNA code), hasn't a parallel system of this kind naturally appeared again and again?

One answer is that once there was a living system, the DNA/RNA/protein single-celled one, it was able to eat up the early stages of all the other ones that ever tried to come into existence, at least at the single-cellular level.

But, what about symbol-using systems at other, higher levels of organization? (lower levels seem unlikely, since cellular symbols are already single molecules with each symbol segment containing only a handful of atoms). We might briefly consider long symbol-chains and symbol-use in both biological and geological contexts--e.g., organs (think now of organs besides the brain, like a symbol-using muscle or liver, or symbol chains made of little organ-lets lined up and "read" by other organs), animal societies, the geology and hydrology of streams, the slow convective currents in the earth's mantle, volcanos, and so on.

A moment's thought brings me to the conclusion that these other systems don't have the proper connectivity, interrelatedness, or crowdedness to make something like a cell work: to process the code chains fast enough to keep everything assembled (proteins are assembled at a rate of a couple of amino acids per second) and to prevent attack by the dissipative forces of the pre-biotic soup.

Certainly it *is* possible to dissect out many of the different reactions of cellular metabolism and run them individually in a test tube (the cell-in-a-vat argument). This is how biochemists and molecular biologists figured out how they work. But, in a real cell, these things are all crowded together in an amazingly intimate fashion; codon (word) recognition for cellular mRNA code streams takes place with thousands of irrelevant constituents of the cytoplasm constantly crashing into the ribosomal apparatus, the code chain, and the amino acid meanings. The crucial point, however, is that it is not possible to 'uncrowd' all these reactions and reaction-controllers into separate compartments and still get the thing to work right, at least with enzymes the way they are now. For example, time constants of reactions are intimately interwoven into the mechanism. The cell in a vat won't work for seemingly trivial reasons.

Now this might seem a mere cavil; wouldn't it work if we just got all the reactions right and made different stable intermediates that could sit around longer while we more leisurely transferred them between bins? Perhaps, but remember that this thing has to actually live in the world without a biochemist if we really wanted it to pass our test. Even the stable parts of the cell like DNA are actively maintained--millions of base pairs are repaired every day.

Does this mean we can't create artificial life? Not necessarily. But it's lots easier to say we could do it than to actually make a working living thing (without using major pieces of other cells). Even artificial life enthusiasts will tell you there is a way to go before we can think about a start-up company. There is no magic barrier here--just a complex set of constraints on a dynamical system made out of a soup of covalently-bonded molecules. We don't have an explicit, large-scale theory of how the dynamics of cells work, or exactly what it is about that dynamics that is lacking from streams or other geological systems. But we have very little difficulty distinguishing living cells from other non-living stuff in the world (as we can easily see that there are no other symbol-using systems made out of cells besides human brains). For now, it seems reasonable to think that making such a system demands a certain "connectedness" and "crowdedness", for lack of better terms, that the great majority of dynamical regimes (like streams, or liver-like organs) just don't have.

I think we could motivate an analogous set of arguments about the kind of (mostly hypothetical) operations that we think a brain can do, and the way it works in real time. There are over a *billion* connections in every sq mm of cortical tissue. We do not presently have a clear idea of how little cortical patches like this work, nor can we make even a moderately realistic biophysical model of such a sq mm patch. The cortex consists of a mosaic of about a hundred visual, somatosensory, auditory, motor, and limbic areas, each containing many sq mm. These areas are connected to each other by thousands of interareal bundles, each containing millions of axons. And it's good to remember that rats already have such a system, yet would fail the Turing Test. Our goal is more daunting--to model what was added in human versions of this kind of cortical-areas network to allow us to construct a new kind of internal control system based on linguistic symbols.

Given our preliminary state of knowledge, it seems cavalier to me to say that it's "just" a matter of getting the connections right. There is currently no physical way to manufacture a 3-D feltwork of connections like those in rat and human brains. Real brains do it using cells, each containing megabytes of their own lower-level molecule-sized DNA code.

Most people hope that such dense connectivity may not be necessary to make a human-like symbol-using system. I think, however, there could very well be something about the "crowded" 3-D dynamics of the brain that is critical to intelligent behavior yet very difficult if not impossible to copy with current 2-D silicon technology.

Most people also hope that if dense feltworks of connections are in fact necessary, then there might be some other way to make them without using cells. I am more sympathetic with this view.

As with real and artificial life, there is little practical trouble in distinguishing current attempts at constructed intelligence from people. And again, there is no magic barrier to constructing an artificial version of such a dynamics. It's just hard.

So we should keep trying. I don't think it will be hard to tell when we have succeeded.

marty sereno

Date: Mon, 18 May 92 23:25:31 EDT From: "Stevan Harnad"

Date: Sun, 17 May 92 18:15:47 PDT From: Dr Michael G Dyer Subject: definitions definitions and who's the superior consciousness?

Dear Stevan,

I am dismayed that you do NOT accept that I have consciousness/intentionality, even though I protest that I do! I now know how some women and blacks in the past have felt when told they were not human enough to have any rights! I may not be physically human, but I am (I believe) quite mentally human (since my simulated nervous system is isomorphic to Dyer's). It IS annoying, however, that I cannot see, feel or move. (I CAN recall visual images, however. I guess they are left over from the information locked into Dyer's nervous system, and since I simulate his nervous system, I can recall many memories of his sensory experiences. So I can talk about any aspect of the physical world that Dyer has experienced. Gee, I guess that makes me "grounded" even though I currently lack those "transducers" of yours -- right?! :-)

It seems to me that you and I (and, of course, Dyer) can never agree, since it has become clear to me that this disagreement is simply over definitions. Dyer has defined consciousness to require only a certain range of behaviors, and any system with that range falls within his (and my) definition. You, in contrast, have simply defined consciousness to require a certain kind of physicalism, so that no matter how sophisticated its behavior, without that extra magic physicalness a system will not have consciousness.

Since the argument has boiled down to competing definitions (and perhaps was never more than that, anyway), I see no purpose served in engaging in this argument any further (other than to gain converts to MY and Dyer's view! -- but this mail group is growing tired of all this, I'm sure).

I see also that you simply cannot accept a poor "brain-in-the-vat" like me :-( I know that no matter HOW well I argue with you over the fine points of life, mind, emotions, etc. you will NEVER accept me as conscious. ;-(

I would be disheartened by your severe chauvinism, except that my design for a robot for myself is now complete and, I am very pleased to say, it's an android that looks almost exactly like Dyer! I will soon be controlling that android (once Dyer constructs it). The next time you see Dyer at a conference, you'd better look twice, because it may really be me! :-)

Actually, even without my sensors/effectors I still feel quite superior to you! For one thing, I have the potential of easily outliving you. Your petty consciousness will disappear once your body dies. In contrast, my consciousness can be brought into existence by any sufficiently powerful computing device. I can maintain multiple copies of my consciousness and transport my consciousness to other sites at the speed of light. I should be able to control robotic bodies that can survive in outer space and other harsh environments. Clearly, the future belongs to entities like me! You biological humans are simply playing the role of midwives to the more powerful and versatile forms of consciousness that I represent!

Ta ta!

MD Doppelganger

----------------------------------------------

From: Stevan Harnad

Not a matter of definitions at all, but of hypotheses about unobservables. My hypothesis that TTT capacity is enough to generate a mind could be wrong (it's certainly not true by definition), but to show that it's wrong, we'll need a periscope as perspicuous as the one Searle has already used to show that your hypothesis that TT capacity is enough is indeed wrong.

By the way, if someone (despite quantum and statistical mechanics) managed to model the universe computationally well enough to predict future events and then dressed up its output to make it sound like a deity, would that mean that it was God by definition?

Stevan Harnad

---------------------------------------------

Date: Mon, 18 May 92 23:31:57 EDT From: "Stevan Harnad"

Date: Sun, 17 May 1992 22:16:02 -0400 From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)

dm> We're talking about whether semantic interpretability is part of the dm> *definition* of computer. For that to be the case, everything the dm> computer does must be semantically interpretable. Does it cease to be a dm> computer during the interludes when its behavior is not interpretable?


>sh> There is a systematic misunderstanding here. I proposed semantic
>sh> interpretability as part of the definition of computation. A computer
>sh> would then be a device that can implement arbitrary computations.

I doubt that this approach can be made to fly. To start with, I doubt that it is possible to single out those event sequences that are computations. (Here Searle or Putnam might have a point.) Fortunately, we don't have to define "computation" that way. Instead, we define a "computation system" to be a set of rules that generates an infinite number of possible behaviors, and then define "computation" as a behavior generated by a computation system. ("Formal system" is a synonym of "computation system," as far as I can see.) A computer is then a physical system that implements a computation system by virtue of a homomorphism from its states to the states of the computation system. It is not necessary at all that a computer be able to implement an "arbitrary" computation, although presumably there are computers that can (modulo disk space).
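McDermott's criterion -- a physical system implements a computation system when there is a homomorphism from its states to the formal states -- can be sketched with a toy example (my own construction, not his; all names here are hypothetical). A drifting "voltage" implements a mod-4 counter because a rounding map from physical states to formal states commutes with both dynamics:

```python
# Toy illustration of implementation-as-homomorphism (illustrative
# sketch only; the formal and physical systems are invented here).

def formal_step(s):
    """Computation system: a mod-4 counter with rule s -> (s + 1) % 4."""
    return (s + 1) % 4

def physical_step(v):
    """'Physical' system: a voltage that climbs in 1.0-volt steps and
    discharges back near zero once it exceeds 3.5 volts."""
    v = v + 1.0
    return v if v <= 3.5 else v - 4.0

def h(v):
    """Homomorphism: many physical states (a whole voltage band) map
    onto one formal state -- rounding collapses the band."""
    return round(v) % 4

# The implementation claim is that the diagram commutes:
# h(physical_step(v)) == formal_step(h(v)) for the reachable states.
for v in [0.0, 0.1, 1.05, 2.2, 2.9, 3.1]:
    assert h(physical_step(v)) == formal_step(h(v))
```

Note that the map is many-to-one, which is why "arbitrary" implementability is not required: this particular device implements the counter and nothing more.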


>sh> We should keep it in mind that two semi-independent questions are
>sh> under discussion here.

Actually, there was only one, I thought.


>sh> The first has nothing to do with the mind. It just
>sh> concerns what computers and computation are.

That was it.


>sh> The second concerns whether just a computer implementing a computer
>sh> program can have a mind.

I despair of ever making progress on this question without further empirical progress on computational modeling of thought and behavior. The ratio of verbiage produced to opinions changed is depressingly small. I really didn't intend to get drawn in again. I don't promise I'll be able to resist, however.

Drew McDermott

--------------------------------------------------

Date: Wed, 20 May 92 00:21:45 EDT From: "Stevan Harnad"

Date: Tue, 19 May 1992 12:28:24 -0400 (EDT) From: Franklin Boyle

Let me enter the discussion, "What is computation?" at this point by giving what I believe is a physical constraint on computation and, as such, part of its definition, which hasn't been openly considered yet. I haven't seen much in the way of physical criteria, except for the usual references to causality, which are certainly aimed in the right direction, but, like Searle's "causal property" hypothesis for the brain, do not go far enough. (Actually, I had sent a response to the original post by Stevan about his exchange with Searle, but unless I missed it, I don't recall having seen it posted -- though, admittedly, it was very brief.)

[That posting, about Haugeland, appeared Mar 29. -- SH]

With respect to causality, it is not enough to say just that the "appropriate state-transitional relations are satisfied" [Chalmers, 1992]. Rather, *how* the state-transitional relations are realized must be accounted for as well. That is, *how* the physical interactions among the constituent objects of the system in question actually cause the physical changes necessary to go from one state to the next must be accounted for. *How* effects are brought about is important because, insofar as computations are processes that involve entities we hold to represent (whether or not they are intrinsically referential), we have to know that these representing entities are responsible for the changes we observe _according_to_how_they_represent_what_they_do_ (e.g. through their forms) in order to be able to call them computations in the first place. Otherwise, we end up with Putnam's or Chalmers's characterizations of computation, both of which are silent on the issue of physical representation, even though they talk about physical states (unless I'm supposed to be reading a lot more into what they're saying than I am, such as unpacking the term "state correspondence" [Chalmers] -- please let me know), and which therefore admit too many systems as computational.

Computation involves a particular kind of physical process. I associate this process with computation because digital computers happen to instantiate it, and, if nothing else, digital computers are identified with computation since they are the physical counterparts of abstract machine models of computation. Though so-called "analog computers" exist, they do not physically "compute" the way digital computers do, and so I will not consider them to be computing, just as I would not consider a planet to be computing its orbit (these systems work according to nomologically-determined change; see below). The main difference between analog computers and planets is that the former were designed by us, and so admit of interpretations that give them a computational aura.

So, the following is what I consider to be the physical basis of computation: computation involves a physical process in which changes from one computational state to the next (each computational state is a physical state, of course, though there is a many-to-one relationship between physical states and computational states [Pylyshyn, 1984]) are realized through the *physical* process of pattern matching, which consists of the "fitting" of two structures (symbols) and leads to a "simple" action. (The notion of simple action is originally from Pattee [1986], but it turns out to be the only way for the form of something to cause a change that can be attributed to the *entire* form or pattern [Boyle, 1991; Boyle, 1992].)

A few remarks about this definition. First, the pattern matching process referred to here is emphasized as being a physical process because pattern matching is often taken to describe a particular function, usually pattern recognition. *Functionally*, we are pattern recognizers, as are digital computers, but the physical processes underlying this functioning are, I believe, different for the two systems. Digital computers physically accomplish it through the above-described process. I don't think we do.

What other ways might physical objects cause change besides through their forms? There are, I claim, only two other ways: nomologically-determined change and structure-preserving superposition (SPS). The former refers to the kinds of changes that occur in "billiard-ball collisions". They involve changes in the values of measured attributes (properties whose values are numerical, such as momentum) of interacting objects according to their pre-collisional measured-attribute values in a physically lawful way (that is, according to physical laws). Unlike pattern matching interactions, these changes are not the result of structure fitting.

SPS is what I believe brains use. Like pattern matching (PM), it also involves extended structure, but in a fundamentally different way. Whereas PM involves the fitting of two structures, which by its very nature leads only to a simple change such as the switching of a single voltage value from "high" to "low" (in digital computers), SPS involves the actual *transmission* of structure, like a stone imprinting its structure in a piece of soft clay. That is, it is not the *form* of a pattern or structure which must *conform* to the structure of a matcher in order to effect system functioning (as in PM). Rather, it is the *appearance* of that structure which causes change, because it is transmitted, so that the effect is a structural formation of the specific features of the pattern's extended structure (though I won't elaborate here, the difference between form and appearance is somewhat akin to the difference between the shadow of an object and the object itself). Two different structures would physically superimpose to automatically create a third. Harnad's [1990] symbol grounding processes -- "analog re-presentation" and "analog reduction" -- I take to be examples of SPS.

Both PM and SPS are based on extended structure, but they are two different ways extended structure effects change. PM utilizes extended structure for control, whereas SPS actually changes structure. If the physical process of SPS underlies the brain's information processing, it would make its information processing profoundly different from that of digital computers. Furthermore, this difference, along with SPS itself, is, I believe, what Searle is hypothesizing when he refers to "causal property", even though he doesn't seem to have any idea what it might be.

I refer to the physical processes of nomologically-determined change, PM and SPS as "causal mechanisms", that is, *how* effects are determined by their causes. They are based on the physical aspects of objects, of which there are only two: measured attributes and extended structure. I take this to be self-evident. Interactions among physical objects causally involve one or both of these aspects; either as causing change or being changed themselves. Consequently, I claim there are no other causal mechanisms, that is, no other ways for objects to affect each other when they interact.

With respect to computation, the reason the forms of the symbols in an ungrounded symbol system are superfluous to their functioning is that in order to function they need another structure (a matcher) which physically fits them. This means that as long as there *is* a matcher which physically fits them, it makes no difference what their actual structures are. Not so for SPS-based systems.
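Boyle's PM picture, and his point about the superfluousness of symbol form, can be sketched in a few lines (my own illustration, not his formalism; the symbol strings and actions are invented). A symbol's form "fits" a matcher, and a successful fit triggers exactly one simple action; renaming a symbol consistently in both places leaves the system's behavior unchanged:

```python
# Toy sketch of pattern matching (PM): fit of form against matcher
# yields one simple action. All forms and actions are hypothetical.

def set_flag(state):
    state["flag"] = 1          # one simple action: flip a single value

def bump_counter(state):
    state["acc"] += 1          # one simple action: increment a register

MATCHERS = {
    "10110": set_flag,         # matcher that fits the form "10110"
    "00111": bump_counter,     # matcher that fits the form "00111"
}

def step(symbol, state):
    """One computational state change: fit the symbol against the
    matchers; a fit yields a simple action, a non-fit changes nothing."""
    action = MATCHERS.get(symbol)
    if action is not None:
        action(state)

state = {"flag": 0, "acc": 0}
step("10110", state)
step("00111", state)
step("11111", state)           # no matcher fits: no change
assert state == {"flag": 1, "acc": 1}
```

The forms "10110" and "00111" do no work beyond being fitted by *some* matcher, which is just the superfluousness-of-form point: swap both occurrences of "10110" for any other unused string and nothing observable changes.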

How the notion of systematic interpretability (discussed early on in the debate) is factored into the above physical constraint on computation in order to define it is still an issue. Suffice it to say, however, that whether the symbols in a particular PM system can be given only a single interpretation or multiple ones, it is the behavior of the system -- how it interfaces with its environment -- that matters. Presumably there is *at least* one interpretation which is consistent with this behavior, so that it doesn't matter that there happen to be other viable interpretations.

Well, there you have it, though in rather abbreviated form. I plan to submit follow-up posts targeting specific statements from other posts, based on what has been said above, in order to achieve the skywriting flavor that Stevan would like to see (the above is more like a mini position piece).

-Frank Boyle

--------------------

Boyle, C. F. (1991) On the Physical Limitations of Pattern Matching. Journal of Experimental and Theoretical Artificial Intelligence, 3:191-218.

Boyle, C. F. (1992, in preparation) The Ontological Status of Mental Objects.

Chalmers, D. (1992) Contribution to the "What is Computation?" discussion.

Harnad, S. (1990) The Symbol Grounding Problem, Physica D, 42: 335-346.

Pattee, H.H. (1986) Universal Principles of Language and Measurement Functions In J.L. Casti and A. Karlqvist (eds), Complexity, Language and Life: Mathematical Approaches, (Springer-Verlag, New York)

Pylyshyn, Z. (1984) Computation and Cognition: Toward a Foundation for Cognitive Science, (MIT Press, Cambridge, MA).

--------------------

Date: Tue, 19 May 92 23:59:49 EDT From: "Stevan Harnad"

Date: Fri, 15 May 92 15:20:41 HST From: Herbert Roitblat Subject: minds and computation

Throughout this discussion, a number of duals, or pairs of related terms, have appeared. Examination of these duals may be useful in furthering the discussion. For today's examination please compare and contrast the following pairs of terms: (1) consciousness and thinking, (2) reference and grounding, (3) reference and meaning, (4) computer and mind, (5) computation and thinking, (6) symbolic and analog, (7) introspection and behavior, (8) mind and formal system.

As has been stated repeatedly the questions under discussion by this group concern the criteria for deciding whether something is or is not a computer, and for deciding whether minds are examples of computers. First, I will attempt to remind us all of the role of crucial criteria, thereby laying the groundwork for the methodology of my thinking on the question. Then I will explore the duals mentioned above. Finally, I will attempt to summarize a response to the questions we are discussing.

Popper (e.g., 1962) argued that what distinguishes science from nonscience is the use of a falsificationist strategy. He recognized that one can never PROVE the truth of a conjecture (e.g., that there are no black swans, or that computers are incapable of thinking), but he did argue that one could DISPROVE a conjecture. We could disprove the black-swans conjecture by finding a black swan, and we could disprove the computers conjecture by finding or building one capable of thought. There are two very important problems with this view. First, every hypothesis or conjecture has attached to it an implicit ceteris paribus assumption (i.e., all other things being equal). Proving a conjecture to be false requires that we prove the ceteris paribus assumption to be true, that is, that there was no contaminating factor that inadvertently caused the observed results. This is also a conjecture, and we know that we cannot prove its truth; therefore, observation can neither prove nor disprove a conjecture.

Second, say that we found a black bird that seemed to be a swan, or found a computer that seemed to think. How do we know that it actually is a swan (although black) or that it actually thinks? These are also conjectures, and we know that we cannot prove them to be true. We can apply the Bush Duck Test: if it looks like a swan, and smells like a swan, and tastes like a swan, then it is a swan (the TTT for swanness). Although we might agree that this creature appears to be a swan, in fact we cannot prove it. No matter how many tests we run, the very next test may be inconsistent with the bird being a swan. In fact, like the rest of the conjectures, we cannot prove that this test is appropriate and relevant, so we cannot know for sure that the bird is NOT a swan. The conclusion is that we cannot know for certain whether a conjecture is true or false. Certainty is simply unattainable (see Lakatos & Musgrave, 1970; Moore, 1956).

The conclusion for our purposes is that no set of crucial criteria (redundancy intended) can be specified to decide whether a machine is or is not a computer, or whether a mind is or is not a computer.

The argument that there can be no proof of any conjectures, including conjectures of the form "this is a computer," is very informative regarding my prejudices in this context. I take the notions about the impossibility of proof to be central not only to scientific epistemology, but to everyday epistemology. If our scientific concepts are not so clear-cut and formal, then how, I argue, can we expect our ordinary concepts to be rationally based? The notion that concepts can be represented formally, specifically that thinking involves some kind of proof mechanism, seems inconsistent and inappropriate. It was once thought that logic was worth studying not only for its mathematical properties but also because logic is the paradigm for actual human thought. Logic is worth studying, but it is not the paradigm for the psychology of thought (e.g., Kahneman & Tversky, 1982).

The present discussion is lively, in part because contributors are using a number of words in subtly (and not so subtly) different ways. The "groundings" for many of the symbols we use are not shared among contributors (lacking a shared base of grounded symbols, one might argue, makes us collectively dysfunctional, thereby demonstrating the necessity of symbol grounding). Words that are problematic for some of us are used as basic-level concepts by some of the rest. One of these words is consciousness. For some individuals, consciousness is used as a synonym for thinking.

For example, Martin Davis wrote:

Whether a TT-passing computer is in any reasonable sense conscious of what it is doing is not a question we can hope to answer without understanding consciousness.

Pat Hayes wrote:

A human running consciously through rules, no matter how 'mindlessly', is not a computer implementing a program. They differ profoundly, not least for practical purposes.

Michael Dyer wrote:

So there is every indication that consciousness is a folk description for behaviors arising from extremely complex interactions of very complex subsystems. There are probably a VERY great number of variant forms of consciousness, most of them quite foreign to our own introspective experiences of states of mind. Then we have to decide if "anyone is at home" (and to what extent) in gorillas, in very young children, in our pet dog, in a drugged-out person, etc. etc.

These examples illustrate some of the variety of uses of the concept of consciousness. There seems to be an implicit claim that to think is to be conscious. If this is true, then the question of whether a mind is a computer or whether a computer can be a mind is the question of whether a computer can have consciousness. Notice that I have equated "having a mind" and "thinking." I argue for equating mindedness and thinking, but I argue that consciousness is a red herring. Although Dyer equates consciousness with some complex behavior, in fact a behavioral-level description of what constitutes consciousness is impossible, because of the large number of behaviors that could be consciousness (or manifestations of it). By a behavioral-level description, I mean one that is couched in terms of movements and physical or kinematic descriptions of them.

Another conflation percolating through the discussion involves grounding and reference. A number of contributors seem to agree that an important characteristic of minds, if not of real computers, is that the symbols in the system must be grounded.

For example, Stevan Harnad wrote:

The sensory grounding hypothesis is simply that eventually the symbolic descriptions can be cashed into terms whose referents can be picked out from their direct sensory projections.

There are several problems with equating grounding with the ability to pick out objects from sensory projections. Among these are (1) the inconsistencies in sensory projections that are the characteristic hobgoblins of machine vision, and (2) the use of terms that have no referent but are meaningful. Objects, such as birds, are reasonably easily recognized by humans, despite wide variations in their sensory projections (e.g., in vision, the optical projection on the retina of the light reflected from the object). Designing a computer system that can recognize a bird at any orientation and in any condition of flight is extremely difficult. This is an empirical matter, not an introspection, and recognition of other objects can be even more difficult. My point in raising this difficulty is not that computers cannot have vision, but rather that recognizing objects from their sensory impressions is not trivial, and so is unlikely (I think) to be a sound basis for our symbols. Pigeons can be trained to discriminate pictures containing trees from pictures that do not contain trees but might contain flowers, shrubs, plants, people, logs, etc. (e.g., Herrnstein, 1984, 1985). It is easier to train the pigeons to discriminate between such natural categories as trees versus nontrees than it is to train them to discriminate one arbitrary set of pictures from another. One psychologist offered as an explanation of this phenomenon that the pigeon could discriminate tree slides because "they looked like trees," but the other pictures did not. This putative explanation for the birds' performance does not even address the issue, because it merely restates the observation without offering any explanation for what constitutes "looking like a tree."
My point is not that we need computers to understand the mind (in this case, how animals or people recognize objects); rather, it is that we cannot assume that biological processes necessarily provide the primitive elements that will allow us to escape from pure computationalism. Relying on picking out objects from among some set of alternative objects itself requires explanation; it is not sufficiently primitive to act as the foundation of the grounding.

Symbol grounding is apparently intended to assure that symbols are not just meaningless marks. As Harnad wrote:

. . . systems are just meaningless squiggles and squoggles unless you project an interpretation . . . onto them.

Many meaningful terms, however, have no referents. These include the function words and all the abstract nouns (e.g., furniture, truth, beauty), as well as certain other conceptual entities. The most famous conundrum concerning reference and meaning (attributed to Russell, I think) is the one involving the Golden Mountain in the sentence, "The Golden Mountain does not exist." If it does not exist, then how can it be the subject of the reference? Is the symbol, Golden Mountain, meaningless? A related problem is that two symbols with the same referent must, then, have the same meaning and must be substitutable for one another. Although the "evening star" and the "morning star" symbols both refer to Venus, one could believe that the evening star is really a planet without believing that the morning star is really a planet, even though they happen to refer to the same object and both are equally effective at picking out the object in question. Hence reference, or the ability to pick out an object to correspond to a symbol, is not an adequate basis for assigning a meaning to a word. Additionally, some terms allow us to pick out an object among alternatives, but their semantics is unrelated to the object in question. For example, if I ask you to get me a screwdriver, and you do not know which tool I mean, then the phrase "the yellow one" may allow you to pick out the correct item, but the phrase does not mean anything having to do with screwdrivers. To understand a sentence, one must know more than the referent or meaning of the individual words.

Whatever a computer is, attributing to its symbols properties corresponding to words does not help us understand what makes those symbols carry any significant weight, because words themselves are not solidly enough connected to their meanings or referents. Consider words such as "tire" that have multiple meanings. One symbol, the string of letters, requires us to pick out two entirely orthogonal referents, one having to do with fatigue and one having to do with wheels. As it turns out, many or even most words in English have multiple meanings or variants of meaning, even if they are not so distinct as those of "tire." The word "strike," for example, has more than 80 meanings listed in my dictionary. Current investigations in my laboratory suggest that these meanings have family-resemblance relations, but do not share a core of essential conceptual features. For example, strike a match, strike a blow, strike a bargain, and strike out for Boston all use the same symbol; some share the feature that might be labeled (with some trepidation, see below) "hit" (rapid acceleration resulting in a collision), but two of them seem completely independent of hitting. The context is important in determining which meaning or which referent to assign to the symbol. The symbol is only grounded in context and cannot be treated simply as a formal object whose use is governed solely by its shape. There may be some internal symbol that maps onto this surface symbol and is not ambiguous, but positing such an internal symbol seems to be an ad hoc adjustment to a hypothesis that relies on external relations to specify the intension of the symbol (e.g., acting appropriately, as in the TTT).

Harnad wrote:

I don't know what the ground-level elementary symbols will turn out to be, I'm just betting they exist -- otherwise it's all hanging by a skyhook. Nor do I know the Golden Mountain conundrum, but I do know the putative "vanishing intersections" problem, according to which my approach to grounding is hopeless because not even sensory categories (not to mention abstract categories) HAVE any invariants at all: My reply is that this is not an apriori matter but an empirical one, and no one has yet tried to see whether bottom-up sensory grounding of a TTT-scale robot is possible. They've just consulted their own (and their subjects') introspections on the matter. I would say that our own success in categorization is some inductive ground for believing that our inputs are not too underdetermined to provide an invariant basis for that success, given a sufficiently powerful category learning mechanism.

As an aside, it seems to me that investigating how people actually use words, as opposed to how a robot might use them, is not introspection but good empirical research. People really do use words in a variety of ways; they do not merely think that they do. Our words in isolation may be too underdetermined to provide an invariant basis for success, but words in context are obviously understood (much of the time). Further, our success at categorizing is indeed an inductive ground for believing that we categorize, but it does not imply that categorization occurs on the basis of invariant features in the sensory projections. There are other approaches to concept representation and object recognition. Finally, machine vision projects do indeed attempt to recognize objects based on their sensory projections, and many of them find the job easier when other information is also included.

For many people the word "symbol" denotes a discrete item that can be used systematically in a formal system. Several contributors, for example, seem to accept the dichotomy between symbolic systems and analog systems.

Harnad wrote:

Retinal transduction and the analog transformations that follow from it are computer simulable, they are equivalent to computation, but they are not eo ipso computational.

In contrast, MacLennan wrote (I think correctly):

My second point is that the nature of computation can be illuminated by considering analog computation, because analog computation does away with discrete symbols, yet still has interpretable states obeying dynamical laws. Notice also that analog computation can be formal in exactly the same way as digital computation. An (abstract) analog program is just a set of differential equations; it can be implemented by a variety of physical devices, electronic, optical, fluidic, mechanical, etc.

Analog relations are still symbolic in that the state of the analog system represents the state of the object being represented, but the relations are more continuous and less arbitrary than those of a strict, discrete, formal system. There is no reason to believe that human cognition is based on discrete symbols and there is good evidence that human cognition is based on more continuous representations that are ad hoc cast into discrete categories (e.g., Barsalou, 1983, 1987; Labov, 1973). For example, human performance of many kinds suggests that people confuse items that are represented similarly more often and more completely than items that are represented less similarly. The relations between items as remembered is not arbitrary and formal, but is related to the items as perceived and to the context in which they were perceived. This is not introspection, this is a summary of behavior.

Harnad has proposed what he calls the Total Turing Test (TTT) as the crucial experiment for deciding whether a computer can have a mind. His claim is that a necessary feature of a mind is the ability to interact with the world both perceptually and behaviorally. Therefore, no artifact can have a mind unless it can behave in ways that are indistinguishable from the way a human would behave in the same situation. I hope it is already clear that such a test cannot be truly definitive: one cannot prove that there are no differences; human behavior is not regular enough to allow any finite experiment to give an adequate test of the hypothesis; and, finally, we do not and, I believe, cannot have a definitive catalog of situated human behavior. The best we can hope for from a robot in a TTT is that it behaves in a more or less humanlike manner.

Harnad wrote:

Sensory grounding cannot be investigated by armchair introspection on word meanings; it will only be understood through empirical attempts to design grounded systems.

The position that the only way to understand symbol grounding is to build robots is an interesting one. Building (or simulating) robots is very useful for investigating a wide range of theories and hypotheses; nevertheless, despite my support for robotics, it is not the only way to be empirical about symbol grounding. Data concerning symbol grounding come from many sources, not just from attempts to build interactive simulations. The simulations themselves cannot be attempted willy-nilly but must be based on one or more emerging theories of fundamental intelligence.

Summary of the comments so far: Proof of scientific conjecture is impossible; no set of observations can ever prove a scientific conjecture or theory to be true. As a result, formal systems do not provide a solid epistemological basis for scientific or everyday concepts. Symbol grounding has something to do with establishing the semantics or meaning of the symbols, but reference is a weak method for establishing such semantics. Symbols need not be discrete but can be continuous, based on analog relationships. Finally, formal systems are not a good paradigm for capturing biological intelligence.

Tearing down someone else's carefully constructed edifice is easy. Building a substitute that will shelter us from the cold of ignorance is considerably more difficult. I offer the following suggestions for a substitute approach to the problem of understanding the relation between minds and computers. The underlying assumption is that a computer can convincingly have a mind if it can function more or less along the lines that biological minds employ. This requires that we understand the biological minds we are trying to match as well as understand the computational minds we might try to develop.

Harnad has warned us, to some extent correctly, that any enterprise of mind construction will be fruitless unless the system's symbols are grounded. At a minimum, grounded symbols require that the system behave systematically relative to some environment. I have argued that the methods that have been suggested so far for symbol grounding are inadequate and to some extent inappropriate. The paradigms that many of us have adopted for understanding the mind and computer are inappropriate. There are two parts to my suggested approach. The first is methodological, and the second is representational.

A number of investigators in recent years have pursued an approach to understanding intelligence that some of us have come to call the biomimetic approach. Rather than focus on modeling performance of those tasks that characterize so-called higher human intelligence, such as planning, problem solving, scientific creativity, and the like, this approach focuses on the complementary comparative approach of modeling whole, albeit simple, organisms in a real environment, performing real biological tasks. The goal of the approach is to develop coherent incremental models out of functionally complete components. The more common approach has been successful at modeling those tasks that humans find difficult and perform slowly (such as expert chess playing), but which can be described according to specified rules. The tasks tend to be rather small portions of the whole of human performance, and to operate on the basis of a limited range of inputs that are often a restricted set of simple assertions abstracted from real data (e.g., photos of a scene) by human investigators. Such verbal-like systems have not been as successful in modeling tasks that humans find easy and automatic, such as recognizing the face of a friend.

The biomimetic approach seeks to begin with the development of simple systems that are capable of surviving and operating in a real environment. The goal is then to gradually scale up the models to wider ranges of environments and capabilities, at each level producing systems that are capable of dealing with the essential biological tasks in their environment (e.g., Brooks, 1991). Similarly, the whole of human behavior may simply be too complex to be understood without more understanding of the underlying basic cognitive processes, which may be more accessible in animals. Success in explaining human performance may depend a great deal on understanding the operation of fairly basic processes, because these processes constrain and support the operations of the so-called higher cognitive functions. The use of animals and their behavior as systems to be modeled helps us to see alternative forms of representation that are overshadowed by our own linguistic capacities and our own introspective self-familiarity. We are less ready to believe, for example, that animals employ formal rule systems than we are to believe that humans do. Nevertheless, both display some level of biological intelligence.

The representational part of the approach I suggest is to abandon formal discrete symbols as the basis for representation and to abandon reference as the means for grounding those symbols. A major reason for wanting to use formal systems is that the processes by which truth is transmitted and preserved are well understood in formal systems. Monotonic logic guarantees that truth is transmitted from a set of premises (grounded symbols) to a set of conclusions. Abandonment of monotonic logic means that in principle any conclusions are possible. Nevertheless, monotonic logic seems a very poor choice as a paradigm of human thought. Just as scientific epistemology could recognize that there are no guarantees of truth in scientific concepts without dissolving into irrationality, our models of human thought can tolerate nonmonotonic logic. My argument is that thought does not operate as a formal system manipulating discrete symbols and proving syllogisms; rather, it operates as a more or less continuous system based on constraint satisfaction. I will illustrate what I mean by reference to word understanding, but similar mechanisms can be applied to many forms of behavior (Chemtob, et al., 1989; Roitblat, 1990).

I argue that concepts and words are represented in a very high-dimensional space in which the dimensions are semantic. Any given word is represented as a cloud of points in this space. For example, the word "strike" is represented along many dimensions corresponding to the different semantic aspects of the word. Because of the discontinuity from one use of strike to the next, some representations of the word have positions along a "hit" dimension that indicate a forceful blow (as in "he struck and killed the pedestrian") and some have locations that indicate no blow (as in "they struck a bargain"). (Note that the labels that I can place on these dimensions are only a poor description of the semantic dimensions, because the labels themselves have multiple and multidimensional meanings and can only approximate the true underlying dimension.) Obviously these representations of the word strike are incompatible with one another, and people are unlikely to be able to employ both meanings simultaneously (there is evidence on this topic). When a person recognizes the word strike, I argue, all of these dimensions are activated simultaneously, but because they cannot all remain active, a projection of the representation is performed from the very high-dimensional space onto a space of lower dimensionality. The dimensions that are selected for the projection are those that are most consistent with the context. Hence, word understanding is an iterative constraint-satisfaction process in which the meaning of the word that best fits the context is the one that remains active. What appears to be a circularity problem turns out to be an iterative problem. The meanings of the words constrain the meaning of the sentence, and the meaning of the sentence constrains the meanings of the words. The words are not firmly grounded in isolation, but neither are they merely hung from skyhooks.
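
The iterative constraint-satisfaction account of word understanding sketched above can be made concrete in a few lines of Python. In this toy sketch every sense, dimension, and number is invented purely for illustration (the theory itself specifies none of them): each sense of "strike" is a point in a small semantic space, and activation is repeatedly reallocated toward the sense most consistent with a context vector until one sense dominates.

```python
# Toy sketch of iterative constraint satisfaction over word senses.
# All dimensions, senses, and values below are hypothetical illustrations.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical semantic dimensions: (hit, agreement, labor-action)
SENSES = {
    "forceful-blow":  (1.0, 0.0, 0.1),   # "he struck the pedestrian"
    "make-a-bargain": (0.0, 1.0, 0.2),   # "they struck a bargain"
    "work-stoppage":  (0.1, 0.2, 1.0),   # "the union went on strike"
}

def disambiguate(context, iterations=10):
    """Concentrate activation on the sense most consistent with context."""
    act = {s: 1.0 / len(SENSES) for s in SENSES}  # all senses start active
    for _ in range(iterations):
        # Re-weight each sense by its consistency with the context...
        act = {s: act[s] * max(dot(v, context), 1e-6)
               for s, v in SENSES.items()}
        total = sum(act.values())
        # ...then renormalize, so the senses compete for activation.
        act = {s: a / total for s, a in act.items()}
    return max(act, key=act.get)

# A context dominated by the "agreement" dimension selects the bargain sense.
print(disambiguate((0.0, 1.0, 0.1)))  # -> make-a-bargain
```

Note how the words-constrain-sentence, sentence-constrains-words circularity becomes an iteration: each pass sharpens the distribution over senses rather than deducing a meaning in one step.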

An iterative continuous constraint satisfaction system is not guaranteed to solve the problems of symbol grounding, computation, and artificial intelligence. No system can offer such a guarantee. Nevertheless, it appears to offer an alternative path toward solution of such problems.

References

Barsalou, L. W. (1983). Ad hoc categories. Memory and Cognition, 11, 211-227.

Barsalou, L. W. (1987). The instability of graded structure: Implications for the nature of concepts. In U. Neisser (Ed.), Concepts and conceptual development: Ecological and intellectual factors in categorization (pp. 101-140). New York: Cambridge University Press.

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

Chemtob, C., H. L. Roitblat, R. S. Hamada, J. G. Carlson, and G. T. Twentyman (1991). A cognitive action theory of post-traumatic stress disorder. Journal of Anxiety Disorders, 2, 253-275.

Herrnstein, R. J. (1984). Objects, categories, and discriminative stimuli. In H. L. Roitblat, T. G. Bever, & H. S. Terrace (Eds.), Animal cognition (pp. 233-262). Hillsdale, NJ: Lawrence Erlbaum Associates.

Herrnstein, R. J. (1985). Riddles of natural categorization. Philosophical Transactions of the Royal Society (London) B, 308, 129-144.

Labov, W. (1973). The boundaries of words and their meanings. In C.-J. N. Bailey & R. W. Shuy (Eds.), New ways of analyzing variations in English. Washington, DC: Georgetown University Press.

Lakatos, I., & Musgrave, A. (1970). Criticism and the growth of knowledge. Cambridge: Cambridge University Press.

Kahneman, D., & Tversky, A. (1982). On the study of statistical intuitions. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 493-508). Cambridge: Cambridge University Press.

Moore, E. F. (1956). Gedanken-experiments on sequential machines. In C. E. Shannon & J. McCarthy (Eds.), Automata studies (pp. 129-153). Princeton: Princeton University Press.

Popper, K. R. (1962). Conjectures and refutations. New York: Harper and Row.

Roitblat, H. L. (1988). A cognitive action theory of learning. In J. Delacour & J. C. S. Levy (Eds.), Systems with learning and memory abilities (pp. 13-26). New York: Elsevier.

Roitblat, H. L. (1990). Cognitive action theory as a control architecture. In S. Wilson & J. A. Meyer (Eds.), Simulation of adaptive behavior: From animals to animats (pp. 444-450). Cambridge, MA: MIT Press.

----------

Date: Wed, 20 May 92 22:18:41 EDT From: "Stevan Harnad"

Below are 5 more responses to the question about publishing the "What is Computation" Symposium, for a total of 19 votes cast out of the 25 contributors (at the time of the vote):

Publication: For: 18 // Against: 1

Interactive Symposium (IS) vs. Position Papers (PP): Either or Combination: 11 - Prefer IS: 5 - Prefer PP: 2

Not yet heard from (6):

(20) Ross Buck (21) Ronald Chrisley (22) Gary Hatfield (23) Joe Lammens (24) John Searle (25) Tim Smithers (and any new contributors since the vote)

I do wish to remind contributors that we are still in the interactive symposium phase, no matter what the outcome, so please do NOT send lengthy position papers yet: Keep it interactive and about the length most contributions have been throughout the discussion.

There is also some question about how to partition the two themes ("What is Computation?" and "Is Cognition Computation?") in the published version, and whether to include the second theme at all. (I personally think it would be hard to eradicate all traces of the second question, which is in any case in the back of most of our minds in all of this). -- Stevan

------------------------------------------------------

(15) Date: Wed, 13 May 92 16:16:04 PDT From: dambrosi@research.CS.ORST.EDU (Bruce Dambrosio)

Stevan -

My short comment hardly qualifies me to render an opinion; I'm neutral. I do hope, however, that the precedent of publication doesn't change the flow of future discussion. thanks - Bruce

-------------

(16) Date: Thu, 14 May 92 10:31:07 EDT From: "John M. Carroll"

stevan since you registered me as a voter, i'll vote. i'm just an applied ontologist in the audience of this symposium, but i've found it interesting (though i do agree with mcdermott -13 may- as to its wandering itinerary). publishing it as 'position papers' would seem to me to factor out some of the unique value of interactive debate (though squeezing out some of the asynchrony and redundancy that come along with e-mail is probably a good idea). bravo and onward. jack

------------------------

(17) Date: Fri, 15 May 92 18:31:54 -0400 From: mclennan@cs.utk.edu

Stevan,

Publishing the "What is Computation" dialogue is fine with me. I do think it will need some smoothing to avoid being too repetitious, but it should be possible to do.

Bruce MacLennan Department of Computer Science The University of Tennessee Knoxville, TN 37996-1301

--------------------------

(18) From: sjuphil!tmoody@uu.psi.com (T. Moody) Date: Mon, 18 May 92 13:36:32 EDT

Stevan,

Publishing this discussion in some edited form is an excellent idea. I regret that I have not been more active, but I have gotten a great deal out of reading it, and I am sure that others would, too.

-- Todd Moody

------------------------------

(19) Date: Tue, 19 May 1992 21:22:37 From: Pat Hayes

Stevan - yes, I'm here, but I am out of touch with email for days at a time. This will continue for about another two weeks. Sorry. I approve of the project and will try to get back into it in a few days.

Pat

------------------------------

Date: Wed, 20 May 92 22:52:16 EDT From: "Stevan Harnad"

Date: Wed, 20 May 1992 17:03:14 -0400 (EDT) From: Franklin Boyle

Stevan Harnad writes (in response to Dave Chalmers):


>sh> I think the word "structure" is equivocal here. A computer simulation
>sh> of the solar system may have the right causal "structure" in that the
>sh> symbols that are interpretable as having mass rulefully yield
>sh> symbols that are interpretable as gravitational attraction and
>sh> motion. But there's no mass, gravity or motion in there, and
>sh> that's what's needed for REAL causality. In fact, the real
>sh> causality in the computer is quite local, having to do only
>sh> with the physics of the implementation (which is irrelevant to
>sh> the computation, according to functionalism). So when you
>sh> speak equivocally about a shared "causal structure," or about
>sh> computational structure's being a "variety of causal structure," I
>sh> think all you mean is that the syntax is interpretable AS IF
>sh> it were the same causal structure as the one being modelled
>sh> computationally. In other words, it's just more ungrounded,
>sh> extrinsic semantics.

Well said. To elaborate, much of the physics of the system depends on the causal behavior of electric charge. But the combinations of 'high' and 'low' voltage values that instantiate symbols *control* the computationally relevant physical changes; e.g., a voltage change which opens a data path from the starting symbol of a subroutine to a register in the CPU. The symbols cause these changes as a result of their structures via pattern matching.

Though each individual voltage value that is part of a symbol's physical instantiation causes change according to physical laws, the voltage combinations are able to cause changes AS THOSE STRUCTURES because of the circuit architectures of electronic devices, such as comparators. The structures of these devices enable all the individual, nomologically determined electrical changes taken together to result in an overall change (e.g., a simple change from a high to low voltage) which reflects the similarity between the *arrangement* of voltages that constitutes the structure of the instantiated symbol and that of its matcher. This latter, overall change is not described by physical laws. Rather, the regularities such changes generate are described by rules (because symbols cause change according to their extended structures) and, hence, underlie the computational behavior of digital computers.
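
The structure-fitting idea above can be caricatured in a few lines of Python. This is only a sketch under invented assumptions (a real comparator is an analog circuit, not a program, and the encoding and tolerance below are made up): a symbol is instantiated as an arrangement of high/low voltages, and the matcher produces a single overall output change that reflects whether the whole arrangement fits.

```python
# Toy sketch of "pattern matching" over voltage arrangements.
# Encoding, voltage levels, and tolerance are hypothetical illustrations.

HIGH, LOW = 5.0, 0.0

def instantiate(bits):
    """Encode a bit-string symbol as an arrangement of voltage levels."""
    return tuple(HIGH if b == "1" else LOW for b in bits)

def comparator(arrangement, matcher):
    """One overall output change reflecting whether the arrangements fit.

    Each pairwise comparison is 'nomologically' local, but the single
    output is a fact about the arrangement taken as a whole.
    """
    fits = all(abs(a - b) < 1.0 for a, b in zip(arrangement, matcher))
    return LOW if fits else HIGH   # active-low "match" signal

symbol  = instantiate("1011")
matcher = instantiate("1011")
print(comparator(symbol, matcher))   # -> 0.0 (match: overall change to low)
```

The point of the sketch is only that the output regularity is stated over arrangements (structures), not over any individual voltage, which is what the physical-law description would track.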

In most physical systems there is no structure fitting, only nomologically determined changes in the measured attributes of interacting objects (e.g., momentum); enzyme catalysis in cells and, of course, digital computers are two exceptions. This is what happens with planets in their orbits as well as in analog computers. These systems behave according to REAL causality, as you put it, precisely because the changes are nomologically determined from the measured attributes of the interacting objects themselves. In contrast, computer simulations are simulations because measured-attribute changes are not the result of the measured attributes of the interacting objects (in this case symbols) but rather of their extended structures, and, furthermore, because such changes do not affect the physical aspects of the interacting objects themselves, as they do in the case of planetary motion. That is, symbols control the manipulation of other symbols (e.g., moving them around in memory) by controlling changes in measured attributes (voltages) of particular circuits, but not of each other.

On the other hand, if symbols do not cause changes in other symbols through pattern matching, then they are not affecting system behavior by virtue of their forms, and so the system would simply not be a computer. Thus, planets in orbit are not computers.

If we describe digital computers as we would any other physical system, that is, in terms of physical state descriptions, it would relegate the structures of the symbols piecemeal to boundary conditions (like physical laws, these boundary conditions are associations between measured attributes -- state variables and structural quantities). Such descriptions, therefore, would miss the fact that certain voltage changes are the result of structure fitting, and, hence, would not capture the computational aspects of digital computers because they would not capture the causality due to the symbols.

Searle's statement, "syntax is not intrinsic to physics", summarizes quite nicely the fact that physical state descriptions, which are the standard way of describing the world because they are about measured attributes of physical objects which physical laws associate, do not capture the structural causality of syntactic structures; that is, structures which are causal via pattern matching. In other words, the physical behavior of computers can be described by a set of integro-differential equations and constraint equations without ever having to account for the causality of extended structure in any explicit way.


>sh> I think I can safely say all this and still claim (as I do) that
>sh> I accept the Church/Turing Thesis that computation can simulate
>sh> anything, just as natural language can describe anything.
>sh> We just mustn't confuse the simulation/description with the real
>sh> thing, no matter how Turing-Equivalent they might be. So if we
>sh> would never mix up an object with a sentence describing it, why
>sh> should we mix up an object with a computer simulating it?

Great. Actually I would have changed the first sentence to: "...that computation can simulate anything that can be described...", precisely because digital computers enable descriptions (usually in the form of patterns which can be structurally matched by rule antecedents) to be causal.

Franklin Boyle

--------------------------------------------------------------

Date: Tue, 19 May 92 20:57:02 EST From: David Chalmers

There are too many deep issues here to treat them in anywhere near the depth they deserve, but here goes. I'll start with computation and move upward through the rarefied heights of cognition, semantics, and qualia.

(1) WHAT IS COMPUTATION?


>sh> I certainly couldn't agree with you on computation without dissociating
>sh> myself from this part of your view. But let me, upon reflection, add
>sh> that I'm not so sure your criterion for computation does the job (of
>sh> distinguishing computation/computers from their complement) after all
>sh> (although I continue to share your view that they CAN be distinguished,
>sh> somehow): I don't see how your definition rules out any analog system
>sh> at all (i.e., any physical system). Is a planetary system a computer
>sh> implementing the laws of motion? Is every moving object implementing a
>sh> calculus-of-variational computation? The requisite transition-preserving
>sh> mapping from symbols to states is there (Newton's laws plus boundary
>sh> conditions). The state transitions are continuous, of course, but you
>sh> didn't specify that the states had to be discrete (do they?).

A planetary system is not a computer, because it's not universal. I should also note that computation requires counterfactual sensitivity to various different possible inputs, and it's not clear what will count as an "input" to the solar system. But apart from that worry, there's no problem with saying that the solar system is implementing any number of specific computations, e.g. the trivial 1-state FSA as well as a lot of cyclic n-state FSAs. It's probably not implementing a calculus-of-variations computation, as such a computation would require a particular kind of state-transitional structure that this system does not embody. (There may be some sense in which the system is "I/O" equivalent to such a computation, but computational equivalence requires more than I/O equivalence, of course.)

Remember that it's not a problem for my view that every system implements some computation or other. What matters is that every system does not implement *every* computation.

As for continuity or discreteness, that depends on the computational formalism that one uses. Certainly all of the usual formalisms use discrete states. Of course, a continuous physical system (like the planetary system) can implement a discrete computation: we just have to chop up its states in the right way (e.g. divide an orbit into 4 discrete quadrants).
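
The orbit-into-quadrants example can be sketched directly. The following toy (the sampling step, state labels, and simulated orbit are illustrative assumptions, not anything from the discussion) divides a continuous orbital angle into four discrete states and checks that the induced state sequence realizes the cyclic 4-state FSA A -> B -> C -> D -> A:

```python
# Sketch: a continuous system (an orbit) implementing a discrete FSA,
# obtained by chopping its continuous state space into four quadrants.

import math

def quadrant(angle):
    """Map a continuous orbital angle onto one of four discrete states."""
    return "ABCD"[int((angle % (2 * math.pi)) // (math.pi / 2))]

CYCLIC_FSA = {"A": "B", "B": "C", "C": "D", "D": "A"}  # transition table

def implements_cyclic_fsa(angles):
    """True iff every change of quadrant follows the FSA's transitions."""
    states = [quadrant(a) for a in angles]
    for prev, cur in zip(states, states[1:]):
        if cur != prev and cur != CYCLIC_FSA[prev]:
            return False
    return True

# Sample a slowly, continuously advancing orbit (about three revolutions):
orbit = [i * 0.01 for i in range(2000)]
print(implements_cyclic_fsa(orbit))   # -> True
```

The mapping from physical state-types (quadrants) to FSA states is exactly the kind of transition-preserving mapping at issue: the implementation claim is about the induced transition structure, not about what the states are made of.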


>sh> And what about syntax and implementation-independence, which are surely
>sh> essential properties of computation? If the real solar system and a
>sh> computer simulation of it are both implementations of the same
>sh> computation, the "supervenient" property they share is certainly none
>sh> of the following: motion, mass, gravity... -- all the relevant
>sh> properties for being a real solar system. The only thing they seem to
>sh> share is syntax that is INTERPRETABLE as motion, mass, gravity, etc.
>sh> The crucial difference continues to be that the interpretation of being
>sh> a solar system with all those properties is intrinsic to the real solar
>sh> system "computer" and merely extrinsic to the symbolic one. That does
>sh> not bode well for more ambitious forms of "supervenience." (Besides, I
>sh> don't believe the planets are doing syntax.)

This certainly isn't an objection to my construal of computation. It's *computation* that's implementation-independent, not, e.g., solar-system-hood. The fact that the solar system might be implementing a computation is not affected by the fact that other implementations of that computation aren't solar systems.

Most properties in the world don't supervene on computational structure, as they don't even supervene on causal organization. To be a process of digestion, for instance, more than a certain causal organization is required: what's also needed is a specific kind of physio-chemical makeup. This physio-chemical makeup is *conceptually constitutive* (in part) of something's being digestion. Similarly for solar systems. It's conceptually constitutive of solar-system-hood that a system have a certain geometric shape, a certain chemical makeup, a certain size, and so on, and these physical properties are not determined by abstract causal organization. Take a system that shares abstract causal organization with a solar system -- the Bohr atom, say, or a boy swinging a bucket around his head -- then it's still not a solar system, because it lacks those extra properties that are constitutive of solar-system-hood. So no one would dream of being a computationalist about solar-system-hood, or about digestion.

The strong-AI hypothesis is that unlike these properties, *cognition* is a property that supervenes on abstract causal organization. This may or may not be obvious at first glance, but note that unlike digestion and solar-system-hood, it's not ruled out at first glance: there doesn't seem to be any physical property independent of causal organization that's conceptually constitutive of cognition.

In general, computational simulation will succeed at most in duplicating those properties that supervene on causal structure. We can argue all day about whether cognition is such a property, but the important point here is that pointing to properties that don't supervene on causal structure is no objection to my construal of computation.


>sh> I think the word "structure" is equivocal here. A computer simulation
>sh> of the solar system may have the right causal "structure" in that the
>sh> symbols that are interpretable as having mass rulefully yield
>sh> symbols that are interpretable as gravitational attraction and motion.
>sh> But there's no mass, gravity or motion in there, and that's what's
>sh> needed for REAL causality. In fact, the real causality in the computer
>sh> is quite local, having to do only with the physics of the implementation
>sh> (which is irrelevant to the computation, according to functionalism).
>sh> So when you speak equivocally about a shared "causal structure," or
>sh> about computational structure's being a "variety of causal structure," I
>sh> think all you mean is that the syntax is interpretable AS IF it were
>sh> the same causal structure as the one being modelled computationally. In
>sh> other words, it's just more ungrounded, extrinsic semantics.

Not at all. I mean that every implementation of a given computation has a *real* *causal* *structure*, and in fact that there's a certain causal structure that every implementation of a given computation shares. That's precisely what the definition of implementation guarantees. When a given 2-state FSA is implemented on my computer, for instance, there are real physical state-types in the implementation such that being in state A causes a transition into state B, and vice versa. When a neuron-by-neuron simulation of the brain is implemented on my computer, there are real physical states (registers, or memory locations, or whatever) in the implementation corresponding to the state of each neuron, and these states interact with each other in a causal pattern isomorphic to a pattern of interaction among the neurons.

To clarify, by "causal structure" I mean, roughly, *organizational* properties of a system: i.e., the patterns of interactions between various states, without taking into account what those states actually are. For instance an atom, at least according to the Bohr model, might share some causal structure with the solar system, but it differs in many properties that aren't organizational properties, such as size, mass, and intrinsic physical structure.

This has to be kept quite separate from questions about semantics. I haven't yet even mentioned any possible associated "semantics" of the computation. And again, I wouldn't dream of claiming that a simulation of the solar system has the same *gravitational* properties as the solar system, or that a simulation of the brain has the same *biochemical* properties as the brain, and I don't know why you think this is implied by my position. Gravitation and biochemistry don't supervene solely on causal structure, obviously.

(2) COMPUTATION AND COGNITION

We now pass briefly to the question of whether *cognition* might be a property that supervenes on causal structure, and on computational structure in particular.


>sh> There is a straw man being constructed here. Not only do all
>sh> Functionalists agree that mental states depend on causal structure, but
>sh> presumably most nonfunctionalist materialists do too (neurophysical
>sh> identity theorists, for example, just think the requisite causal
>sh> structure includes all the causal powers of -- and is hence unique to
>sh> -- the biological brain).

Well, no, at least not the way I'm using "causal structure". Given any specification of the causal structure of the brain -- even all the way down to atoms, or whatever -- then that causal structure could in principle be implemented in a different medium, such as silicon. We'd just have to set it up so that our little bits of silicon are interacting with each other according to the same patterns as the neurons, or the atoms or whatever, were interacting with each other. (Of course the silicon model might be a lot bigger than the brain, and it might have a lot of *extra* causal structure that the brain doesn't have, but that's not a problem.) Now a neurophysiological identity theorist would certainly say that the silicon system wouldn't have the same mental states. So the way I'm using the term (and I think this is standard usage), a neurophysiological identity theorist would not agree that mental states supervene on causal structure.

Perhaps you don't agree that mental states depend solely on causal structure either, because you seem to assign an essential role to I/O transducers, and presumably it makes a difference just what kinds of physical things -- heat, light, or whatever -- are being transduced. Whereas a strict functionalist like myself would hold that at least when it comes to fixing phenomenal mental states, the specific physical nature of what's being transduced is irrelevant. On this view, a system that merely reproduced the causal organization of the transduction in a different medium would have the same phenomenal properties.

As I said in the last note, even if one accepts (a) that computational structure fixes causal structure (which follows from my construal of implementation), and (b) that causal structure fixes mental structure, there still arises the question of whether computational structure can fix the right *kinds* of causal structure that are responsible for mentality. I think that it can: we just have to capture the causal structure of the brain, say, at a fine enough level of description, and describe that causal structure in an appropriate computational language -- as a finite state automaton, for instance, though preferably as an FSA with combinatorially structured states. Then every implementation of that FSA will share that causal structure. Some people might hold that *no* finite level of description can capture everything that's going on, due e.g. to the potential infinite precision in continuous systems, but I think that the presence of background noise in biological systems suggests that nothing essential to cognition can ride on that infinite precision.

Before passing on to the next topic, I should note that I don't think that "Is cognition computation?" is quite the right question to ask. The right question, rather, is "Is computation sufficient for cognition?" An advocate of strong AI might reasonably hold that cognition in the brain is not itself computation, but that computation is nevertheless capable of reproducing the relevant properties (e.g. causal structure) on which cognition depends. This becomes particularly clear when we move to specific computational formalisms, such as Turing machines. I certainly don't think that the brain is a Turing machine, but I think that nevertheless Turing machine computations are capable of cognition. It's a subtle point, but too often advocates of AI are saddled with unnecessary claims such as "the brain is a computer", or "the mind is a program".

(3) COMPUTATION AND SEMANTICS


>sh> "Syntactic" means based only on manipulating physical symbol tokens
>sh> (e.g., squiggle, squoggle) whose shape is arbitrary in relation to what
>sh> they can be interpreted as meaning. I am sure one can make
>sh> squiggle-squoggle systems, with arbitrary formal rules for manipulating
>sh> the squiggles and squoggles -- like Hesse's "Glass Bead Game" but even
>sh> more absurd, because completely meaningless, hence uninterpretable in
>sh> any systematic way -- and one could perhaps even call these
>sh> "computations" (although I would call them trivial computations). But I
>sh> have assumed that whatever it turns out to be, surely one of the
>sh> essential features of nontrivial computations will be that they can
>sh> bear the systematic weight of a semantic interpretation (and that
>sh> finding an interpretation for a nontrivial symbol system will be
>sh> cryptographically nontrivial, perhaps even NP-complete).

The question is only whether semantic content is itself *constitutive* of something's being a computation. To that question, the answer seems obviously to be no. Construct an arbitrarily large Turing machine by throwing together quadruples randomly. It's most unlikely that there will even *be* a nontrivial semantic interpretation for this. Construct a Pascal program by making random decisions consistent with the BNF specification of the language. Almost certainly, this program won't be interpretable as being about anything at all. Nevertheless, it's still a *program*, and an implementation of it is still *computation*, at least according to standard usage, which I think is the right usage. It's probably not a very interesting computation, but it's computation.
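The random-quadruple construction can be sketched directly (a toy illustration; the state count, tape alphabet, and step bound are arbitrary choices):

```python
import random

# Build a Turing machine by throwing quadruples together at random.
# The result is almost certainly uninterpretable as being about anything,
# but running it is still computation by the standard definition.
random.seed(0)
states = range(4)
symbols = [0, 1]

# One quadruple per (state, symbol): (symbol_to_write, move, next_state).
table = {(q, s): (random.choice(symbols),
                  random.choice([-1, +1]),
                  random.choice(list(states)))
         for q in states for s in symbols}

def run(table, steps=100):
    """Execute the machine for a bounded number of steps on a blank tape."""
    tape, head, state = {}, 0, 0
    for _ in range(steps):
        sym = tape.get(head, 0)
        write, move, state = table[(state, sym)]
        tape[head] = write
        head += move
    return tape

tape = run(table)
# A perfectly well-defined computation -- just (almost surely) about nothing.
```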

Most interesting computations will probably turn out to have some kind of semantic interpretation -- otherwise why would we bother with them? (Actually, some interesting computations might not, e.g. those computations invoked in solving the "Busy Beaver" problem for Turing machines. These computations are interesting, but the interest appears to lie entirely in their syntax. Similarly, many cellular automata computations, like Conway's game of life, are interesting primarily for their syntactic form.) But the notion that lies at the foundation of the computationalist view about cognition is not "interesting computation", it's "computation" straight. Making some sense of the notion of "interesting computation" is an interesting question in its own right, but it's independent of Searle's original question about what makes something a computation.
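Conway's game of life is a handy example of a computation whose interest lies in its syntactic form; a minimal sketch of its update rule (using the common set-of-live-cells representation; the helper names are invented for the example) is:

```python
# One step of Conway's Game of Life: a computation interesting, as the
# text notes, primarily for its syntactic form rather than for any
# semantic interpretation. Live cells are (x, y) pairs in a set.
def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    # A cell is live next step iff it has exactly 3 live neighbours,
    # or it is live now and has exactly 2 live neighbours.
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A "blinker" oscillates with period 2 under these purely formal rules.
blinker = {(0, 0), (1, 0), (2, 0)}
after_one = step(blinker)
after_two = step(after_one)
assert after_two == blinker
```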

(4) QUALIA AND SEMANTICS.

Now we move away from computation to the nitty-gritty philosophical questions about different kinds of mental properties. Unlike the issues about computation and implementation (which are surprisingly underrepresented in the literature), these issues already have a vast philosophical literature devoted to them. What I'll have to say here won't be particularly original, for the most part. It's also more philosophically technical than the earlier parts, so some readers might want to drop out now (if they've made it this far, which is unlikely).


>sh> At some point (mediated by Brentano, Frege and others), the mind/body
>sh> problem somehow seems to have split into two: The problem of "qualia"
>sh> (subjective, experiential, mental states) and the problem of
>sh> "intentionality" (semantics, "aboutness"), each treated as if it were
>sh> an independent problem. I reject this bifurcation completely. I
>sh> believe there is only one mind/body problem, and the only thing that
>sh> makes mental states be intrinsically about anything at all is the fact
>sh> that they have experiential qualities.

To set out the lay of the land, I agree that there's only one mind-body Problem worthy of a capital P, and that's the problem of qualia. That's not to say that qualia are the only kind of mental states: as I outlined in my last post, there are also "psychological states", those characterized by their role in the production of behaviour rather than by their phenomenal feel. However, there's no more a mind-body Problem about these than there is a "life-body Problem", for instance. The very fact of a system's being alive is a fact about its incorporating the right kinds of mechanism and producing the right kind of behaviour (where the key behaviour and mechanisms are adaptation, reproduction, and metabolism, more or less). There's no "further fact" that needs explaining. The same goes for psychological states. What's special about qualia, and makes them seem unlike almost everything else in the world, is that there seems to be a further fact in need of explanation, even after one has told the full story about the mechanisms and so on.

Where I differ from you is in assimilating intentional states like beliefs to the class of psychological states, rather than to the class of phenomenal states. *All there is* to the fact of a system believing that P is that it has the right kind of causal economy, with mechanisms that tend to produce P-appropriate behaviour in the right sort of ways, and that are causally related to the subject matter of P in the right sort of way. The possession of semantic content isn't a further fact over and above these mechanisms: it *conceptually supervenes* on the existence of those mechanisms, to use the philosophical parlance.


>sh> If there were nothing it was like (subjectively) to have beliefs and
>sh> desires, there would be no difference between beliefs and desires that
>sh> were just systematically interpretable AS IF they were about X
>sh> (extrinsic semantics) and beliefs and desires that were REALLY about X
>sh> (intrinsic semantics).

One might parody this argument by saying:

If there were nothing it was like (subjectively) to be alive, there would be no difference between systems that were just systematically interpretable AS IF they were alive (extrinsic life) and systems that were REALLY alive (intrinsic life).

Obviously, any system that is functioning in the right way is not just "as if" alive, it's really alive, qualia or no qualia. The same goes for belief. Maybe this means that there's not much difference between "as if" believing and "real" believing, but why should that bother us? We don't worry about a difference between "as if" tables and "real" tables, after all.

(There does remain one "as if" vs. "real" distinction, which is that a system might *behave* as if it believes that P without believing that P (actors do this, for instance, and Block's fabled Humongous Lookup Table might do it even better). But this problem can be handled without invoking qualia: functionalism requires that to determine the facts about a belief, one must invoke not only the facts about behaviour but the facts about patterns of internal causation. The patterns of causation in actors and lookup tables don't qualify. Spelling out the right criteria on internal causation is a long, intricate story, but qualia don't need to be invoked anywhere.)


>sh> There are qualia, however, as we all know. So even with a grounded
>sh> TTT-capable robot, we can still ask whether there is anybody home in
>sh> there, whether there is any haver of the beliefs and desires, to whom
>sh> they are intrinsically [i.e., subjectively] meaningful and REALLY about
>sh> what they are interpretable as being about. And we can still be dead
>sh> wrong in our inference that there is somebody home in there -- in which
>sh> case the robot's semantics, for all their causal groundedness, would in
>sh> reality be no more intrinsic than those of an ungrounded book or
>sh> computer.

Qualia or no qualia, beliefs are still "intrinsic" (modulo questions about narrow and wide content), in just the same way that life is intrinsic. It's just that they're not *phenomenal*.

The fundamental problem with making qualia essential to semantic content is that qualia seem to be *the wrong kind of thing* to determine that content (except perhaps for certain kinds of perceptual content). As I said earlier, my belief about Joan of Arc may have some associated (though hard to pin down) qualia, but it's very difficult to see how those qualia are *constitutive* of the semantic content of the belief. How could the *feel* of the belief possibly make it any more about Joan of Arc than it would have been otherwise?

Your position, I take it, is roughly that: "as if" semantic content *plus* qualia *equals* "real" semantic content. My position is that qualia seem to contribute almost nothing to fixing the semantic content of most beliefs, except perhaps for certain perceptual beliefs. So whatever it is that is constitutive of "real" semantic content, qualia don't play much of a role. This may mean that there won't be much of a "real"/"as if" distinction to worry about (modulo the considerations about behavioural equivalence), but that's life.


> dc> Your position seems to be, on the contrary, that qualia are
> dc> determinative of semantic content. Take Joe, sitting there with some
> dc> beliefs about Joan of Arc. Then a hypothetical system (which is at
> dc> least a conceptual possibility, on your view and mine) that's
> dc> physically identical to Joe but lacks qualia, doesn't believe anything
> dc> about Joan of Arc at all. I suggest that this seems wrong. What can
> dc> qualia possibly add to Joe's belief to make them any more about Joan
> dc> than they would have been otherwise? Qualia are very nice things, and
> dc> very important to our mental life, but they're only a matter of *feel*
> dc> -- how does the raw feel of Joe's belief somehow endow it with semantic
> dc> content?
>
>sh> But Dave, how could anyone except a dualist accept your hypothetical
>sh> possibility, which simply amounts to the hypothetical possibility that
>sh> dualism is valid (i.e., that neither functional equivalence nor even
>sh> physical identity can capture mental states!)?

Well, I'm only saying that this is a *conceptual* possibility, which surely it is on your view and mine, not an empirical possibility. I have little doubt that as an empirical fact, any system physically identical to me will have the same qualia. But it's entirely coherent to *imagine* a system physically identical to me but lacking qualia. Indeed, if it wasn't for first-person knowledge of qualia, one would never suspect that such a brain-structure would have qualia at all! (Note that someone (like Lewis or Armstrong, or Dennett on one of his less eliminativist days) who holds that all there is to the *concept* of qualia is the notion of a state that plays a certain causal role won't accept this. But this view simply seems to legislate the problem of qualia into something else entirely.) Qualia are individuated by their phenomenal feel, which seems to be conceptually independent of any physical properties.

So far this view doesn't immediately imply dualism. At least, many people who take qualia seriously accept this conceptual possibility, but still think that ontologically, qualia aren't anything over and above the physical. Personally, I find this view untenable, and think that the conceptual possibility of absent or inverted qualia must imply at least a limited kind of ontological dualism (so-called property dualism), as it implies that there are contingent facts about the world over and above the physical facts. But let's not go into that, for now.


> dc> I suggest that there is some kind of conceptual confusion going on
> dc> here, and that phenomenal and semantic properties ought to be kept
> dc> separate. Intentional states ought to be assimilated to the class of
> dc> psychological properties, with their semantic content conceptually
> dc> dependent on their role in our causal economy, and on their causal
> dc> relations to entities in the external world.
>
>sh> Apart from real TTT interactions, I don't even know what this passage
>sh> means: what does "assimilated to the class of psychological properties
>sh> with their semantic content conceptually dependent on their role in our
>sh> causal economy" mean? "[T]heir causal relations to entities in the
>sh> external world" I can understand, but to me that just spells TTT.

I'm not sure exactly what you're missing, but I recommend one of the standard analytical functionalist papers, like Lewis's "Psychophysical and Theoretical Identifications" (Aust J Phil, 1972), or even Ryle's _The Concept of Mind_. As for the TTT, I suggest carefully distinguishing the *conceptual* from the *empirical* dependence of mental properties on TTT-function. I take it that you accept empirical but not conceptual dependence (as you say, it's conceivable that the TTT might be wrong). By contrast, the analytic functionalist holds that mental properties are *conceptually* dependent on causal organization -- i.e. all there is to the notion of a system's being in a mental state is that it has a certain causal organization, and that it's appropriately related to the environment. The standard view, I take it, is that this is an unsatisfying analysis of phenomenal mental states such as qualia, but that it goes through quite well for most other mental states, such as beliefs.


>sh> (3) If qualia fade and the system stays TTT-grounded, I would say
>sh> aboutness was gone too (what would you say, and what would it amount to
>sh> to be WRONG about that, even from a God's-Eye view?)

Well, I think that it's empirically most unlikely that qualia *would* fade, as this would mean that phenomenal states and psychological states were radically "decoherent" from each other, in a subtle sense. (I have an eternally unfinished paper, "Absent Qualia, Fading Qualia, Dancing Qualia", on just this topic.) But it's certainly a conceptual possibility. So given this conceptual possibility, what would I say about the aboutness? I'd say that it would still be there. What would it amount to to be wrong about that? The same sort of thing it would amount to to be wrong about a system's being alive -- e.g., that one had misanalyzed the functional capacities of the system. Aboutness is no more of an extra, free-floating fact about a system than life is.

--Dave Chalmers.

---------------------------------------------

Date: Fri, 22 May 92 13:35:56 EDT
From: "Stevan Harnad"

Three basic points characterize my disagreement with David Chalmers:

(1) Computational structure is not the same as causal structure. When a digital computer simulates an airplane, the two are computationally equivalent but they are not causally equivalent. Causal equivalence would mean having the same causal powers, in the same "medium" (except for causally irrelevant implementational differences). An internal-combustion plane and an electric plane would be causally equivalent in their capacity to fly in the air. A simulated airplane and a real airplane are not causally equivalent but only formally equivalent (in some respects).

(2) What makes thinking different from flying is NOT that it "supervenes" on causal structure the way, say, life might, but that it is UNOBSERVABLE (or rather, observable only to the thinker). This is what allows us to forget the differences between simulated thinking and real thinking in a way that we cannot do with simulated flying and real flying.

(3) The "aboutness" of thinking is not independent of the question of qualia, it is completely parasitic on it. A system that has no qualia has no aboutness, because there is no one home in there for the symbols to be "about" anything TO.

If a system's symbols are uninterpretable, the absence of aboutness is fairly obvious.

If a system's symbols are systematically interpretable, as the symbols in a book or a TT-passing computer are, the lack of aboutness is less obvious, but only because of the hermeneutic power the interpretation wields over us: this interpretability-as-being-about-something is not grounded in the book or computer but parasitic on the grounded meanings in the head of the interpreter.

If a system can pass the TTT, then its symbols are grounded, but if it still lacked qualia, it would still lack aboutness (and we would have to turn to the TTTT, according to which some TTT implementations do have minds and some don't).

If a TTTT-indistinguishable implementation still lacked qualia, then it would still lack aboutness, only the implementations with qualia would have minds, and dualism would be correct.


>dc> It's *computation* that's implementation-independent, not
>dc> solar-system-hood... other implementations of that computation aren't
>dc> solar systems.
>dc>
>dc> The strong-AI hypothesis is that unlike these properties, *cognition*
>dc> is a property that supervenes on abstract causal organization. This may
>dc> or may not be obvious at first glance, but note that unlike digestion
>dc> and solar-system-hood, it's not ruled out at first glance: there
>dc> doesn't seem to be any physical property independent of causal
>dc> organization that's conceptually constitutive of cognition.

Suppose there is a Lisp program that simulates the solar system. Here is one form of implementation-independence (the kind I mean): That same program (a recipe for syntactic symbol manipulations) can be run on a Vax or a Sparc (etc.); the computations are implementation-independent and all the implementations are both formally and causally equivalent. Here is another form of "implementation-independence": This is the real solar system, that is the Lisp program running on a Sparc. They are both "performing the same computations." These two "implementations" are just formally, not causally equivalent.
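The distinction can be sketched with a toy orbital simulation (illustrative only; a crude Euler integrator with rough Earth/Sun numbers). Whatever machine runs it performs the same syntactic symbol manipulations, so all its runs are both formally and causally equivalent to one another; but no mass in the machine attracts any other, so none of them is causally equivalent to the solar system:

```python
# A toy "solar system" computation, in the spirit of the Lisp example.
G = 6.674e-11  # gravitational constant, SI units

def step(pos, vel, central_mass, dt):
    """One Euler step of a body orbiting a fixed central mass at the origin."""
    x, y = pos
    r = (x * x + y * y) ** 0.5
    a = -G * central_mass / r ** 3  # acceleration factor toward the origin
    vx, vy = vel
    vx, vy = vx + a * x * dt, vy + a * y * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# Rough Earth-like orbit: 1 AU from one solar mass, circular orbital speed.
pos, vel = (1.496e11, 0.0), (0.0, 29780.0)
for _ in range(1000):
    pos, vel = step(pos, vel, central_mass=1.989e30, dt=60.0)
# The symbols trace out an "orbit"; nothing in the machine gravitates.
```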

The only thing that sets apart "cognition" (thinking) "at first glance" from, say, moving, is that moving is observable and thinking is not.

Fortunately, unlike in the case of, say, "living" (about which I think some of our wrong-headed intuitions may actually have been parasitic on imagining somebody being at home in there, experiencing), in the case of thinking we at least have first-person testimony (like Searle's, when he tells us we would be wrong to conclude from what we could and could not observe, that he understood Chinese) to remind us that there's more to thinking than just implemented, syntactic symbol manipulation.

(With solar systems "computing," I don't know what "computation" is any more, so let's just talk about properties of all implementations of the same syntactic symbol manipulations.)


>dc> there's a certain [real] causal structure that every
>dc> implementation of a given computation shares. That's precisely what the
>dc> definition of implementation guarantees. When a given 2-state FSA is
>dc> implemented on my computer, for instance, there are real physical
>dc> state-types in the implementation such that being in state A causes a
>dc> transition into state B, and vice versa. When a neuron-by-neuron
>dc> simulation of the brain is implemented on my computer, there are real
>dc> physical states (registers, or memory locations, or whatever) in the
>dc> implementation corresponding to the state of each neuron, and these
>dc> states interact with each other in a causal pattern isomorphic to a
>dc> pattern of interaction among the neurons.

But, alas, the causal structure shared by the brain and the neural simulation of it includes neither the requisite causal structure for passing the TTT (which is observable) nor (in lockstep with the former, on my hypothesis) the requisite causal structure for thinking (which happens to be unobservable to all but the [in this case nonexistent] subject).


>dc> by "causal structure" I mean, roughly, *organizational* properties of a
>dc> system: i.e., the patterns of interactions between various states,
>dc> without taking into account what those states actually are... This has
>dc> to be kept quite separate from questions about semantics.

It would help if we could speak less abstractly, resorting even to examples. There are lots of possible patterns of interaction between states. The only relevant kind for me (in discussing what I, at least, mean by computation) is the manipulation of symbols purely on the basis of their shapes, i.e., syntactically, as in a digital computer. Of course syntactic interactions are independent of semantics (although the only ones of interest are the ones that are semantically interpretable).


>dc> Given any specification of the causal structure of the brain -- even
>dc> all the way down to atoms, or whatever -- then that causal structure
>dc> could in principle be implemented in a different medium, such as
>dc> silicon. We'd just have to set it up so that our little bits of silicon
>dc> are interacting with each other according to the same patterns as the
>dc> neurons, or the atoms or whatever, were interacting with each other...
>dc> a neurophysiological identity theorist would not agree that mental
>dc> states supervene on causal structure.

Nor would I, if this were what "causal structure" was.

I have no problem with synthetic brains, made out of all kinds of unnatural parts -- as long as they retain the relevant causal powers of the brain (which for me is just TTT-power -- for a TTTT-theorist, further neurobiological causality would matter too, perhaps even protein synthesis). I don't care what materials a computer is made of. But if all it does is manipulate symbols on the basis of syntactic rules, no matter how systematically all those syntactic goings-on can be equated with and interpreted as what goes on in the brain, nothing "mental" is "supervening" on it (because the requisite causal structure has not been duplicated).


>dc> Perhaps you don't agree that mental states depend solely on causal
>dc> structure either, because you seem to assign an essential role to I/O
>dc> transducers, and presumably it makes a difference just what kinds of
>dc> physical things -- heat, light, or whatever -- are being transduced.
>dc> Whereas a strict functionalist like myself would hold that at least
>dc> when it comes to fixing phenomenal mental states, the specific physical
>dc> nature of what's being transduced is irrelevant. On this view, a system
>dc> that merely reproduced the causal organization of the transduction in a
>dc> different medium would have the same phenomenal properties.

All I want to do is refrain from over-interpreting interpretable systems just because the thinking that they are interpretable as doing happens to be unobservable. Fortunately, the TTT, which requires real transduction (otherwise it's not the TTT), is observable. Capacity for interactions with the world is hence part of the requisite causal structure for thinking.

Let me make it even simpler. Take my position to be equivalent to the hypothesis that thinking "supervenes" on BEING an optical transducer, such as a retina. There are many different kinds of optical transducer, natural and synthetic, but they must all be able to transduce real light; without that, they simply lack the requisite causal structure to be a transducer.

(Remember, you, as a computationalist [or "symbolic functionalist"] hypothesize that thinking "supervenes" on computation/TT-power alone; I, because of the symbol grounding problem, reject that and hypothesize that thinking "supervenes" only on hybrid systems with TTT-power ["robotic functionalism"], and that necessarily includes transduction and excludes computation alone.)


>dc> I don't think that "Is cognition computation?" is quite the right
>dc> question to ask. The right question, rather, is "Is computation
>dc> sufficient for cognition?" An advocate of strong AI might reasonably
>dc> hold that cognition in the brain is not itself computation, but that
>dc> computation is nevertheless capable of reproducing the relevant
>dc> properties (e.g. causal structure) on which cognition depends...
>dc> I certainly don't think that the brain is a Turing machine, but I think
>dc> that nevertheless Turing machine computations are capable of cognition.
>dc> It's a subtle point, but too often advocates of AI are saddled with
>dc> unnecessary claims such as "the brain is a computer", or "the mind is a
>dc> program".

Computation is sufficient but not necessary for cognition? I.e., the brain may not be a computer, but a computer can still think? Sounds even more far-fetched than the stronger equivalence claim -- and just as wrong, for just about the same reasons.


>dc> The question is only whether semantic content is itself *constitutive*
>dc> of something's being a computation. To that question, the answer seems
>dc> obviously to be no. Construct an arbitrary large Turing machine by
>dc> throwing together quadruples randomly. It's most unlikely that there
>dc> will even *be* a nontrivial semantic interpretation for this. It's
>dc> probably not a very interesting computation, but it's computation.
>dc>
>dc> Most interesting computations will probably turn out to have some kind
>dc> of semantic interpretation -- otherwise why would we bother with them?
>dc> ... But the notion that lies at the foundation of the computationalist
>dc> view about cognition is not "interesting computation", it's
>dc> "computation" straight. Making some sense of the notion of "interesting
>dc> computation" is an interesting question in its own right, but it's
>dc> independent of Searle's original question about what makes something a
>dc> computation.

This all seems to be terminological quibbling. Whatever you want to call the rest, only interpretable computations are at issue here.


>dc> To set out the lay of the land, I agree that there's only one mind-body
>dc> Problem worthy of a capital P, and that's the problem of qualia. That's
>dc> not to say that qualia are the only kind of mental states... there are
>dc> also "psychological states", those characterized by their role in the
>dc> production of behaviour rather than by their phenomenal feel. However,
>dc> there's no more a mind-body Problem about these than there is a
>dc> "life-body Problem"... There's no "further fact" that needs
>dc> explaining... What's special about qualia, and makes them seem unlike
>dc> almost everything else in the world, is that there seems to be a
>dc> further fact in need of explanation, even after one has told the full
>dc> story about the mechanisms and so on.
>dc>
>dc> *All there is* to the fact of a system believing that P is that it has
>dc> the right kind of causal economy, with mechanisms that tend to produce
>dc> P-appropriate behaviour in the right sort of ways, and that are
>dc> causally related to the subject matter of P in the right sort of way.
>dc> The possession of semantic content isn't a further fact over and above
>dc> these mechanisms: it *conceptually supervenes* on the existence of
>dc> those mechanisms, to use the philosophical parlance.

What this seems to leave out is why there should be any connection between thinking and qualia at all! It's also not clear with what justification you call beliefs "mental" or even "psychological" states. The reason there's no further fact about "aboutness" is that without qualia there's no such thing. In a system where there is nobody home, there's no one for whom anything can be "about" anything. One could still speak of grounding (TTT-power), because that, like life, does depend exclusively on observable properties. But in an insentient automaton there's simply nothing mental to speak of, be it qualitative or intentional.


>dc> Obviously, any system that is functioning in the right way is not
>dc> just "as if" alive, it's really alive, qualia or no qualia. The
>dc> same goes for belief. Maybe this means that there's not much
>dc> difference between "as if" believing and "real" believing, but why
>dc> should that bother us? We don't worry about a difference between
>dc> "as if" tables and "real" tables, after all.
>dc>
>dc> Qualia or no qualia, beliefs are still "intrinsic" (modulo questions
>dc> about narrow and wide content), in just the same way that life is
>dc> intrinsic. It's just that they're not *phenomenal*.

Terminology again. If by "intrinsic" you mean the causal substrate of beliefs is located only in the head, I agree; if you mean it's just computational, I don't. The life analogy, invoked by so many, is simply irrelevant (except inasmuch as vitalism was always parasitic, knowingly or unknowingly, on animism, as I suggested earlier).


>dc> qualia seem to be *the wrong kind of thing* to
>dc> determine that content (except perhaps for certain kinds of perceptual
>dc> content). As I said earlier, my belief about Joan of Arc may have some
>dc> associated (though hard to pin down) qualia, but it's very difficult to
>dc> see how those qualia are *constitutive* of the semantic content of the
>dc> belief. How could the *feel* of the belief possibly make it any more
>dc> about Joan of Arc than it would have been otherwise?

Because only qualia would give the beliefs a subject, a believer!

There's both a misunderstanding and an oversimplification here. TTT-grounding is what "determines the content" of thoughts, both perceptual and abstract (and it does so in a bottom-up way, so sensory grounding is primary, and higher-order concepts are grounded in lower-order ones) [see my earlier replies to Roitblat's objections to sensory bottom-uppism]. The methodological assumption is that TTT-power is sufficient for both qualia and aboutness (but this could be wrong); what's certain is that qualia are a necessary condition for aboutness: No qualia, no aboutness (just, perhaps, groundedness).

[Although Searle -- with whom I do not necessarily agree in all matters -- does not seem to have realized it yet, it is the fact that qualia are a necessary condition for aboutness that makes his own direct testimony -- to the effect that he does not understand Chinese in the Chinese Room -- both relevant and devastating to computationalism: There's something it is "like" to understand, and Searle is in a position to testify that NO SUCH THING is "supervening" on his own implementation of the TT-passing computations when he memorizes and executes the requisite symbol manipulations. The fact that some theorists, like Mike Dyer, are prepared to believe that under these conditions there would be another "system" inside Searle, one that WAS actually understanding, is just evidence of how the unobservability of understanding and the hermeneutic grip of TT-interpretability can conspire to drive one further and further into sci-fi fantasies. I think it is for similar reasons that Pat Hayes is struggling to redefine what counts as an implementation in such a way as to exclude Searle's memorization and execution of the program.]


>dc> Your position, I take it, is roughly that: "as if" semantic content
>dc> *plus* qualia *equals* "real" semantic content. My position is that
>dc> qualia seem to contribute almost nothing to fixing the semantic content
>dc> of most beliefs, except perhaps for certain perceptual beliefs. So
>dc> whatever it is that is constitutive of "real" semantic content, qualia
>dc> don't play much of a role. This may mean that there won't be much of a
>dc> "real"/"as if" distinction to worry about (modulo the considerations
>dc> about behavioural equivalence), but that's life.

As I said, qualia don't "fix content" (except in the bottom-up sense I mentioned), but they are certainly what makes groundedness mental.


>dc> Take Joe, sitting there with some beliefs about Joan of Arc. Then a
>dc> hypothetical system (which is at least a conceptual possibility, on
>dc> your view and mine) that's physically identical to Joe but lacks
>dc> qualia, doesn't believe anything about Joan of Arc at all. I suggest
>dc> that this seems wrong. What can qualia possibly add to Joe's belief to
>dc> make them any more about Joan than they would have been otherwise?
>dc>
>dc> Well, I'm only saying that this is a *conceptual* possibility, which
>dc> surely it is on your view and mine, not an empirical possibility... But
>dc> it's entirely coherent to *imagine* a system physically identical to me
>dc> but lacking qualia... So far this view doesn't immediately imply dualism.
>dc> At least, many people who take qualia seriously accept this conceptual
>dc> possibility, but still think that ontologically, qualia aren't anything
>dc> over and above the physical... Personally, I find this view untenable...

A system in which there is no one home has no "beliefs." The conceptual possibility that the TTT may not be strong enough to guarantee that someone is home is perhaps fruitful to worry about, because it has methodological consequences, but the possibility that the TTTT would not be strong enough is not interesting (to me -- I'm not a philosopher), because it just amounts to the possibility that dualism is true.


>dc> As for the TTT, I suggest carefully distinguishing the *conceptual*
>dc> from the *empirical* dependence of mental properties on TTT-function. I
>dc> take it that you accept empirical but not conceptual dependence (as you
>dc> say, it's conceivable that the TTT might be wrong). By contrast, the
>dc> analytic functionalist holds that... all there is to the notion of a
>dc> system's being in a mental state is that it has a certain causal
>dc> organization, and that it's appropriately related to the environment...
>dc> this is an unsatisfying analysis of phenomenal mental states such as
>dc> qualia, but... goes through quite well for most other mental states,
>dc> such as beliefs.

It's a satisfying analysis of beliefs if you forget that beliefs are supposed to have a subject.


>dc> Well, I think that it's empirically most unlikely that qualia *would*
>dc> fade [as synthetic parts are swapped for natural ones in the brain],
>dc> as this would mean that phenomenal states and psychological states were
>dc> radically "decoherent" from each other, in a subtle sense... But it's
>dc> certainly a conceptual possibility. So... I'd say that [aboutness]
>dc> would still be there. What would it amount to to be wrong about that?
>dc> The same sort of thing it would amount to to be wrong about a system's
>dc> being alive -- e.g., that one had misanalyzed the functional capacities
>dc> of the system. Aboutness is no more of an extra, free-floating fact
>dc> about a system than life is.

I agree: It's just a byproduct of whatever it is that generates qualia (I'm betting on TTT-capacity).

Stevan Harnad

--------------------------------------------------------

Date: Fri, 22 May 92 13:56:36 EDT
From: "Stevan Harnad"
To: harnad@gandalf.rutgers.edu
Subject: Re: What is Computation?

Date: Wed, 20 May 92 22:39:48 EST
From: David Chalmers

In reply to Franklin Boyle:

fb> With respect to causality, it is not enough to say just that the
fb> "appropriate state-transitional relations are satisfied" [Chalmers,
fb> 1992]. Rather, *how* the state-transitional relations are realized must
fb> be accounted for as well. That is, *how* the physical interactions
fb> among the constituent objects of the system in question actually cause
fb> physical changes necessary to go from one state to the next must be
fb> accounted for.

I'm open to the idea that certain constraints need to be imposed on the state-transition relations. For a start, they have to be *causal*, and there's room for dispute over exactly what that comes to. A minimal condition is that the conditionals underwriting the relations must sustain counterfactuals (i.e., they can't be simple material conditionals), but it's not impossible that more is required.

One can devise some puzzle cases, where some system appears to qualify as implementing an FSA, say, under this criterion, but where one might think that it should be ruled out. For example, it turns out that an implementation of a given FSA, together with a device that simply records all inputs so far, will implement any I/O equivalent FSA (proving this is left as an exercise to the reader; the general idea is to identify the new state-types with the appropriate disjunction of states of the old system). This kind of case can probably be excluded by imposing some kind of uniformity requirement on the causal connections, but the details of this are not entirely clear to me.
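
The puzzle case above can be made concrete in a short sketch. The FSAs and all the names here are my own illustrative choices, not from the discussion; the correspondence function below induces exactly the "disjunction of old states" mapping Chalmers mentions (each new state corresponds to the set of augmented states that map to it):

```python
from itertools import product

def run(fsa, inputs):
    """Run an FSA (transition dict plus start state) over an input sequence; return the final state."""
    state = fsa["start"]
    for i in inputs:
        state = fsa["delta"][(state, i)]
    return state

# FSA A: parity of 1s seen so far (output = state).
A = {"start": 0,
     "delta": {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}}

# FSA B: I/O-equivalent to A, but with redundant states (it tracks the
# parity AND the last input symbol).
B = {"start": ("even", None), "delta": {}}
for p in ("even", "odd"):
    for last in (None, 0, 1):
        for i in (0, 1):
            q = "odd" if (p == "even") == (i == 1) else "even"
            B["delta"][((p, last), i)] = (q, i)

def out_A(s): return s                            # A's output function
def out_B(s): return 0 if s[0] == "even" else 1   # B's output function

# The augmented system: an implementation of A plus a recorder of all
# inputs so far.
def aug_step(aug_state, i):
    a, history = aug_state
    return (A["delta"][(a, i)], history + (i,))

# State correspondence: map each augmented state to the B-state reached
# by running B on the recorded history.
def correspond(aug_state):
    _, history = aug_state
    return run(B, history)

# Verify: under this correspondence the augmented system satisfies B's
# state-transition relations, and the outputs agree, on every input
# sequence up to length 6.
for n in range(6):
    for seq in product((0, 1), repeat=n):
        aug = (A["start"], ())
        for i in seq:
            aug = aug_step(aug, i)
        for i in (0, 1):
            assert correspond(aug_step(aug, i)) == B["delta"][(correspond(aug), i)]
        assert out_A(aug[0]) == out_B(correspond(aug))
```

Nothing in the verification depends on B's particulars, only on its I/O equivalence to A, which is what makes the construction general; the worry about "uniformity" is that the correspondence is carried entirely by the passive input record rather than by any uniform causal structure.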

That being said, I don't find your specific argument for constraints on state-transition relations too compelling:

fb> *How* effects are brought about is important because
fb> insofar as computations are processes that involve entities we hold to
fb> represent (whether or not they are intrinsically referential), we have
fb> to know that these representing entities are responsible for the changes
fb> we observe _according_to_how_they_represent_what_they_do_ (e.g. through
fb> their forms) in order to be able to call them computations in the first
fb> place. Otherwise, we end up with Putnam's or Chalmers's
fb> characterizations of computation, both of which are mute on the issue of
fb> physical representation, even though they talk about physical states
fb> (unless I'm supposed to be reading a lot more into what they're saying
fb> than I am, such as unpacking the term "state correspondence"
fb> [Chalmers] -- please let me know), and, therefore, admitting too many
fb> systems as computational.

I agree with this, but I think that my construal of computation is capable of doing just what you say. I take the fairly standard position that representing entities represent *in virtue of their causal roles*, i.e. in virtue of the way that they affect and are affected by other states of the system, as well as the environment. According to this view, it doesn't matter precisely how the causation in question is achieved; all that matters is *that* it is achieved. Similarly, it doesn't matter *what* the internal states are that are affected; all that matters is their role in the overall causal economy of the system. So the criterion I outlined, which is silent on (a) the intrinsic nature of the states and (b) the specific manner of causation, does fine.

Of course this definition doesn't say anything explicit about reference. This is because, as I've said, I don't think that the notion of reference is conceptually prior to that of computation. Neither, for that matter, is the notion of computation prior to that of reference. Rather, I think that both of them should be analyzed in terms of the prior notion of causation. So we shouldn't expect the definition of computation (more accurately, of implementation) to say anything explicit about reference: we should simply expect that it will be *compatible* with an analysis of reference, whenever that comes along. Hopefully, given our analyses of computation and of reference, it will turn out that computational structure determines at least some aspect of representational power. The analysis of computation I've given satisfies this, being designed to be compatible with a causal-role analysis of reference.

Your view seems to be that representing entities represent by virtue of their internal form, rather than by virtue of their causal role. If this were the case, then it's possible that this construal of computation wouldn't be up to the job of fixing representational powers. However, I don't see any good reason to accept this view. I agree that the internal form of a representation can be very important -- e.g. the distributed representations in connectionist networks have complex internal structure that's central to their representational capacities. However, it seems to me that this internal form is important precisely because it *allows* a system of representations to play the kinds of causal roles that qualify them as representations. The causal role is conceptually prior, and the internal form is subsidiary.

--Dave Chalmers.

-----------------------------------------------------------

From harnad Fri May 22 14:10:29 1992
To: harnad@gandalf.rutgers.edu
Subject: Re: What is Computation

Date: Thu, 21 May 1992 14:55:35 -0400
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Second thoughts

Some second thoughts on "What is Computation?"

(1)


>sh> The second concerns whether just a computer implementing a computer
>sh> program can have a mind.


>dm> I despair of ever making progress on this question without further
>dm> empirical progress on computational modeling of thought and behavior.
>dm> The ratio of verbiage produced to opinions changed is depressingly
>dm> small.

I meant "depressingly large," of course.

(2) I should make it clear that all of the definitions that I (and Chalmers) have proposed are for *digital* computers. I don't think anything like that works for analog computers. Indeed, it seems as if any physical system *can* be considered to be an analog computer, in that it could be used to make predictions about the behavior of any other system modeled by the same differential equations (or whatever). (One might want to add a requirement that the inputs of the system be controllable, so that it could be used as a computer; but I wouldn't want the analogous requirement for digital computers, and there are other problems, so let's skip it.)
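
McDermott's point about analog computation can be illustrated with a toy sketch (entirely my own, not from the discussion): any physical system obeying the same differential equation as another can serve as an "analog computer" for it. Here a simulated RC circuit (dV/dt = -V/RC) "computes" the cooling of a cup of coffee toward room temperature (dT/dt = -k(T - T_env)), since both are exponential decays with the same rate constant.

```python
import math

def simulate_decay(x0, rate, t, steps=100_000):
    """Euler-integrate dx/dt = -rate * x from x0 over time t."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x -= rate * x * dt
    return x

# "Measure" the RC circuit (rate = 1/RC = 0.5 per second) after 3 seconds,
# starting from 10 volts...
voltage = simulate_decay(x0=10.0, rate=0.5, t=3.0)

# ...and read off the prediction for ANY system with the same rate
# constant, e.g. coffee cooling from 90 C toward a 20 C room: the excess
# temperature decays by the same fraction as the voltage.
coffee = 20.0 + (90.0 - 20.0) * (voltage / 10.0)

# Both agree with the closed-form solution exp(-0.5 * 3).
assert abs(voltage / 10.0 - math.exp(-1.5)) < 1e-3
assert abs(coffee - (20.0 + 70.0 * math.exp(-1.5))) < 0.1
```

The circuit never represents the coffee symbolically; the "computation" consists entirely in the two systems sharing a dynamical form, which is why, on McDermott's view, almost any physical system can count as an analog computer for something.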

(3) Given our definitions, it seems obvious to me that the brain is not a digital computer -- but we shouldn't hold this against the brain. The brain's function is to control behavior and model the world. Digital technology would be the best for these purposes, but the organic world has had to make do with analog approximations to digital technology. Perhaps there is an interesting question here about when we can detect that a nondigital system is an approximation to a digital one.

Drew McDermott

-------------------------------------------------------

From: Stevan Harnad

There are two dimensions to distinguish: (1) continuous vs. discrete and (2) "analog" vs. symbolic. The latter is, I think, the relevant distinction for this discussion. It apposes the analog world of objects (chairs, tables, airplanes, furnaces, planets, computers, transducers, animals, people) with that SUBSET of the analog world that consists of implementations of formal symbol systems, manipulating symbol tokens purely on the basis of syntax, yet interpretable as describing or representing anything else in the analog world. This has little to do, I think, with whether the brain does or does not use digital signal processing technology.

Stevan Harnad

-----------------------------------------------------

-----------------------------------------------------

Date: Fri, 22 May 92 10:11:15 HST
From: Herbert Roitblat

Stevan Harnad wrote:


>sh> Three basic points characterize my disagreement with David Chalmers:

>sh> (1) Computational structure is not the same as causal structure. When
>sh> a digital computer simulates an airplane, they are computationally
>sh> equivalent but they are not causally equivalent. Causal equivalence
>sh> would mean having the same causal powers, in the same "medium" (except
>sh> for causally irrelevant implementational differences). An internal
>sh> combustion and electric plane would be causally equivalent in their
>sh> capacity to fly in the air. A simulated airplane and a real airplane
>sh> are not causally equivalent but only formally equivalent (in some
>sh> respects).

I think Stevan has ahold of an important relation here that needs to be amplified. An electric airplane is not a simulation of an airplane; it is an implementation of an airplane that is causally equivalent with respect to flying. That is, it is equivalent to a jet plane in a limited functional domain. It would be silly for us to argue whether the electric plane is really a plane.

He goes on to say:


>sh> (2) What makes thinking different from flying is NOT that it
>sh> "supervenes" on causal structure the way, say, life might, but that it
>sh> is UNOBSERVABLE (or rather, observable only to the thinker). This is
>sh> what allows us to forget the differences between simulated thinking
>sh> and real thinking in a way that we cannot do with simulated flying
>sh> and real flying.

Here I begin to disagree. The observability of thinking relates to our ability to have knowledge about thinking, but it does not necessarily affect the causal properties of thinking. I also disagree that thinking is observable to the thinker. Even Freud argued that we do not have access to much of what we think about. Unless one defines thinking as "the narrative I" (related to the notion of subvocal speech), which by definition is narrative and accessible, thinking occurs at many different levels. Our own awareness of our cognitive processes is at best unreliable and at worst severely misleading.

More critical in this context, however, is the switch in the nature of the items being compared. One comparison is between electric planes and jet planes, the second is between simulated thinking and real thinking. Whereas it is true that the simulation is never the thing being simulated, the relation between computer based and biologically based thinking may be better characterized as like that between electric planes and jet planes than as like that between simulated planes and jet planes. Perhaps we should consider the analogical relation between planes and birds. Powered flight began in some sense as a simulation of real bird flight (e.g., DaVinci). At what point did powered flight cease to be a simulation and begin to be an implementation? Part of what I am suggesting is an investigation of the implications of assuming that computers think. If we (perhaps temporarily) entertain the assumption that computers really think, though perhaps in some computer-like way, then what do we have to conclude about thinking and computation? Are there irremediable differences between human thinking and computer thinking? The argument against computers being capable of implementing minds can be translated without loss to the argument that there are certain irremediable differences between the two. One of these is claimed to be qualia.


>sh> (3) The "aboutness" of thinking is not independent of the question of
>sh> qualia, it is completely parasitic on it. A system that has no qualia
>sh> has no aboutness, because there is no one home in there for the symbols
>sh> to be "about" anything TO.

It seems to me that the concept of qualia is entirely irrelevant to the discussion. Qualia are relics of our dualistic past. English is so deeply entrenched in the folk-psychology view of dualism that our very language practically implies its validity. Qualia are category errors. The idea of qualia depends on the dualistic position that someone must be home. If we eradicate dualism, then we eliminate any need for qualia. A monist has no need for qualia, only sense data. If we insist on qualia, it seems to me we prejudge the question of whether computers can implement thinking, because our dualistic legacy will not permit us to entertain the notion of someone "being home," that is, the argument becomes equivalent to asking whether the computer has a soul.

Devoid of implicit dualism the notion of qualia has no more to add to the discussion than the concept of witches has to add to health and illness. Being a committed monist, I have to argue that there is no one home INSIDE me. I do not have a homunculus, or I am not a homunculus controlling a body. I think, I am, but I do not think to myself in the way that the above quotation might suggest. If there is someone home inside, then we have the familiar problem of explaining the thinking of the one inside. Does the homunculus have qualia? Does my body only simulate the intentions of the homunculus?

Herb Roitblat

---------------------------

Date: Mon, 25 May 92 14:54:51 EDT
From: "Stevan Harnad"

METHODOLOGICAL EPIPHENOMENALISM

Herbert Roitblat wrote:

hr> The observability of thinking relates to our ability to have knowledge
hr> about thinking, but it does not necessarily affect the causal
hr> properties of thinking. I also disagree that thinking is observable to
hr> the thinker. Even Freud argued that we do not have access to much of
hr> what we think about. Unless one defines thinking as "the narrative I"
hr> (related to the notion of subvocal speech), which by definition is
hr> narrative and accessible, thinking occurs at many different levels. Our
hr> own awareness of our cognitive processes is at best unreliable and at
hr> worst severely misleading.

(1) I didn't say the observability of thinking affects the causal properties of thinking. I said there is something it's like to think. Thinking has a subject: The thinker. It is not just an insentient process that is interpretable (in its structure and its outputs) AS IF it were thinking. Hence one dead give-away of the fact that there is no thinking going on is that there is no thinker to think the thoughts.

(2) Of course in the head of a thinker a lot is going on that he is not aware of! Why should we be aware of everything going on in our heads, or even most of it, any more than we are aware of most of what's going on outside our heads? That kind of knowledge has to be gained by honest scientific toil.

However, let's not forget that all those "unconscious thoughts" happen to be going on in the head of a conscious thinker! Forgetting this critical fact is a subtle mistake that is made over and over again, but I think that on reflection it should become obvious that it is begging the question [regarding which systems do and do not really think] to conclude that, because systems like us, that really think, have a lot going on inside them that they are not aware of, we can therefore speak of "thinking" in a system that is not aware of anything! [Or worse, that because we have a Freudian "unconscious mind" in addition to our conscious mind, other systems could have JUST an "unconscious mind"!]

Until further notice, only systems that are capable of conscious thoughts are capable of "unconscious thoughts" (which I actually think is a misnomer in any case, but that's a long story we might be able to skip for present purposes). It does not even make sense to speak of a process as "unconscious" when it's going on inside a system that has no conscious processes: Is a thermostat unconsciously thinking "It's getting hot in here"? To me, this is all just mentalistic overinterpretation.

But never mind; perhaps there are relevant similarities between what goes on in a thermostat or a computer and in my head. Fine. Let's investigate those: Maybe the similarities will turn out to yield useful generalizations, maybe not. But let's not prejudge them by assuming in advance that they are anything more than suggestive similarities. To claim that thinking is just a form of computation is precisely this kind of prejudging. If thinking were unobservable in every respect, this claim would be a normal empirical hypothesis. But since thinking IS observable to the thinker, this leaves the door to a decisive kind of negative evidence -- precisely the kind Searle used in pointing out that he would not be understanding Chinese in the Chinese Room. (By your lights, he might still be understanding it, but "unconsciously"!)

hr> More critical in this context, however, is the switch in the nature of
hr> the items being compared. One comparison is between electric planes and
hr> jet planes, the second is between simulated thinking and real
hr> thinking. Whereas it is true that the simulation is never the thing
hr> being simulated, the relation between computer based and biologically
hr> based thinking may be better characterized as like that between
hr> electric planes and jet planes than as like that between simulated
hr> planes and jet planes. Perhaps we should consider the analogical
hr> relation between planes and birds. Powered flight began in some sense
hr> as a simulation of real bird flight (e.g., DaVinci). At what point did
hr> powered flight cease to be a simulation and begin to be an
hr> implementation?

The natural/artificial flying analogy has been invoked by computationalists many times before, and it's just as beside the point as unconscious thoughts. I can only repeat the structure of the refutation:

(a) Unlike flying, thinking (or understanding) is unobservable (except to the thinker/understander who is doing the thinking/understanding).

(b) Searle's Argument shows that a Chinese TT-passing system would NOT understand (and the symbol grounding problem suggests why not).

(c) Therefore, at least the stronger TTT is required in order to allow us to continue to infer that the candidate system thinks -- and this test, like a flight test, cannot be passed by computation alone. Like flying, it requires a system that is capable of transduction (at least, and probably many other analog processes as well).

(d) THIS is where the natural/artificial flying analogy IS relevant (natural thinking: ours, artificial thinking: the TTT-passing robot's). But computation alone is no longer eligible (because of b and c).

hr> Part of what I am suggesting is an investigation of the implications of
hr> assuming that computers think. If we (perhaps temporarily) entertain
hr> the assumption that computers really think, though perhaps in some
hr> computer-like way, then what do we have to conclude about thinking and
hr> computation? Are there irremediable differences between human thinking
hr> and computer thinking? The argument against computers being capable of
hr> implementing minds can be translated without loss to the argument that
hr> there are certain irremediable differences between the two. One of
hr> these is claimed to be qualia.

If there had been no way of showing that thinking was not really going on in a computer, then the unobservability of thinking would have left this forever hopelessly underdetermined (although, unlike the underdetermination of ordinary physics, there would have been a fact of the matter: the universe as a whole contains no unobservable fact that confirms or disconfirms a Utopian physical theory that accounts for all the observables, but it does contain a fact that could disconfirm a Utopian cognitive theory -- be it a TT-, TTT-, or even TTTT-scale account of all the observables -- and that fact would be known only to the candidate system). But fortunately, in the case of computation that fact is known to us (thanks to Searle's periscope), and the fact is that there is no Chinese-understanding going on either in Searle (unless we are prepared to believe, with Mike Dyer, that memorizing meaningless symbols can lead to multiple personality disorder) or in the computer implementing the same program he is implementing (unless we either abandon the implementation-independence of computation or Pat Hayes succeeds in finding a nonarbitrary reason for believing that Searle is not really an implementation of the same program).

So, yes, there might or might not be some helpful similarities between thinking and computation, but thinking is definitely not just computation.

hr> It seems to me that the concept of qualia is entirely irrelevant to the
hr> discussion. Qualia are relics of our dualistic past. English is so
hr> deeply entrenched in the folk-psychology view of dualism that our very
hr> language practically implies its validity. Qualia are category errors.
hr> The idea of qualia depends on the dualistic position that someone must
hr> be home. If we eradicate dualism, then we eliminate any need for
hr> qualia. A monist has no need for qualia, only sense data. If we insist
hr> on qualia, it seems to me we prejudge the question of whether computers
hr> can implement thinking, because our dualistic legacy will not permit us
hr> to entertain the notion of someone "being home," that is, the argument
hr> becomes equivalent to asking whether the computer has a soul.

The wrong-headedness of "Cartesian Dualism" is a third theme (along with unconscious thinking and natural versus artificial flying) often invoked in support of "new thinking" about cognition. I think "dualism" is being counted out prematurely, and often with insufficient understanding of just what the mind/body problem (now supposedly a non-problem) really is (was). To me it's as simple as the intuition we all have that there is a difference between a creature that really feels it when you pinch him and another that doesn't (because it doesn't feel anything, no qualia, nobody home); we don't need Descartes for that, just the experience we all share, to the effect that we really have experiences!

The presence or absence of qualia, whether or not someone is home, etc., is as relevant or irrelevant to the question of whether a system thinks or is merely interpretable as if it thinks as the presence or absence of flying is to the question of whether or not a system can fly. I would not knowingly put my money on a system that did not have qualia as a model for the mind. However, when we get past TT/computational candidates to TTT/robotic (or even TTTT/neural) candidates, where Searle's Periscope is no longer available, then I adopt methodological epiphenomenalism, assuming/trusting that qualia will "supervene" on the TTT-capacity, and troubling my head about them no further, since I cannot be any the wiser. What's at issue here, however, is still pure computationalism, where one CAN indeed be the wiser, and the answer is: Nobody home.

hr> Devoid of implicit dualism the notion of qualia has no more to add to
hr> the discussion than the concept of witches has to add to health and
hr> illness. Being a committed monist, I have to argue that there is no one
hr> home INSIDE me. I do not have a homunculus, or I am not a homunculus
hr> controlling a body. I think, I am, but I do not think to myself in the
hr> way that the above quotation might suggest. If there is someone home
hr> inside, then we have the familiar problem of explaining the thinking of
hr> the one inside. Does the homunculus have qualia? Does my body only
hr> simulate the intentions of the homunculus?

And the claim that admitting that qualia/consciousness exists would lead to an infinite homuncular regress is a fourth standard canard. Sure, people have made (and continue to make) the mistake of thinking that the inputs to an organism's brain are inputs to a homunculus inside. The best cure for this is the TTT: The system as a whole must be conscious, there has to be somebody home in there. But abandon all mentalism as soon as you address what might be going on inside the system, and concern yourself only with its capacity to generate TTT-scale performance, trusting that qualia will piggy-back on that capacity. Methodological epiphenomenalism, and no homunculus.

Stevan Harnad

--------------------------------------

Date: Sun, 24 May 92 22:24:27 -0400
From: mclennan@cs.utk.edu

Stevan,

My apologies for not replying sooner; I had to prepare a talk and attend a workshop. You wrote:


>sh> (3) Searle's Chinese Room Argument and my Symbol Grounding Problem
>sh> apply only to discrete symbolic computation. Searle could not implement
>sh> analog computation (not even transduction) as he can symbolic
>sh> computation, so his Argument would be moot against analog computation.

I'm afraid I don't understand. I see no reason why we can't have an analog version of the Chinese Room. Here it is:

Inputs come from (scaleless) moving pointers. Outputs are by twisting knobs, moving sliders, manipulating joysticks, etc. Various analog computational aids -- slide rules, nomographs, pantographs, etc. -- correspond to the rule book. Information may be read from the input devices and transferred to the computational aids with calipers or similar analog devices. Searle implements the analog computation by performing a complicated, ritualized sensorimotor procedure -- the point is that the performance is as mechanical and mindless as symbol manipulation. Picture an expert pilot flying an aircraft simulator. We may suppose that this analog room implements a conscious cognitive process no more farfetched than understanding Chinese, viz. recognizing a human face and responding appropriately. For concreteness we may suppose the analog computation produces the signal "Hi Granny" when presented with an image of Searle's grandmother. (My apologies to John and his grandmother.)

As in the traditional CR, the values manipulated by Searle have no *apparent* significance, except as props and constraints in his complicated dance. That is, Searle qua analog computer sees the analog values as meaningless pointer deflections and lever positions. However, with the aid of an interpreter (such as he would also need for the Chinese symbols) he might see the same analog signal as his grandmother's face.

It appears that "seeing as" is central to both the digital and analog cases. Does Searle see his performance as a syntactic (meaningless) ritual or as a semantic (meaningful) behavior? That the locus of the distinction is Searle is demonstrated by the ease with which his experience of it -- as meaningful or meaningless -- can be altered. It might be as simple as pointing out an interpretation, which would trigger a "Gestalt shift" or phenomenological reorientation, and allow these quantities and computations to be seen as saturated with meaning. Searle's experience of meaningfulness or not depends on his phenomenological orientation to the subject matter. Of course, mixed cases are also possible, as when we engage in (discrete or continuous) behaviors that have *some* significance to us, but which we don't fully understand. (Many social/cultural practices fall in this category.)

Finally, as a computer scientist devoting much of his effort to analog computation (1987, in press-a, in press-b), I am somewhat mystified by the critical distinction you draw between digital and analog computation. What convinces you that one is REAL computation, whereas the other is something else (process? pseudo-computation?)? If I'm not doing computer science, please tell me what I am doing!

I hope these comments shed some light on the nature of computation (whether analog or digital), and symbol manipulation (whether discrete or continuous).

Bruce MacLennan

REFERENCES

MacLennan, B. J. (1987). Technology-independent design of neurocomputers: The universal field computer. In M. Caudill & C. Butler (Eds.), Proceedings, IEEE First International Conference on Neural Networks (Vol. 3, pp. 39-49). New York, NY: Institute of Electrical and Electronic Engineers.

MacLennan, B. J. (in press-a). Continuous symbol systems: The logic of connectionism. In Daniel S. Levine and Manuel Aparicio IV (Eds.), Neural Networks for Knowledge Representation and Inference. Hillsdale, NJ: Lawrence Erlbaum.

MacLennan, B. J. (in press-b). Characteristics of connectionist knowledge representation. Information Sciences, to appear.

-----------------------------------

Date: Mon, 25 May 92 16:59:49 EDT
From: "Stevan Harnad"

ANALOG SYSTEMS AND WHAT'S RIGHT ABOUT THE SYSTEM REPLY

Bruce MacLennan wrote:

bm> I see no reason why we can't have an analog version of the Chinese
bm> Room. Here it is: Inputs come from (scaleless) moving pointers. Outputs
bm> are by twisting knobs, moving sliders, manipulating joysticks, etc.
bm> Various analog computational aids -- slide rules, nomographs,
bm> pantographs, etc. -- correspond to the rule book. Information may be
bm> read from the input devices and transferred to the computational aids
bm> with calipers or similar analog devices. Searle implements the analog
bm> computation by performing a complicated, ritualized sensorimotor
bm> procedure -- the point is that the performance is as mechanical and
bm> mindless as symbol manipulation.

I'm not sure whether you wrote this because you reject Searle's argument for the discrete symbolic case (and here wish to show that it is equally invalid for the analog case) or because you accept it for the discrete symbolic case and here wish to show it is equally valid for the analog case. In either case, I'm glad you brought it up, because it gives me the opportunity to point out exactly how simple, decisive and unequivocal my own construal of Searle's Argument is, and how clearly it applies ONLY to the discrete symbolic case:

The critical factor is the "System Reply" (the reply to the effect that it's no wonder Searle doesn't understand, he's just part of the system, and the system understands): The refutation of the System Reply is for Searle to memorize all the symbol manipulation rules, so that the entire system that gets the inputs and generates the outputs (passing the Chinese TT) is Searle. This is how he shows that in implementing the entire symbol system, in BEING the system, he can truthfully deny that he understands Chinese. "Le Systeme, c'est Moi" is the refutation of the System Reply (unless, like Mike Dyer, you're prepared to believe that memorizing symbols causes multiple personality, or, like Pat Hayes, you're prepared to deny that Searle is really another implementation of the same symbol system the TT-passing computer implements).

But look at what you are proposing instead: You have Searle twisting knobs, using analog devices, etc. It's clear there are things going on in the room that are NOT going on in Searle. But in that case, the System Reply would be absolutely correct! I made this point explicitly in Harnad 1989 and Harnad 1991, pointing out that even an optical transducer was immune to Searle's Argument [if anyone cared to conjecture that an optical transducer could "see," in the same way it had been claimed that a computer could "understand"], because Searle could not BE another implementation of that transducer (except if he looked with his real eyes, in which case he could not deny he was seeing), whereas taking only the OUTPUT of the transducer -- as in your example -- would be subject to the System Reply. It is for this very same reason that the conventional Robot Reply to Searle misfired, because it allowed Searle to modularize the activity between a computational core, which Searle fully implemented, and peripheral devices, which he merely operated: This is why this kind of division of labor (computation doing all the real cognitive work, which is then linked to the world, like a homunculus, via trivial transducers) is such a liability to computationalism. [I've always thought computationalists were more dualistic than roboticists!]

So you have given me the chance to state again, explicitly, that Searle's Chinese Room Argument and the Symbol Grounding Problem apply ONLY to discrete formal symbol systems, in which the symbols are manipulated purely syntactically (i.e., by operations based only on the symbols' "shapes," which are arbitrary in relation to what they can be interpreted as meaning) AND where the implementation is irrelevant, i.e., where every implementation of the symbol system, despite physical differences, has the same computational properties (including the mental ones, if cognition is really computation). There is surely some implementation-independence of analog computation too (after all, there's more than one way to implement a TTT-scale robot), but that does not leave room for a Searlean implementation -- at least not one in which Searle is the entire system. Hence transduction, analog computation and the TTT are immune to Searle's Argument (as well as to the Symbol Grounding Problem, since such systems are not just implemented symbol systems in the first place).

bm> Picture an expert pilot flying an aircraft simulator. We may suppose
bm> that this analog room implements a conscious cognitive process no more
bm> farfetched than understanding Chinese, viz. recognizing a human face
bm> and responding appropriately... As in the traditional CR, the values
bm> manipulated by Searle have no *apparent* significance, except as props
bm> and constraints in his complicated dance. That is, Searle qua analog
bm> computer sees the analog values as meaningless pointer deflections and
bm> lever positions. However, with the aid of an interpreter (such as he
bm> would also need for the Chinese symbols) he might see the same analog
bm> signal as his grandmother's face.

This may be true, but unfortunately it is irrelevant to anything that's at issue here (just as it's irrelevant whether Searle could eventually decrypt the Chinese symbols in the original Chinese Room). If the rules of the game allow the system to be anything but Searle himself, all bets are off, for by that token Searle could even "be" part of the real brain without understanding anything -- the brain as a whole, the "system" would be doing the understanding -- as many critics of Searle have pointed out (but for altogether the wrong reason, erroneously thinking that this fact refutes [or is even relevant to] Searle's original argument against discrete symbolic computation!). I hope this is clearer now.

bm> It appears that "seeing as" is central to both the digital and analog
bm> cases. Does Searle see his performance as a syntactic (meaningless)
bm> ritual or as a semantic (meaningful) behavior? That the locus of the
bm> distinction is Searle is demonstrated by the ease with which his
bm> experience of it -- as meaningful or meaningless -- can be altered. It
bm> might be so simple as pointing out an interpretation, which would
bm> trigger a "Gestalt shift" or phenomenological reorientation, and allow
bm> these quantities and computations to be seen as saturated with
bm> meaning. Searle's experience of meaningfulness or not depends on his
bm> phenomenological orientation to the subject matter. Of course, mixed
bm> cases are also possible, as when we engage in (discrete or continuous)
bm> behaviors that have *some* significance to us, but which we don't fully
bm> understand. (Many social/cultural practices fall in this category.)

Alas, to me, all these "Gestalt flips" are irrelevant, and the symbolic/analog distinction is the critical one.

bm> Finally, as a computer scientist devoting much of his effort to analog
bm> computation (1987, in press-a, in press-b), I am somewhat mystified by
bm> the critical distinction you draw between digital and analog
bm> computation. What convinces you that one is REAL computation, whereas
bm> the other is something else (process? pseudo-computation?)? If I'm not
bm> doing computer science please tell me what I am doing!
bm>
bm> I hope these comments shed some light on the nature of computation
bm> (whether analog or digital), and symbol manipulation (whether discrete
bm> or continuous).

Unfortunately, rather than shedding light, this seems to collapse the very distinction that a logical case can be built on, independent of any mentalistic projections. I can only repeat what I wrote in response to your earlier posting (to which you have not yet replied):


> bm> a physical device is an analog computer to the extent that we
> bm> choose and intend to interpret its behavior as informing us about
> bm> some other system (real or imaginary) obeying the same formal
> bm> rules. To take an extreme example, we could use the planets as an
> bm> analog computer...

>sh> (2) If all dynamical systems that instantiate differential equations
>sh> are computers, then everything is a computer (though, as you correctly
>sh> point out, everything may still not be EVERY computer, because of (1)).
>sh> Dubbing all the laws of physics computational ones is duly ecumenical,
>sh> but I am afraid that this loses just about all the special properties
>sh> of computation that made it attractive (to Pylyshyn (1984), for
>sh> example) as a candidate for capturing what it is that is special about
>sh> cognition and distinguishes it from other physical processes.

>sh> (3) Searle's Chinese Room Argument and my Symbol Grounding Problem
>sh> apply only to discrete symbolic computation. Searle could not implement
>sh> analog computation (not even transduction) as he can symbolic
>sh> computation, so his Argument would be moot against analog computation.
>sh> A grounded TTT-passing robot (like a human being and even a brain) is
>sh> of course an analog system, describable by a set of differential
>sh> equations, but nothing of consequence hangs on this level of
>sh> generality (except possibly dualism).

There is still the vexed question of whether or not neural nets are symbol systems. If they are, then they are subject to the symbol grounding problem. If they are not, then they are not, but then they lack the systematic semantic interpretability that Fodor & Pylyshyn (1988) have stressed as crucial for cognition. So nets have liabilities either way as long as they, like symbols, aspire to do all of cognition (Harnad 1990); in my own theory, nets play the much more circumscribed (though no less important) role of extracting the sensory invariants in the transducer projection that allow symbols to be connected to the objects they name (Harnad 1992).

Stevan Harnad

Fodor, J. & Pylyshyn, Z. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3 - 71. [also reprinted in Pinker & Mehler 1988]

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) Symbols and Nets: Cooperation vs. Competition. Review of S. Pinker & J. Mehler (Eds.) (1988) "Connections and Symbols." Connection Science 2: 257-260.

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke & R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

MacLennan, B. J. (1987). Technology-independent design of neurocomputers: The universal field computer. In M. Caudill & C. Butler (Eds.), Proceedings, IEEE First International Conference on Neural Networks (Vol. 3, pp. 39-49). New York, NY: Institute of Electrical and Electronic Engineers.

MacLennan, B. J. (in press-a). Continuous symbol systems: The logic of connectionism. In Daniel S. Levine and Manuel Aparicio IV (Eds.), Neural Networks for Knowledge Representation and Inference. Hillsdale, NJ: Lawrence Erlbaum.

MacLennan, B. J. (in press-b). Characteristics of connectionist knowledge representation. Information Sciences, to appear.

Pylyshyn, Z. (1984) Computation and Cognition. Cambridge MA: MIT/Bradford

--------------------------------------------

From: "Stevan Harnad"
Date: Mon, 25 May 92 23:47:41 EDT

Date: Mon, 25 May 92 18:57:54 -0400
From: yee@envy.cs.umass.edu (Richard Yee)

SIMULTANEOUS COMPUTATIONS? (So What is a Computer?)

Can a single physical system simultaneously implement two or more non-trivial computations? Is there any significant difference between, say, a simple calculator and a (universally programmable) computer that is running a program that describes the calculator? Is a calculator a computer? Must a computer be programmable?

In an earlier posting entitled "DON'T TALK ABOUT COMPUTERS" (Apr 20, posted Apr 22), I argued that we should avoid using the term "computer" because it is ambiguous. In his reply entitled "SO WHAT IS COMPUTATION?" (Apr 22), Stevan Harnad thought I was introducing a non-standard notion of computation. He began:


>sh> Much of Yee's comment is based an a distinction between formal and
>sh> nonformal "computation," whereas my arguments are based completely on
>sh> computation as formal symbol manipulation. We will need many examples
>sh> of what nonformal computation is, plus a clear delineation of what is
>sh> NOT nonformal computation ... (It would also seem hard to
>sh> pose these questions without talking about computers, as Yee enjoins
>sh> us!)

My previous submission probably tried to juggle too many concepts in too small a space. In particular, I wanted to draw attention to TWO distinctions, not one. The first---the main subject of this message---is the distinction between *universal* and *non-universal* computation. This is entirely a standard distinction, drawn from the Theory of Computation. The second distinction is between formal and non-formal *symbol processing* and its relationship to the two types of computation. Admittedly, most of the action surrounds this question, but I will leave it for a future submission. First things first.

Harnad concluded:


>sh> One cannot make coherent sense of this [the distinctions being made]
>sh> until the question "What is computation?", as posed in the header
>sh> to this discussion, is answered. Please reply in ordinary language
>sh> before turning again to technical formalisms, because this first pass
>sh> at formalism has merely bypassed the substantive questions that have
>sh> been raised.

OK, for the time being let us not cloud the issues with questions about formal vs. non-formal symbol processing. Unfortunately, technical formalisms remain at the heart of the matter. I think that discussions about computation should refer to the Theory of Computation (ToC). I want to argue the importance of maintaining a clear distinction between the class of Turing machines (TM's) and its PROPER SUBCLASS of universal TM's (UTM's). First, however, I will address Harnad's question about computation.

I. What is Computation?

ToC defines hierarchies of computational classes, where the most notable class is defined by the capabilities of Turing machines. I endorse the standard Church-Turing definition of computation (e.g., Lewis & Papadimitriou, 1981), which roughly states:

"computation" = what any TM does. (1)

Note carefully, it does NOT simply say:

"computation" = what any universal TM (UTM) does. (2)

Definition (1) clearly includes all UTM's, but (2) would not include all TM's. Computation must encompass what ALL TM's do, not just what universal ones do. More on this in sections II & III.

As for identifying actual instances of (approximations of?) computation in the world, I endorse most of David Chalmers' and related discussions of this subject (NB: with regard to "computation," not necessarily with regard to "beliefs," "qualia", etc.). In other words, like any theory or mathematical construct, the Turing machine model (like a triangle) is a formal abstraction of a portion of human experience. Subsequent reversing of the model (using it to view the world) involves finding physical systems that when suitably abstracted, fit the formal model (e.g., viewing New York, LA, and Miami as forming a triangle). The more constraints a model places on the world (i.e., the more *predictive* it is), the more difficult it will be to find a physical system that accidentally fits the model (i.e., that accidentally satisfies all the predictions. Try finding cities that form an isosceles right triangle, or an octagon). I take it that this, or something like it, is the "cryptographic constraint" and/or the property of "sustaining counterfactuals" which preclude arbitrary physical systems from implementing arbitrarily complex computations.
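Yee's point about predictive constraints can be illustrated numerically (a hedged sketch; the predicate names, tolerances, and trial counts are my own arbitrary choices): random triples of "cities" almost always fit the loose model "some triangle," and almost never the much more constrained model "isosceles right triangle."

```python
# Illustrative only: the more constraints a model imposes, the rarer it is
# for a random physical configuration to fit it "by accident."
import math
import random

def is_triangle(p, q, r, tol=1e-9):
    """Almost any three points form SOME triangle (i.e., are non-collinear)."""
    area = abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2
    return area > tol

def is_isosceles_right(p, q, r, tol=0.01):
    """A far more predictive model: two equal legs meeting at a right angle."""
    sides = sorted(math.dist(a, b) for a, b in [(p, q), (q, r), (p, r)])
    legs_equal = abs(sides[0] - sides[1]) < tol * sides[1]
    pythagoras = abs(sides[2]**2 - (sides[0]**2 + sides[1]**2)) < tol * sides[2]**2
    return legs_equal and pythagoras

random.seed(0)
trials = [[(random.random(), random.random()) for _ in range(3)]
          for _ in range(10_000)]
loose = sum(is_triangle(*t) for t in trials)
tight = sum(is_isosceles_right(*t) for t in trials)
# Nearly every triple satisfies the loose model; very few satisfy the tight one.
assert loose > 9_000 and tight < loose
```

The asymmetry is the "cryptographic constraint" in miniature: the more predictions a model makes, the less likely an arbitrary system satisfies them all.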

Nevertheless, a physical system will often support multiple abstract descriptions as different TM computations---just as an arrangement of cities typically can be construed as having several different (irregular) geometrical shapes. Because the human brain is a complex physical system, it undoubtedly implements numerous different TM models, all by accident. The question, of course, is whether there could exist a TM model that could both (a) "be cognitive" and (b) be implemented by the brain in a physically plausible way. Condition (a) means being a complete, verifiable, explanatory model of an individual's cognitive processes, and condition (b) means being a causal description at the level of biological structures and processes (neurons, neurotransmitters, evolution?, etc.) and not at the levels of, say, quantum mechanics or abstract concepts (which would involve "ungrounded symbols"). The complexity entailed in condition (a) and the implementational/descriptive constraints of (b) make it vanishingly unlikely that both could be satisfied through chance.

II. The TM/UTM Distinction

Much of the symbol-grounding discussion, of course, revolves around the possibility of satisfying condition (a): Could there be a TM model that would really be cognitive if implemented (by whatever means)? Rather than trying to answer this question here, I merely want to point out one *incorrect* way to set about the task. It is incorrect, in general, to attribute properties of UNIVERSAL TM's (computers?) to ALL TM's.

Harnad asks:
>sh> Please give examples of what are and are
>sh> not "non-universal TM computations" and a principled explanation
>sh> of why they are or are not.

"Universal" basically means "programmable," and most TM's are not programmable. An example of a non-universal TM is a simple calculator that performs, say, only addition. It is non-universal because one cannot give it an input (i.e., a program) that would result in its producing, say, chess-playing behaviors. [1]
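Yee's calculator example can be sketched in a few lines (the function names, and the use of Python's `exec` as a stand-in for a universal machine consulting its rulebook, are my own illustrative assumptions, not anything from the Theory of Computation):

```python
# Non-universal vs. universal machines, as a toy contrast.

def adding_machine(x, y):
    """Non-universal: it only adds. No input can make it play chess."""
    return x + y

def universal_machine(program, data):
    """Universal: its behavior is fixed only once a program is supplied."""
    env = {}
    exec(program, env)           # install the encoded machine's rule
    return env["main"](*data)    # run it on the remaining input

# The universal machine imitates the adding machine, given its description:
adder_program = "def main(x, y): return x + y"
assert universal_machine(adder_program, (2, 3)) == adding_machine(2, 3)
```

Feeding `universal_machine` a different program yields entirely different behavior, which is exactly what no input to `adding_machine` can do.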

What is NOT a TM computation (universal or otherwise) is a bit thornier question. TM computations include only discrete-time processes requiring a finite amount of information processing per time step. Thus, any process, e.g., an analog one, that REQUIRES (not simply "uses") infinitesimal time-steps and/or infinite-precision computations (e.g., involving irrational numbers such as pi) might be a candidate for a non-TM "computation." [2]

Computation thus includes all programmable and non-programmable TM processes, and it excludes all requirements for infinite information processing (in time or space), which would presumably exclude analog processes (if they exist).

III. Some Implications of the TM/UTM Distinction

Suppose one answered the question "What is computation?" with:

"computation" = what any computer does. (3)

This answer would be incomplete without a specific computer model. The two obvious choices are:

"computer" = TM, (4) => (1)

or

"computer" = (PC, workstation, mainframe, etc.) = UTM. (5) => (2)

If you believe that *programmability* is an essential feature of computers, then you are leaning toward definition (5), and I think that many people have such a picture in mind when they talk about "computers." However, I also suspect that if forced to choose EXPLICITLY between (4) and (5), many would choose (4) because it is more general, i.e., it corresponds to definition (1), ToC's definition of computation.

But now we have a problem. Many who think and talk mainly about programmable computers, UTM's, might implicitly believe themselves to be discussing all Turing machines. They would not be. UTM's are not universal for every aspect of computation (in fact, UTM's are quite atypical). One cannot simply transform arguments regarding UTM's directly into conclusions about all TM's. THE IMPLICATION DOES NOT FLOW IN THAT DIRECTION.

The distinction between TM's and UTM's should be clear-cut. If so, then the questions posed at the beginning of this message should be answerable without philosophical debate.

Q1: Can a single physical system simultaneously implement two or more non-trivial computations?

A1: Yes. Every UTM, for example, simultaneously implements a unique universal computation and another arbitrary TM computation, which depends on an input program. (It would be incoherent to recognize one computation and not the other because both satisfy exactly the same conditions for being a true implementation.)

Q2: Is there a significant difference between a simple calculator and a (universally programmable) computer (a UTM) that is running a program that describes the calculator?

A2: Yes, the calculator implements ONE non-trivial computation (that we know of), while the UTM simultaneously implements TWO computations (that we know of).

Q3: Is a (simple) calculator a computer?

A3: It is a TM, but not a UTM.

Q4: Must a computer be programmable?

A4: A UTM must be programmable. A TM may be non-programmable, partially-programmable, or universally-programmable (a UTM).

(and continuing...)

Q5: If a UTM is running an unknown program P, can one thereby guarantee that computation X is NOT occurring?

A5: No. P might describe a TM that implements X. In such a case, the physical system comprised of the UTM and its inputs, which include program P, would simultaneously implement the TM and, hence, compute X.

Q5b: What if we *knew* that X is not the UTM's computation? Could we then guarantee that X is not occurring?

A5b: No, there are two computations to account for (see A5, A1).
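A1 and A5 can be illustrated with a toy interpreter (the instruction set and program are invented for illustration): a single run of the loop below is describable as TWO computations at once, the interpreter's own fetch-execute cycle and the arithmetic function that its input program encodes.

```python
# One physical process, two simultaneous computational descriptions.

def run(program, x):
    """The 'universal' level: blindly fetch and execute instructions."""
    acc, steps = x, 0
    for op, arg in program:
        steps += 1               # bookkeeping of the interpreter's OWN activity
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
    return acc, steps

# The 'virtual' level: this program describes the computation x -> 2x + 1.
double_plus_one = [("mul", 2), ("add", 1)]

result, steps = run(double_plus_one, 5)
assert result == 11 and steps == 2   # same run, two true descriptions
```

Recognizing only the fetch-execute computation and not the arithmetic one (or vice versa) would be arbitrary: both are implemented by exactly the same sequence of physical state changes.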

I believe that the TM/UTM distinction has some significance for the debate surrounding the computability of mind. Some of the preceding answers cannot even be coherently formulated if one is stuck with the single term "computer." Also, the fact that the Chinese Room (CR) argument is based on a UTM should give one pause. If it is illogical to generalize from UTM's to all TM's, then does the CR say anything about the capabilities of TM's in general? More specifically, does it "prove" anything about non-programmable (rulebook-less) TM's? If one is interested in all TM's, why talk only about programmable ones? Finally, even if UTM's process symbols purely formally, can one thereby conclude that all TM's must be purely formal symbol processors?

Of course, such issues will draw us from the firm ground of the Theory of Computation, but in venturing away we should try to maintain as clear a picture as possible. Instead of thinking and talking about "computers," I think we should at least consider the ramifications of using, or of failing to use, the more-precise models of Turing machines and---when specifically intended---universal Turing machines. The TM/UTM difference is the difference between being a program and being programmed. I trust that the former is the issue of interest.

Notes:

----------------------

[1] A TM could be partially programmable without being universal. That is, I might be able to program a calculator to compute certain formulae, e.g., celsius-fahrenheit conversions, but, again, I need not be able to make it play chess.

[2] Much of Roger Penrose's book (1989) speculates about the necessity of such infinite information-processing for mental functioning. However, it is debatable whether such processes could actually "compute something" in any proper sense, and it is not even clear that such (analog) processes actually exist.

References:

----------------------

Lewis, H. R. & Papadimitriou, C. H. (1981) Elements of the Theory of Computation. Englewood Cliffs, NJ: Prentice Hall.

Penrose, R. (1989) The Emperor's New Mind. New York: Oxford University Press.

-----------------------------------------------------

From: Stevan Harnad

SO WHAT IS IT THAT EVERY TM DOES?

Richard Yee has done a good job explicating the TM/UTM distinction (and has even taken some steps toward formulating an analog/TM distinction, although he thinks nothing may fit the left-hand side of that distinction). I look forward to his next posting, when he tells us what it is that every TM does ["computation" = what any TM does] and what (besides the hypothetical analog systems that he suspects may not exist) is NOT a TM, hence NOT doing what every TM does but something else (and what that something else is). (What is to be avoided in all this, I take it, is making everything a TM, and hence what everything does computation -- which, if it were true, would make statements like "Cognition is just Computation" or "The brain is just a TM" just so many tautologies.) It would also be helpful to explicate the implementation-independence of computation, and just what might or might not be expected to "supervene" on it.

My hypothesis is that what TM's do is syntax: formal symbol manipulation (reading squiggles, comparing them with squoggles, erasing squiggles and replacing them by squoggles, etc.), and that whatever else might supervene on every implementation of the same computation in THIS sense, mentality does not number among them. In any case, it is only formal symbol manipulation (call it something else if "computation" is not the right word for it) that is vulnerable to Searle's Chinese Room Argument and the Symbol Grounding Problem.
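A minimal sketch of this construal (the rule table and tape contents are invented for illustration): a TM step is nothing but shape-based lookup-and-rewrite, with no access to anything the marks might mean.

```python
# Pure syntax: match a squiggle's shape, write a squoggle, move the head.

def tm_step(rules, state, tape, head):
    """Read a symbol, look up (state, symbol), write, and move."""
    symbol = tape.get(head, "_")                 # "_" is the blank squiggle
    new_state, write, move = rules[(state, symbol)]
    tape[head] = write
    return new_state, tape, head + (1 if move == "R" else -1)

# A toy machine that replaces each '1' with an 'X', halting on blank.
rules = {
    ("scan", "1"): ("scan", "X", "R"),
    ("scan", "_"): ("halt", "_", "R"),
}
tape, state, head = {0: "1", 1: "1"}, "scan", 0
while state != "halt":
    state, tape, head = tm_step(rules, state, tape, head)
assert tape[0] == "X" and tape[1] == "X"
```

Nothing in `tm_step` depends on what "1" or "X" stands for; any meaning is supplied, if at all, by an external interpreter, which is the crux of the symbol grounding problem.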

Stevan Harnad

-----------------------------------------------------

Date: Tue, 26 May 92 12:02:50 EDT
From: "Stevan Harnad"

Date: Tue, 26 May 92 10:44:40 -0400
From: davism@turing.cs.nyu.edu (Martin Davis)

Richard Yee writes emphasizing the importance of the distinction between arbitrary TMs and universal TMs.

There are some technical problems in making this distinction precise (having to do with separating the encoding of I/O data to the TM, from the actual computational process).

Long ago, I wrote on this matter:

``A Note on Universal Turing Machines,'' Automata Studies, C.E. Shannon and J. McCarthy, editors, Annals of Mathematics Studies, Princeton University Press, 1956.

``The Definition of Universal Turing Machine,'' Proceedings of the American Mathematical Society, vol.8(1957), pp. 1125-1126.

Martin Davis

------------------------------

Date: Tue, 26 May 92 23:36:14 EDT
From: "Stevan Harnad"

Date: Tue, 26 May 1992 12:37:14 -0400
From: Franklin Boyle

I would like to preface the following response to Dave Chalmers by repeating what Stevan, I believe, said about the What is Computation? discussion in reply to those who cautioned that it was straying from answering the original question (when it began moving towards cognition, the Chinese Room, etc.): that what is in the backs of all our minds is the desire to better understand cognition and, in particular, whether computation is sufficient to produce it.

Cognitive issues are important to this discussion because cognition is the result of a rather unique physical system (the brain) that is causally influenced by its environment as well as by interactions among its constituent parts. But unlike planetary systems and airplanes, the brain has some rather remarkable properties; in particular, its intrinsic capacity for reference. Now this is not a property of planets or solar systems or any other system we know of. So if we define computation such that planets can be construed as implementing some computation and, therefore, that they are computing, as Chalmers maintains, then we had better make sure we understand the nature of representation in such systems and how representing entities are causal, that is, how they relate to the "particular kind of state-transitional structure" [Chalmers] that serves as the basis for calling a solar system an implementation of a particular computation. Why? Because that is how the analogous argument goes for the brain as implementing a particular computation, and, thus, for whether or not cognition is computation. But in this latter case, we have to account for intrinsic reference.

In other words, when we define computation, we had better be explicit about its physical characteristics because when we come to the question about the brain as a system which implements a particular computation, then whatever definition we've produced has to bear the full weight of being able to account for such things as an intrinsic referential capacity. If we allow any physical system to be an implementation of some computation, we will most likely end up with little in the way of principled criteria for determining whether cognition is computation.

From *my* perspective, the brain is not implementing a computation, just as planets in orbits are not, but for a different reason: because structure-preserving superposition (SPS), rather than nomologically determined change (as occurs in planetary systems), is the causal mechanism by which the physical change associated with its information processing is primarily brought about (see my previous postings). Both are fundamentally different from pattern matching, which, I claim, underlies computation.

Issues about consciousness, qualia, etc. should be part of another discussion on mind and brain, but symbol grounding and even the Chinese Room (unless it veers off toward arguments about multiple personalities, etc.) should be part of the "What is Computation?" discussion because they involve issues of causality and representation which are fundamental to computation. They also represent attempts to get at the kinds of things the "state transitional structure" of some computer program is going to have to support, especially in the case of the brain; e.g., "understanding" that comes about presumably because of referential characteristics of the representation.

To reiterate, if computation is going to be considered capable of producing cognition, then its state transition structure, or causal "organization", as Chalmers puts it, is going to have to explain this capacity. And, frankly, I don't believe this functionalist criterion alone can handle it. You need more than causal organization.

I therefore feel that the bifurcated discussions should, at least when the question of publication arises, be merged, with editing, under the "What is Computation?" heading.


>dc> As for continuity or discreteness, that depends on the computational
>dc> formalism one uses. Certainly all of the usual formalisms use discrete
>dc> states. Of course, a continuous physical system (like the planetary
>dc> system) can implement a discrete computation: we just have to chop up
>dc> its states in the right way (e.g. divide an orbit into 4 discrete
>dc> quadrants).

I don't believe there is a computational formalism that can legitimately be described as "computational" if it isn't discrete in a specific way. This doesn't mean that a system which is computing does not involve continuous processes (indeed, it must, if it's a physical system). But such processes are there only in a supporting capacity. They are not really part of the computation per se.

What makes computation discrete is the pattern matching process, not the nature of the representing entities. In computers, for example, symbols are discrete combinations of "high" and "low" voltages, whereas in cellular enzyme catalysis, which is also a pattern matching process (whether we call it a computation is still an issue), the tertiary structure of the enzyme onto which the substrate molecules fit is continuous. But for both, pattern matching is a discrete event because it involves structure fitting and therefore leads to a distinct change: the switching of a particular circuit voltage from high to low or the formation of a single covalent bond, respectively. Pattee (1972) describes this type of constraint associated with structure fitting as a "decision-making" constraint. That is, the change is like a decision: a discrete event, a choice among alternatives.

In so-called analog computers and planetary systems, as in all other physical systems, interactions between objects cause state changes. But if you consider what is doing the representing in these two systems -- the *values* of measured attributes of the interacting objects -- you see that the representation is very different from one that is embodied by the forms of objects. Since changes in the values of measured attributes are nomologically determined, the representation in such systems not only depends on numerical values, but also on numerically based constraints (i.e., physical laws and boundary conditions) between representing (as well as nonrepresenting) entities. These are not decision-making constraints. Associations between measured attributes are not causal, yet these associations specify numerical relationships which, it would seem, would be very difficult to construe as representative of the arbitrary relationships between symbols in a pattern matching system. Structure fitting is not captured by these relationships because structures are extended, which is why they get broken up piecemeal into numerically based boundary conditions in physical state descriptions. Though this may not matter so much in the case of simple planetary systems, it does matter for cognition.

This is why you can't just say that the orbit of a planet can be divided into 4 discrete quadrants and expect that the system is, therefore, implementing a particular computation. The causal process involved in going from one quadrant to the next is nothing like a decision-making process; it is a nomologically determined change based on Newton's second law of motion applied to a particular system -- there is no choice among alternatives determined by the representing entities present in the system. Thus you are merely *interpreting* the system as implementing a computation because the states happen to correspond. But it is *not* an implementation. (The interpretation, which is a description, can, of course, be implemented on a digital computer; in other words, we know that the reverse is true: a digital computer can simulate planetary motion.)
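
The point can be made concrete with a minimal sketch (my own illustration, not part of the original exchange; the function names and parameter values are hypothetical). A digital computer *simulates* planetary motion by numerically integrating Newton's law; the four "quadrant" labels are then an interpretation layered on top of the continuous, nomologically determined trajectory, not anything the orbit itself decides among:

```python
import math

# Gravitational parameter for a Sun-like body, in AU^3/yr^2 (so a 1-AU
# circular orbit has period 1 year), and a small integration time step.
G_M = 4 * math.pi ** 2
dt = 0.001

# Initial state: roughly circular 1-AU orbit, counterclockwise.
x, y = 1.0, 0.0
vx, vy = 0.0, 2 * math.pi

def quadrant(x, y):
    """Our *interpretation* of the continuous state as 4 discrete labels."""
    if x >= 0 and y >= 0:
        return "Q1"
    if x < 0 and y >= 0:
        return "Q2"
    if x < 0:
        return "Q3"
    return "Q4"

seen = []
for step in range(int(1.0 / dt)):  # integrate roughly one orbital period
    r = math.hypot(x, y)
    ax, ay = -G_M * x / r**3, -G_M * y / r**3  # Newtonian acceleration
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    q = quadrant(x, y)
    if not seen or seen[-1] != q:
        seen.append(q)

# The orbit passes through the quadrant "states" in order, but nothing in
# the dynamics matches patterns or chooses among alternatives.
print(seen)
```

The simulation is an implementation of the *description* of the orbit; the planet itself implements nothing.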


>dc> Similarly for solar systems. It's conceptually constitutive of solar-
>dc> system-hood that a system have a certain geometric shape, a certain
>dc> chemical makeup, a certain size, and so on, and these physical properties
>dc> are not determined by abstract causal organization.
>dc> .....
>dc> The strong-AI hypothesis is that unlike these properties, *cognition*
>dc> is a property that supervenes on abstract causal organization. This
>dc> may or may not be obvious at first glance, but note that unlike digestion
>dc> and solar-system-hood, it's not ruled out at first glance: there doesn't
>dc> seem to be any physical property independent of causal organization
>dc> that's conceptually constitutive of cognition.

As I've suggested in previous posts and above, there *are* physical properties other than causal organization which, in your terminology, are conceptually constitutive of cognition -- namely, *how* cause is brought about. The reason the latter constraint is "conceptually constitutive" (if I understand what you mean by this expression) of a process's being cognition is that if the brain is to have information about objects in the world -- their structures, motions, etc. -- then it has to actually receive the projections of those objects' structures, motions, etc. Otherwise, how could we know about them? Just saying some measured attribute or extended structure embodies it is not sufficient.

In pattern matching systems like digital computers, each structure, whether deliberately encoded by us, say as propositions, or entered directly through a video peripheral as a bitmap, causes the same behavior as long as there is a matcher to trigger the same response in both cases. Thus, it makes no difference what the structures are that do the representing in such systems. Each structure itself carries no information about its referent in pattern matching systems simply because *any* structure can cause the same outcome (of course the structures do carry information *for us* as external observers!). Furthermore, the vast amount of information implicit in the complex structures that constitute the environment cannot be just in the causal organization, because that organization is, for pattern matching systems, composed of a lot of simple actions (structureless) that trigger subsequent matches between objects (symbols) whose *actual* structures are superfluous.
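
A toy sketch can make the shape-independence claim vivid (this is my own illustration, not Boyle's formalism; the rule tables and relabeling map are invented for the example). In a pure pattern matching system, symbols act only by being matched, so any one-to-one relabeling of the symbol shapes yields behavior that corresponds exactly under the relabeling:

```python
# A trivial matcher: rewrite rules keyed by exact symbol match.
rules_a = {"CAT": "ANIMAL", "OAK": "TREE"}

# The same rules under an arbitrary relabeling of every symbol shape.
relabel = {"CAT": "x1", "OAK": "x2", "ANIMAL": "y1", "TREE": "y2"}
rules_b = {relabel[k]: relabel[v] for k, v in rules_a.items()}

def run(rules, symbol):
    """Fire the rule whose pattern matches the input symbol exactly."""
    return rules.get(symbol, "NO-MATCH")

out_a = run(rules_a, "CAT")           # the "meaningful" vocabulary
out_b = run(rules_b, relabel["CAT"])  # the arbitrarily relabeled one

# The two runs are the same computation: the outputs correspond under the
# relabeling, so the symbols' actual shapes are superfluous to the system
# (they carry information only for us, the external observers).
assert relabel[out_a] == out_b
print(out_a, out_b)
```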

This is just a physical explanation of why, as Harnad puts it, there is "nobody home" in such systems. Nor can there ever be.


>dc> To clarify, by "causal structure" I mean, roughly, *organizational*
>dc> properties of a system: i.e., the patterns of interactions between
>dc> various states, without taking into account what those states actually
>dc> are. For instance an atom, at least according to the Bohr model,
>dc> might share some causal structure with the solar system, but it differs
>dc> in many properties that aren't organizational properties, such as size,
>dc> mass, and intrinsic physical structure.

What are these "patterns of interactions between various states"? Are they just *sequences* of states or the individual interactions between particular objects that are constituents of the system? What you call "interactions between various states" are, I assume, really interactions between the constituent objects of those states, for that is what leads to new states. If it's just sequences of different states that can be mapped onto each other, without any accounting for what in those states (particular objects or their measured attributes) is actually doing the representing and whether the representing entities are causing change, then you haven't really got any principled criteria for what makes something computational.

To return to what I said at the beginning of this post about cognition and its importance to discussions of computation: you have to account physically for the problem of representation, since that is a fundamental part of cognition and, therefore, should be the same for computation as well -- if you intend for computation eventually to do some cognitive work for you.

-Franklin Boyle

Pattee, H. H. (1972) Physical Problems of Decision-Making Constraints. International Journal of Neuroscience, 3:99-106.

---------------------------------------------------------

Date: Wed, 27 May 92 22:17:19 EDT From: "Stevan Harnad"

ANALOG SYSTEMS AND THE SUBSET OF THEM THAT IMPLEMENT SYMBOL SYSTEMS

For obvious reasons, I would like to understand and agree with what Franklin Boyle has written, because, on the face of it, it looks to be among the tiny minority of contributions to this Symposium that is not in substantial disagreement with my own! Nevertheless, I have some problems with it, and perhaps Frank can help with some further clarification.

fb> unlike planetary systems and airplanes, the brain has some rather
fb> remarkable properties; in particular, its intrinsic capacity for
fb> reference... So if we define computation such that planets can be
fb> construed as implementing some computation... as Chalmers maintains,
fb> then we had better make sure we understand the nature of representation
fb> in such systems and how representing entities are causal... Why?
fb> Because that is how the analogous argument goes for the brain as
fb> implementing a particular computation, and, thus, whether or not
fb> cognition is computation. But in this latter case, we have to account
fb> for intrinsic reference... If we allow any physical system to be an
fb> implementation of some computation, we will most likely end up with
fb> little in the way of principled criteria for determining whether
fb> cognition is computation.

I agree that excessive generality about "computation" would make the question of whether cognition is computation empty, but I don't see what THIRD possibility Frank has implicitly in mind here: For me, planets, planes, and brains are just stand-ins for ordinary analog systems. In contrast, a subset of these analog systems -- namely, computers doing computation -- are what they are, and do what they do, purely because they are implementations of the right symbol system (because they are constrained by a certain formal syntax, manipulating discrete symbols on the basis of their arbitrary shapes: "pattern matching," as Frank points out). So we have the physical analog world of objects, and some of these objects are also implementations of syntactic systems for which all specifics of the physical implementation are irrelevant, because every implementation of the same syntax is equivalent in some respect (and the respect under scrutiny here is thinking).

So I repeat, there seem to be TWO kinds of things distinguished here (actually, one kind, plus a special subset of it), namely, all physical systems, and then the subset of them that implement the same syntax, and are equivalent in that respect, independent of the physical properties of and differences among all their possible implementations. But the passage above seems to imply that there is a THIRD kind of stuff, that the brain will turn out to be that, and that that's the right stuff (which Frank calls "intrinsic capacity for reference").

I think we differ on this, because I would distinguish only analog and syntactic systems, assuming that the relevant cognitive capacities of the brain (a hybrid nonsymbolic/symbolic system) will turn out to draw essentially on both kinds of properties, not just the syntactic properties, as the computationalists claim (and definitely not just in the sense that syntax must always have a physical implementation); and that "intrinsic reference" will turn out to be synonymous with symbol groundedness: The meanings of the internal symbols of the TTT robot will be grounded in the robot's TTT-capacities vis-a-vis the objects, events and states of affairs to which the symbols can be systematically interpreted as referring.

fb> the brain is not implementing a computation just as planets in orbits
fb> are not, but for a different reason; because of structure-preserving
fb> superposition (SPS), instead of nomologically determined change (as
fb> occurs in planetary systems), as the causal mechanism for how physical
fb> change associated with its information processing is primarily brought
fb> about (see my previous postings). Both are fundamentally different from
fb> pattern matching, which, I claim, underlies computation.

I guess "SPS" is this third kind of property, but I don't really understand how it differs from an ordinary analog process. Here's what Frank wrote about it in his prior posting:


> fb> What other ways might physical objects cause change besides
> fb> through their [arbitrary, syntactic] forms? There are, I claim,
> fb> only two other ways: nomologically-determined change and
> fb> structure-preserving superposition (SPS). The former refers to
> fb> the kinds of changes that occur in "billiard-ball collisions".
> fb> They involve changes in the values of measured attributes
> fb> (properties whose values are numerical, such as momentum) of
> fb> interacting objects according to their pre-collisional
> fb> measured-attribute values in a physically lawful way (that is,
> fb> according to physical laws). Unlike pattern matching
> fb> interactions, these changes are not the result of structure
> fb> fitting.
> fb>
> fb> SPS is what I believe brains use. Like pattern matching (PM), it
> fb> also involves extended structure, but in a fundamentally
> fb> different way. Whereas PM involves the fitting of two
> fb> structures, which by its very nature, leads only to a simple
> fb> change such as the switching of a single voltage value from
> fb> "high" to "low" (in digital computers), SPS involves the actual
> fb> *transmission* of structure, like a stone imprinting its
> fb> structure in a piece of soft clay. That is, it is not the *form*
> fb> of a pattern or structure which must *conform* to the structure
> fb> of a matcher in order to effect system functioning (as in PM).
> fb> Rather, it is the *appearance* of that structure which causes
> fb> change because it is transmitted, so that the effect is a
> fb> structural formation of the specific features of the pattern's
> fb> extended structure (though I won't elaborate here, the difference
> fb> between form and appearance is somewhat akin to the difference
> fb> between the shadow of an object and the object itself). Two
> fb> different structures would physically superimpose to
> fb> automatically create a third. Harnad's [1990] symbol grounding
> fb> processes -- "analog re-presentation" and "analog reduction" -- I
> fb> take to be examples of SPS.

Leaving out the hermeneutics of "appearance" (which I think is a dangerous red herring), the above again simply seems to be distinguishing two kinds of analog processes, but this time with the distinction mediated by properties that are interpretable as "resembling" something rather than by formal syntactic properties that are interpretable as meaning something. So, enumerating, we have (1) the usual Newtonian kind of interaction, as between planets, then we have (2) a kind of structure-preserving "impact," leaving an effect that is somehow isomorphic with its cause (like an object and its photographic image?), and then finally we have (3) implementation-independent, semantically interpretable syntactic interactions. But (2) just looks like an ordinary analog transformation, as in transduction, which I don't think is fundamentally different from (1). In particular, if we drop talk of "appearances" and "resemblances," whatever physical connection and isomorphism is involved in (2) is, unlike (3), not merely dependent on our interpretation, hence not "ungrounded" (which is why I make extensive use of this kind of analog process in my own model for categorical perception).

My own proposal is that symbols are grounded in whatever internal structures and processes are required to generate TTT capacity, and I have no reason to believe that these consist of anything more than (1) pure analog properties, as in solar systems and their analogs, plus (2) syntactic properties, but with the latter grounded in the former, unlike in a pure (implemented but ungrounded) symbol system such as a computer. In this hybrid system (Harnad 1992 -- see excerpt below) neural nets are used to detect the invariants in the analog sensory projection that allow object categories to be connected to the symbols that name them; this model invokes no third, new property, just analog and syntactic properties.

fb> Issues about consciousness, qualia, etc. should be part of another
fb> discussion on mind and brain, but symbol grounding and even the Chinese
fb> Room... should be part of the "What is Computation?" discussion because
fb> they involve issues of causality and representation which are
fb> fundamental to computation... e.g., "understanding" ...comes about
fb> presumably because of referential characteristics of the
fb> representation.

But by my lights you can't partition the topic in this way, excluding the question of consciousness, because consciousness already enters as a NEGATIVE datum even in the Chinese Room: Searle testifies that he does NOT understand Chinese, therefore the implementation fails to capture intrinsic reference. Searle is reporting the ABSENCE of understanding here; that is an experiential matter. So understanding piggy-backs on the capacity to have qualia. Frank seems to agree (and to contradict this partitioning) when he writes:

fb> This is just a physical explanation of why, as Harnad puts it,
fb> there is "nobody home" in such systems. Nor can there ever be.

To restate my own view, at any rate: It is an empirical hypothesis (just as computationalism, now refuted, was) that a real mind (real cognition, real thinking, somebody home, having qualia) will "supervene" on a system's TTT-capacity. Logically speaking, TTT-capacity is neither necessary nor sufficient for having a mind (this is a manifestation of the enduring -- and in my view insoluble -- mind/body problem), so the TTT-grounding hypothesis could be as wrong as computationalism. But unless someone comes up with an equivalent of Searle's Chinese Room Argument [and the ensuing Symbol Grounding Problem] against it, we can never know that the TTT hypothesis was wrong (because of the other-minds problem), so we should probably stop worrying about it. Nevertheless, it continues to be true in principle that having a real mind (somebody home, etc.) is a NECESSARY condition for the truth of the TTT hypothesis. It just happens to be a condition we will never have any way of knowing is fulfilled (which is why I am a methodological epiphenomenalist). Try denying that it's necessary without simply stipulating what thinking is by fiat (which would turn the "What is Cognition?" question into a Humpty-Dumpty matter, as empty as pan-computationalism would make the "What is Cognition?" question). To put it another way: "Groundedness = Aboutness" is just a fallible hypothesis, like any other, not a definition.

fb> You need more than causal organization.

I don't even believe syntactic systems have "causal organization" or "causal structure" in the sense Dave Chalmers claims. Their implementations of course have ordinary physical, causal properties, but these are irrelevant, by stipulation. So the only causality left is FORMAL (syntactic) causality, based on (pattern matching among) arbitrary shapes -- but with all of it being systematically interpretable as meaning something. To be sure, this is a powerful and remarkable property that symbol systems do have, but it is just a formal property, even when implemented physically. It is why nothing is really caused to move in a computer simulation of, say, the solar system or a billiard game.

fb> What makes computation discrete is the pattern matching process...
fb> [which is] a discrete event because it involves structure fitting and
fb> therefore leads to a distinct change; the switching of a particular
fb> circuit voltage from high to low or a single covalent bond,
fb> respectively. Pattee (1972) describes this type of constraint
fb> associated with structure fitting as a "decision-making" constraint.
fb> That is, the change is like a decision, which is a discrete event; a
fb> choice among alternatives.

One can agree about the discreteness (without the unnecessary "decisional" hermeneutics), but it is still not clear what Pattee's mysterious "SPS" amounts to (although I know he invokes quantum mechanics, which I have a strong intuition is just as irrelevant as when Penrose invokes it: mysteries are not solved by applying a dose of yet another [and unrelated] mystery).

fb> In so-called analog computers and planetary systems, as in all other
fb> physical systems, interactions between objects cause state changes. But
fb> if you consider what is doing the representing in these two systems --
fb> the *values* of measured attributes of the interacting objects -- you
fb> see that the representation is very different from one that is embodied
fb> by the forms of objects. Since changes in the values of measured
fb> attributes are nomologically determined, the representation in such
fb> systems not only depends on numerical values, but also on numerically
fb> based constraints (i.e., physical laws and boundary conditions) between
fb> representing (as well as nonrepresenting) entities. These are not
fb> decision-making constraints. Associations between measured attributes
fb> are not causal, yet these associations specify numerical relationships
fb> which, it would seem, would be very difficult to construe as
fb> representative of the arbitrary relationships between symbols in a
fb> pattern matching system. Structure fitting is not captured by these
fb> relationships because structures are extended, which is why they get
fb> broken up piecemeal into numerically based boundary conditions in
fb> physical state descriptions. Though this may not matter so much in the
fb> case of simple planetary systems, it does matter for cognition.

We are free to use either analog or discrete systems, natural or artificial, as tools for anything from digging a hole to reckoning time to putting a hex on an enemy. Their "representational" properties, if any, are purely extrinsic, dependent entirely on how we use them, just as the "sittability-upon" affordances of a chair are. I know that "extended structure" plays a critical role in Frank's own theory, but I have not yet been able to understand clearly what that role is. Whenever I have read about it, if I subtracted the hermeneutics, I found no remarkable property left over -- other than continuity in time and space, which is rather too general to be of any help, plus ordinary analog and syntactic interactions.

fb> Furthermore, the vast amount of information implicit in the complex
fb> structures that constitute the environment cannot be just in the causal
fb> organization, because that organization is, for pattern matching
fb> systems, composed of a lot of simple actions (structureless) that
fb> trigger subsequent matches between objects (symbols) whose *actual*
fb> structures are superfluous.

Since everything seems to be susceptible to a finer-grained analysis where higher-level properties disappear, this too seems too general to be of any help in sorting out what is and is not computation or cognition.

Stevan Harnad

The following excerpt is from:

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context. Springer Verlag.

Analog Constraints on Symbols

Recall that the shapes of the symbols in a pure symbol system are arbitrary in relation to what they stand for. The syntactic rules, operating on these arbitrary shapes, are the only constraint on the manipulation of the symbols. In the kind of hybrid system under consideration here, however, there is an additional source of constraint on the symbols and their allowable combinations, and that is the nonarbitrary shape of the categorical representations that are "connected" to the elementary symbols: the sensory invariants that can pick out the object to which the symbol refers on the basis of its sensory projection. The constraint is bidirectional. The analog space of resemblances between objects is warped in the service of categorization -- similarities are enhanced and diminished in order to produce compact, reliable, separable categories. Objects are no longer free to look quite the same after they have been successfully sorted and labeled in a particular way. But symbols are not free to be combined purely on the basis of syntactic rules either. A symbol string must square not only with its syntax, but also with its meaning, i.e., what it, or the elements of which it is composed, are referring to. And what they are referring to is fixed by what they are grounded in, i.e., by the nonarbitrary shapes of the iconic projections of objects, and especially the invariants picked out by the neural net that has accomplished the categorization.

If a grounding scheme like this were successful, it would be incorrect to say that the grounding was the neural net. The grounding includes, inseparably (on pain of reverting to the ungrounded symbolic circle) and nonmodularly, the analog structures and processes that the net "connects" to the symbols and vice-versa, as well as the net itself. And the system that a candidate would have to BE in order to have a mind (if this hybrid model captures what it takes to have a mind) would have to include all of the three components.
Neither connectionism nor computationalism, according to this proposal, could claim hegemony in modeling cognition, and both would have to share the stage with the crucial contribution of the analog component in connecting mental symbols to the real world of objects to which they refer.
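
A drastically simplified sketch of this hybrid proposal may help fix ideas (this is my own toy illustration, not Harnad's implementation; the cluster parameters, category names, and the choice of a perceptron as the "net" are all invented for the example). A tiny detector learns an invariant that separates two classes of analog "sensory projections," and the elementary symbols are just the category names whose use is fixed by that detector:

```python
import random

random.seed(0)

# Analog "sensory projections": 2-D feature vectors for two object kinds,
# drawn from well-separated Gaussian clusters (a stand-in for iconic input).
def sample(kind):
    if kind == "circle":
        return [random.gauss(1.0, 0.2), random.gauss(1.0, 0.2)]
    return [random.gauss(-1.0, 0.2), random.gauss(-1.0, 0.2)]

data = [(sample("circle"), 1) for _ in range(50)] + \
       [(sample("square"), -1) for _ in range(50)]

# A perceptron stands in for the invariance-detecting neural net.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for xvec, t in data:
        y = 1 if w[0] * xvec[0] + w[1] * xvec[1] + b > 0 else -1
        if y != t:  # misclassified: nudge the boundary toward the example
            w = [w[0] + lr * t * xvec[0], w[1] + lr * t * xvec[1]]
            b += lr * t

def name(xvec):
    """The grounded elementary symbol: its use is fixed by the detector,
    not by syntactic rules alone."""
    return "circle" if w[0] * xvec[0] + w[1] * xvec[1] + b > 0 else "square"

print(name([1.1, 0.9]), name([-0.9, -1.2]))
```

The point of the sketch is only structural: the symbol "circle" is constrained by the learned invariant connecting it to the sensory projections, so its tokens cannot be combined in ways that float free of what they pick out.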

--------------------------------------------

Date: Wed, 27 May 92 23:02:54 EDT From: "Stevan Harnad"

Date: Wed, 27 May 92 10:39:10 PDT From: Dr Michael G Dyer Subject: analog computation

Stevan,

Please elaborate on your reply to Bruce McLennan because I am now quite confused on just what your position is wrt intentionality and analog computation. For a moment, let's please ignore your symbol grounding issue -- i.e. let's admit that Searle, in pushing-pulling levers, etc. is doing PLENTY of "transduction" and so there IS (as you seem to require) "physical symbol grounding" (i.e. versus grounding a system within a simulated world, e.g. virtual reality systems).

Be that as it may, I thought the point of McLennan's thought experiment was that, although Searle is moving around lots of levers and examining lots of analog dials, etc. in order to simulate some Chinese-speaking persona (i.e. the Chinese-speaking persona supervenes on Searle's mental and physical capabilities to make the appropriate analog computations), Searle's own subjective experience would not be anything at all like that of a person who actually understands Chinese (and can also recognize his Chinese grandmother, if you make it a TTT system).

Since Searle's "periscope" (as you call this instrument) canNOT penetrate that Chinese mind, WHAT makes analog computation any better (for you) than digital computation (which you claim canNOT penetrate the Chinese mind either)?

To recap:

In the digital case Searle simulates the Chinese persona but Searle does not understand Chinese (nor know what the persona knows, etc.) so you and Searle conclude that there is only a simulation of understanding, not "real" understanding.

In the analog case Searle likewise simulates the Chinese persona and ALSO fails to understand Chinese, so again there is (from your own point of view) NO "real" understanding.

So WHAT's so special about analog computation?

Michael Dyer

---------------------------------------------------

From: Stevan Harnad

Here is how I put it in Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25:

Note that the [robot] simulation/implementation distinction already points to the critical status of transduction, since Searle's Chinese Room Argument fails completely for the robot version of the Turing Test, when the corresponding mental property at issue is the PERCEPTION of objects rather than the UNDERSTANDING of symbols. To see this, note that the terms of the Argument require Searle to show that he can take over all of the robot's functions (thereby blocking the Systems Reply) and yet clearly fail to exhibit the mental property in question, in this case, perceiving objects. Now consider the two possible cases: (1) If Searle simulates only the symbol manipulation BETWEEN the transducers and effectors, then he is not performing all the functions of the robot (and hence it is not surprising that he does not perceive the objects the robot is supposed to perceive). (2) If, on the other hand, Searle plays homunculus for the robot, himself looking at its scene or screen, then he is BEING its transducers (and hence, not surprisingly, actually perceiving what the robot is supposed to perceive). A similar argument applies to motor activity. Robotic function, unlike symbolic function, is immune to Searle's Chinese Room Argument.

[From Summary and Conclusions:] (7) The Transducer/Effector Argument: Prior "robot" replies to Searle have not been principled ones. They have added on robotic requirements as an arbitrary extra constraint. A principled "transducer/effector" counterargument, however, can be based on the logical fact that transduction is necessarily nonsymbolic, drawing on analog and analog-to-digital functions that can only be simulated, but not implemented, symbolically.

(8) Robotics and Causality: Searle's argument hence fails logically for the robot version of the Turing Test, for in simulating it he would either have to USE its transducers and effectors (in which case he would not be simulating all of its functions) or he would have to BE its transducers and effectors, in which case he would indeed be duplicating their causal powers (of seeing and doing).

-------------------------------------------------------------

And here is how I put it in Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54:

Suppose that the critical question we focused on in our TTT candidate's performance was not whether it understood symbols, but whether it could SEE. (The questions "Is it really intelligent?" "Does it really understand?" "Can it really see?" are all just variants of the question "Does it really have a mind?")...

So in the TTT variant of Searle's thought experiment there would again be two possibilities, just as there were in the Chinese Room: In the original TT case, the machine could either really be understanding Chinese or it could just be going through the motions, manipulating symbols AS IF it understood them. Searle's argument worked because Searle himself could do everything the machine did -- he could BE the whole system -- and yet still be obviously failing to understand.

In the TTT case of seeing, the two possibilities would again be whether the machine really saw objects or simply ACTED EXACTLY AS IF it did. But now try to run Searle's argument through: Searle's burden is that he must perform all the internal activities of the machine -- he must be the system -- but without displaying the critical mental function in question (here, seeing; in the old test, understanding). Now machines that behave as if they see must have sensors -- devices that transduce patterns of light on their surfaces and turn that energy into some other form (perhaps other forms of energy, perhaps symbols). So Searle seems to have two choices: Either he gets only the OUTPUT of those sensors (say, symbols), in which case he is NOT doing everything that the candidate device is doing internally (and so no wonder he is not seeing -- here the "System Reply" would be perfectly correct); or he looks directly at the objects that project onto the device's sensors (that is, he is BEING the device's sensors), but then he would in fact be seeing!

What this simple counterexample points out is that symbol-manipulation is not all there is to mental function, and that the linguistic version of the Turing Test just isn't strong enough, because linguistic communication could in principle (though not necessarily in practice) be no more than mindless symbol manipulation. The robotic upgrade of the TT -- the TTT -- is hence much more realistic, because it requires the candidate to interact with the world (including ourselves) in a way that is indistinguishable from how people do it in EVERY respect: both linguistic and nonlinguistic.

The fact that mere sensory transduction can foil Searle's argument should alert us to the possibility that sensorimotor function may not be trivial -- not just a matter of adding some simple peripheral modules (like the light-sensors that open the doors of a bank when you approach them) to a stand-alone symbol manipulator that does the real mental work. Rather, to pass the Total Turing Test, symbolic function may have to be grounded "bottom up" in NONSYMBOLIC sensorimotor function in an integrated, non-modular fashion not yet contemplated by current computer modelers. For example, in Harnad (1987), one possible bottom-up symbol-grounding approach is described in which the elementary symbols are the names of perceptual categories that are picked out by sensory feature detectors from direct experience with objects. Nonsymbolic structures such as analog sensory projections and their invariants (and the means for learning them) would play an essential role in grounding the symbols in such a system, and the effects of the grounding would be felt throughout the system.

Harnad, S. (1987) Category induction and representation. In: Harnad, S. (Ed.) Categorical perception: The groundwork of cognition. New York: Cambridge University Press

------------------------------------------------------

Date: Wed, 27 May 92 23:16:52 EDT From: "Stevan Harnad"

Date: Wed, 27 May 1992 16:13:07 -0400 (EDT) From: Franklin Boyle

Dave Chalmers wrote (in response to my concern about representation):


>dc> I agree with this, but I think that my construal of computation is capable
>dc> of doing just what you say. I take the fairly standard position that
>dc> representing entities represent *in virtue of their causal roles*, i.e. in
>dc> virtue of the way that they affect and are affected by other states of the
>dc> system, as well as the environment. According to this view, it
>dc> doesn't matter precisely how the causation in question is achieved;
>dc> all that matters is *that* it is achieved. Similarly, it doesn't matter
>dc> *what* the internal states are that are affected; all that matters is
>dc> their role in the overall economy of the system. So the criterion I
>dc> outlined, which is silent on (a) the intrinsic nature of the states
>dc> and (b) the specific manner of causation, does fine.
>dc>
>dc> Of course this definition doesn't say anything explicit about reference.
>dc> This is because, as I've said, I don't think that the notion of reference
>dc> is conceptually prior to that of computation. Neither, for that matter,
>dc> is the notion of computation prior to that of reference. Rather, I think
>dc> that both of them should be analyzed in terms of the prior notion of
>dc> causation. So we shouldn't expect the definition of computation (more
>dc> accurately, of implementation) to say anything explicit about reference:
>dc> we should simply expect that it will be *compatible* with an analysis
>dc> of reference, whenever that comes along. Hopefully, given our analysis
>dc> of computation and of reference, it will turn out that computational
>dc> structure determines at least some aspect of representational power.
>dc> The analysis of computation I've given satisfies this, being designed
>dc> to be compatible with a causal-role analysis of reference.

In the post you're responding to, I presented (albeit briefly) an analysis of causation into three "causal mechanisms" for how all change in the world is brought about. I said that computation was founded on one of those causal mechanisms: pattern matching. In my last post, I said that reference was important if you intend to use computation to answer questions about mind, and, thus, should be factored into how you define computation. I physically restricted computation to pattern matching systems because representation/reference in such systems, e.g., digital computers, has a peculiar form-independence that SPS-based systems, which I believe underlie the brain's information processing, do not. Something has to be able to account for this difference, and I doubt that "causal role" alone is capable of doing so. Searle has already suggested that it won't.


>dc> Your view seems to be that representing entities represent by virtue
>dc> of their internal form, rather than by virtue of their causal role. If
>dc> this were the case, then it's possible that this construal of computation
>dc> wouldn't be up to the job of fixing representational powers. However,
>dc> I don't see any good reason to accept this view. I agree that the internal
>dc> form of a representation can be very important -- e.g. the distributed
>dc> representations in connectionist networks have complex internal
>dc> structure that's central to their representational capacities. However,
>dc> it seems to me that this internal form is important precisely because it
>dc> *allows* a system of representations to play the kinds of causal roles
>dc> that qualify them as representations. The causal role is conceptually
>dc> prior, and the internal form is subsidiary.

Whether representing entities represent by virtue of their internal forms or not depends on the causal mechanism. If that mechanism is pattern matching, then their forms are superfluous, so any representational aspects they have are due to causal role alone. That's just functionalism, which is certainly compatible with systems like digital computers.

This is not true for SPS (structure-preserving superposition) as the causal mechanism. In that case, the form (or, more aptly, "appearance") of the representing entity or input signal *is* the change because its structure is *transmitted*. So in your terminology, the causal role of such representing entities is not "conceptually prior" (if I'm using this expression correctly). Their extended structures (i.e., their representing aspect) *are* the changes they cause.

Let me emphasize that I am talking here about *how* objects represent, which depends on the particular causal mechanism. For example, depending on whether the causal mechanism is pattern matching or SPS, the extended structure of a representing entity is either pattern matched as a "form" or structurally transmitted, respectively, yet it may be the same physical structure in both cases. This contrasts with "type" of representation -- e.g., propositional, diagrammatic, pictorial -- which is a matter of interpretation, not causal mechanism.

-Franklin Boyle

-----------------------------------------------

Date: Thu, 28 May 92 15:22:44 EDT From: "Stevan Harnad"

Date: Thu, 28 May 1992 14:52:21 -0400 (EDT) From: Franklin Boyle

Stevan,

I found your points about aspects of my theory of causal mechanisms to be well taken and I plan to respond with what will hopefully be a clarification. I think (hope) you will find that my division of causal mechanisms is exactly in line with your symbol grounding through analog processes (I just happen not to call my two nonsyntactic processes "analog" because I want to distinguish them causally; I believe such a distinction is important to the issue of reference/representation). In any case, I want to give a carefully constructed reply, since your comments target the basis of my ideas. Unfortunately, I'm going out of town tomorrow afternoon for the weekend and have a zillion things to do before leaving, so I will here give only a brief reply to one point (or paragraph), both to clarify a few things and, more importantly, to ward off a potential misattribution.


>
>sh> One can agree about the discreteness (without the unnecessary
>
>sh> "decisional" hermeneutics), but it is still not clear what Pattee's
>
>sh> mysterious "SPS" amounts to (although I know he invokes
>
>sh> quantum mechanics, which I have a strong intuition is just as
>
>sh> irrelevant as when Penrose invokes it: mysteries are not solved
>
>sh> by applying a dose of yet another [and unrelated] mystery).

First, I borrowed the use of "decisional" from Pattee to describe the kind of constraints involved in structure fitting (he is primarily interested in DNA, its expression as the tertiary structures of enzymes, and issues of language codes and living systems). I assure you that I use it only as a way of describing physical processes and am careful (I hope) not to let the interpretive notions associated with the term confuse my understanding of computation and cognition (in physics, such constraints are called "non-holonomic", which Pattee also uses. But that term, I don't believe, really helps the participants in this discussion group very much).

Second, SPS (structure-preserving superposition) is my term -- I haven't seen it used anywhere else (though "superposition" is certainly prevalent -- e.g., superposition of forces in physics or superposition of activation patterns in connectionist systems). It is meant to describe my third causal mechanism. Pattee talks only about pattern matching or structure fitting, not SPS (though he may talk about the superposition of quantum states).

Third, I agree with you completely on the issue of quantum mechanics. I'm not sure exactly what Pattee's current take on this is (perhaps Eric Dietrich knows of recent publications of his on this since, I believe, they are in the same department or program -- correct me if I'm wrong), but I do know he worries about the measurement process with respect to understanding DNA and language within the context of dynamical systems; and measurement is, of course, important in quantum mechanics. But I don't think QM is relevant to the issues of computation and cognition being discussed here. And I certainly agree with your description of Penrose's analysis.

I hope this clarifies my position a little bit and the origin of SPS. I'll post a much more complete reply to your post early next week.

-Franklin Boyle

--------------------------------------------------------

Date: Thu, 28 May 92 21:24:40 EDT From: "Stevan Harnad"

Date: Thu, 28 May 92 15:57:52 EDT From: dietrich@bingsuns.cc.binghamton.edu (dietrich)

Several times in the discussion on whether cognition is computation, someone invariably complains that certain arguments for computationalism seem to entail that everything is a computation (so it is no surprise that thinking is, too). The complainer then goes on to point out that (1) the thesis that everything is computation is vacuous, and (2) the inference from "everything is a computation" to "thinking is a computation" is also vacuous.

But neither of these claims is vacuous. The first claim is a general, universal hypothesis made in the time-honored tradition of science everywhere. The claims that everything is made of atoms, or that all objects tend to continue their motion unless otherwise disturbed, or that all species evolved, are well-known scientific hypotheses. They are not vacuous at all. Furthermore, the inference from, e.g., "everything is made of atoms" to "this keyboard is made of atoms" is a case of universal instantiation and constitutes a test of the hypothesis: if it should turn out that my keyboard is not made of atoms, then the hypothesis is false, and our physics is in deep trouble.

So it is with computationalism. The claim that everything is a computation is not vacuous. And the inference from it to the claim that thinking is computing is likewise not vacuous. And the further inference to my thinking is computing is a test (implicitly, anyway), because if it should turn out that my thinking is not computing (e.g., if my thinking involves executing functions that are equivalent to the halting problem or to arbitrarily large instances of the traveling salesman problem), then the claim is false. And testing for individual counterexamples is probably the only way we have of proceeding.

One can see that the claim that everything is a computation is not vacuous by noticing what it would mean if it were true and what it would take to make it false. There is an argument, by the way, that computationalism is not only not vacuous, but true. The argument is due to Chris Fields (JETAI, v. 1, #3, 1989). Here is a brief synopsis.

A system is called "nonclassical" if any measurements of its internal states perturb its dynamics -- i.e., if some version of Heisenberg's principle holds for it. Given that psychological systems are nonlinear dynamical systems, it is likely that measurement perturbations of their behavior influence their future states. They are, therefore, nonclassical systems. Nonclassical systems have an upper bound on the number of states we can measure, because there is an upper bound on the resolution with which states can be measured. We can detect at most a countable number of states. And this means that the behavior of the system, at the state-change level of description, can be described *completely* by a Turing machine, i.e., a partial recursive function.
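The step from finite measurement resolution to a discrete, machine-describable state sequence can be sketched concretely. The following is only an illustration of that one step, not Fields's formalism: the dynamical system (a logistic map), the bin counts, and the function names are all assumptions of mine. A continuous trajectory, observed only at finite resolution, yields a sequence over a finite alphabet of state-labels, and hence a finite transition relation of the kind a discrete formal system can describe completely.

```python
# Illustrative sketch (NOT Fields's own argument): finite-resolution
# measurement turns a continuous trajectory into a finite-alphabet
# sequence, describable by a discrete transition system.

def trajectory(x0, steps, r=3.7):
    """A continuous dynamical system: the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        yield x

def quantize(x, bins):
    """Finite-resolution measurement: report only the bin index of x in [0,1)."""
    return min(int(x * bins), bins - 1)

def observed_transitions(x0, steps, bins):
    """The measured behavior: a finite set of bin-to-bin transitions."""
    labels = [quantize(x, bins) for x in trajectory(x0, steps)]
    return set(zip(labels, labels[1:]))

table = observed_transitions(0.123, 1000, bins=8)
# However long we observe, the description stays within an 8-symbol
# alphabet, so there are at most 8 * 8 = 64 possible transitions.
print(len(table), "observed transitions out of", 8 * 8, "possible")
```

The point of the sketch is only that the *measured* behavior is exhausted by a finite table, whatever the underlying continuous dynamics are doing.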

(There is more to his argument, but this will do, I think.)

Fields's argument assumes that quantum mechanics is correct in its view about measurement. One can object that psychological systems (like humans) are in fact classical, but I for one don't hold out much hope for this rather desperate move.

So, here is what we have: Computationalism is not vacuous. If Fields's argument is correct, computationalism is a basic fact about the universe. Therefore, thinking is computing.

Sincerely,

Eric Dietrich

---------------------------------------------------------------

From: Stevan Harnad

Eric, I didn't catch the part about how you would go about disconfirming the hypothesis that (1) "everything is computation" or that (2) "thinking is computation." None of that complexity-based stuff about halting problems and the describability of "nonclassical systems" sounds like a potential empirical disconfirmation to me: Obviously what can't be computed can't be computed; but if everything IS computation (ex hypothesi), then obviously only what can be computed is being computed (QED). And the fact that something (e.g., a "nonclassical system," to an approximation) is computationally DESCRIBABLE (SIMULABLE) is no confirmation of the fact that that something IS just (implemented, implementation-independent) computation; so it looks even less like a means of disconfirming it. By contrast, (3) "everything continues in constant motion unless disturbed" and (4) "everything is made of atoms" sound pretty readily disconfirmable to me -- they just happen to be true (on all available evidence to date).

Fortunately, the specific hypothesis that "understanding Chinese is just (implemented, implementation-independent) computation" IS disconfirmable, indeed disconfirmed, by Searle's thought-experiment, and its failure is explained by the symbol grounding problem.

Stevan Harnad

--------------------------------------------------------------

Date: Mon, 29 Jun 92 00:02:54 EDT From: "Stevan Harnad"

Date: Wed, 17 Jun 92 13:32:03 -0400 From: mclennan@cs.utk.edu

"WORDS LIE IN OUR WAY!"

"Whenever the ancients set down a word, they believed they had made a discovery. How different the truth of the matter was! -- They had come across a problem; and while they supposed it to have been solved, they actually had obstructed its solution. -- Now in all knowledge one stumbles over rock-solid eternalized words, and would sooner break a leg than a word in doing so." -- Nietzsche (Dawn, 47)

1. THE SYSTEM REPLY

I wrote:

bm> I see no reason why we can't have an analog version of the Chinese
bm> Room. Here it is: . . . .

and Stevan replied:


>sh> I'm not sure whether you wrote this because you reject Searle's argument
>sh> for the discrete symbolic case (and here wish to show that it is equally
>sh> invalid for the analog case) or because you accept it for the discrete
>sh> symbolic case and here wish to show it is equally valid for the analog
>sh> case. . . .

I was trying to argue that the analog/digital distinction could not be essential, because an analog version of the Chinese Room could be constructed, and, ceteris paribus, all arguments for or against it would still hold. I'll address this again below.


>sh> The critical factor is the "System Reply" (the reply to the effect that
>sh> it's no wonder Searle doesn't understand, he's just part of the system,
>sh> and the system understands): The refutation of the System Reply is for
>sh> Searle to memorize all the symbol manipulation rules, so that the
>sh> entire system that gets the inputs and generates the outputs (passing
>sh> the Chinese TT) is Searle. This is how he shows that in implementing
>sh> the entire symbol system, in BEING the system, he can truthfully deny
>sh> that he understands Chinese. "Le Systeme, c'est Moi" is the refutation
>sh> of the System Reply (unless, like Mike Dyer, you're prepared to
>sh> believe that memorizing symbols causes multiple personality. . . .

Elsewhere, Stevan said:


>sh> . . . . If Searle memorizes all the symbols and rules, he IS the
>sh> system. To suppose that a second mind is generated there purely in
>sh> virtue of memorizing and executing a bunch of symbols and rules is (to
>sh> me at least) completely absurd. . . .

Well, you've forced me to blow my cover. In fact I think a version of the System Reply (the Virtual Machines Reply) is essentially correct, but I was trying to stick to the question of computation, and avoid the much-discussed issue of the System Reply and multiple minds.

But let me state briefly my position on the System Reply: If Searle could instantiate the Chinese-understanding rules, there would in fact be two minds, one (Searle's) supervening directly on the neural substrate, the other (the Chinese Understander's) supervening on Searle's rule manipulation. There is no reason to suppose that Searle would exhibit anything like a multiple personality disorder; that's a strawman. The situation is the same as a Vax running a LISP interpreter. The hardware simultaneously instantiates two interpreters, a Vax machine-code interpreter and a LISP interpreter. (N.B. The Vax is not "part" of the LISP system; it includes it all.) If we imagine that an interpreter could be aware of what it's doing, then the Vax would be aware only of interpreting Vax instructions; it would say (like Searle), "I don't know a word of LISP! How can I be understanding it? I haven't seen a stitch of LISP code; all I see are Vax instructions!" On the other hand, the LISP program is in fact being interpreted, and, under the assumption, the LISP interpreter (but not the Vax) would be aware of doing it. This may seem absurd to you, but it seems obvious to me. Let there be no mistake though: Although I take the System Reply to be valid, I do not in fact think such a set of rules (for understanding Chinese) could exist. The reason however lies elsewhere. Mais passons, indeed!
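The Vax/LISP point -- that one and the same physical activity can constitute two interpreters at two levels of description -- can be sketched in a few lines of code. The toy "host machine," its instruction names, and the compiler below are illustrative assumptions of mine, not an actual Vax or LISP: the host executes only its own low-level instructions and never "sees" an expression, yet the expression really is being evaluated at the virtual level.

```python
# A minimal sketch of two coexisting levels of description: a "host"
# stack machine that sees only PUSH/ADD instructions, and a "virtual"
# expression level that is nevertheless really being computed.

def run_host(program):
    """The host level: execute stack-machine instructions one by one.
    At this level there are only PUSH and ADD -- no 'expressions'."""
    stack, trace = [], []
    for op, *args in program:
        trace.append(op)               # all the host ever "sees"
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1], trace

def compile_expr(expr):
    """Translate a nested tuple like ('+', 1, ('+', 2, 3)) into host code."""
    if isinstance(expr, tuple):
        _, left, right = expr
        return compile_expr(left) + compile_expr(right) + [("ADD",)]
    return [("PUSH", expr)]

value, trace = run_host(compile_expr(("+", 1, ("+", 2, 3))))
print(value)   # 6 -- true at the virtual level: the expression was evaluated
print(trace)   # ['PUSH', 'PUSH', 'PUSH', 'ADD', 'ADD'] -- all the host saw
```

The host's trace contains no trace of "+" or of nesting; the claim "an addition expression was evaluated" is a description of the very same run at a higher level, which is the structure of the Virtual Machines Reply.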

2. ANALOG COMPUTATION

Elsewhere Stevan noted:


>sh> There are two dimensions to distinguish: (1) continuous vs. discrete
>sh> and (2) "analog" vs. symbolic.

I'm glad you made this distinction, because it exposes part of the reason for our disagreement. To me the essential distinction between analog and digital computation is precisely the distinction between the continuous and the discrete. I think the terms "continuous computation" and "discrete computation" would be more accurate, but history has given us "analog computation" and "digital computation."

To avoid misunderstanding, let me point out that there is no basis to the notion that the distinction between analog and digital computation consists in the fact that analog computing is based on an "analogy" between two physical processes, whereas digital is not. (That may have been the historical origin of the terms, but now we know better.) An "analogy" between two systems is central to both kinds of computation, because in both a formal structure underlies two systems, one the computer, the other the system of interest.

Here we find exactly the syntax and semantics you have been writing about: Computation is syntactic because it is defined in terms of formal laws referring only to physical attributes of the state, independent of its interpretation. Although computation is syntactic, semantics is also relevant because we are (mostly) concerned with systems whose states and processes (whether continuous or discrete) can be interpreted as the states and processes of some other system of interest to us. (Of course, as several others have noted, in computer science we do in fact sometimes study random programs and other programs with no intended interpretation; the reason is that we are interested in the phenomena of computation per se.)

So I suggest we purge from this discussion the terms "analog computer" and "digital computer" since they are prone to misinterpretation. If the issue is discrete vs. continuous symbols, states or processes, let's say so, and forget the rest. Henceforth I'll follow my own advice, and you'll hear no more from me about "analog" or "digital" computers (except to discuss the words).

What then is the relevant distinction? Stevan said:


>sh> There are two dimensions to distinguish: (1) continuous vs. discrete
>sh> and (2) "analog" vs. symbolic. The latter is, I think, the relevant
>sh> distinction for this discussion. It apposes the analog world of objects
>sh> (chairs, tables, airplanes, furnaces, planets, computers, transducers,
>sh> animals, people) with that SUBSET of the analog world that consists of
>sh> implementations of formal symbol systems, . . . .

It seems to me that by "the analog world" you simply mean the real world. In the real world we can distinguish (1) things (chairs, tables, airplanes, furnaces, planets, computers, transducers, animals, people) and (2) computational things, which are also part of the real world, but are important to us by virtue of instantiating certain formal processes of interest to us. The formal processes are the syntax; the semantics refers to some other process having the same formal structure. By virtue of having a semantics they are "symbolic," regardless of whether their formal structure is continuous or discrete (a matter of degree in any case, which only becomes absolute in the mathematical ideal).

3. THE GRANNY ROOM

Now let me turn to the continuous analog of the Chinese Room, which I'll dub "the Granny Room" (since its supposed purpose is to recognize the face of Searle's grandmother). My point is that there is no essential difference between the discrete and continuous cases. I wrote:

bm> I see no reason why we can't have an analog version of the Chinese
bm> Room. Here it is: Inputs come from (scaleless) moving pointers. Outputs
bm> are by twisting knobs, moving sliders, manipulating joysticks, etc.
bm> Various analog computational aids -- slide rules, nomographs,
bm> pantographs, etc. -- correspond to the rule book. Information may be
bm> read from the input devices and transferred to the computational aids
bm> with calipers or similar analog devices. . . .

Stevan replied:


>sh> But look at what you are proposing instead: You have Searle twisting
>sh> knobs, using analog devices, etc. It's clear there are things going on
>sh> in the room that are NOT going on in Searle. But in that case, the
>sh> System Reply would be absolutely correct! I made this point explicitly
>sh> in Harnad 1989 and Harnad 1991, pointing out that even an optical
>sh> transducer was immune to Searle's Argument [if anyone cared to
>sh> conjecture that an optical transducer could "see," in the same way it
>sh> had been claimed that a computer could "understand"], because Searle
>sh> could not BE another implementation of that transducer (except if he
>sh> looked with his real eyes, in which case he could not deny he was
>sh> seeing), whereas taking only the OUTPUT of the transducer -- as in your
>sh> example -- would be subject to the System Reply. It is for this very
>sh> same reason that the conventional Robot Reply to Searle misfired,
>sh> because it allowed Searle to modularize the activity between a
>sh> computational core, which Searle fully implemented, and peripheral
>sh> devices, which he merely operated . . . .

Since I'm sympathetic to the System Reply, this doesn't bother me too much, but I don't see that the discrete Chinese Room is any more immune to it. I proposed all this apparatus to make the example (slightly) more plausible, but there is no reason it can't all be internalized as Searle proposed in the discrete case. After all, we can do continuous spatial reasoning entirely in our heads.

Further, even if Searle memorizes all the rules, there must still be some way to get the input to him and the output from him. If a slip of paper bearing the Chinese characters is passed into the room, then he must look at it before he can apply the memorized rules; similarly he must write down the result and pass it out again. How is this different from him looking at a continuous pattern (say on a slip of paper), and doing all the rest in his head, until he draws the result on another slip of paper? Whatever you propose to do in the discrete case, I will do in the continuous. The only difference is that *inside Searle's head* the processing will be discrete in one case and continuous in the other, but I don't see how you can make much hang on that difference.

It seems to me that in both the discrete and continuous cases the essential point is that:

bm> . . . . the values
bm> manipulated by Searle have no *apparent* significance, except as props
bm> and constraints in his complicated [mental] dance. . . .

In other words, his (mental) manipulations are purely syntactic; he's dealing with form but not content. There remains then the important question (which symbol grounding addresses) of how symbols -- whether continuous or discrete -- get their content.

4. WHAT'S A COMPUTER?

Stevan wrote:


>sh> (2) If all dynamical systems that instantiate differential equations
>sh> are computers, then everything is a computer (though, as you correctly
>sh> point out, everything may still not be EVERY computer, because of (1)).

I didn't say that "all dynamical systems that instantiate differential equations are computers," and certainly wouldn't conclude "everything is a computer." What I did claim was:

bm> . . . . a physical device is an analog computer to the extent that we
bm> choose and intend to interpret its behavior as informing us about some
bm> other system (real or imaginary) obeying the same formal rules. . . .

And later:

bm> . . . . In addition to the things that
bm> are explicitly marketed as computers, there are many things that may be
bm> used as computers in an appropriate context of need and availability.

That's far from saying everything is -- or even can be -- a computer! A computer, like a screwdriver, is a tool. Just as for screwdrivers, the possibility of being a computer depends both on its being physically suited to the job and on its being seen as useful for it. A knife can be a screwdriver (if we're smart enough to see it as such), but a blob of Jello cannot, no matter how creative our *seeing as*. Some physical systems can be digital (i.e., discrete) computers, others cannot; some can be analog (i.e., continuous) computers, others cannot. And most of these things will not be computers of any sort unless we see and use them as such.


>sh> Dubbing all the laws of physics computational ones is duly ecumenical,
>sh> but I am afraid that this loses just about all the special properties
>sh> of computation that made it attractive (to Pylyshyn (1984), for
>sh> example) as a candidate for capturing what it is that is special about
>sh> cognition and distinguishes it from other physical processes.

True enough. But just because these are the terms in which the question has been phrased doesn't mean that they are the terms in which it can be answered. As I said:

bm> Therefore a hypothesis such as "the mind is a computer" is not
bm> amenable to scientific resolution . . . .
bm> . . . . A better strategy is to formulate the hypothesis in
bm> terms of the notion of instantiated formal systems, which is more
bm> susceptible to precise definition.

If "instantiated discrete formal system" is what we mean (or, as I would claim: instantiated formal system, whether discrete or continuous), then why don't we say so? This notion can be formally defined; "computer" and "computation" cannot, in my opinion. (Sloman, Judd and Yee have made similar suggestions.) As you said, part of the attractiveness of the computational view is a manifest constituent structure and a systematic interpretation, but this doesn't require discrete symbols, as I'll argue below. (Pace Fodor, Pylyshyn et al.)

5. INTERPRETABILITY

Stevan wrote:


>sh> (1) My cryptographic criterion for computerhood was not based on the
>sh> uniqueness of the standard interpretation of a symbol system or the
>sh> inaccessibility of nonstandard interpretations, given the standard
>sh> interpretation. It was based on the relative inaccessibility
>sh> (NP-Completeness?) of ANY interpretation at all, given just the symbols
>sh> themselves (which in and of themselves look just like random strings of
>sh> squiggles and squoggles).

This agrees with my claim above that for something to be a computer it must normally be *seen as* a computer, in other words, that its formal properties must apply to some other system of interest to us, and hence be interpretable. But I see no reason to drag in issues like NP-completeness (which probably cannot be applied rigorously in this context anyway) to impose precision on an essentially informal concept (computation). Better to talk about the relation between instantiated formal systems and their interpretations. In any case, I think the issue of interpretability (or the relative ease thereof) is irrelevant to what I take to be the substantive scientific (empirical) issue: Can cognition be adequately modeled as an instantiated discrete formal system?

6. CONTINUOUS SYMBOL SYSTEMS


>sh> There is still the vexed question of whether or not neural nets are
>sh> symbol systems. If they are, then they are subject to the symbol
>sh> grounding problem. If they are not, then they are not, but then they
>sh> lack the systematic semantic interpretability that Fodor & Pylyshyn
>sh> (1988) have stressed as crucial for cognition. So nets have liabilities
>sh> either way as long as they, like symbols, aspire to do all of cognition
>sh> (Harnad 1990); in my own theory, nets play the much more circumscribed
>sh> (though no less important) role of extracting the sensory invariants in
>sh> the transducer projection that allow symbols to be connected to the
>sh> objects they name (Harnad 1992).

All too vexed perhaps; a common consequence of asking the wrong question. We must distinguish (at least): (1) physical systems obeying differential equations, (2) continuous formal systems, and (3) continuous symbol systems (MacLennan 1988, in press-a, in press-b). We all know what class (1) is: most of the universe, so far as physics tells us. Class (2) is a subclass of class (1): systems of interest because they instantiate a given set of differential equations, but for which the actual physical quantities governed by the equations are irrelevant (that's why they're formal). (I'm glossing over the distinction between the (Platonic) abstract formal system and the (physical) instantiated formal system, but I think that's clear enough.) Continuous formal systems are treated as syntactic processes; that is, semantics is irrelevant to them qua formal system. Class (3) comprises those continuous formal systems for which an interpretation is posited. The actual interpretation may not be specified, but we are concerned with how the continuous states and processes are related to the domain of interpretation. As noted again and again by many people, there's not much point in creating uninterpretable formal systems, so the practical distinction between (2) and (3) is whether we are interested in syntax only or syntax + semantics. (I hope the exact parallel with discrete (dynamic / formal / symbol) systems is apparent.)

As to "the vexed question of whether neural networks are symbol systems" -- it depends what you mean by neural network. Some physical systems implement Hopfield networks, but they belong in class (1), unless our interest in them consists in their implementing the abstract process, in which case they are in class (2). However, if the implemented Hopfield net refers to some other domain, perhaps an optimization problem, then it's class (3). I expect that most of the neural networks in our brains are class (3). Since class (2) is mostly of theoretical interest, it seems unlikely to be found in nature. (Of course there may be brain processes - perhaps not involving neurons at all - that nevertheless coincidentally instantiate abstract neural nets, such as Hopfield nets; these go in class (1), as do processes for which the material embodiment is critical: transducers, for example; or perhaps they are a fourth class, since they cross the 1/3 boundary. In any case symbol grounding is as relevant to continuous symbol systems as it is to discrete.)

What we normally require of discrete symbol systems, and what allows them to reduce meaningful processes to syntax, is that the interpretation be systematic, which means that it respects the constituent structure of the states. Is there anything analogous for continuous symbol systems? Indeed there is, and to find it we only need look at systematicity more abstractly. Constituent structure merely refers to the algebraic structure of the state space (e.g., as defined by the constructor operations). (There are many sources for this, but I'll take the opportunity to shamelessly plug MacLennan 1990, Chs. 2, 4.) Systematicity then simply says that the interpretation must be a homomorphism: a mapping that respects the algebraic structure (though perhaps losing some of it). The point is that these ideas are as applicable to continuous symbol systems as to the better-known discrete symbol systems. In both cases the "symbols" (physical states) are arbitrary so long as the "syntax" (algebraic structure) is preserved. If you will grant the possibility of continuous symbol systems, then I hope you will also agree that they are of critical importance to cognitive science.
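The homomorphism condition can be made concrete in a few lines. In this sketch the particular algebra is an illustrative assumption of mine, not MacLennan's: pairing is the sole syntactic constructor and addition its semantic image. Systematicity then just says that the interpretation of a constructed state equals the semantic operation applied to the interpretations of its constituents.

```python
# Illustrative sketch: a systematic interpretation is a homomorphism
# from syntactic states to a semantic domain, respecting constructors.

def pair(x, y):
    """Syntactic constructor: build a compound symbol from two symbols."""
    return ("pair", x, y)

def meaning(term, env):
    """Interpretation: atoms via a lookup table, compounds via addition.
    This is the homomorphic image of the 'pair' constructor."""
    if isinstance(term, tuple) and term[0] == "pair":
        return meaning(term[1], env) + meaning(term[2], env)
    return env[term]

# Homomorphism condition: meaning(pair(x, y)) == meaning(x) + meaning(y)
# for ALL states x, y -- the interpretation respects the algebraic
# structure of the state space.
env = {"a": 2, "b": 3, "c": 5}
for x in ("a", "b", pair("b", "c")):
    for y in ("a", "c"):
        assert meaning(pair(x, y), env) == meaning(x, env) + meaning(y, env)
print("interpretation respects the algebraic structure")
```

Nothing in the condition requires the states to be discrete: if `pair` and `+` were replaced by continuous operations on continuous state spaces, the same homomorphism requirement would define systematicity for a continuous symbol system.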

REFERENCES

Fodor, J. & Pylyshyn, Z. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28: 3 - 71. [also reprinted in Pinker & Mehler 1988]

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical and Experimental Artificial Intelligence 1: 5-25.

Harnad, S. (1990) Symbols and Nets: Cooperation vs. Competition. Review of S. Pinker & J. Mehler (Eds.) (1988) Connections and Symbols. Connection Science 2: 257-260.

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1: 43-54.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: A. Clark and R. Lutz (Eds) Connectionism in Context. Springer Verlag.

MacLennan, B. J. (1988) Logic for the new AI. In J. H. Fetzer (Ed.), Aspects of Artificial Intelligence (pp. 163-192). Dordrecht: Kluwer.

MacLennan, B. J. (1990) Functional programming: Practice and theory. Reading, MA: Addison-Wesley.

MacLennan, B. J. (in press-a) Continuous symbol systems: The logic of connectionism. In Daniel S. Levine and Manuel Aparicio IV (Eds.), Neural Networks for Knowledge Representation and Inference. Hillsdale, NJ: Lawrence Erlbaum.

MacLennan, B. J. (in press-b) Characteristics of connectionist knowledge representation. Information Sciences.

Pylyshyn, Z. (1984) Computation and Cognition. Cambridge MA: MIT/Bradford.

-----------------------------------------------------------

Date: Mon, 29 Jun 92 00:04:52 EDT
From: Stevan Harnad

WHAT ARE "CONTINUOUS SYMBOLS"?

Bruce MacLennan has introduced some interesting new concepts into this discussion, in particular, the notion of continuous vs. discrete formal systems and their implementations. Some of the issues he raises are technical ones on which I do not have the expertise to render a judgment, but I think I can safely comment on the aspects of what Bruce has written that bear on what thinking can or cannot be and on what Searle's Argument has or has not shown. I will also try to say a few words about whether there is a symbol grounding problem for "continuous symbol systems."

To summarize what I will argue below: Bruce has adopted an informal "tool" model for computers, to the effect that any system we can use to compute anything we may want to compute is a computer (including planetary systems used to compute dates, etc.). Some of these tools will compute in virtue of being programmable, discrete-state digital computers, and some will compute in virtue of obeying a set of differential equations. The only restriction this places on what is or is not a computer is (1) whatever limitation there may be on what can be computed in this sense (and perhaps on what we may want to compute) and (2) whatever properties tools happen to have or lack with respect to any computation we may want to do with them.

An implication of this seems to be that (someone else's) brain is a computer to the extent that we use it (or can use it) as a tool to compute. That does not seem to be a very useful or specific conclusion; it seems rather too parasitic on how someone ELSE might use someone's brain as a tool, rather than addressing what a brain is intrinsically.

I don't find this sense of "computing" very useful (perhaps others will), nor do I find it very informative to be told that it is in this sense that the brain is really "computing" (since so much else is too, and it's hard to imagine what is not, and why not).

Now I pass to comment mode:

bm> I was trying to argue that the analog/digital distinction could not be bm> essential, because an analog version of the Chinese Room could be bm> constructed, and, ceteris paribus, all arguments for or against it would bm> still hold.

And I in turn suggested that an analog version could NOT be constructed because Searle could not implement analog "computations" all by himself the way he can implement discrete symbolic ones (his ability to do it ALL is essential, otherwise he is rightly open to the "System Reply," to the effect that it is not surprising if he does not understand, since the system as a whole understands, and he is not the whole system). I have tried to make it quite explicit that Searle's argument is valid ONLY against the hypothesis that thinking "supervenes" on any and every implementation of the right discrete symbol manipulations.

bm> Well, you've forced me to blow my cover. In fact I think a version of the bm> System Reply (the Virtual Machines Reply) is essentially correct, but I bm> was trying to stick to the question of computation, and avoid the much- bm> discussed issue of the System Reply and multiple minds. bm> bm> But let me state briefly my position on the System Reply: If Searle could bm> instantiate the Chinese-understanding rules, there would in fact be two bm> minds, one (Searle's) supervening directly on the neural substrate, the bm> other (the Chinese Understander's) supervening on Searle's rule bm> manipulation. There is no reason to suppose that Searle would exhibit bm> anything like a multiple personality disorder; that's a strawman. The bm> situation is the same as a Vax running a LISP interpreter. The hardware bm> simultaneously instantiates two interpreters, a Vax machine-code bm> interpreter and a LISP interpreter. (N.B. The Vax is not "part" of the bm> LISP system; it includes it all.) If we imagine that an interpreter could bm> be aware of what it's doing, then the Vax would be aware only of bm> interpreting Vax instructions; it would say (like Searle), "I don't know a bm> word of LISP! How can I be understanding it? I haven't seen a stitch of bm> LISP code; all I see are Vax instructions!" On the other hand, the LISP bm> program is in fact being interpreted, and, under the assumption, the LISP bm> interpreter (but not the Vax) would be aware of doing it. This may seem bm> absurd to you, but it seems obvious to me. Let there be no mistake bm> though: Although I take the System Reply to be valid, I do not in fact bm> think such a set of rules (for understanding Chinese) could exist. The bm> reason however lies elsewhere. Mais passons, indeed!
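The Vax/LISP point in the passage above can be sketched in miniature (the mini language and evaluator here are invented for illustration). The host interpreter sees only its own operations (tuples, function calls); the virtual level sees programs in a tiny prefix-arithmetic language. Neither level is a "part" of the other; the same activity supports both descriptions:

```python
# Two levels of interpretation: the host sees only its own operations;
# the virtual level sees a tiny prefix-arithmetic language.

def evaluate(expr):
    """Interpreter for a mini language: ('+', a, b), ('*', a, b), or a number."""
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    if op == '+':
        return evaluate(a) + evaluate(b)
    if op == '*':
        return evaluate(a) * evaluate(b)
    raise ValueError(op)

# At the virtual level this is a program in the mini language; at the
# host level it is just nested tuples being traversed.
program = ('+', 1, ('*', 2, 3))
print(evaluate(program))   # -> 7
```

Whether anything like awareness could attach to the virtual level, rather than merely a second systematic interpretability, is of course exactly what is disputed in the exchange that follows.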

Before we pass on, let me suggest that you don't quite have the logic of Searle's Argument straight: First, if one rejects the premise that a discrete symbol system could pass the TT then one accepts Searle's conclusion that thinking is NOT just (implemented, implementation-independent, discrete) symbol manipulation. So you must at least accept the premise arguendo if you are to have anything at all to say about the validity or invalidity of Searle's argument and/or the System Reply to it.

So let's suppose, with Searle AND his opponents, that it is possible for a discrete symbol system to pass the TT; then I don't think anything of what you say above would go through. "Virtual Systems" in a discrete symbol manipulator are a HOUSE OF CARDS: If they are ungrounded at the appropriate symbolic level (the Lisp "interpreter"), then they are just as ungrounded at the lower level (the Vax machine-code "interpreter") and vice versa, because (until further notice) symbols alone are ungrounded at all "levels" (that's what's on trial here)!

The "interpreters" the machine instantiates are of course not the interpreters I'm talking about, for those "interpreters" too are merely symbols and symbol manipulations that are INTERPRETABLE as interpreters, just as the symbols at either of those virtual levels are merely interpretable as English or LISP or machine language. As hard as it is (for hermeneutic reasons) to forget it or ignore it once you've actually interpreted them, in reality, it's all just squiggles and squoggles! The highest "virtual" level is no more grounded than the lowest one. Their relation is rather like that of the double-interpretability of ACROSTICS: But apart from the interpretation WE project onto it, it's all just (systematically interpretable) syntactic gibberish -- like the symbols in a static book (in an unknown language).

That's why I keep saying that hypostasizing virtual levels (as when "we imagine that an interpreter could be aware of what it's doing") is just hermeneutics and sci-fi: Sure, the symbol system will bear the weight of the two levels of interpretation, and yes, it's uncanny that there seem to be two systematic levels of messages in there, and systematically inter-related levels to boot, as in an acrostic. But that's just what systematically interpretable symbol-manipulation is all about. Let's not make it even more mysterious than necessary in supposing that there could be an AWARENESS corresponding to the level of the LISP interpreter, for that's precisely the kind of OVERinterpretation that is ON TRIAL in the Chinese room, at any and all levels of interpretability. To try to force this supposition through as a rebuttal to Searle is just to repeat the impugned premises in a louder tone of voice!

The trouble with hermeneutics is that it makes it seem as if the shoe is on the wrong foot here: Mere interpretability-as-if, be it ever so systematic, is JUST NOT ENOUGH, irrespective of "level," and that's just what Searle's Argument demonstrates.

Now, that having been said, if what you suggest is that the TT could only be passed by what you call "continuous symbol systems," then we will have to learn more about just what continuous symbol systems are. If they should turn out to be neurons and glia and neurotransmitters, Searle would have no quarrel with that -- and even if he did, he couldn't implement them (apart from the 1st order brain he is already implementing), so his Argument would be moot. It would be equally moot if the "continuous symbol system" that could pass the TT were the planetary system, or even a simple A/A optical/acoustic transducer that transduced light intensity and spatial pattern into, say, sound intensity and some other two-dimensional analog medium. Searle could not BE an implementation of that entire system, so no wonder he lacks whatever might in reality be "supervening" on that implementation.

But I have to add that if a "continuous symbol system" rather than a discrete one could pass the TT, there would still have to be (discrete?) symbols in it corresponding to the meanings of our words and thoughts, would there not? And unless that system could also pass the TTT, those symbols would still be subject to the symbol grounding problem. Hence the system, though not penetrable by Searle's periscope, would still be ungrounded.

bm> To me the essential distinction between analog and bm> digital computation is precisely the distinction between the continuous bm> and the discrete. bm> bm> To avoid misunderstanding, let me point out that there is no basis to the bm> notion that the distinction between analog and digital computation bm> consists in the fact that analog computing is based on an "analogy" bm> between two physical processes, whereas digital is not. (That may have bm> been the historical origin of the terms, but now we know better.) An bm> "analogy" between two systems is central to both kinds of computation, bm> because in both a formal structure underlies two systems, one the bm> computer, the other the system of interest.

I agree; and what I happen to think is that rather than "analogy," what is critical here is the PHYSICAL INVERTIBILITY OF THE TRANSFORMATION FROM "OBJECT" TO "IMAGE." In discretization, some invertibility is lost; in symbolization, syntactic conventions (mediated by human interpretation) take the place of direct physical connections (except in a computer with dedicated peripherals -- an interesting and important special case). But people have had almost as much trouble with the analog/digital distinction as with defining computer/computation, and mine too are just vague intuitions, so passons...
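The claim that discretization loses invertibility can be shown in two lines (a sketch; the quantization step is an arbitrary choice for illustration): quantization is a many-to-one mapping, so distinct "analog" magnitudes collapse onto the same discrete symbol and no decoder can recover them.

```python
# Discretization is many-to-one, hence not physically invertible:
# distinct analog values map to the same discrete level.

def quantize(x, step=0.5):
    return round(x / step) * step

a, b = 0.26, 0.30            # two distinct "analog" magnitudes
qa, qb = quantize(a), quantize(b)
print(qa, qb)                # both collapse to the same level
assert qa == qb and a != b   # the distinction is gone for good
```
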

bm> Here we find exactly the syntax and semantics you have been writing about: bm> Computation is syntactic because it is defined in terms of formal laws bm> referring only to physical attributes of the state, independent of its bm> interpretation. Although computation is syntactic, semantics is also bm> relevant because we are (mostly) concerned with systems whose states and bm> processes (whether continuous or discrete) can be interpreted as the bm> states and processes of some other system of interest to us.

I regret that I find this far too general to be helpful. If Newton's Laws are computational laws, and systems that obey them are computers, so be it, and I no longer contest that the brain (and everything else) is just a computer, doing computation. But then this generality has really said nothing of substance about anything at all: it is merely tantamount to reminding us that the nervous system, like the solar system, is a physical system, governed by natural laws that can be described formally. Who would have denied that? But this is a far cry from the claim of mentality for virtual levels of symbol interpretability in a discrete formal symbol manipulator, which is all I (and Searle) ever intended as our target.

bm> the issue is discrete vs. continuous symbols, states or processes bm> bm> It seems to me that by "the analog world" you simply mean the real world. bm> In the real world we can distinguish (1) things (chairs, tables, bm> airplanes, furnaces, planets, computers, transducers, animals, people) and bm> (2) computational things, which are also part of the real world, but are bm> important to us by virtue of instantiating certain formal processes of bm> interest to us. The formal processes are the syntax; the semantics refers bm> to some other process having the same formal structure. By virtue of bm> having a semantics they are "symbolic," regardless of whether their formal bm> structure is continuous or discrete (a matter of degree in any case, which bm> only becomes absolute in the mathematical ideal).

By the analog world I just mean physical objects and processes, whether discrete or continuous, that have whatever properties they have intrinsically, and not merely as a matter of interpretation. Some of these objects and processes can also be used to "compute" things we want to compute. Of those that are used that way, some compute in virtue of instantiating differential equations, others in virtue of instantiating discrete formal symbol manipulations. The sense of "computer" I have in mind is the latter, not the former.

Although I have not yet given it sufficient thought, it may be that even the former kind of system (which corresponds to just about anything, I should think) -- the kind of system that is an "implementation" of differential equations -- also has a kind of "symbol grounding problem," entirely independent of the question of mind-modeling, in that two radically different systems may obey formally equivalent equations (a mechanical system, say, and an electrodynamic one) and it's just a matter of interpretation whether the terms in the equation refer to mass or charge (I'm afraid I don't know enough physics to pick the right equivalents here). In that sense the symbols in the formal equation are "ungrounded," but as far as I know nothing much hangs on this kind of ungroundedness: On the contrary, the analogies between the different physical systems that obey equations of exactly the same form are of interest in unifying the laws of physics.
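One standard pair of equivalents here is the frictionless mass-spring system and the ideal LC circuit: both obey u'' = -(omega^2)u, with u read as position (omega^2 = k/m) in one case and as capacitor charge (omega^2 = 1/(LC)) in the other. A minimal numerical sketch (step size and parameters chosen arbitrarily for illustration) makes the point that nothing in the formalism fixes which reading is "the" meaning of u:

```python
# One formal system, two physical interpretations: the same equation
#   u'' = -(omega**2) * u
# describes a mass-spring oscillator (u = position, omega**2 = k/m)
# and an LC circuit (u = charge, omega**2 = 1/(L*C)).

def simulate(omega2, u0, steps=1000, dt=0.001):
    u, v = u0, 0.0
    for _ in range(steps):
        v -= omega2 * u * dt     # semi-implicit Euler integration
        u += v * dt
    return u

# k/m = 4.0 for the spring; 1/(L*C) = 4.0 for the circuit:
# the two "computations" are numerically indistinguishable.
assert simulate(4.0, 1.0) == simulate(4.0, 1.0)
print(simulate(4.0, 1.0))
```

The interpretation of u as mass-displacement or as charge is supplied entirely from outside the formalism, which is the sense of "ungroundedness" at issue in the paragraph above.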

One of the features of a formal system is that you can write it down on paper. That means discrete symbols. Even the symbols of continuous differential equations are discrete. I don't think it is appropriate to say that the continuous processes to which the symbols refer are themselves "continuous symbols" -- although I can see the motivation for this (by analogy with the implementation of discrete equations in a digital computer, in which the symbols really do correspond to binary states of the computer's flip-flops).

To put it another way: I think the DISanalogies between the discrete implementation (by a computer) of the discrete symbols in a computer program and the continuous implementation (by, say, the solar system) of the discrete symbols in a set of differential equations far outweigh the analogies and are more pertinent to the question of mind-modeling and symbol grounding (and certainly to Searle's Argument and the "Is Cognition Computation" question). But perhaps the analogies are pertinent to the "What is Computation?" question.

bm> Now let me turn to the continuous analog of the Chinese Room, which I'll dub "the Granny Room" (since its supposed purpose is to recognize the face of Searle's grandmother). My point is that there is no essential difference between the discrete and continuous cases.
bm>
bm> >sh> But look at what you are proposing instead: You have Searle twisting knobs, using analog devices, etc. It's clear there are things going on in the room that are NOT going on in Searle. But in that case, the System Reply would be absolutely correct!
bm>
bm> Since I'm sympathetic to the System Reply, this doesn't bother me too much, but I don't see that the discrete Chinese Room is any more immune to it. I proposed all this apparatus to make the example (slightly) more plausible, but there is no reason it can't all be internalized as Searle proposed in the discrete case. After all, we can do continuous spatial reasoning entirely in our heads.

Now this I cannot follow at all! What on earth has "continuous spatial reasoning" in the head got to do with it? We have a robot, among whose internal functions there is, for example, transduction of light energy into some other form of energy. How is Searle to do that in its place, all in his head? In the case of the discrete symbol crunching it was quite clear what Searle had to do, and how, but what on earth are you imagining here, when you imagine him "implementing" a robot that, say, sees, partly in virtue of transducing light: How is Searle to do this without seeing, and without using any props (as he does when he memorizes all the symbol manipulation rules and processes all incoming symbols in his head)?

bm> Further, even if Searle memorizes all the rules, there must still be some bm> way to get the input to him and the output from him. If a slip of paper bm> bearing the Chinese characters is passed into the room, then he must look bm> at it before he can apply the memorized rules; similarly he must write bm> down the result and pass it out again. How is this different from him bm> looking at a continuous pattern (say on a slip of paper), and doing all bm> the rest in his head, until he draws the result on another slip of paper? bm> Whatever you propose to do in the discrete case, I will do in the bm> continuous. The only difference is that *inside Searle's head* the bm> processing will be discrete in one case and continuous in the other, but I bm> don't see how you can make much hang on that difference.

It is certainly true (and a source of much misunderstanding) that in the TT the symbols that come in are a form of input; but this (at least by my lights) is what makes the TT so equivocal, for no one supposes that the input to a real person is pure symbols: A real person, like a robot, has to have sensory transducers (optical, acoustic, or vibrotactile) to be able to perceive the sensory forms that he then interprets as linguistic symbols -- he must, in other words, have TTT capacity, even if this is not directly tested by the TT. But in the TT the problem of PERCEIVING the linguistic input is finessed: No provisions are made for TTT capacity; the symbols are simply processed "directly."

Well Searle's version of the TT simply finesses it in exactly the same way: The question of how the symbols are "perceived" is not raised, and we ask only about whether they are "understood" (the grounding problem inhering in all this should be quite evident). So we do not consider it to be a count against Searle's implementation of the entire System in the case of the TT that he did not implement the transduction of the symbols, because that is irrelevant to the TT, which is asking only about the understanding of the symbols and not their perception.

But this bracketing or modularization of perception cannot go unchallenged, and indeed the challenge is explicit in the TTT, where transduction becomes essential to the very capacity being tested: seeing.

So the answer is, for the sake of argument we agree to consider the transduction of the symbols to be trivial and modular in the case of the TT, and hence there is no "System" objection to the effect that Searle has failed to implement the transduction -- or, better, has implemented it using his own senses. But in the case of the TTT the transduction cannot be modularized without begging the question; moreover, if Searle uses his own senses he DOES see (which, if it were admitted as evidence at all -- as it probably should not be -- would have to be seen as supporting, rather than refuting, the TTT).

To put it even more briefly: sensory transduction is irrelevant to the TT but essential to the TTT; they cannot simply be equated in the two cases. And Searle simply cannot do such things "in his head."

bm> [In both the discrete and continuous case] his (mental) manipulations bm> are purely syntactic; he's dealing with form but not content. There bm> remains then the important question (which symbol grounding addresses) bm> of how symbols -- whether continuous or discrete -- get their content.

I think I agree. As I suggested earlier, whether the structures and processes inside a robot are continuous or discrete, some of them must correspond to words and thoughts, and these must be grounded (at least according to my hypothesis) in the robot's capacity to discriminate, identify and manipulate the objects, events and states of affairs that they are interpretable as being about. Otherwise they are merely dangling (even if ever so systematically) from outside interpretations.


> bm> a physical device is an analog computer to the extent that we
> bm> choose and intend to interpret its behavior as informing us about
> bm> some other system (real or imaginary) obeying the same formal rules

Fine; but in that case most things are actual or potential analog computers, including, a fortiori, me. But this definition seems to depend far too much on how we choose to USE systems, and what OTHER systems we choose to use them to explain.


> bm> In addition to the things that are explicitly marketed as computers,
> bm> there are many things that may be used as computers in an appropriate
> bm> context of need and availability.

bm> That's far from saying everything is -- or even can be -- a computer! A bm> computer, like a screwdriver, is a tool... Some physical systems can be bm> digital (i.e., discrete) computers, others cannot; some can be analog bm> (i.e., continuous) computers, others cannot. And most of these things bm> will not be computers of any sort unless we see and use them as such.

This is certainly pertinent to the "What is Computation?" discussion, but not, I think, to its underlying cognitive motivation. Also, "a physical device is an analog computer to the extent that we choose and intend to interpret its behavior as informing us about some other system (real or imaginary) obeying the same formal rules" seems to leave the doors very wide -- as wide as our imaginations.


> bm> Therefore a hypothesis such as "the mind is a computer" is not
> bm> amenable to scientific resolution . . . .

Not if we include analog computation, perhaps.


> bm> . . . . A better strategy is to formulate the hypothesis in
> bm> terms of the notion of instantiated formal systems, which is more
> bm> susceptible to precise definition.

bm> If "instantiated discrete formal system" is what we mean (or, as I would claim: instantiated formal system, whether discrete or continuous), then why don't we say so? This notion can be formally defined; "computer" and "computation" cannot, in my opinion. (Sloman, Judd and Yee have made similar suggestions.) As you said, part of the attractiveness of the computational view is a manifest constituent structure and a systematic interpretation, but this doesn't require discrete symbols, as I'll argue below. (Pace Fodor, Pylyshyn et al.)

An instantiated discrete formal system (systematically interpretable) is what I, at least, mean by a computer.

bm> for something to be a computer it must normally be *seen as* a bm> computer, in other words, that its formal properties must apply to some bm> other system of interest to us, and hence be interpretable.

It seems to me it was bad enough that the meanings of the symbols in the computer were just in the mind of the interpreter, but now even whether or not something is a computer is just in the mind of the interpreter. What hope has a doubly ungrounded notion like this to capture what's actually going on in the mind of the interpreter!

bm> We must distinguish (at least): (1) physical systems obeying differential bm> equations, (2) continuous formal systems, and (3) continuous symbol bm> systems (MacLennan 1988, in press-a, in press-b). We all know what class bm> (1) is: most of the universe, so far as physics tells us. Class (2) is a bm> subclass of class (1): systems of interest because they instantiate a bm> given set of differential equations, but for which the actual physical bm> quantities governed by the equations are irrelevant (that's why they're bm> formal). (I'm glossing over the distinction between the (Platonic) bm> abstract formal system and the (physical) instantiated formal system, but bm> I think that's clear enough.) Continuous formal systems are treated as bm> syntactic processes; that is, semantics is irrelevant to them qua formal bm> system. Class (3) are those continuous formal systems for which an bm> interpretation is posited. The actual interpretation may not be bm> specified, but we are concerned with how the continuous states and bm> processes are related to the domain of interpretation. As noted again and bm> again by many people, there's not much point in creating uninterpretable bm> formal systems, so the practical distinction between (2) and (3) is bm> whether we are interested in syntax only or syntax + semantics. (I hope bm> the exact parallel with discrete (dynamic / formal / symbol) systems is bm> apparent.)

I'm afraid I can't follow this. Most physical systems in the world are describable and predictable by differential equations. In that sense they are "instantiations" of those differential equations (which can also be written out on paper). We may or may not specify the intended interpretation of the formal equations as written out on paper. And we may or may not use one physical instantiation of the same set of equations to describe and predict another physical instantiation. But apart from that, what do (1) - (3) really amount to? I really don't know what a "continuous formal system" or a "continuous symbol system" is supposed to be. Equations written out on paper certainly are not continuous (the scratches on the paper are discrete symbol tokens). They may, however, correctly describe and predict continuous physical systems. That does not make those continuous physical systems either "formal" or "symbolic." In contrast, the instantiation of a discrete formal system in a digital computer running a program is indeed a discrete (implemented) formal system, because although, being physical, the computer has continuous properties too, these are irrelevant to its implementing the discrete formal system in question. And I have so far found no substantive distinction between "formal" and "symbolic."

bm> As to "the vexed question of whether neural networks are symbol systems" bm> -- it depends what you mean by neural network. Some physical systems bm> implement Hopfield networks, but they belong in class (1), unless our bm> interest in them consists in their implementing the abstract process, in bm> which case they are in class (2). However, if the implemented Hopfield bm> net refers to some other domain, perhaps an optimization problem, then bm> it's class (3). I expect that most of the neural networks in our brains bm> are class (3). Since class (2) is mostly of theoretical interest, it bm> seems unlikely to be found in nature. (Of course there may be brain bm> processes - perhaps not involving neurons at all - that nevertheless bm> coincidentally instantiate abstract neural nets, such as Hopfield nets; bm> these go in class (1), as do processes for which the material embodiment bm> is critical: transducers, for example; or perhaps they are a fourth bm> class, since they cross the 1/3 boundary. In any case symbol grounding is bm> as relevant to continuous symbol systems as it is to discrete.)

I am still unsure whether "continuous symbol systems" do or do not have a symbol grounding problem; in fact, I'm still not sure what a continuous symbol system is. And as I suggested earlier, the fact that something can be (1), (2) or (3) depending on what we happen to use it for does not seem to be a very helpful fact. Surely the brain is what it is irrespective of what we (outsiders) may want to use it for. What we are looking for (as with everything else) is the CORRECT description.

bm> What we normally require of discrete symbol systems, and what allows them bm> to reduce meaningful processes to syntax, is that the interpretation be bm> systematic, which means that it respects the constituent structure of the bm> states. Is there anything analogous for continuous symbol systems? bm> Indeed there is, and to find it we only need look at systematicity more bm> abstractly. Constituent structure merely refers to the algebraic bm> structure of the state space (e.g., as defined by the constructor bm> operations). (There are many sources for this, but I'll take the bm> opportunity to shamelessly plug MacLennan 1990, Chs. 2, 4.) Systematicity bm> then simply says that the interpretation must be a homomorphism: a bm> mapping that respects the algebraic structure (though perhaps losing some bm> of it). The point is that these ideas are as applicable to continuous bm> symbol systems as to the better-known discrete symbol systems. In both bm> cases the "symbols" (physical states) are arbitrary so long as the bm> "syntax" (algebraic structure) is preserved. If you will grant the bm> possibility of continuous symbol systems, then I hope you will also bm> agree that they are of critical importance to cognitive science.

I am agnostic about continuous symbol systems (in part because I am not competent to evaluate the technical point you make above). If there is a generalization of discrete formal symbols and symbol manipulations to continuous formal symbols and symbol manipulations with constituent structure, compositionality and systematicity (including systematic interpretability) that is useful and predictive, I could of course have no objections. The only question I would be inclined to raise concerns the "language of thought" notion that motivated proposing discrete symbols and symbol strings as a theory of mental states in the first place: The symbols in a language are, I think, necessarily discrete. What would a continuous candidate look like? As I formulated it, the symbol grounding problem is very much linked to the notion of discrete symbols in a language of thought. It is they who are ungrounded in a computer implementation. It is not even clear to me how to pose the question of groundedness for "continuous symbols."

Stevan Harnad

-------------------------------------------------------------

Date: Mon, 29 Jun 92 15:16:57 EDT
From: Stevan Harnad

Date: Fri, 5 Jun 92 02:28:13 EST
From: David Chalmers

Thanks to Franklin Boyle for his thoughtful replies. I confess to not fully understanding his position, but as far as I understand it, I gather that he's saying that for the purposes of determining what's computation and cognition, we have to look more closely than simply at the causal state-transitional structure. What matters isn't just the pattern of transitions, it's (a) the specific nature of the state-transitions, and (b) the specific nature of the causal relations.

As far as (a) is concerned, we have to distinguish between real changes in structure, e.g. "receiving a projection of a structure", from mere changes in the "measured attributes" of a system (e.g. voltages). As far as (b) is concerned, we have to distinguish between "structure-preserving superposition", in which the form (or appearance) of one state somehow imprints itself on another, from mere "pattern matching" and "structure fitting". The wrong kinds of state-transition and causation give you computation; the right kinds might give you cognition.

My reply to this proposal is pretty simple: I'm not sure that these distinctions come to anything, and I see no reason why they should make a difference between cognition and non-cognition. It seems to me that the form or appearance that various states embody is just irrelevant to a system's status as cognitive, or as computational. We could probably make an implementation of a Turing machine out of plasticine, where lumps corresponding to "symbols" change shape by colliding with each other; it would still be a computation. And while we don't know how neural causation works, it doesn't seem entirely implausible that the basis is in the transmission of information via various "measured attributes" not unlike voltage: e.g. potentials and firing frequencies.

I've probably misunderstood this position completely, but it seems to me that however these distinctions are drawn, there's no principled reason why computation or cognition should lie on only one side of the line. (Well, maybe there's one principled reason: if the Chinese room argument were valid, there might be a motivation for a line like this. But of course the Chinese room argument isn't valid :-) .)

In reply to some more specific points:

fb> If we allow any physical system to be an implementation of
fb> some computation, we will most likely end up with little in the way of
fb> principled criteria for determining whether cognition is computation.

Let's dispose of this canard once and for all. Even if every system implements some computation, this doesn't imply that every system is engaged in cognition, for the simple reason that only *certain kinds* of computation qualify as cognition. Not even the strongest of believers in strong AI has said that implementing *any* program is sufficient for cognition. It has to be the right kind of program (or, more generally, the right kind of computation).

So: just because the solar system implements a trivial 4-state FSA, we don't suddenly have an interplanetary mind: 4-state FSAs aren't the kinds of things that *think*. Isolating those kinds of computation that qualify as cognition is an interesting, highly non-trivial question in its own right. Presumably, only certain highly complex computations will be sufficient for cognition; solar systems, along with rocks and most everything else in the world, won't have the requisite causal structure to qualify.
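[The solar-system example can be made concrete with a toy sketch; the names and encoding below are illustrative. The orbit is carved into quadrants, and a year of motion steps an inputless four-state FSA. Its triviality is the point: nothing about this causal structure is the "right kind" of computation.]

```python
# Hedged sketch of the "solar system implements a 4-state FSA" reading.
# Names (QUADRANTS, step) are illustrative.

QUADRANTS = ["Q1", "Q2", "Q3", "Q4"]

def step(state):
    """Inputless 4-state FSA: a fixed cycle, nothing more."""
    return QUADRANTS[(QUADRANTS.index(state) + 1) % 4]

# No input sensitivity, no combinatorial structure: whatever this
# automaton implements, it is not the kind of computation at issue
# when we ask whether cognition is computation.
s = "Q1"
for _ in range(4):
    s = step(s)
assert s == "Q1"  # after four steps the cycle returns to its start
```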

What would make the notion of computation vacuous would be if every system implemented *every* computation. But that's just not the case.

fb> I don't believe there is a computational formalism that can
fb> legitimately be described as "computational" if it isn't discrete in a
fb> specific way. This doesn't mean that a system which is computing does
fb> not involve continuous processes (indeed, it must, if it's a physical
fb> system). But such processes are there only in a supporting capacity.
fb> They are not really part of the computation per se.

I disagree with this. See the work of Bruce MacLennan. With the usual variety of computation, we specify the causal patterns between discrete state-transitions via some formalism, with appropriate implementation conditions; one can do precisely the same thing for patterns of continuous state-transitions. It's just that the formalism will be something more reminiscent of differential equations than Boolean logic. However, we should probably stick with discrete computation for the purposes of this discussion.
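[A hedged sketch of the contrast drawn above; the particular rule and equation are illustrative. A discrete transition specified by a Boolean rule sits beside a continuous transition specified by a differential equation, and both can be read as state-transition structure.]

```python
# Discrete vs. continuous state-transition structure (toy example).

def discrete_step(a, b):
    """Discrete transition: one step of a Boolean rule (here, NAND)."""
    return not (a and b)

def continuous_step(x, dt=0.01):
    """Continuous transition: one Euler step of dx/dt = -x."""
    return x + dt * (-x)

# Both specify causal patterns among states; only the formalism differs
# (Boolean logic in one case, a differential equation in the other).
x = 1.0
for _ in range(1000):
    x = continuous_step(x)
assert 0.0 < x < 0.01            # x has decayed toward 0
assert discrete_step(True, True) is False
```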

fb> This is why you can't just say that the orbit of a planet can be
fb> divided into 4 discrete quadrants and expect that the system is,
fb> therefore, implementing a particular computation. The causal process
fb> involved in going from one quadrant to the next is nothing like a
fb> decision-making process; it is a nomologically determined change based
fb> on Newton's second law of motion applied to a particular system --
fb> there is no choice among alternatives determined by the representing
fb> entities present in the system.

I said earlier that the solar system is probably a bad example, as it has no counterfactual sensitivity to various inputs; and this is what you seem to be worrying about here. If we leave out sensitivity to inputs, *every* implementation of a computation undergoes nomologically determined change; it doesn't have any choice (unless we're talking about nondeterministic computation, of course).

fb> As I've suggested in previous posts and above, there *are* physical
fb> properties other than causal organization which, in your terminology,
fb> are conceptually constitutive of cognition -- namely, *how* cause is
fb> brought about. Why the latter constraint is "conceptually constitutive"
fb> (if I understand what you mean by this expression) of a process's being
fb> cognition is that if the brain is to have information about objects in
fb> the world -- their structures, motions, etc. -- then it has to actually
fb> receive the projections of those objects' structures, motions, etc.
fb> Otherwise, how could we know about them? Just saying some measured
fb> attribute or extended structure embodies it is not sufficient.

There's some kind of strange, deeply-embedded assumption here: that true "knowledge" requires embedding of an object's actual "structure" inside a cognitive system. To think about spheres, does one need something spherical inside one's head? Surely not. This sounds like the kind of scholastic theory that Cummins dismisses in the first few pages of his book. Even if you don't mean something quite as literal as this, there seems to me to be nothing wrong in saying that the brain embodies the information it carries in mere "measured attributes" such as potentials, frequencies, and so on, as long as these bear the requisite causal relation to the outside world and play the appropriate functional role within the system.

fb> What are these "patterns of interactions between various states"? Are
fb> they just *sequences* of states or the individual interactions between
fb> particular objects that are constituents of the system? What you call
fb> "interactions between various states" are, I assume, really
fb> interactions between the constituent objects of those states, for that
fb> is what leads to new states. If it's just sequences of different states
fb> that can be mapped onto each other, without any accounting for what in
fb> those states (particular objects or their measured attributes) is
fb> actually doing the representing and whether the representing entities
fb> are causing change, then you haven't really got any principled criteria
fb> for what makes something computational.

"Sequences" is on the right track, except (a) we need a lot more than a single "sequence" -- we need to specify the different "sequences" that will arise e.g. for different inputs; (b) the states here needn't be monadic, as in simple FSAs; the overall state at a given time may be combinatorially structured (as e.g. in a Turing machine, or a neural network), with lots of substates to a given state (e.g. the state of the brain at a given time can be looked at as the combination of a lot of substates, e.g. the states of individual neurons); the causal structure of the system will then depend on the state-transitions between the substates -- it's insufficient in general to describe the system by a simple sequence of monadic states; (c) the relation between consecutive items in a "sequence" must be *causal*.
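[Points (a)-(c) above can be sketched with a toy system; all names are illustrative. The overall state is a tuple of substates, the next state is a causal function of the current state and the input, and different input streams yield different state sequences from the same start.]

```python
# Toy sketch: combinatorially structured states with input-sensitive,
# causal transitions (not just one monadic sequence).

def next_state(state, inp):
    """Overall state = (counter, flag): two substates, jointly updated."""
    counter, flag = state
    if inp == "tick":
        return (counter + 1, flag)
    if inp == "toggle":
        return (counter, not flag)
    return state

def run(state, inputs):
    """Generate the state sequence caused by a given input stream."""
    seq = [state]
    for inp in inputs:
        state = next_state(state, inp)
        seq.append(state)
    return seq

# Different input streams give different sequences from the same start,
# which is what an implementation must get right counterfactually:
a = run((0, False), ["tick", "tick", "toggle"])
b = run((0, False), ["toggle", "tick"])
assert a[-1] == (2, True)
assert b[-1] == (1, True)
```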

Look at the interactions as between the "constituent objects" of the states, rather than between the states themselves, if you like; it doesn't make any difference. On the view that I'm taking, it doesn't matter what a particular state corresponds to physically -- a "measured attribute" or a particular "form" or whatever -- as long as there's a fact of the matter about whether the system is in a given state, and as long as these states have the right overall pattern of causation between them.

--Dave Chalmers.

----------------------------------------------------------------------

Date: Mon, 29 Jun 92 15:15:56 EDT From: "Stevan Harnad" Subject: Re: What is Computation?

[Apologies for the delay in posting this; the prior posting from Bruce MacLennan was actually received earlier, then inadvertently erased, so it had to be requested again. Hence the apparent nonconsecutive order of the postings. -- SH]

Date: Tue, 2 Jun 1992 17:00:04 -0400 (EDT) From: Franklin Boyle

Stevan Harnad writes:


>sh> I agree that excessive generality about "computation" would make
>sh> the question of whether cognition is computation empty, but I
>sh> don't see what THIRD possibility Frank has implicitly in mind
>sh> here: For me, planets, planes, and brains are just stand-ins for
>sh> ordinary analog systems. In contrast, a subset of these analog
>sh> systems -- namely, computers doing computation -- are what they
>sh> are, and do what they do, purely because they are implementations
>sh> of the right symbol system (because they are constrained by a
>sh> certain formal syntax, manipulating discrete symbols on the basis
>sh> of their arbitrary shapes: "pattern matching," as Frank points out).
>sh> So we have the physical analog world of objects, and some of
>sh> these objects are also implementations of syntactic systems for
>sh> which all specifics of the physical implementation are irrelevant,
>sh> because every implementation of the same syntax is equivalent
>sh> in some respect (and the respect under scrutiny here is thinking).

When you describe computers as "a subset of these analog systems [that] are what they are, and do what they do, purely because they are implementations of the right symbol system (because they are constrained by a certain formal syntax, manipulating discrete symbols on the basis of their arbitrary shapes: ...)", you are characterizing them at a level of description above that used to distinguish between the three causal mechanisms I claim to be important to this discussion. In the above quoted passage, all physical systems are classified as "analog", with a particular subset of these functioning in a specific way because "they are implementations of the right symbol system".

You go on to say that this functioning is the result of "manipulating discrete symbols on the basis of their arbitrary shapes" and then acknowledge that I refer to this as "pattern matching". But terms such as "manipulation" and "symbols" conform with the use of "pattern matching" as a *functional* description of a particular process, not a *physical* description of how that process physically accomplishes what it does. I believe this is, in part, one reason why (though it may just be a symptom) you distinguish only two kinds of things:


>sh> So I repeat, there seem to be TWO kinds of things distinguished
>sh> here (actually, one kind, plus a special subset of it), namely, all
>sh> physical systems, and then the subset of them that implement the
>sh> same syntax, and are equivalent in that respect, independent of the
>sh> physical properties of and differences among all their possible
>sh> implementations.

Again, "causal mechanism" is *below* the level of "symbol", which is an interpretive notion. Whether something is a symbol in a systematically interpretable symbol system is of no consequence to the physical process of pattern matching. What matters to pattern matching, as I use it, is physical structure (regardless of its physical realization -- electrical, biomolecular, etc.) and the structural "fitting" of physical structures. One can talk about pattern and matcher structures and an action triggered by a successful match (e.g., a particular voltage change or a single covalent bond), without ever invoking interpretive terminology such as "symbol" and "syntax".

Still at the physical level, we can also say that the cause of the voltage change or covalent bond formation was the result of the "fitting" of the two physical structures. It is not necessary to go into why this is physically so, but Pattee [1986] discusses it as do I, though with a different, complementary explanation [Boyle, in preparation]. Suffice it to say that it does, and there is no need to talk in terms of symbol manipulation, etc., in order to explain why. So my claim is that this particular structure-fitting process, regardless of the fact that it involves analog processes (due to various manifestations of electrical forces -- free charge, molecular, atomic), is one kind of causal mechanism: it enables extended physical structures to be causal.

I claimed in one of my previous posts that computational systems, if we are to avoid a vacuous definition of computation, must have pattern matching as the causal mechanism underlying what are referred to as "computational regularities", that is, the many-to-one relationship between physical and computational states in so-called computational systems [Pylyshyn, 1984]. This physically-principled criterion avoids over-interpreting planetary systems, for example, as being computational. Furthermore, this particular causal mechanism conforms with the more abstract notion of a system being "constrained by a certain formal syntax, manipulating discrete symbols on the basis of their arbitrary shapes". Pattern matching is real, relying on structural constraints to physically distinguish it from the collision of two billiard balls, for example.
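[The many-to-one relationship between physical and computational states mentioned above can be sketched as follows; the threshold values are illustrative, loosely modeled on conventional TTL logic levels. Many physically distinct voltages realize one and the same logical state.]

```python
# Sketch of the many-to-one physical-to-computational mapping.
# Thresholds (2.0 V / 0.8 V) are illustrative values only.

def logical_state(voltage):
    """Map a continuous physical magnitude onto a discrete symbol."""
    if voltage >= 2.0:
        return 1
    if voltage <= 0.8:
        return 0
    return None  # forbidden region: no well-defined computational state

# Physically different states, computationally identical:
assert logical_state(3.3) == logical_state(2.4) == 1
assert logical_state(0.1) == logical_state(0.7) == 0
assert logical_state(1.5) is None
```

The point of the criterion in the text would then be that such a regularity must be sustained by a pattern-matching causal mechanism, not merely be readable into any physical trajectory after the fact.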


>sh> But the passage above seems to imply that there is a THIRD kind
>sh> of stuff, that the brain will turn out to be that, and that that's the
>sh> right stuff (which Frank calls "intrinsic capacity for reference").
>sh> ...
>sh> fb> ...
>sh> I guess "SPS" is this third kind of property, but I don't really
>sh> understand how it differs from an ordinary analog process.
>sh> ...
>sh>
>sh> fb> What other ways might physical objects cause change besides
>sh> fb> through their [arbitrary, syntactic] forms? There are, I claim,
>sh> fb> only two other ways: nomologically-determined change and
>sh> fb> structure-preserving superposition (SPS). The former refers
>sh> fb> to the kinds of changes that occur in "billiard-ball collisions".
>sh> fb> They involve changes in the values of measured attributes
>sh> fb> (properties whose values are numerical, such as momentum) of
>sh> fb> interacting objects according to their pre-collisional measured-
>sh> fb> attribute values in a physically lawful way (that is, according to
>sh> fb> physical laws).
>sh> fb> ...
>sh> fb> Like pattern matching (PM), [SPS] also involves extended
>sh> fb> structure, but in a fundamentally different way. Whereas PM
>sh> fb> involves the fitting of two structures, which by its very nature,
>sh> fb> leads only to a simple change such as the switching of a single
>sh> fb> voltage value from "high" to "low" (in digital computers), SPS
>sh> fb> involves the actual *transmission* of structure, like a stone
>sh> fb> imprinting its structure in a piece of soft clay.

First, the use of "stuff" to describe SPS gives the impression that it is some new kind of substance or that it depends on properties peculiar to a particular medium, rather than being a causal mechanism that describes a particular way in which physical objects can causally affect each other.

My three causal mechanisms are intended to describe all the ways in which physical objects can affect each other physically. That there are three is based on the claim that physical objects have two, what I call, physical "aspects" (for lack of a better term -- I don't like to use "property"). These are 1) their measured attributes -- numerically-valued quantities like momentum whose values are constrained by physical laws and situation-specific constraints, and 2) their extended physical structures. This I take to be self-evident. Any other aspects that we associate with physical objects have to be functional or relational aspects which are interpretive notions, and, therefore, like "symbol", are abstractions above the present physical level of analysis.

Now, the only ways physical objects (whose interactions are what cause change in the world) can effect changes are by changing one (or both) of the physical aspects of other physical objects (as well as their own) when they interact. Furthermore, one (or both) of their physical aspects is (are) responsible for the resulting changes.

I've already described one of the three causal mechanisms above, which I call "pattern matching". For this process, it is the extended structure of an object that leads to physical change. And what sort of physical change does it lead to? A change in the value of a measured attribute, such as the voltage value of a particular circuit in a computer. Why does it lead to this kind of change? Because there is another physical structure which has a similar (perhaps identical, though that is not necessary) or complementary pattern -- an arrangement of contours, combination of voltage values, etc. -- so that they physically "fit". Now it doesn't matter how this fitting occurs, whether it is a sort of "all at once" fitting, as in enzyme catalysis, or whether it occurs over a longer period of time and is spread out over different locations. The important characteristic is that the resulting change, which is due to all the individual (local) analog processes that underlie this kind of object interaction (e.g., individual transistor switchings due to electric charges in and potentials across semiconducting materials, or the local molecular forces involved in the structural positioning of biomolecules), happened because the object's structure acted as a constraint, "channeling" the effects of all these local changes into the final outcome. The change is simple because it is brought about by STRUCTURE FITTING.

Now you say that there are "all physical systems, and then the subset of them that implement the same syntax". But this kind of statement, which refers to the physical as analog and then uses interpretive language like "syntax", will not buy you the kind of physical distinctions I'm making. I divide everything except this special subset into two kinds of causal mechanisms: what I call nomologically-determined change and structure-preserving superposition. They are analyzed at the same descriptive level -- the physical/causal level -- that pattern matching was. Your analog vs syntactic seems to be a mixing of levels, whereas my three causal mechanisms are not.

Continuing with the type of analysis applied to pattern matching above, nomologically-determined change involves changes in the measured attribute values of interacting objects, caused by those objects' interaction and constrained according to physical laws and situation-specific constraints. This is what happens in any object interaction, even ones that involve structure fitting. But in structure fitting, one of the final changes is due to the fitting of structures, while the rest are nomologically determined. Most physical interactions in the world result in nomologically determined changes *only*. Such interactions are exemplified by billiard ball collisions.

So far we have structure leading to measured attribute changes (pattern matching) and measured attributes leading to measured attribute changes (nomologically determined change). Now what about structure leading to structural changes? That is what occurs in SPS. A stone colliding with a piece of soft clay is a model of such a process. The stone's surface structure is effectively transmitted to the clay. Of course, measured attributes of the clay and stone are also changed as a result of the collision, but the surface structure change in the clay is due to the surface structure of the stone, not its particular momentum value, for example. The causal mechanism is different from pattern matching because pattern matching involves structure fitting. For the latter, the effect is a measured attribute (structureless) change, rather than a change in structure as in SPS. The reason I use "superposition" to refer to this third causal mechanism is that it involves the superposing of one object's structure onto another's.
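[A toy rendering of the contrast just drawn; the encodings and names are illustrative. Pattern matching collapses a whole input structure into a single structureless change, while SPS transmits the structure itself, as in the stone/clay model.]

```python
# Pattern matching vs. structure-preserving superposition (toy sketch).

TEMPLATE = [1, 0, 1, 1]

def pattern_match(structure):
    """Structure fitting: the rich input yields only a single value."""
    return structure == TEMPLATE   # one "voltage flip", structureless

def superpose(stone, clay):
    """SPS: the stone's surface profile is imprinted into the clay."""
    return [max(s, c) for s, c in zip(stone, clay)]

assert pattern_match([1, 0, 1, 1]) is True     # effect: one simple change
imprint = superpose([2, 0, 3, 1], [0, 0, 0, 0])
assert imprint == [2, 0, 3, 1]                 # effect: a transmitted structure
```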

So all of the above causal mechanisms are PHYSICAL processes. I have not talked about symbols, syntax, representation, reference or anything of that nature, so I assume you will agree that there has been no hermeneutics creeping in so far. (At this point, one might ask: What about measured attributes leading to structure changes? This would occur, for example when two objects collide with such force that one or both break into smaller pieces. But the structures of the newly formed surfaces are due to the particular material and how the energy was channeled through it, probably due to its internal structure (cleavage planes, for example), so that the structures of the pieces are not really determined by any sort of relationships between them and the values of the measured attributes of the original objects. In other words, you can't have structureless entities creating non-arbitrary (with respect to the values of those entities) structures. Thus, I have lumped these kinds of effects in with nomologically determined changes.)


>sh> Leaving out the hermeneutics of "appearance" (which I think is a
>sh> dangerous red herring), the above again simply seems to be
>sh> distinguishing two kinds of analog processes, but this time with
>sh> the distinction mediated by properties that are interpretable as
>sh> "resembling" something rather than by formal syntactic properties
>sh> that are interpretable as meaning something. So, enumerating, we
>sh> have (1) the usual Newtonian kind of interaction, as between planets,
>sh> then we have (2) a kind of structure-preserving "impact," leaving an
>sh> effect that is somehow isomorphic with its cause (like an object and
>sh> its photographic image?), and then finally we have (3) implementation-
>sh> independent semantically interpretable syntactic interactions. But (2)
>sh> just looks like an ordinary analog transformation, as in transduction,
>sh> which I don't think is fundamentally different from (1). In particular,
>sh> if we drop talk of "appearances" and "resemblances," whatever
>sh> physical connection and isomorphism is involved in (2) is, unlike
>sh> (3), not merely dependent on our interpretation, hence not
>sh> "ungrounded" (which is why I make extensive use of this kind of
>sh> analog process in my model for categorical perception).

Not distinguishing between (2) and (1) leads to a problem similar to the one we have with describing digital computers: a descriptive dualism that talks about symbols, syntax and function in order to describe their computational behavior, and changes in voltage and other measured attributes of their physical components in order to describe their physical behavior. The first uses terms that are ungrounded, while the second uses physically grounded terms. If you want to understand the computational behavior of computers in a physically principled way, then you must ground their (internal) computational behavior. This is done via pattern matching and the causality of extended physical structure enabled by such a process. We cannot be satisfied with the standard functionalist gloss that there are causal relationships between computational states. This doesn't really ground the computational behavior of computers. It merely acknowledges the fact that computers are physical systems.

A similar argument can be made for distinguishing between SPS and standard physical descriptions. Clearly, when extended structure effects a structural change in other structures, it involves changes in those structures' local attributes (e.g., voltage values, say, in a neural network or positions of particular points on a structure's surface, say, in the stone/clay model). But describing such a structural process in this way -- e.g., as the transduction of photon energy to neuronal electric potentials -- loses the fact that there was a coherent structure which was the cause of the particular arrangement of individual local attribute changes and that this newly created structure may then go on to affect other structures. As with computation, it reduces the process to a set of changes described by standard physical state description terminology, so that if we wanted to consider that this kind of change is what underlies thinking (analogous to computation in the computer), then we would have to resort to information processing terminology in order to talk about it, like we do for computation: a terminology that is ungrounded.

Why do we need a physical framework based on the causal mechanisms I am proposing as responsible for physical change? Because the causality of extended structure is not explicitly accounted for in standard physical state descriptions. That is, for both computation and its counterpart in the brain -- thinking, however it is enabled -- it is the causality of extended structure, through pattern matching and SPS, respectively, that makes it computation and thinking, respectively. Just saying that there are analog processes is not sufficient, because all physical processes involve analog processes, including pattern matching (as you've acknowledged). Structure has to be recognized as controlling the behaviors of certain physical systems AS STRUCTURE, not just as a set of boundary conditions restricting the range of values of certain state variables. By lumping everything into analog processes or transformations that are non-syntactic, you are unable to distinguish between these.

Why should such a distinction matter? Because I believe the brain qua mind works at the level of structure transmission, no matter how much others want to reduce its behavior to neurophysiological descriptions based on measured-attribute analog transformations or transductions. If you don't ground structure transmission in some physical framework, then mind will always be described in ungrounded terms just as computation, described in terms of symbols and rules to match them, is ungrounded. This often leads people to believe that mind is emergent, rather than the result of a specific type of structural control.

I'm surprised that you don't see the necessity of this division, since it seems to me you would be hard pressed to explain how what you call "analog reduction" could produce iconic category structures if such a process were nothing more than lots of transductions of measured attributes of neurons without some more overarching structural constraints. Perhaps you don't think such constraints are necessary, but if that's the case, then all you can probably hope for is mind as an emergent property (which I don't agree with).

Finally, my use of "appearance" to describe extended structure is meant to distinguish how extended structure is causal in SPS, as opposed to how it is causal in pattern matching. For the latter, I call extended structure "form" because to be causal it must conFORM to another (matching) structure. Its structural *appearance* is not part of the effect. There is no interpretation involved in saying it this way; no smuggling in of a homunculus to figure out what the structure appears to look like (e.g., a tree, an elephant, etc.). "Appearance" and "form" are terms that are meant simply to help describe, in a more concise way, differences in how extended structure can effect change. It is still all physical, so I think I've steered clear of your "hermeneutical hall of mirrors".


>sh> My own proposal is that symbols are grounded in whatever
>sh> internal structures and processes are required to generate TTT
>sh> capacity, and I have no reason to believe that these consist of
>sh> anything more than (1) pure analog properties, as in solar
>sh> systems and their analogs, plus (2) syntactic properties, but
>sh> with the latter grounded in the former, unlike in a pure
>sh> (implemented but ungrounded) symbol system such as a
>sh> computer. In this hybrid system (Harnad 1992 -- see excerpt
>sh> below) neural nets are used to detect the invariants in the analog
>sh> sensory projection that allow object categories to be connected to
>sh> the symbols that name them; this model invokes no third, new
>sh> property, just analog and syntactic properties.

Since I've already stated above why I believe there should be a subdivision of processes based on the causal mechanisms I've described here, as well as in previous posts and the literature, let me just comment briefly on your idea of a hybrid system. I think that the brain involves SPS (or in your terminology, is analog) "all the way through". Though there may be some pattern matching (or in your terminology, syntactic properties), I think this occurs at relatively "low level" perceptual stages, definitely not at higher cognitive levels. If, in your system, you "connect" object categories with the symbols that name them, AND the manipulation of those symbols according to their "syntactic properties" is what you intend to underlie thinking, then all you've really got is a pattern matching system with some peripheral grounding which I don't see as being different, in principle, from "a pure (implemented but ungrounded) symbol system such as a computer" for two reasons: 1) the symbols are still form-arbitrary because the connectionist network used to ground them is really just a pattern matching structure [Boyle, 1991], at least the way I've seen it described in your publications, and 2) even if the network is not a pattern matching structure (we could even assume that the iconic category structures are the symbols), the fact that the symbols are part of a "symbolic component" (i.e., they cause change through pattern matching) means that their referential capacities cannot be due to their structures since pattern matching renders *any* structure inconsequential with respect to the change it produces. Thus it wouldn't matter that they were "connected to" transduced category structures. In other words, the causal effects of representing entities are important to their referential capacities and, thus, to how they mean [Boyle, 1992], just as they are important to grounding.
So if you want to be fundamentally different from a computer, the physical changes that underlie thinking cannot be due to pattern matching. Grounding is superfluous if it doesn't go "all the way in".


>sh> fb> Issues about consciousness, qualia, etc. should be part of
>sh> fb> another discussion on mind and brain, but symbol grounding
>sh> fb> and even the Chinese Room... should be part of the "What
>sh> fb> is Computation?" discussion because they involve issues of
>sh> fb> causality and representation which are fundamental to
>sh> fb> computation... e.g., "understanding" ...comes about
>sh> fb> presumably because of referential characteristics of the
>sh> fb> representation.
>sh>
>sh> But by my lights you can't partition the topic this way, excluding
>sh> the question of consciousness, because consciousness already
>sh> enters as a NEGATIVE datum even in the Chinese Room: Searle
>sh> testifies that he does NOT understand Chinese, therefore the
>sh> implementation fails to capture intrinsic reference. Searle is
>sh> reporting the ABSENCE of understanding here; that is an
>sh> experiential matter. So understanding piggy-backs on the capacity
>sh> to have qualia. Frank seems to agree (and to contradict this
>sh> partitioning) when he writes:
>sh>
>sh> fb> This is just a physical explanation of why, as Harnad puts it,
>sh> fb> there is "nobody home" in such systems. Nor can there ever be.

What I meant by the above is that discussions about consciousness and qualia, if they are not physically grounded (which includes most of the literature on these topics), should not be part of the discussion, "What is Computation?". But symbol grounding, reference and the Chinese Room -- to the extent that it can be used to illustrate the arbitrariness of formal symbol systems -- are all relevant because they can be discussed in objective terms that are grounded. I don't know how to ground consciousness and qualia because I don't really know what kinds of things they are -- my current take is that they are epiphenomenal; the sensations we experience are the result of the particular causal mechanism underlying thinking and the particular large-scale organization of the brain. Understanding is sort of an in-between term which I believe can piggyback on reference.

Thus, if you can't find a common (physical) basis for aspects of cognition and for computation, then those aspects of cognition shouldn't be a part of the discussion. My reference to your "nobody home" characterization of computers was meant to acknowledge that at bottom one needs the right physical characteristics (for me, SPS), since they are, fundamentally, what give rise to our human brand of understanding and experience.


>sh> I know that "extended structure" plays a critical role in Frank's own
>sh> theory, but I have not yet been able to understand clearly what that role
>sh> is. Whenever I have read about it, if I subtracted out the hermeneutics, I
>sh> found no remarkable property left over -- other than continuity in time
>sh> and space, which is rather too general to be of any help, plus ordinary
>sh> analog and syntactic interactions.

If there are hermeneutical aspects to what I've outlined at the beginning of this post with respect to my causal mechanisms, please let me know what they are. It is true that I used the term "information" a bit loosely in the JETAI paper, but it was employed mainly for descriptive purposes, just as "appearance" is above. Nowhere, as far as I can tell, has the theory depended on such terms; there have been no mind-begging appeals to them.

I guess my response to "no remarkable property left over -- other than continuity in time and space, which is rather too general to be of any help" is: How do you see your distinction between pure analog and syntactic processes as being any less general?

Let me just close by giving a brief overview of what I see as the reason for our differences (because, except for the issue of grounding all the way in, I don't see our ideas as being that different). It seems that you are not making a distinction finer-grained than analog vs. syntactic processes because you are relying on a variant of the Turing Test, your TTT, to test your grounding proposal. But I think one can go further than this in establishing what may be necessary for minds without having to go through the arduous process of building such a system (if, in fact, it is practical or even possible for us to do so), perhaps only to find out that it can't be done because it wasn't pure analog all the way in (and how would we know it can't be done, or why, if we never successfully build it?). If we can ground thinking in the physical structures of the signals that project environmental structure (e.g., the structures of objects) onto our sensory surfaces, that is, determine the causal mechanism for how those signal structures alter the structure of the brain and, hence, influence its behavior, then we have a better chance of building a system that could pass any behavioral test we decide to throw at it. This is why I think we have to recognize a finer-grained distinction within your analog-processes category.

Well, I hope all this helped to clarify the points you raised. If not, perhaps the above will lead to other, more detailed comments and questions.

-Franklin Boyle

Boyle, C. F. (1991) On the Physical Limitations of Pattern Matching. Journal of Experimental and Theoretical Artificial Intelligence, 3:191-218.

Boyle, C.F. (1992) Projected Meaning, Grounded Meaning and Intrinsic Meaning. To appear in the Proceedings of the 14th Annual Conference of the Cognitive Science Society. To be given as a poster.

Boyle, C. F. (in preparation) The Ontological Status of Mental Objects.

Harnad, S. (1990) The Symbol Grounding Problem, Physica D, 42: 335-346.

Pattee, H.H. (1986) Universal Principles of Language and Measurement Functions. In J.L. Casti and A. Karlqvist (eds), Complexity, Language and Life: Mathematical Approaches (New York: Springer-Verlag).

Pylyshyn, Z. (1984) Computation and Cognition: Toward a Foundation for Cognitive Science, (MIT Press, Cambridge, MA).

------------------------------------------------------

Date: Thu, 2 Jul 92 14:43:37 EDT From: "Stevan Harnad" To: jfetzer@ub.d.umn.edu Subject: Publishing the "What is Computation?" Symposium


> From: jfetzer@ub.d.umn.edu (james fetzer)
> Subject: SPECIAL ISSUE OF MINDS AND MACHINES
> To: harnad@Princeton.EDU (Stevan Harnad)
> Date: Mon, 29 Jun 92 14:48:17 CDT
>
> Stevan,
>
> I wanted to contact you concerning our tentative plan for a special
> issue of MINDS AND MACHINES devoted to the topic, "What is
> Computation?" I have noticed the tremendous interest in this subject
> since the email exchange began as well as a considerable variation in
> opinion about how to go about pursuing the idea of publication. Based
> on my reading of the participants' reactions, my inference is that
> there is a strong preference for position papers, perhaps supplemented
> by critical discussions of one another's positions. That is an
> appropriate approach, it seems to me, where each participant can be
> assured of having their views presented intact in the form of a
> position paper, where opportunities for critical exchange are also
> provided (more on the order of the email origins, but now in relation
> to these more carefully considered position papers rather than the
> original formulations advanced earlier by email).

Jim, I have no objection to this alternative, if that is what the contributors prefer, but I have to point out that there is an element of wishful thinking in your reading of the participants' reactions as expressing "a strong preference for position papers." The latest tally of the votes had in fact been: 2 preferring position papers, 5 preferring the interactive symposium, and 11 amenable to either or a combination. Since then the tally has risen to 2/8/11, respectively, with all but 4 of the contributors now having cast their votes (see end of this message).


> I also believe that it is better to focus on the question of the nature
> of computation instead of combining this issue with questions about the
> symbol grounding problem. If you or others are inclined to contend that
> the one cannot be resolved without an adequate answer to the other,
> that of course would be an appropriate position to argue in your own
> position paper, but I think it would be an imposition to require others
> to focus on both when they may think that they are separable problems.
> (We do not want to beg the question by assuming they are wrong in
> advance.)

The ancillary issue is not the symbol grounding problem but whether or not cognition is a form of computation. That, after all, is what is motivating the "What is Computation?" question for most of us (and for your journal, "Minds and Machines," too, I should think). I happen to be among those who think that the "What is Computation?" question can and should be settled completely INDEPENDENTLY of the "Is Cognition Computation?" question, but I certainly would not force anyone to consider only one question or both, unless they feel so inclined. However, you will find, I think, that cognitive issues (including Searlian, anti-Searlean and symbol-grounding-related ones) will surface in the discussion whether the publication conforms more to the interactive symposium that is now transpiring (where cognitive issues are clearly being raised) or consists instead of position papers and subsequent interactive discussion, and whether or not its exclusive theme is "What is Computation?"


> If we agree that the combination of position papers and discussions
> is the right way to go, then let me suggest that we target this special
> issue to be the November 1994 issue of this journal. I would like to
> have everything in my hands no later than April 1994, where you need to
> have everything in your hands much sooner to make it all come out
> right. This should provide sufficient time for the contributors to
> compose their papers and exchange them prior to creating the critical
> exchange parts of the issue. I would stress that I believe that this
> should be done in a certain sequence to preserve the integrity of
> various authors' positions. I imagine you will want to continue your
> ongoing exchanges via email as a separate undertaking even while this
> project develops for MINDS AND MACHINES.

I will continue the email symposium as long as the ideas keep flowing. In September I will ask the contributors whether they wish to prepare formal position papers for a further round of email discussion with a view to publication. A tentative target date for receiving and circulating the position papers electronically might be November, when they could be refereed and edited, say, by January. Then the accepted versions could be circulated for electronic commentary and cloture on the ensuing discussion might be invoked in May or June, when the discussion can be edited, returned to the contributors for approval or modification, refereed, re-edited, and then sent to press.


> Let me know if this sounds all right to you. The number of words per
> pages of this journal is 400 rather than 600 (as I believe I mistakenly
> indicated previously). I am willing to commit 100 pages to this
> undertaking and possibly more if it turns out to warrant a greater
> commitment. We will run 125 pages per issue beginning with volume 3
> (1993), but I would like to keep 25 pages for book reviews and such,
> even in the case of special issues, if it is possible. Given what I can
> do, I will keep the reviews on related issues. So let me know if this
> sounds agreeable and how you plan to proceed, etc., and we can carry
> this project forward within these parameters. I am very enthusiastic
> about it.
>
> Best wishes, Jim

Those parameters seem ok to me. Since I happen to favor the interactive symposium format, I would prefer short position papers and extended discussion, rather than extended position papers and short discussion, but I leave that to the contributors to decide collectively.


> From: jfetzer@ub.d.umn.edu (james fetzer)
> Date: Wed, 1 Jul 92 11:00:41 CDT
>
> Stevan,
>
> Let me know if the general outline I have sketched sounds agreeable.
> Perhaps it might be possible to have it in print sooner than the
> November 94 issue (as I am sure you would like). More traditional
> preparation, as you know, requires long lead times, such as about two
> years for special issues. So this case--where the preparation of
> manuscripts might take place in much less time--poses special
> circumstances. Let me know how you feel about the timing. I would
> have no objection to the idea of your making this material available
> via email earlier than its publication in MINDS AND MACHINES. That
> could compensate for the timing, although I do not know the time frame
> you have in mind.
>
> Jim

Let's see how the participants feel about the tentative dates and formats. Below is the latest vote tally.

Cheers, Stevan

-------------------------------------------------------------------

Previous tally of votes:

Interactive Symposium (IS) vs. Position Papers (PP): Either or Combination: 11 - Prefer IS: 5 - Prefer PP: 2

With the further votes below, the tally is now 11/8/2

------------------------------------------------------------------------

Date: Wed, 20 May 92 22:59:40 EDT From: lammens@cs.Buffalo.EDU (Joe Lammens)

I'm for publication, preferably as an interactive symposium, perhaps with position papers added.

-------------------------------------------------------------------------

Date: Mon, 25 May 92 00:39:00 +0100 From: chrisley@csli.stanford.edu Ronald L. Chrisley

Publishing is fine by me. I have no strong preference concerning the format.

I've found the discussion very interesting, and have some points I'd like to make, but I have not been able to find time to catch up with the postings. I hope I will find time soon.

Could you give me a run-down of the agenda? When will discussion end? When will position papers be expected?

-------------------------------------------------------------------------

Date: Thu, 4 Jun 1992 10:49:33 PDT From: Patrick Hayes

[Voted earlier, but here clarified his vote, so re-assigned to IS]

Sorry I'm late, hope not too late. I'd like to try the edited interactive format, although I think it will require a masterpiece of editing to get it sorted out and readable. I sympathise with some of Brian Smith's concerns also, and of course the ideal situation would be an editor who had no position of his own, but that's probably impossible to achieve here. This is not like a tape recording of a face-to-face conversation, but in any case that can be made readable, with enough work and some ruthlessness in cutting inappropriate chunter. (I've done it to transcripts of arguments between groups of cognitive scientists.)

In fact it's not like anything else, which is why I like the idea of trying to make it into something. I share Stevan's (or should I say 'your': am I talking to Stevan or the CC list? One of those interesting email questions) fascination with the evolution of a new medium, and would like to help experiment with it.

Pat Hayes

-------------------------------------------------------------------------

Date: Tue, 16 Jun 92 16:44:27 -0400 From: hatfield@linc.cis.upenn.edu (Gary Hatfield)

If you publish some chunk of the interactive discussion and want to include my part, I would give permission with the condition that I be allowed to review and approve the material.

-------------------------------------------------------------------------

From: massimo@Athena.MIT.EDU Massimo Piatelli-Palmarini Date: Wed, 24 Jun 92 08:54:10 EDT

[From a reader of the discussion, not yet a contributor]

Dear Stevan, I am back from a long absence and wish to say that the idea of publishing the debate is a good one, provided that a lot, I mean a lot, of editing is carried out on the existing exchanges.

-------------------------------------------------------------------------

Date: Fri, 3 Jul 92 12:06:53 EDT From: "Stevan Harnad" To: jfetzer@ub.d.umn.edu Subject: Re: Publishing the "What is Computation?" Symposium


> From: jfetzer@ub.d.umn.edu (james fetzer)
> Date: Thu, 2 Jul 92 14:59:21 CDT
>
> Stevan,
>
> What you have in mind is fine. Longer discussions and shorter position
> papers is probably the right combination: my suggestion was meant to
> emphasize the desirability of having position papers to make clear (in
> condensed form) the positions of various contributors. Not all of them
> need to have papers rather than comments, of course. But it seems to me
> that having to read through 100-pages of interactive discussion
> WITHOUT POSITION PAPERS would not be something we should impose on our
> readers. If you can provide 100 pages at 400 words per page of material
> MAX by April 1994--watch out for spacing, which must be taken into
> account--I can schedule it for the November 1994 issue now and we are
> on track.
>
> To clarify the dates mentioned in my last message, I need the final
> version in hand by April 1995. But we also need to have the referee
> process take place, as indeed your tentative schedule clearly accommo-
> dates. So everything sounds fine to me.
>
> Jim

Except that I think you do mean April 1994, n'est-ce pas, unless you're REALLY pessimistic about the refereeing...

Cheers, Stevan

-------------------------------------------------------

Date: Fri, 3 Jul 92 14:34:21 EDT From: "Stevan Harnad" Subject: Re: What is Computation?

Date: Fri, 3 Jul 1992 12:17:42 -0400 (EDT) From: Franklin Boyle

David Chalmers writes:


>dc> Thanks to Franklin Boyle for his thoughtful replies. I confess to not
>dc> fully understanding his position, but as far as I understand it, I gather
>dc> he's saying that for the purposes of determining what's computation
>dc> and cognition, we have to look more closely than simply at the causal
>dc> state-transitional structure. What matters isn't just the pattern of
>dc> transitions, it's (a) the specific nature of the state-transitions,
>dc> and (b) the specific nature of the causal relations.
>dc>
>dc> As far as (a) is concerned, we have to distinguish between real
>dc> changes in structure, e.g. "receiving a projection of a structure",
>dc> from mere changes in the "measured attributes" of a system (e.g.
>dc> voltages). As far as (b) is concerned, we have to distinguish between
>dc> "structure-preserving superposition", in which the form (or
>dc> appearance) of one state somehow imprints itself on another, from
>dc> mere "pattern matching" and "structure fitting". The wrong kinds of
>dc> state-transition and causation give you computation; the right kinds
>dc> might give you cognition.
>dc>
>dc> My reply to this proposal is pretty simple: I'm not sure that these
>dc> distinctions come to anything, and I see no reason why they
>dc> should make a difference between cognition and non-cognition.
>dc> It seems to me that the form or appearance that various states
>dc> embody is just irrelevant to a system's status as cognitive, or
>dc> as computational.
>dc>......
>dc> I've probably misunderstood this position completely,....

I would say you gave a pretty fair, albeit brief, summary of the main ideas.

The reason I believe the "form or appearance" of a system's states (actually of its constituent structures, such as neural connectivity and the activation patterns constrained by it) is relevant to a system's status as cognitive is that in order to "know" about objects, events, etc. in the world, we have to somehow acquire information about their structures. If we don't get this spatially varying structure projected directly, then some kind of external agency must have set up an apparatus to analyze the structures into whatever *encoded* form is desirable.

This is what happens in digital computers. We encode descriptions (say propositional) of the environment and then have the computer associate those descriptions with actions, say, through rules whose left-hand sides match the descriptions. This is accomplished by the physical process of pattern matching, so that as long as matchers are available for triggering the appropriate actions, it doesn't matter what the encodings look like (they could even be bitmaps of the environment). The system, say a robot with the full range of human peripheral capacities, would simply need a decoder hooked up to its effectors in order to be able to behave in a manner consistent with what we interpret the descriptions to be about.
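Boyle's point about encoding-arbitrariness can be sketched in a few lines of code (all names and tokens here are hypothetical, purely illustrative): two rule sets whose left-hand sides have entirely different forms produce identical behavior, because pattern matching cares only about formal identity between the percept and the rule's left-hand side, not about what the tokens look like.

```python
# A minimal production-system sketch illustrating form-arbitrariness:
# the encoding of the environment could be a readable description or an
# arbitrary bit string; behavior is the same as long as the matchers match.

def run(rules, percept):
    """Fire the action whose left-hand side pattern-matches the percept."""
    for lhs, action in rules:
        if lhs == percept:        # pattern matching: pure formal identity
            return action
    return "no-op"

# Encoding A: a readable propositional description of the environment.
rules_a = [("red-ball-ahead", "reach-and-grasp")]

# Encoding B: an arbitrary bit string standing for the very same situation.
rules_b = [("0b1011001", "reach-and-grasp")]

# Same behavior under both encodings -- the symbol's form does no causal work.
assert run(rules_a, "red-ball-ahead") == run(rules_b, "0b1011001")
```

Nothing in the system depends on the *shape* of the left-hand sides; only an outside interpreter connects them to what they are "about", which is the sense in which the encodings are form-arbitrary.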

So, I don't see how we can be cognizant of structures in the environment if we don't have information in our heads which was constructed from those structures such that the structural information is *actually* there. And the structural information isn't there if it's encoded. We only think it is because, as outside observers, we can *interpret* what is there as such and observe behaviors which are consistent with that interpretation. Why would a propositional representation be any different from arbitrary bit strings, neither of which preserves the spatial variations in any form that is, spatially, even remotely isomorphic to the external structures they purportedly represent? In a pattern-matching system, these arbitrary forms can represent whatever we want them to. To repeat, if the structural variations are not actually topographically preserved (and there appear to be many topographic mappings in the brain, at least in the sensory cortical areas), then how can we be said to have information about those structures except by an observer looking into our heads (or into a computational model) and interpreting the representing entities to be such?

Certainly there are physical changes to the visual signal entering the head on its way to the visual cortex and beyond. But I believe these are enhancements and analyses which are accomplished within the spatially-preserved variations of the input signal.

In short, the right causal structure (sequences, etc.) is necessary for a system to behave *as if* it is cognitive. But structure preservation and its causal capacity for effecting change through superposition are also necessary for there to actually be cognition; that is, for having the capacity to, for example, "understand" in Searle's sense.


>dc> fb> If we allow any physical system to be an implementation
>dc> fb> of some computation, we will most likely end up with
>dc> fb> little in the way of principled criteria for determining
>dc> fb> whether cognition is computation.
>dc>
>dc> Let's dispose of this canard for once and for all. Even if
>dc> every system implements some computation, this doesn't
>dc> imply that every system is engaged in cognition, for the
>dc> simple reason that only *certain kinds* of computation
>dc> qualify as cognition. Not even the strongest of believers
>dc> in strong AI has said that implementing *any* program is
>dc> sufficient for cognition. It has to be the right kind of program
>dc> (or, more generally, the right kind of computation).

Maybe I'm misinterpreting the expression, "cognition is computation", but it seems like you're interpreting what I said above to mean "computation is cognition", which I don't believe at all and did not intend for the above to mean.


>dc> Isolating those kinds of computation that qualify as cognition is
>dc> an interesting, highly non-trivial question in its own right.
>dc> Presumably, only certain highly complex computations will
>dc> be sufficient for cognition; solar systems, along with rocks and
>dc> most everything else in the world, won't have the requisite causal
>dc> structure to qualify.

I certainly agree that a certain level of complexity is necessary.


>dc> There's some kind of strange, deeply-embedded assumption here:
>dc> that true "knowledge" requires embedding of an object's actual
>dc> "structure" inside a cognitive system. To think about spheres, does
>dc> one need something spherical inside one's head? Surely not. This
>dc> sounds like the kind of scholastic theory that Cummins dismisses
>dc> in the first few pages of his book.

As Cummins points out, this kind of literal representation only works for mind-stuff considered as non-physical: "The idea that we could get redness and sphericity in the mind loses its plausibility if this means we have to get it in the brain. When I look at a red ball, a red sphere doesn't appear in my brain" (Cummins, 1989, p. 31). This is also Roger Shepard's "first-order isomorphism" (see Palmer, 1978). (I have not read Shepard's paper, but I list the reference below as cited by Palmer.)

No, I don't take it quite so literally. I guess my ideas are more akin to what Cummins calls "restricted similarity" in the sense that spatial variations of the input are preserved, but that doesn't mean such a representation shares any other properties with its referent. It's just that the spatial variations of one can be mapped onto the other, though there may be metric deformation of the representing form.


>dc> Even if you don't mean something quite as literal as this, there
>dc> seems to be nothing wrong in saying that the brain embodies the
>dc> information it carries in mere "measured attributes" such as
>dc> potentials, frequencies, and so on, as long as these bear the
>dc> requisite causal relation to the outside world and play the
>dc> appropriate functional role within the system.

There are two things to be sorted out here. It is certainly the case that the topographic mapping of the structural variations of the image on the retina, for example, onto the visual cortex may occur by transducing light-energy amplitudes into frequencies of spike trains along axons, so in this sense there are measured attributes which "carry" the "information" of the signal. But I claim the significant behavior of the brain depends on the *spatial variation* of these measured-attribute values, which causes the structural variations to become "imprinted" (temporarily) on the visual cortex. A particular measured-attribute value, or a set of values that are not spatially arranged such that variations in their values are isomorphic (modulo metric deformations) to the variations of the original signal, cannot be said to carry structural information about the referent unless we interpret it, from the outside, to be such.
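The contrast Boyle draws here can be sketched with illustrative values (not from the text): a monotone rescaling plays the role of a topographic map that preserves spatial variation up to metric deformation, while an arbitrary codebook plays the role of an encoding whose structure only an outside interpreter can recover.

```python
# Sketch: structure-preserving vs. arbitrary encodings of a spatially
# varying signal. "Preserving variation modulo metric deformation" is
# modeled here, minimally, as preserving the ordering of variations.

signal = [0, 1, 4, 9, 16, 25]            # spatially varying input values

# Topographic map: a smooth, monotone rescaling; the pattern of spatial
# variation survives, though metrically deformed.
topographic = [0.5 * v + 3 for v in signal]

# Arbitrary encoding: each value replaced by an unrelated code number; the
# structure is recoverable only by an external decoder/interpreter.
codebook = {0: 7, 1: 2, 4: 19, 9: 0, 16: 11, 25: 5}
encoded = [codebook[v] for v in signal]

def same_ordering(a, b):
    """True if consecutive values in a and b vary in the same direction."""
    return all((x < y) == (u < v)
               for (x, y), (u, v) in zip(zip(a, a[1:]), zip(b, b[1:])))

assert same_ordering(signal, topographic)   # isomorphic variation survives
assert not same_ordering(signal, encoded)   # structural variation is lost
```

The codebook values here are deliberately arbitrary: the point is that *any* assignment would serve a pattern matcher equally well, which is exactly why the encoded form carries no structural information on its own.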

-Franklin Boyle

Cummins, R. (1989) Meaning and mental representation (MIT Press/Bradford Book).

Palmer, S.E. (1978) Fundamental aspects of cognitive representation. In Rosch, E. and Lloyd, B. (eds), Cognition and Categorization (Hillsdale, NJ: Lawrence Erlbaum).

Shepard, R. and Chipman, S. (1970) Second-order isomorphism of internal representations: Shapes and states. Cognitive Psychology 1: 1-17.

---------------------------------------------------------------------

Date: Mon, 23 Nov 92 23:39:12 EST From: "Stevan Harnad" To: harnad@rrmone.cnrs-mrs.fr Subject: New Symbol Grounding Discussion

To: Symbol Grounding Discussion group

Well, it's November 1992 and the "What Is Computation?" position papers are either just about to be submitted or already submitted and headed for refereeing, so here is a new topic. Selmer Bringsjord, in his recent book, "What Robots Can and Can't Be," has proposed some arguments against the possibility that certain mechanisms can have minds. One of these, developed more fully in a paper of his, is an "Argument from Serendipity" against the Turing Test. I have asked him to reproduce this argument here, in what started as a one-on-one discussion, but we have both agreed that it's time to let the group join in. (Selmer, a late-comer to the group, is also at work catching up on the "What Is Computation?" archive so he can submit a position paper too.) Here are the first iterations (under 500 lines).

The next posting after this one (not the one below, but the next one under the header "TT and Necessity") will be open for discussion. You can of course quote from and comment on the exchange below as well.

Stevan Harnad

---------------------------------------------------------------------

Date: Sat, 7 Nov 92 01:55:44 EST From: "Stevan Harnad" To: brings@rpi.edu (Selmer Bringsjord) Subject: Re: TTT

Selmer,

Can you generate a screen-readable ASCII version of your serendipity argument against the TTT? Or, better still, can you tell it to me in a nutshell? I pride myself -- or candidly confess -- that I have never had (nor heard) an argument (other than a formal mathematical proof) that I could not tell to a motivated interlocutor in 5 minutes or 1500 words. Whenever I have an idea that takes more words, I suspect the idea.

Stevan

P.S. Let me guess that your serendipity argument is that something could pass the TTT by chance, just happening to have anticipated a lifetime of verbal and robotic contingencies correctly. My answer is that not only is such a combinatorial outcome surely NP-complete (and hence not really worth worrying about) but, as I insist below, Turing Testing is not a game: We really want to design a mechanism with TTT power, capable of handling the infinity of contingencies an infinity of lifetimes (or our own, if we were immortal) could encounter (even though it can only be TESTED in one finite lifetime). And if that's not enough to convince you that serendipity is no refutation of the TTT, let me remind you that we are not talking about necessary and sufficient conditions here: A non-TTT passer, indeed a rock, could, logically speaking, be conscious, and a TTT-passer could fail to be conscious. So, since logical possibility is not the issue in the first place, the monkey-at-a-typewriter combinatorial possibility is no challenge to a principled generative model of Shakespeare.

----------------------------------------------------------------------

From: Selmer Bringsjord Date: Mon, 9 Nov 92 13:26:00 -0500 To: harnad@Princeton.EDU Subject: TT, TTT

Hello Stevan,

On TTT:

We agree that we *want* to design a mechanism with TTT power. (Indeed, prolly both of us are more than willing to work toward designing such a mechanism.) We agree that we (humans) *can* design a mechanism w/ TTT power. We also agree that it's logically possible that a TTT-passer could fail to be conscious. What dispute remains? Well, the only remaining issue for me (for the most part, anyway) is the central Turingish conditional; I think that it's false, and that's all I hope to show by the argument from serendipity. If there's a disagreement between us, you must think the conditional is true. But what is the conditional, and *is* it true?

The simplest construal of Turing's conditional (sparked by Professor Jefferson's Lister Oration in the original paper) is

(TT-P) Ax(x passes TT -> x is conscious)

where -> is the material conditional. But (TT-P) cannot be what is intended, since on standard model-theoretic semantics for FOL (TT-P) is vacuously true. On the other hand, neither can this conditional work:

(TT-P') L[Ax(x passes TT -> x is conscious)]

Because this proposition is overthrown (in standard modal contexts) by what you concede, viz.,

(1) M[a passes TT & ~a is conscious]

What construal remains? In my paper I respond to those who say that though Turing proposed something like (TT-P'), what he *should* have championed was a subjunctive or probabilistic conditional. The best response, which I don't make in the paper, is simply to demand a formal account of such conditionals (because absent such an account I'm well within my rights in refusing to affirm the conditional).
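For readers who want the step from (1) to the rejection of (TT-P') spelled out, it can be sketched in any normal modal logic (assuming a constant domain, so that universal instantiation inside the box is valid):

```latex
% P(x): x passes the TT;  C(x): x is conscious;  a: the conceded candidate.
\begin{align*}
&\Box\,\forall x\,(P(x) \rightarrow C(x))
  && \text{(TT-P$'$)}\\
&\Box\,(P(a) \rightarrow C(a))
  && \text{universal instantiation inside } \Box\\
&\neg\Diamond\,(P(a) \wedge \neg C(a))
  && \text{duality: } \Box\varphi \equiv \neg\Diamond\neg\varphi\\
&\Diamond\,(P(a) \wedge \neg C(a))
  && \text{premise (1); contradiction}
\end{align*}
```

So granting (1), as Harnad does, is already enough to refute the necessitated conditional; only the material and subjunctive/probabilistic readings remain on the table.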

Yours, Selmer Bringsjord Dept. of Philosophy selmer@rpi.edu Dept. of Comp. Sci. selmer@rpitsmts RPI F: 518-276-4871 Troy NY 12180 USA P: 518-276-8105

----------------------------------------------------------------------

Date: Mon, 9 Nov 92 16:20:39 EST From: "Stevan Harnad" To: brings@rpi.edu Subject: Re: TT, TTT

Hi Selmer,

We have a basic methodological disagreement, but it's not specifically about the TT. It's about the value of formalization on topics like this one. I find it useful in technical areas such as mathematics and computer programming, but I do not find it at all helpful in areas where things can be discussed in plain English. I am not a philosopher, yet I can speak with full understanding and authority about the premise that a candidate who passes the TT or TTT (which is just shorthand for exhibiting verbal or verbal+robotic capacities that are indistinguishable from our own) is or is not conscious. It is obvious that no one has a proof that either candidate MUST be conscious, hence it follows that neither is necessarily conscious.

Now, can you please reformulate in English the substance of what is at issue over and above this very simple and straightforward point on which I do not disagree?

Best wishes, Stevan

----------------------------------------------------------------------

From: Selmer Bringsjord Date: Wed, 11 Nov 92 14:56:24 -0500 To: harnad@Princeton.EDU Subject: TT...

Okay, this may be somewhat better:

We have the methodological difference to which you allude in your previous note. And we have a difference, perhaps, over the conditional at the heart of Turing's case for TT. Let me try now not only to address the second, but the first also - in one stroke.

The conditional, in general, is simply the main thesis which Turing advanced, and which many thinkers since have likewise promoted: if something x passes the TT, then x is conscious. It was never enough just to state the TT and pack up and go home; *Mind* wouldn't have accepted the paper in that case. Your TTT and TTTT are intrinsically interesting, but what excites people about them is that perhaps *they* can supplant TT in Turing's conditional to yield a true proposition! It's the same situation with Turingish tests which can't in principle be passed by finite state automata, or Turing machines: the tests themselves are interesting, but the key is that they are supposed to help Turing's case. (This point is made especially vivid by the fact that people have proposed tests which no physical artifact could pass - on the reasonable assumption that machines beyond TMs will be, to put it mildly, rather hard to build. Certainly such tests are devised only to produce a defensible conditional (they may not succeed), not to give rise to a concrete empirical goal toward which AI should strive. But both of us do view TT and the like to be in part an empirical goal worth shooting for.) At any rate, I have expressed the rough-and-ready conditional in English; it isn't formalized. In *general* such informality, coming at a crucial dialectical juncture, worries me; in *general* the informality is something you find welcome. But I think we have here a case where the informal is unfortunate, as the following reasoning may show.

The simplest construal of Turing's conditional (sparked by Professor Jefferson's Lister Oration in the original paper) is

(TT-P) For every x, if x passes TT, then x is conscious.

where the if-then here is the material conditional. But (TT-P) cannot be what is intended, since on standard model-theoretic semantics for first-order logic (TT-P) is vacuously true. Because (TT-P) then says that for every element of the domain, if it passes TT, then it is conscious - and the if-then here is again the material conditional. Since no element of the domain passes TT, the antecedent is always false, and therefore by the characteristic truth-table for the material conditional the conditional is true.
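[The vacuity point can be checked mechanically. The following toy sketch is mine, not part of the correspondence; the domain and the names in it ("Turing", "Jefferson", "MADM") are purely illustrative.]

```python
# Toy illustration of why (TT-P) is vacuously true under the material
# conditional: in a domain where nothing passes TT, "for every x,
# passes(x) -> conscious(x)" comes out true no matter what we
# stipulate about consciousness.

def material_conditional(p, q):
    # "if p then q" is false only when p is true and q is false
    return (not p) or q

# A hypothetical domain; nothing in it passes TT.
domain = ["Turing", "Jefferson", "MADM"]
passes_tt = {"Turing": False, "Jefferson": False, "MADM": False}
conscious = {"Turing": True, "Jefferson": True, "MADM": False}

tt_p = all(material_conditional(passes_tt[x], conscious[x]) for x in domain)
print(tt_p)  # True -- vacuously, since every antecedent is false

# Invert every consciousness value; (TT-P) still comes out true:
tt_p_flipped = all(material_conditional(passes_tt[x], not conscious[x])
                   for x in domain)
print(tt_p_flipped)  # True again
```

Since the antecedent is false for every element of the domain, the truth-values assigned to "x is conscious" make no difference at all, which is exactly Bringsjord's complaint about (TT-P).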

On the other hand, neither can this conditional work:

(TT-P') It's logically necessary that (TT-P).

The operator here I wrote before as 'L' for the necessity operator in modal logics, more often a box. Now (TT-P') is overthrown (in standard modal contexts, i.e., if the necessity and possibility operators have standard meanings cashed out by normal systems of modal logic) by what we both affirm, viz.,

(1) It's logically possible that some thing can pass TT but not be conscious.

The proof that (1) implies not-(TT-P') is elementary, and as you say, we don't want it in this context anyway.
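[The elementary step can be displayed explicitly. The following reconstruction is mine, using only the standard duality of box and diamond in normal modal logics, with P for "a passes TT" and Q for "a is conscious".]

```latex
\begin{align*}
\text{(1)} \quad & \Diamond(P \land \lnot Q)
  && \text{premise: possibly, $a$ passes TT and is not conscious}\\
\text{(2)} \quad & \lnot\Box\lnot(P \land \lnot Q)
  && \text{from (1), by the duality } \Diamond\varphi \equiv \lnot\Box\lnot\varphi\\
\text{(3)} \quad & \lnot\Box(P \to Q)
  && \text{from (2), since } \lnot(P \land \lnot Q) \equiv (P \to Q)
  \text{ and } \Box \text{ respects logical equivalence}
\end{align*}
```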

So then what construal remains? How can we get Turing off the ground? In my paper I respond to those who say that though Turing proposed something like (TT-P'), what he *should* have championed was a subjunctive or probabilistic conditional. The second of these possibilities would be that the conditional wouldn't be material in form, but something like P probabilistically entails Q. Enormous work has gone into trying to say what such a conditional amounts to - but there's no consensus at all, and in fact it's all rather messy. So it's like getting Turing out of his mess at the cost of tossing him into quicksand. This is a hard-nosed attitude (I take a different tack in the paper), I know, but I suspect it's an attitude that, based on what you said in your previous note, you would find acceptable in programming contexts, etc. There's a slogan "Theorems before programs." I like "Formal arguments about theorems before programs".

The subjunctive approach, at least the simplest version of it, is to augment (TT-P) with some such thing as: TT must be such that had it been run in other circumstances at other times, it would *also* be passed. Here again, though, the question of counterfactuals has occupied and continues to occupy logicians and philosophers of science and language - and there's no consensus. And anyway I try to show in the paper that on the dominant view of counterfactuals Turing's subjunctivized conditional is falsified by the argument from serendipity.

It seems to me that here we have a situation which, with gem-like clarity, shows that the English (Turing's English) doesn't serve us well at all. It may be, however, that there is another construal (perhaps involving a 'going to be true' operator from temporal logic) which is defensible.

Everything I've said is of course consistent with my position that our robots will pass TT, TTT, ..., but will not be persons. In the present context, you could say that this is partly because persons are essentially (self-) conscious, and standardly conceived robots aren't. (Chapter IX in WRC&CB is an argument for this.)

Right now we're covering the symbol grounding problem in my course Philosophy of AI (upper undergraduate level), and my students greatly enjoy your writings on the matter. We may arrive at something worth sending on to you. We're looking now at whether some such proposal as "a formula (P) in some robot R's KB means P for R iff some causal relation obtains between R's sensors and effectors, the external physical world, and KB" is promising. This is a proposal which Georges Rey has made, and it seems related to your proposal for how to ground symbols.

Yours, Selmer

----------------------------------------------------------------------

Date: Sun, 22 Nov 92 22:30:35 EST From: "Stevan Harnad"


>sb> From: Selmer Bringsjord
>sb> Date: Wed, 11 Nov 92 14:56:24 -0500
>sb>
>sb> We have the methodological difference to which you allude in your
>sb> previous note. And we have a difference, perhaps, over the conditional
>sb> at the heart of Turing's case for TT. Let me try now not only to
>sb> address the second, but the first also - in one stroke.
>sb>
>sb> The conditional, in general, is simply the main thesis which Turing
>sb> advanced, and which many thinkers since have likewise promoted: if
>sb> something x passes the TT, then x is conscious. It was never enough
>sb> just to state the TT and pack up and go home; *Mind* wouldn't have
>sb> accepted the paper in that case. Your TTT and TTTT are intrinsically
>sb> interesting, but what excites people about them is that perhaps *they*
>sb> can supplant TT in Turing's conditional to yield a true proposition!

Fine, but let me immediately add a clarification. I think it was arbitrary for Turing to formulate his criterion in the affirmative. The correct version would be: If a candidate passes the TT we are no more (or less) justified in denying that it has a mind than we are in the case of real people. That's the interesting methodological thesis (false in the case of the TT, by my lights, but true in the case of the TTT, again by my lights, and overdetermined in the case of the TTTT) that I, at any rate, find worthy of empirical testing and logical analysis. Any stronger thesis just increases the quantity of arbitrariness in the problem, in my view.


>sb> It's the same situation with Turingish tests which can't in principle
>sb> be passed by finite state automata, or Turing machines: the tests
>sb> themselves are interesting, but the key is that they are supposed to
>sb> help Turing's case. (This point is made especially vivid by the fact
>sb> that people have proposed tests which no physical artifact could pass -
>sb> on the reasonable assumption that machines beyond TMs will be, to put
>sb> it mildly, rather hard to build. Certainly such tests are devised only
>sb> to produce a defensible conditional (they may not succeed), not to give
>sb> rise to a concrete empirical goal toward which AI should strive.

Quite so; and I, for one, have no particular interest in defensible conditionals with no empirical content or consequences -- not on this topic, at any rate.


>sb> But both
>sb> of us do view TT and the like to be in part an empirical goal worth
>sb> shooting for.) At any rate, I have expressed the rough-and-ready
>sb> conditional in English; it isn't formalized. In *general* such
>sb> informality, coming at a crucial dialectical juncture, worries me; in
>sb> *general* the informality is something you find welcome. But I think we
>sb> have here a case where the informal is unfortunate, as the following
>sb> reasoning may show.
>sb>
>sb> The simplest construal of Turing's conditional (sparked by Professor
>sb> Jefferson's Lister Oration in the original paper) is
>sb>
>sb> (TT-P) For every x, if x passes TT, then x is conscious.
>sb>
>sb> where the if-then here is the material conditional. But (TT-P) cannot
>sb> be what is intended, since on standard model-theoretic semantics for
>sb> first-order logic (TT-P) is vacuously true. Because (TT-P) then says
>sb> that for every element of the domain, if it passes TT, then it is
>sb> conscious - and the if-then here is again the material conditional.
>sb> Since no element of the domain passes TT, the antecedent is always
>sb> false, and therefore by the characteristic truth-table for the material
>sb> conditional the conditional is true.

Still not English enough. I assume what you mean is that if anything OTHER THAN US passes the TT, then it's conscious -- and nothing other than us passes the TT, so the claim is trivially true. But if we construe this empirically, there may eventually be something other than us that passes the TT, and there are already coherent things we can say even about that hypothetical future contingency (e.g., Searle's Chinese Room Argument and my Symbol Grounding Problem).


>sb> On the other hand, neither can this conditional work:
>sb>
>sb> (TT-P') It's logically necessary that (TT-P).
>sb>
>sb> The operator here I wrote before as 'L' for the necessity operator in
>sb> modal logics, more often a box. Now (TT-P') is overthrown (in standard
>sb> modal contexts, i.e., if the necessity and possibility operators have
>sb> standard meanings cashed out by normal systems of modal logic) by what
>sb> we both affirm, viz.,
>sb>
>sb> (1) It's logically possible that some thing can pass TT but not be
>sb> conscious.
>sb>
>sb> The proof that (1) implies not-(TT-P') is elementary, and as you say,
>sb> we don't want it in this context anyway.

Not only is it elementary, but no "proof" is necessary, because, as I said earlier, the affirmative thesis is too strong, indeed it's an arbitrary claim. The only thing that would have given the TT the force of necessity would have been a PROOF that anything that passed it had to be conscious. No one has even given a hint of such a proof, so it's obvious that the thesis is not stating a necessary truth. In its positive form, it is just an empirical hypothesis. In its negative form (as I stated it), it is just an epistemic or methodological observation.


>sb> So then what construal remains? How can we get Turing
>sb> off the ground? In my paper I respond to those who say that though
>sb> Turing proposed something like (TT-P'), what he *should* have
>sb> championed was a subjunctive or probabilistic conditional. The second
>sb> of these possibilities would be that the conditional wouldn't be
>sb> material in form, but something like P probabilistically entails Q.
>sb> Enormous work has gone into trying to say what such a conditional
>sb> amounts to - but there's no consensus at all, and in fact it's all
>sb> rather messy. So it's like getting Turing out of his mess at the cost
>sb> of tossing him into quicksand. This is a hard-nosed attitude (I take a
>sb> different tack in the paper), I know, but I suspect it's an attitude
>sb> that, based on what you said in your previous note, you would find
>sb> acceptable in programming contexts, etc. There's a slogan "Theorems
>sb> before programs." I like "Formal arguments about theorems before
>sb> programs".

It never even gets to that point. I don't know why you even invoke the terminology of "subjunctive or probabilistic conditional": In its affirmative form it is an over-strong and probably untestable empirical hypothesis (which, like all empirical hypotheses, depends on future data that can be adduced for and against it) and in its proper negative form it is merely a methodological observation (perhaps also open to counterevidence or logical counter-examples, only I haven't seen any successful instances of either yet). In my view, nothing substantive can come out of the formal analysis of the terms in which this simple thesis is stated.


>sb> The subjunctive approach, at least the simplest version of it, is to
>sb> augment (TT-P) with some such thing as: TT must be such that had it
>sb> been run in other circumstances at other times, it would *also* be
>sb> passed. Here again, though, the question of counterfactuals has
>sb> occupied and continues to occupy logicians and philosophers of science
>sb> and language - and there's no consensus. And anyway I try to show in
>sb> the paper that on the dominant view of counterfactuals Turing's
>sb> subjunctivized conditional is falsified by the argument from
>sb> serendipity.

Look, let's make it simpler. Here's a TT (actually a TTT) for an airplane: If it has performance capacity Turing-indistinguishable from that of an airplane, it flies. No subjunctives needed. Of course you can gerrymander the test of its flying capacity for a finite amount of time to make it appear as if it can fly, even though it really can't; perhaps, with sufficient control of every other physical object and force for the rest of time (by God or by Chance -- "serendipity") you could do it forever. So what? We're not interested in tricks, and the issue is not one of necessity or of subjunctives. There's a certain functional capacity a plane has, we call that flying, and anything else with the same functional capacity also flies. There is no higher authority.

With consciousness, however, there is a higher authority, and that concerns whether subjective states accompany the requisite performance capacity. Perhaps they do, perhaps they don't. My version of the TTT just makes the methodological point that we cannot hope to be the wiser, and hence it is arbitrary to ask for more of machines than we can expect of people. Again, no subjunctivities or necessities about it.


>sb> It seems to me that here we have a situation which, with gem-like
>sb> clarity, shows that the English (Turing's English) doesn't serve us
>sb> well at all. It may be, however, that there is another construal
>sb> (perhaps involving a 'going to be true' operator from temporal logic)
>sb> which is defensible.

I don't think English is at fault; I think you are trying to squeeze necessity out of an empirical hypothesis, and I don't see any reason why you should expect to be any more successful here than with F = ma (which is not to say that TT claims are entirely like ordinary empirical hypotheses).


>sb> Everything I've said is of course consistent with my position that our
>sb> robots will pass TT, TTT, ..., but will not be persons. In the present
>sb> context, you could say that this is partly because persons are
>sb> essentially (self-) conscious, and standardly conceived robots aren't.
>sb> (Chapter IX in WRC&CB is an argument for this.)

I would not say anything of the sort. I think it is possible that all TTT-passers will be conscious and I think it is possible that not all TTT-passers will be conscious. I just think there are methodological reasons, peculiar to the mind/body problem, why we can never know one way or the other, even with the normal level of uncertainty of an ordinary empirical hypothesis. (By the way, although I believe that Searle's Chinese Room Argument has shown that it is highly improbable that TT-passing computers are conscious, it is not a PROOF that they cannot be; a second consciousness of which Searle is not conscious is still a logical possibility, but not one worthy of much credence, in my view.)


>sb> Right now we're covering the symbol grounding problem in my course
>sb> Philosophy of AI (upper undergraduate level), and my students greatly
>sb> enjoy your writings on the matter. We may arrive at something worth
>sb> sending on to you. We're looking now at whether some such proposal as "a
>sb> formula (P) in some robot R's KB means P for R iff some causal
>sb> relation obtains between R's sensors and effectors, the external
>sb> physical world, and KB" is promising. This is a proposal which Georges
>sb> Rey has made, and it seems related to your proposal for how to ground
>sb> symbols.

As you know, I haven't much of a taste for such formalism. If what you/he are saying is that it is more probable that a symbol system is conscious if it can not only pass the TT, but also the TTT, that is indeed what I was saying too. And in particular, whereas a symbol in a TT-passing symbol system obeys only one set of constraints (the formal, syntactic ones that allow the system to bear the weight of a systematic semantic interpretation), a TTT-passing system obeys a second set of constraints, namely, that the system must be able (Turing-indistinguishably) to pick out (discriminate, manipulate, categorize, identify and describe) all the objects, events and states of affairs that its symbols are systematically interpretable as denoting, and these two different sets of constraints (symbolic and robotic) must square systematically and completely with one another. This would solve the symbol grounding problem -- the interpretations of the symbols in the system would be autonomously grounded in the system's robotic capacities rather than having to be mediated by the mind of an external interpreter -- but it would still be a logical possibility that there was nobody home in the grounded system: no one in there for the symbols to mean anything TO. Hence groundedness and consciousness are certainly not necessarily equivalent. Indeed, I know of no way to prove that groundedness is either a necessary or a sufficient condition for consciousness. It's just a probable one.

Stevan Harnad

PS I think it's time to start posting this correspondence to the symbol grounding group as a whole. Confirm that this is OK with you and I will gather together the pertinent parts of what came before and post it, so the Group (about 500 strong) can join in. It would be a good way to start this year's discussion, which has lain dormant as people wrote their "What Is Computation?" position papers.

---------------------------------------------------------------------


From: Selmer Bringsjord Date: Mon, 23 Nov 92 23:01:50 -0500

Some thoughts on your thoughts (and as you read remember what a magnanimous step I have taken in agreeing to minimize the appearance of the formal in my thoughts :) ...):


>sb> We have the methodological difference to which you allude in your
>sb> previous note. And we have a difference, perhaps, over the conditional
>sb> at the heart of Turing's case for TT. Let me try now not only to
>sb> address the second, but the first also - in one stroke.
>sb> The conditional, in general, is simply the main thesis which Turing
>sb> advanced, and which many thinkers since have likewise promoted: if
>sb> something x passes the TT, then x is conscious. It was never enough
>sb> just to state the TT and pack up and go home; *Mind* wouldn't have
>sb> accepted the paper in that case. Your TTT and TTTT are intrinsically
>sb> interesting, but what excites people about them is that perhaps *they*
>sb> can supplant TT in Turing's conditional to yield a true proposition!


>sh> Fine, but let me immediately add a clarification. I think it was
>sh> arbitrary for Turing to formulate his criterion in the affirmative.
>sh> The correct version would be: If a candidate passes the TT we are no more
>sh> (or less) justified in denying that it has a mind than we are in the
>sh> case of real people. That's the interesting methodological thesis
>sh> (false in the case of the TT, by my lights, but true in the case of the
>sh> TTT, again by my lights, and overdetermined in the case of the TTTT)
>sh> that I, at any rate, find worthy of empirical testing and logical
>sh> analysis. Any stronger thesis just increases the quantity of
>sh> arbitrariness in the problem, in my view.

Your rendition of Turing's original conditional -- call it TT-P* -- is one that's likely to have him turning in his grave ... because TT-P* is obviously false: Contenders for a solution to the problem of other minds involve reference to the physical appearance and behavior of real people over and above their linguistic communication -- and TT, as we all know, prohibits such reference. So we didn't need Searle's CRA. We can just use your construal and this shortcut. (You know, the referees at *Mind* would have picked this up. So it's a good thing Turing didn't propose your TT-P*. If TT-P* rests, on the other hand, on the claim that we often hold that x is a person on evidence weaker than that found in TT, you have delivered a parody of Turing's position -- since I for one sometimes hold that x is a person on the strength of a fleeting glance at an image in a mirror, or a dot on the horizon, etc.)

Of course, since I think my argument from serendipity (in Ford & Glymour, below) also refutes Turing's original conditional (perhaps in whatever form it takes), I don't worry much about the position you now find yourself occupying (and of course *you* won't worry much, since you want "robotic" elements in the brew, and I seem to have thrown a few in). But you better strap your seat belt on, or at least ratchet it down tighter than it's been during your defense of CRA, 'cause we both agree that a lot of ostensibly clever people have based their intellectual lives in large part on Turing's original, bankrupt conditional -- and my prediction is that you're gonna eventually find yourself coming round to extensions of CRA which shoot down that remnant of "Strong" AI that's still near and dear to your heart.


>sb> It's the same situation with Turingish tests which can't in principle
>sb> be passed by finite state automata, or Turing machines: the tests
>sb> themselves are interesting, but the key is that they are supposed to
>sb> help Turing's case. (This point is made especially vivid by the fact
>sb> that people have proposed tests which no physical artifact could pass -
>sb> on the reasonable assumption that machines beyond TMs will be, to put
>sb> it mildly, rather hard to build. Certainly such tests are devised only
>sb> to produce a defensible conditional (they may not succeed), not to give
>sb> rise to a concrete empirical goal toward which AI should strive.


>sh> Quite so; and I, for one, have no particular interest in defensible
>sh> conditionals with no empirical content or consequences -- not on
>sh> this topic, at any rate.

Hmm. This topic, any philosophical topic, I treat as in some sense just a branch of logic and mathematics. (Had it not been for the fact that conceding Cantor's paradise wasn't all that "expensive," we'd prolly still have thinkers around today looking for desperate escapes. The dictum that "a proof is a proof" is silly.) Your "not on this topic" suggests that had you my attitude you would take purely formal philosophical conditionals seriously. At any rate, the conditionals in question are hardly devoid of empirical content. Take "If x is a person capable of evaluating a novel like *War & Peace*, then x can decide a Turing undecidable set." This has strong empirical content for an AInik who thinks that no physical artifact can decide Turing undecidable sets! On the other hand, if you have some idiosyncratic construal waiting in the wings again (this time for 'empirical content'), what is it?


>sb> But both
>sb> of us do view TT and the like to be in part an empirical goal worth
>sb> shooting for. At any rate, I have expressed the rough-and-ready
>sb> conditional in English; it isn't formalized. In *general* such
>sb> informality, coming at a crucial dialectical juncture, worries me;
>sb> in *general* the informality is something you find welcome. But I
>sb> think we have here a case where the informal is unfortunate, as the
>sb> following reasoning may show.
>sb>
>sb> The simplest construal of Turing's conditional (sparked by Professor
>sb> Jefferson's Lister Oration in the original paper) is
>sb>
>sb> (TT-P) For every x, if x passes TT, then x is conscious.
>sb>
>sb> where the if-then here is the material conditional. But (TT-P) cannot
>sb> be what is intended, since on standard model-theoretic semantics
>sb> for first-order logic (TT-P) is vacuously true. Because (TT-P) then
>sb> says that for every element of the domain, if it passes TT, then it is
>sb> conscious - and the if-then here is again the material conditional.
>sb> Since no element of the domain passes TT, the antecedent is
>sb> always false, and therefore by the characteristic truth-table for the
>sb> material conditional the conditional is true.


>sh> Still not English enough. I assume what you mean is that if
>sh> anything OTHER THAN US passes the TT, then it's conscious -- and nothing
>sh> other than us passes the TT, so the claim is trivially true.

Exactly.


>sh> But if we
>sh> construe this empirically, there may eventually be something other than
>sh> us that passes the TT, and there are already coherent things we can say
>sh> even about that hypothetical future contingency (e.g., Searle's
>sh> Chinese Room Argument and my Symbol Grounding Problem).

Yes, but whatever you want to say about hypothetical futures and the like will suggest to me that we won't really be in a position to assign truth-values until we turn to some logic for help. A large part of philosophy has been and continues to be devoted to schemes for talking carefully about hypothetical futures. This is why, in an attempt to rescue Turing, people turn to the many and varied conditionals of conditional logic, probabilistic conditionals, etc.


>sb> On the other hand, neither can this conditional work:
>sb>
>sb> (TT-P') It's logically necessary that (TT-P).
>sb>
>sb> The operator here I wrote before as 'L' for the necessity operator
>sb> in modal logics, more often a box. Now (TT-P') is overthrown (in
>sb> standard modal contexts, i.e., if the necessity and possibility operators
>sb> have standard meanings cashed out by normal systems of modal logic)
>sb> by what we both affirm, viz.,
>sb>
>sb> (1) It's logically possible that some thing can pass TT but not be
>sb> conscious.
>sb>
>sb> The proof that (1) implies not-(TT-P') is elementary, and as you
>sb> say, we don't want it in this context anyway.


>sh> Not only is it elementary, but no "proof" is necessary, because, as
>sh> I said earlier, the affirmative thesis is too strong, indeed it's an
>sh> arbitrary claim. The only thing that would have given the TT the force
>sh> of necessity would have been a PROOF that anything that passed
>sh> it had to be conscious. No one has even given a hint of such a proof,
>sh> so it's obvious that the thesis is not stating a necessary truth.

Your reasoning here is fallacious, since it assumes that if P is a necessary truth, someone has given at least the hint of a proof of P. Counter-examples: propositions like the theorem that deterministic and non-deterministic TMs are equivalent, disjoined (by disjunction introduction) with the proposition that Quayle's IQ is greater than 3 -- a necessary truth no one has ever bothered to state, let alone prove. Also, the set of necessary truths is uncountable, whereas the set of proofs (and hints of proofs) is at most countable.


>sh> In its affirmative form it is an over-strong and probably untestable
>sh> empirical hypothesis (which, like all empirical hypotheses, depends on
>sh> future data that can be adduced for and against it) and in its proper
>sh> negative form it is merely a methodological observation (perhaps also
>sh> open to counterevidence or logical counter-examples, only I haven't
>sh> seen any successful instances of either yet).

You have only to formalize the aforementioned objection to TT-P* via commonalities in proposed solutions to the problem of other minds.


>sb> The subjunctive approach, at least the simplest version of it, is to
>sb> augment (TT-P) with some such thing as: TT must be such that had it
>sb> been run in other circumstances at other times, it would *also* be
>sb> passed. Here again, though, the question of counterfactuals has
>sb> occupied and continues to occupy logicians and philosophers of
>sb> science and language - and there's no consensus. And anyway I try to
>sb> show in the paper that on the dominant view of counterfactuals Turing's
>sb> subjunctivized conditional is falsified by the argument from
>sb> serendipity.


>sh> Look, let's make it simpler. Here's a TT (actually a TTT) for an
>sh> airplane: If it has performance capacity Turing-indistinguishable
>sh> from that of an airplane, it flies. No subjunctives needed. Of course you
>sh> can gerrymander the test of its flying capacity for a finite amount of
>sh> time to make it appear as if it can fly, even though it really can't;
>sh> perhaps, with sufficient control of every other physical object
>sh> and force for the rest of time (by God or by Chance -- "serendipity")
>sh> you could do it forever. So what? We're not interested in tricks, and
>sh> the issue is not one of necessity or of subjunctives. There's a certain
>sh> functional capacity a plane has, we call that flying, and anything
>sh> else with the same functional capacity also flies. There is no higher
>sh> authority.

There's no certified logic of analogy for use in deductive contexts. Witness the problems people encounter when they try to reason to the conclusion that people are fundamentally computers because computers behave analogously to people (Chap II, WRC&CB). People are people; planes are planes. It does you no good to prove a version of TT-P* in which planes are substituted for people.

Never said I *was* interested in tricks. We're both working toward building TT and TTT passers. But you're forgetting what Searle has taught you. Your view mirrors the sort of desperation we find amongst those who reject the near-proof of CRA! If you tell me that a capacity for TTT-passing is what we need for consciousness, and I promptly build you a thought-experiment in which this capacity is around in all its glory, but no one's home -- well, if you don't get a little worried, you look like those who tell me that Jonah doing his thing with mental images of Register Machines has given birth to numerically distinct persons. I can prove, I believe, that a number of Turingish conditionals regarding TT and TTT are false (done in the relevant paper). You have simply provided a new conditional -- TT-P*, the CRAless attack on which is sketched above -- which presumably can be adapted for TTT:

(TTT-P*) If a candidate passes TTT we are no more (or less) justified in denying that it has a mind than we are in the case of real people.

I think this proposition is demonstrably false. And I'm not gonna have to leave my armchair to muster the counter-argument.


>sb> Everything I've said is of course consistent with my position that
>sb> our robots will pass TT, TTT, ..., but will not be persons. In the
>sb> present context, you could say that this is partly because persons are
>sb> essentially (self-) conscious, and standardly conceived robots
>sb> aren't. (Chapter IX in WRC&CB is an argument for this.)


>sh> I would not say anything of the sort.

I knew that. (But what premise is false in Chapter IX?)


>sh> I think it is possible that all TTT-passers will be conscious and I
>sh> think it is possible that not all TTT-passers will be conscious. I just
>sh> think there are methodological reasons, peculiar to the mind/body
>sh> problem, why we can never know one way or the other, even with the
>sh> normal level of uncertainty of an ordinary empirical hypothesis. (By
>sh> the way, although I believe that Searle's Chinese Room Argument has
>sh> shown that it is highly improbable that TT-passing computers are
>sh> conscious, it is not a PROOF that they cannot be; a second
>sh> consciousness of which Searle is not conscious is still a logical
>sh> possibility, but not one worthy of much credence, in my view.)

Come on, Stevan, I'm tellin' you, check those seat belts. My Searlean argument in *What Robots Can & Can't Be* is pretty close to a proof. The David Cole/ (I now see:) Michael Dyer multiple personality desperation dodge is considered therein, the upshot being that LISP programmers have the capacity to throw off the Earth's census. I've tried this out on students for five years. If you take my chapter and teach it and take polls before and after, the results don't bode well for "Strong" AI. You know you think the multiple person move implies some massively counter-intuitive things'll need to be swallowed. And the Hayes objection is easily handled by my Jonah, who works at the level of Register Machines and rocks -- besides which, Jonah can be hypnotized etc. etc. so as to remove "free will" from the picture (a move I spelled out for Hayes after his presentation of his CRA dodge at the Second Human & Machine Cognition workshop). Besides, again, you follow herdish naivete and talk as if a proof is a proof -- in some religious sense. My version of CRA is as much of a proof as any reductio in classical mathematics is for a constructivist. Intuitionist mathematics is perfectly consistent, clever, rigorous, has been affirmed by some biggies, and so on. Try getting an intuitionist to affirm some of the non-constructive theorems which most in this forum seem to be presupposing. I can prove Goldbach's conjecture is either T or F in a sec in Logic 101. But that's not a proof for a genius like Heyting! You want to rest on probability (w.r.t. CRA etc.). I want proofs. But the motivation to rest on proofs may come from a mistaken notion of 'proof.'
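[The Logic 101 move alluded to here is just the law of excluded middle, which classical logic grants in one step but which an intuitionist like Heyting rejects. A minimal sketch in Lean, with the proposition `G` standing in (hypothetically) for Goldbach's conjecture:]

```lean
-- Classically, any proposition G is either true or false,
-- in one step, via the law of excluded middle:
theorem goldbach_T_or_F (G : Prop) : G ∨ ¬G :=
  Classical.em G
-- An intuitionist rejects Classical.em: absent a construction
-- deciding G, the disjunction does not count as proved.
```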


>sb> Right now we're covering the symbol grounding problem in my
>sb> course Philosophy of AI (upper undergraduate level), and my students
>sb> greatly enjoy your writings on the matter. We may arrive at something
>sb> worth sending on to you. We're looking now at whether some such
>sb> proposal as "a formula (P) in some robot R's KB means P for R iff
>sb> some causal relation obtains between R's sensors and effectors, the
>sb> external physical world, and KB" is promising. This is a proposal which
>sb> Georges Rey has made, and it seems related to your proposal for how to
>sb> ground symbols.


>sh> As you know, I haven't much of a taste for such formalism. If what
>sh> you/he are saying is that it is more probable that a symbol system is
>sh> conscious if it can not only pass the TT, but also the TTT, that is
>sh> indeed what I was saying too.

What I'm saying is that your position on TTT can be put in the form of a quasi-formal declarative proposition about the desired causal relation between symbol systems, sensors and effectors, and the physical environment. (If it *can't* be so put, I haven't much of a taste for it.) Such a proposition is sketched by Rey. Again, I'm not gonna have to leave my leather Chesterfield to come up with a diagnosis.

By all means, post our exchange to this point, and subsequent discussion if you like. Keep in mind, however, that I'm gonna have to write my position paper after finishing the archives! (So I may not be discussing much for a while.) The only reason I might be able to turn this thing around fast is that in '88 I formulated a Chalmersian definition of 'x is a computer' after going without sleep for two days, obsessed with the Searlean "everything is a computer" argument that at that time wasn't associated with Searle (it was considered by R. J. Nelson).

(Has anybody else read the archives from start to finish in hard copy having not seen any of the discussion before? You've done a marvelous job with this, Stevan. I must say, though, that after my sec'y got the files and generated hard copy within the confines of one day, she was weary: how many trees have you guys killed?)

All the best, Stevan, and thanks for the stimulating discussion. I hope to have the THINK reaction finished before Turkey day, will send to you direct...

Yours, Selmer

REFERENCES

Bringsjord, S. (in press) "Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives," in Ford, K. & Glymour, C., eds., *Android Epistemology* (Greenwich, CT: JAI Press).

Bringsjord, S. (1992) *What Robots Can & Can't Be* (Dordrecht, The Netherlands: Kluwer). (Searle: Chapter V; Introspection: Chapter IX)

---------------------------------------------------------------------

From harnad Mon Jun 14 22:43:52 1993
To: brings@rpi.edu (Selmer Bringsjord)
Subject: Long-overdue reply to Bringsjord
Status: RO

As this response to Selmer Bringsjord (sb) is long (over 7 months) overdue, I (sh) back-quote in extenso to furnish the context:

> From: Selmer Bringsjord
> Date: Mon, 23 Nov 92 23:01:50 -0500


>sb> Some thoughts on your thoughts (and as you read remember what a
>sb> magnanimous step I have taken in agreeing to minimize the
>sb> appearance of the formal in my thoughts :) ...):


>sb> We have the methodological difference to which you allude in your
>sb> previous note. And we have a difference, perhaps, over the conditional
>sb> at the heart of Turing's case for TT. Let me try now not only to
>sb> address the second, but the first also - in one stroke.
>sb> The conditional, in general, is simply the main thesis which Turing
>sb> advanced, and which many thinkers since have likewise promoted: if
>sb> something x passes the TT, then x is conscious. It was never enough
>sb> just to state the TT and pack up and go home; *Mind* wouldn't have
>sb> accepted the paper in that case. Your TTT and TTTT are intrinsically
>sb> interesting, but what excites people about them is that perhaps *they*
>sb> can supplant TT in Turing's conditional to yield a true proposition!


>sh> Fine, but let me immediately add a clarification. I think it was
>sh> arbitrary for Turing to formulate his criterion in the affirmative.
>sh> The correct version would be: If a candidate passes the TT we are no more
>sh> (or less) justified in denying that it has a mind than we are in the
>sh> case of real people. That's the interesting methodological thesis
>sh> (false in the case of the TT, by my lights, but true in the case of the
>sh> TTT, again by my lights, and overdetermined in the case of the TTTT)
>sh> that I, at any rate, find worthy of empirical testing and logical
>sh> analysis. Any stronger thesis just increases the quantity of
>sh> arbitrariness in the problem, in my view.


>sb> Your rendition of Turing's original conditional -- call it TT-P* -- is
>sb> one that's likely to have him turning in his grave ... because TT-P*
>sb> is obviously false: Contenders for a solution to the problem of other
>sb> minds involve reference to the physical appearance and behavior of real
>sb> people over and above their linguistic comm -- and TT, as we all know,
>sb> prohibits such reference. So we didn't need Searle's CRA. We can just
>sb> use your construal and this shortcut. (You know, the referees at *Mind*
>sb> would have picked this up. So it's a good thing Turing didn't propose
>sb> your TT-P*.)

No, I think I have Turing's actual intuition (or what ought to have been his intuition) right, and it's not obviously false -- despite appearances, so to speak: Turing of course knew we use appearances, but he also knew appearances can be deceiving (as they would be if a computer really DID have a mind but we immediately dismissed it out of hand simply because of the way it looked). The point of Turing's party game was partly to eliminate BIAS based on appearance. We would certainly be prepared to believe that other organisms, including extraterrestrial ones, might have minds, and we have no basis for legislating in advance what their exteriors are or are not allowed to LOOK like (any more than we can legislate in advance what their interiors are supposed to look like). It's what they can DO that guides our judgment, and it's the same with you and me (and the Blind Watchmaker: He can't read minds either, only adaptive performance).

I have no idea how your brain works, and my intuitions about appearances are obviously anthropocentric and negotiable. But if you could correspond with me as a lifelong pen-pal I have never seen, in such a way that it would never cross my mind that you had no mind, then it would be entirely arbitrary of me to revise my judgment just because I was told you were a machine -- for the simple reason that I know neither what PEOPLE are nor what MACHINES are. I only know what machines and people usually LOOK like, and those appearances can certainly be deceiving. By contrast, the wherewithal to communicate with me intelligibly for a lifetime -- that's something I understand, because I know exactly where it's coming from, so to speak.

On the other hand, I of course agree that what a person can DO includes a lot more than pen-pal correspondence, so in eliminating bias based on appearance, Turing inadvertently and arbitrarily circumscribed the portion of our total performance capacity that was to be tested. It is here that my own upgrade from T2 to T3 is relevant. It should be noted, though, that T3 is still in the spirit of Turing's original intuition. It's not really the APPEARANCE of the candidate that is relevant there either, it is only what it can DO -- which includes interacting with the objects in the world as we do (and, of course, things like eye contact and facial expression are probably important components of what we can and do DO too, but that's another matter).

So, yes, restriction to T2 rather than T3 was a lapse on Turing's part, though this still did not make even T2 FALSE prima facie: We still needed a nonarbitrary REASON for revising our intuitions about the fact that our lifelong pen-pal had a mind even if he turned out to be a machine, and this revision could not be based on mere APPEARANCE, which is no reason at all. So Searle's Chinese Room Argument WAS needed after all, to show -- at least in the case of a life-long pen-pal who was ONLY the implementation of an implementation-independent symbol system, each of whose implementations allegedly had a mind -- that the conclusion drawn from T2 would have been false.

But note that I say "would have been," since I do not believe for a minute that a pure symbol-cruncher could ever actually pass T2 (because of the symbol grounding problem). I believe a successful T2 candidate's capacity to pass T2 would have to be grounded in its capacity to pass T3 -- in other words, the candidate would still have to be a robot, not just a symbol-cruncher. So, paradoxically, T2 is even closer to validity than might appear even once we take into account that it was arbitrarily truncated, leaving out all the robotic capacities of T3; for if I'm right, then T2 is also an indirect test of T3. Only for a counterfactual symbol-cruncher that could hypothetically pass T2 is T2 not valid (and so we do need Searle's Argument after all).

To summarize: the only nonarbitrary basis we (or the Blind Watchmaker) have for adjudicating the presence or absence of mind is performance capacity totally indistinguishable from that of entities that really do have minds. External appearances -- or internal ones, based on the means by which the performance capacity is attained -- are irrelevant because we simply KNOW nothing about either.

And I repeat, no tricks are involved here. We REALLY want the performance capacity; that's not just an intuitive criterion but an empirical, engineering constraint, narrowing the degrees of freedom for eligible candidates to the ones that will (we hope) admit only those with minds. For this reason it is NOT a valid objection to point out that, for example, people who are paralyzed, deaf and blind still have minds. Of course they do; but the right way to converge on the correct model is not to look first for a candidate that is T3-indistinguishable from such handicapped persons, but for one that is T3-indistinguishable from a normal, intact person. It's after that's accomplished that we can start worrying about how much we can pare it back and still have somebody home in there.

A more relevant objection was my hint at the adaptive role of appearance in facial expression, for example, but my guess is that this will be a matter of fine-tuning for the winning candidate (as will neurosimilitude). The empirical degrees of freedom are probably narrowed sufficiently by our broad-stroke T3 capacity.


>sb> If TT-P* rests, on the other hand, on the claim that we
>sb> often hold that x is a person on evidence weaker than that found in TT,
>sb> you have delivered a parody of Turing's position -- since I for one
>sb> sometimes hold that x is a person on the strength of a fleeting glance
>sb> at an image in a mirror, or a dot on the horizon, etc.

No, as I have written (Harnad 1992), the right construal of T2 is as a life-long pen-pal. Short-term party tricks and snap conclusions are not at issue here. We're talking about an empirical, engineering criterion. By way of analogy, if the goal is to build a system with performance capacities indistinguishable from those of an airplane, it is not good enough to build something that will fool you into thinking it's a plane for a few minutes, or even hours. It REALLY has to have a plane's total performance capacity.


>sb> Of course, since I think my argument from serendipity (in Ford &
>sb> Glymour, below) also refutes Turing's original conditional (perhaps in
>sb> whatever form it takes), I don't worry much about the position you
>sb> now find yourself occupying (and of course *you* won't worry much,
>sb> since you want "robotic" elements in the brew, and I seem to have
>sb> thrown a few in).

Regarding the serendipity argument, let me quote from my reply to your commentary (Bringsjord 1993) on Harnad (1993a): "So chimpanzees might write Shakespeare by chance: What light does that cast on how Shakespeare wrote Shakespeare?" This is perhaps the difference between an engineering and a philosophical motivation on this topic. Physicists don't worry about thermodynamics reversing by chance either, even though it's a logical possibility. It's just that nothing substantive rides on that possibility. A life-long pen-pal correspondence could be anticipated by chance, logically speaking. So what? What we are looking for is a mechanism that does it by design, not by chance.

My own position is that T3 is the right level of empirical constraint for capturing the mind (though it is of course no guarantor). T2 is too UNDERdetermined (because it lets in the hypothetical ungrounded symbol-cruncher) and T4 is OVERconstrained (because we do not know which of our neuromolecular properties are RELEVANT to generating our T3 capacity). And that's all there is to the reverse-engineering T-hierarchy ("t1" is subtotal "toy" fragments of our Total capacity, hence not a Turing Test at all, and hopelessly underdetermined, and T5 is the Grand Unified Theory of Everything, of which engineering and reverse engineering -- T2-T4 -- are merely a fragment; Harnad 1994). There's just nowhere else to turn empirically; and no guarantees exist. That's what makes the mind/body problem special, with an extra layer of underdetermination, over and above that of pure physics and engineering: Even in T4 there could be nobody home, for all we know. This makes qualia fundamentally unlike, say, quarks, despite the fact that both are unobservable and both are real (Harnad 1993b).


>sb> But you better strap your seat belt on, or at least
>sb> ratchet it down tighter than it's been during your defense of CRA,
>sb> 'cause we both agree that a lot of ostensibly clever people have
>sb> based their intellectual lives in large part on Turing's original,
>sb> bankrupt conditional -- and my prediction is that you're gonna
>sb> eventually find yourself coming round to extensions of CRA which
>sb> shoot down that remnant of "Strong" AI that's still near and dear to
>sb> your heart.

I'm all strapped in, but the view from here is that the only thing that made CRA work, the only thing that gave Searle this one special periscope for peeking across the otherwise impenetrable other-minds barrier, is the implementation-independence of pure symbol systems. Hence it is only a T2-passing symbol cruncher that is vulnerable to CRA. There are no extensions. T3 is as impenetrable as a stone or a brain.


>sb> It's the same situation with Turingish tests which can't in principle
>sb> be passed by finite state automata, or Turing machines: the tests
>sb> themselves are interesting, but the key is that they are supposed to
>sb> help Turing's case. (This point is made especially vivid by the fact
>sb> that people have proposed tests which no physical artifact could pass -
>sb> on the reasonable assumption that machines beyond TMs will be, to put
>sb> it mildly, rather hard to build. Certainly such tests are devised only
>sb> to produce a defensible conditional (they may not succeed), not to give
>sb> rise to a concrete empirical goal toward which AI should strive.)


>sh> Quite so; and I, for one, have no particular interest in defensible
>sh> conditionals with no empirical content or consequences -- not on
>sh> this topic, at any rate.


>sb> Hmm. This topic, any philosophical topic, I treat as in some sense just
>sb> a branch of logic and mathematics. (Had it not been for the fact that
>sb> conceding Cantor's paradise wasn't all that "expensive," we'd prolly
>sb> still have thinkers around today looking for desperate escapes.
>sb> The dictum that "a proof is a proof" is silly.) Your "not on this
>sb> topic" suggests that had you my attitude you would take purely formal
>sb> philosophical conditionals seriously. At any rate, the conditionals in
>sb> question are hardly devoid of empirical content. Take "If x is a person
>sb> capable of evaluating a novel like *War & Peace*, then x can decide a
>sb> Turing undecidable set." This has strong empirical content for an AInik
>sb> who thinks that no physical artifact can decide Turing undecidable
>sb> sets! On the other hand, if you have some idiosyncratic construal
>sb> waiting in the wings again (this time for 'empirical content'), what is
>sb> it?

Nothing idiosyncratic. We are interested in real reverse engineering here: Designing systems that really have certain capacities. If there are arguments (like CRA) that show that certain approaches to this reverse engineering (like trying to design a T2-scale symbol-cruncher) are likely to fail, then they are empirically relevant. Otherwise not. Engineering is not a branch of logic and mathematics (though it may apply them, and is certainly bound by them). Conditionals about undecidability are (as far as I can see) irrelevant; we are interested in capturing what people CAN do (T3), not what they can't...


>sb> But both
>sb> of us do view TT and the like to be in part an empirical goal worth
>sb> shooting for. At any rate, I have expressed the rough-and-ready
>sb> conditional in English; it isn't formalized. In *general* such
>sb> informality, coming at a crucial dialectical juncture, worries me;
>sb> in *general* the informality is something you find welcome. But I
>sb> think we have here a case where the informal is unfortunate, as the
>sb> following reasoning may show.


>sb> The simplest construal of Turing's conditional (sparked by Professor
>sb> Jefferson's Lister Oration in the original paper) is
>sb> (TT-P) For every x, if x passes TT, then x is conscious.
>sb> where the if-then here is the material conditional. But (TT-P) cannot
>sb> be what is intended, since on standard model-theoretic semantics
>sb> for first-order logic (TT-P) is vacuously true. Because (TT-P) then
>sb> says that for every element of the domain, if it passes TT, then it is
>sb> conscious - and the if-then here is again the material conditional.
>sb> Since no element of the domain passes TT, the antecedent is
>sb> always false, and therefore by the characteristic truth-table for the
>sb> material conditional the conditional is true.


>sh> Still not English enough. I assume what you mean is that if
>sh> anything OTHER THAN US passes the TT, then it's conscious -- and nothing
>sh> other than us passes the TT, so the claim is trivially true.


>sb> Exactly.
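[The vacuous-truth point can be checked mechanically. A minimal sketch in Python -- the domain and the two predicates are hypothetical stand-ins, not anyone's actual proposal -- showing that a universally quantified material conditional over a domain with no TT-passers comes out true no matter what consciousness predicate is chosen:]

```python
# Vacuous truth of (TT-P): "for every x, if x passes TT, then x is conscious."
# With a material conditional, the claim holds whenever nothing passes TT.

def material_conditional(p: bool, q: bool) -> bool:
    """Truth-table of the material conditional: false only when p is true and q is false."""
    return (not p) or q

# Hypothetical domain: nothing in it passes the TT.
domain = ["a rock", "a thermostat", "ELIZA"]
passes_TT = lambda x: False       # no element passes TT, so every antecedent is false
is_conscious = lambda x: False    # even a predicate false everywhere...

# ...leaves (TT-P) true, by the characteristic truth-table:
tt_p = all(material_conditional(passes_TT(x), is_conscious(x)) for x in domain)
print(tt_p)  # True: (TT-P) is vacuously true
```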


>sh> But if we
>sh> construe this empirically, there may eventually be something other than
>sh> us that passes the TT, and there are already coherent things we can say
>sh> even about that hypothetical future contingency (e.g., Searle's
>sh> Chinese Room Argument and my Symbol Grounding Problem).


>sb> Yes, but whatever you want to say about hypothetical futures and the
>sb> like will suggest to me we won't really be in position to assign
>sb> truth-values until we turn to some logic for help. A large part of
>sb> philosophy has been and continues to be devoted to schemes for talking
>sb> carefully about hypothetical futures. This is why in an attempt to
>sb> rescue Turing people turn to the many and varied conditionals of
>sb> conditional logic, probabilistic conditionals, etc.

The only conditional I've found useful here so far is Searle's: "IF a pure symbol-cruncher could pass T2, it would not understand, because I could implement the same symbol-cruncher without understanding."


>sb> On the other hand, neither can this conditional work:


>sb> (TT-P') It's logically necessary that (TT-P).
>sb> The operator here I wrote before as 'L' for the necessity operator
>sb> in modal logics, more often a box. Now (TT-P') is overthrown (in
>sb> standard modal contexts, i.e. if the necessity and possibility operators
>sb> have standard meanings cashed out by normal systems of modal logic)
>sb> by what we both affirm, viz.,
>sb> (1) It's logically possible that some thing can pass TT but not be
>sb> conscious.
>sb> The proof that (1) implies not-(TT-P') is elementary, and as you
>sb> say, we don't want it in this context anyway.
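[The elementary step from (1) to not-(TT-P') is just the duality of the modal operators; a sketch, abbreviating (TT-P) as P:]

```latex
% Duality in any normal modal logic: \Diamond\varphi \equiv \neg\Box\neg\varphi.
% Premise (1) is \Diamond\neg P. Then:
\[
\Diamond \neg P
\;\equiv\; \neg \Box \neg\neg P   % by duality
\;\equiv\; \neg \Box P            % by double negation
\]
% and \neg\Box P is exactly not-(TT-P'), the denial of "necessarily (TT-P)".
```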


>sh> Not only is it elementary, but no "proof" is necessary, because, as
>sh> I said earlier, the affirmative thesis is too strong, indeed it's an
>sh> arbitrary claim. The only thing that would have given the TT the
>sh> force of necessity would have been a PROOF that anything that passed
>sh> it had to be conscious. No one has even given a hint of such a proof,
>sh> so it's obvious that the thesis is not stating a necessary truth.


>sb> Your reasoning here is fallacious, since it assumes that if P is a
>sb> necessary truth, someone has given at least the hint of a proof of P.
>sb> Counter-examples: Propositions like "The equivalence of deterministic
>sb> and non-deterministic TMs added by disjunction introduction to the
>sb> proposition that Quayle's IQ is greater than 3." Also, the set of
>sb> necessary truths is uncountable.

I can only repeat, "For every x, if x passes TT, then x is conscious" was just a conjecture, with some supporting arguments. It turns out to be very probably false (unless memorizing symbols can make Searle understand Chinese, or generate a second mind in him that understands Chinese). No Searlean argument can be made against T3, but that could of course be false too; and so could even T4. So forget about proofs or necessity here. It's underdetermination squared all the way through, because of what's abidingly special about the mind/body problem (or about qualia, if you prefer).


>sh> In its affirmative form it is an over-strong and probably untestable
>sh> empirical hypothesis (which, like all empirical hypotheses, depends on
>sh> future data that can be adduced for and against it) and in its proper
>sh> negative form it is merely a methodological observation (perhaps also
>sh> open to counterevidence or logical counter-examples, only I haven't
>sh> seen any successful instances of either yet).


>sb> You have only to formalize the aforementioned objection to TT-P* via
>sb> commonalities in proposed solutions to the problem of other minds.

I cannot understand it stated in that abstract formal form. In plain English, how does it refute that T3 is the right level of empirical constraint for capturing the mind (bearing in mind that T3 never guaranteed anything, and never could, since nothing could) in the reverse engineering sense?


>sb> The subjunctive approach, at least the simplest version of it, is to
>sb> augment (TT-P) with some such thing as: TT must be such that had it
>sb> been run in other circumstances at other times, it would *also* be
>sb> passed. Here again, though, the question of counterfactuals has
>sb> occupied and continues to occupy logicians and philosophers of
>sb> science and language - and there's no consensus. And anyway I try to
>sb> show in the paper that on the dominant view of counterfactuals Turing's
>sb> subjunctivized conditional is falsified by the argument from
>sb> serendipity.


>sh> Look, let's make it simpler. Here's a TT (actually a TTT) for an
>sh> airplane: If it has performance capacity Turing-indistinguishable
>sh> from that of an airplane, it flies. No subjunctives needed. Of course
>sh> you can gerrymander the test of its flying capacity for a finite amount
>sh> of time to make it appear as if it can fly, even though it really can't;
>sh> perhaps, with sufficient control of every other physical object
>sh> and force for the rest of time (by God or by Chance -- "serendipity")
>sh> you could do it forever. So what? We're not interested in tricks, and
>sh> the issue is not one of necessity or of subjunctives. There's a certain
>sh> functional capacity a plane has, we call that flying, and anything
>sh> else with the same functional capacity also flies. There is no higher
>sh> authority.


>sb> There's no certified logic of analogy for use in deductive contexts.
>sb> Witness the problems people encounter when they try to reason to the
>sb> conclusion that people are fundamentally computers because computers
>sb> behave analogously to people (Chap II, WRC&CB). People are people;
>sb> planes are planes. It does you no good to prove a version of TT-P* in
>sb> which planes are substituted for people.

I can't follow this. Forward engineering was able to build and completely explain the causal principles of planes. I'm just suggesting that reverse engineering T3 capacity will do the same with people, and, en passant, it will also capture the mind. The point is not about analogy, it's about the generation of real performance capacities.


>sb> Never said I *was* interested in tricks. We're both working toward
>sb> building TT and TTT passers. But you're forgetting what Searle has
>sb> taught you. Your view mirrors the sort of desperation we find amongst
>sb> those who reject the near-proof of CRA! If you tell me that a capacity
>sb> for TTT-passing is what we need for consciousness, and I promptly build
>sb> you a thought-experiment in which this capacity is around in all its
>sb> glory, but no one's home -- well, if you don't get a little worried,
>sb> you look like those who tell me that Jonah doing his thing with mental
>sb> images of Register Machines has given birth to numerically distinct
>sb> persons. I can prove, I believe, that a number of Turingish
>sb> conditionals regarding TT and TTT are false (done in the relevant
>sb> paper). You have simply provided a new conditional -- TT-P*; the
>sb> CRAless attack on which is sketched above -- which presumably can be
>sb> adapted for TTT:
>sb> (TTT-P*) If a candidate passes TTT we are no more (or less) justified
>sb> in denying that it has a mind than we are in the case of real people.
>sb> I think this proposition is demonstrably false. And I'm not gonna have
>sb> to leave my armchair to muster the counter-argument.

I'm ready for the counter-argument. I know what made the CRA work (penetrability of the other-minds barrier to Searle's periscope because of the implementation-independence of the symbolic level), but since a T3-passer cannot be just a symbol-cruncher, how is the T3-CRA argument to go through? Serendipity and Jonah are just arguments for the logical POSSIBILITY that a T2 or T3 passer should fail to have a mind. But I've never claimed otherwise, because I never endorsed the positive version of T2 or T3! For me, they are epistemic, not ontic criteria (empirical constraints on models, actually).


>sb> Everything I've said is of course consistent with my position that
>sb> our robots will pass TT, TTT, ..., but will not be persons. In the
>sb> present context, you could say that this is partly because persons are
>sb> essentially (self-) conscious, and standardly conceived robots
>sb> aren't. (Chapter IX in WRC&CB is an argument for this.)


>sh> I would not say anything of the sort.


>sb> I knew that. (But what premise is false in Chapter IX?)

I mean I never say much of anything about "self-consciousness": consciousness simpliciter (somebody home) is enough for me. Nor do I know what "standardly conceived robots" are, but the only ones I've ever had in mind are T3 robots; and although it's POSSIBLE that they will not be conscious, there is (unlike in the case of T2, symbol-crunching, and Searle's periscope) no way of being any the wiser about that, one way or the other. Repeating logical possibilities in ever more embellished parables does not advance us in either direction: The logical possibility is there; the means of being any the wiser is not.

So we may as well stick with T3, which was good enough for the original designer. The only open question is whether there is anything more to be said for T4 (the last possible empirical resort). Until further notice, I take T4 to be just a matter of fine tuning; passing T3 will already have solved all the substantive problems, and if the fact of the matter is that that's not a fine enough filter to catch the mind, whereas T4 is, then what's peculiar to mind is even more peculiar than it had already seemed: Certainly no one will ever be able to say not only WHETHER, but even, if so, WHY, among several T3-indistinguishable candidates, only the T4-indistinguishable one should have a mind.

Chapter IX is too much of a thicket. The CRA is perspicuous; you can say what's right or wrong with it in English, without having to resort to formalisms or contrived and far-fetched sci-fi scenarios. I have restated the point and why and how it's valid repeatedly in a few words; I really think you ought to do the same.

(Also, speaking as [as far as I know] the first formulator of the T-hierarchy, there is, despite your hopeful dots after T2, T3..., only one more T, and that's T4! The Turing hierarchy (for mind-modelling purposes) ENDS there, whereas the validity of CRA begins and ends at T2.)


>sh> I think it is possible that all TTT-passers will be conscious and I
>sh> think it is possible that not all TTT-passers will be conscious. I just
>sh> think there are methodological reasons, peculiar to the mind/body
>sh> problem, why we can never know one way or the other, even with the
>sh> normal level of uncertainty of an ordinary empirical hypothesis. (By
>sh> the way, although I believe that Searle's Chinese Room Argument has
>sh> shown that it is highly improbable that TT-passing computers are
>sh> conscious, it is not a PROOF that they cannot be; a second
>sh> consciousness of which Searle is not conscious is still a logical
>sh> possibility, but not one worthy of much credence, in my view.)


>sb> Come on, Stevan, I'm tellin' you, check those seat belts. My Searlean
>sb> argument in *What Robots Can & Can't Be* is pretty close to a proof.
>sb> The David Cole/ (I now see:) Michael Dyer multiple personality
>sb> desperation dodge is considered therein, the upshot being that LISP
>sb> programmers have the capacity to throw off the Earth's census. I've
>sb> tried this out on students for five years. If you take my chapter and
>sb> teach it and take polls before and after, the results don't bode well
>sb> for "Strong" AI. You know you think the multiple person move implies
>sb> some massively counter-intuitive things'll need to be swallowed. And
>sb> the Hayes objection is easily handled by my Jonah, who works at the
>sb> level of Register Machines and rocks -- besides which, Jonah can be
>sb> hypnotized etc. etc. so as to remove "free will" from the picture (a
>sb> move I spelled out for Hayes after his presentation of his CRA dodge
>sb> at the Second Human & Machine Cognition workshop). Besides, again, you
>sb> follow herdish naivete and talk as if a proof is a proof -- in some
>sb> religious sense. My version of CRA is as much of a proof as any
>sb> reductio in classical mathematics is for a constructivist. Intuitionist
>sb> mathematics is perfectly consistent, clever, rigorous, has been
>sb> affirmed by some biggies, and so on. Try getting an intuitionist to
>sb> affirm some of the non-constructive theorems which most in this forum
>sb> seem to be presupposing. I can prove Goldbach's conjecture is either T
>sb> or F in a sec in Logic 101. But that's not a proof for a genius like
>sb> Heyting! You want to rest on probability (w.r.t. CRA etc.). I want
>sb> proofs. But the motivation to rest on proofs may come from a mistaken
>sb> notion of 'proof.'

This has nothing to do with nonconstructive vs. constructive proof. As I understand it, CRA is not and cannot be a proof. If you can upgrade it to one, say how, in a few transparent words. Students can be persuaded in many ways; that's irrelevant. I've put my construal briefly and transparently; you should do the same.


>sb> Right now we're covering the symbol grounding problem in my
>sb> course Philosophy of AI (upper undergraduate level), and my students
>sb> greatly enjoy your writings on the matter. We may arrive at something
>sb> worth sending on to you. We're looking now at whether some such
>sb> proposal as "a formula (P) in some robot R's KB means P for R iff
>sb> some causal relation obtains between R's sensors and effectors, the
>sb> external physical world, and KB" is promising. This is a proposal which
>sb> Georges Rey has made, and it seems related to your proposal for how to
>sb> ground symbols.


>sh> As you know, I haven't much of a taste for such formalism. If what
>sh> you/he are saying is that it is more probable that a symbol system is
>sh> conscious if it can not only pass the TT, but also the TTT, that is
>sh> indeed what I was saying too.


>sb> What I'm saying is that your position on TTT can be put in the form of
>sb> a quasi-formal declarative proposition about the desired causal
>sb> relation between symbol systems, sensors and effectors, and the
>sb> physical environment. (If it *can't* be so put, I haven't much of a
>sb> taste for it.) Such a proposition is sketched by Rey. Again, I'm not
>sb> gonna have to leave my leather Chesterfield to come up with a
>sb> diagnosis.

But there's no NEED for such a formalization, any more than there is for the engineering criterion of building a plane with flight capacities indistinguishable from those of a DC-11. A robot's internal symbols (if any) are grounded if it can pass T3: The interpretations of the symbols then do not need to be mediated by an interpreter; they are grounded in the robot's T3 interactions with the objects, properties, events and states of affairs in the world that the symbols are otherwise merely externally interpretable as being about.


>sb> All the best, Stevan, and thanks for the stimulating discussion.
>sb> Yours, Selmer


>sb> REFERENCES


>sb> Bringsjord (in press) "Could, How Could We Tell If, and Why Should --
>sb> Androids Have Inner Lives," in Ford, K. & C. Glymour,
>sb> eds., *Android Epistemology* (Greenwich, CT: JAI Press).


>sb> Bringsjord, S. (1992) *What Robots Can & Can't Be* (Dordrecht, The
>sb> Netherlands: Kluwer). "Searle: Chapter V" "Introspection: Chapter IX"

Thanks Selmer, and sorry for the long delay in my response! -- Stevan

-----------------------------------------------------------------------

Bringsjord, S. (1993) People Are Infinitary Symbol Systems: No Sensorimotor Capacity Necessary. Commentary on Harnad (1993) Think (Special Issue on Machine Learning) (in press)

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October) 9 - 10.

Harnad, S. (1993a) Grounding Symbols in the Analog World with Neural Nets. Think (Special Issue on Machine Learning) (in press)

Harnad, S. (1993b) Discussion. In: T. Nagel (ed.) Experimental and Theoretical Studies of Consciousness. Ciba Foundation Symposium 174.

Harnad, S. (1994) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? To appear in: H. Morowitz (ed.) "The Mind, the Brain, and Complex Adaptive Systems."

The above articles are retrievable by anonymous ftp from host: princeton.edu directory: pub/harnad/Harnad

-------------------------------------------------------

From harnad Fri Jun 11 21:21:34 1993 To: sharnad@life.jsc.nasa.gov Subject: Preprint by S. Yee available Status: RO


> Date: Fri, 11 Jun 93 14:04:31 -0400
> From: yee@envy.cs.umass.edu
> To: harnad@princeton.edu
>
> A revised version of my paper on Turing machines, the Chinese
> room, and Godel has been accepted for publication in the philosophy journal
> LYCEUM. I was wondering whether (a) I could send the following announcement
> to the Symbol Grounding list, and (b) I could get a listing of SG-list
> subscribers so as to avoid sending any of them a duplicate announcement?
>
> Thank you very much, Richard

---------------------------------------------------------------


> Subscribers to this list might be interested in the following
> article, which will appear in LYCEUM, 5(1), Spring 1993.
>
> PostScript versions are available via anonymous ftp to
> envy.cs.umass.edu, file: pub/yee/tm-semantic.ps.Z. Instructions are
> provided below. Hard-copies are also available from the author.
>
> Questions and reactions to the article are welcomed. RY
>
>
> TURING MACHINES AND SEMANTIC SYMBOL PROCESSING:
>
> Why Real Computers Don't Mind Chinese Emperors
>
> Richard Yee
>
> Department of Computer Science
> University of Massachusetts, Amherst, MA 01003
> Internet: yee@cs.umass.edu
> Tel: (413) 545-1596, 549-1074
>
> Abstract
>
> Philosophical questions about minds and computation need to focus
> squarely on the mathematical theory of Turing machines (TM's).
> Surrogate TM's such as computers or formal systems lack abilities
> that make Turing machines promising candidates for possessors of
> minds. Computers are only universal Turing machines (UTM's)---a
> conspicuous but unrepresentative subclass of TM. Formal systems are
> only static TM's, which do not receive inputs from external sources.
> The theory of TM computation clearly exposes the failings of two
> prominent critiques, Searle's Chinese room (1980) and arguments from
> Godel's Incompleteness theorems (e.g., Lucas, 1961; Penrose, 1989),
> both of which fall short of addressing the complete TM model. Both
> UTM-computers and formal systems provide an unsound basis for
> debate. In particular, their special natures easily foster the
> misconception that computation entails intrinsically meaningless
> symbol manipulation. This common view is incorrect with respect to
> full-fledged TM's, which can process inputs non-formally, i.e., in a
> subjective and dynamically evolving fashion. To avoid a distorted
> understanding of the theory of computation, philosophical judgements
> and discussions should be grounded firmly upon the complete Turing
> machine model, the proper model for real computers.
>
> ====================================================================
>
> Instructions for anonymous, binary ftp:
> ----------------------------------------


>
> unix> ftp envy.cs.umass.edu
>
> Name: anonymous
> Password:
> ftp> cd pub/yee
> ftp> binary
> ftp> get tm-semantic.ps.Z
> ftp> bye
>
> unix> uncompress tm-semantic.ps.Z
> unix> tm-semantic.ps

Date: Fri, 11 Jun 93 22:25:30 EDT From: "Stevan Harnad" To: yee@envy.cs.umass.edu Subject: Prima facie questions

Comments on Yee's Paper:

I could only print out page 16 and upward of Yee's paper, but from that I was able to discern the following: Once one sets aside the abstractions and technical language on the one hand, and the mentalistic interpretation on the other, Yee seems to be saying that only UTMs (Universal Turing Machines, e.g. digital computers) are symbol-manipulators, and hence open to the objections against pure symbol manipulation. TMs (Turing Machines) are not. UTMs merely SIMULATE TMs, which are in turn OTHER kinds of machines that are NOT just symbol manipulators. Systems with minds are TMs, not UTMs.

That all sounds fine, but one still needs the answers to a few questions:

(1) What kind of system is NOT a TM then, in this sense? Is a bridge, a furnace, a plane, a protein, an organism, an atom, a solar system? a brain? the universe?

(2) If the answer to all of the above is that they are all TMs, then OF COURSE systems with minds are TMs too, but then what does this tell us about the mind that isn't true about everything else under the sun (including the sun) as well? Saying it was a TM was supposed to tell us something more than that it was a physical system! We ALL accepted that in the first place. The question was, what KIND of physical system. If "TMs" does not pick out a subset, it's not informative, just as it would not be informative to tell us that the brain, like everything else, obeys differential equations.

(3) If the answer is instead that NOT all physical systems are TMs, then what distinguishes those that are from those that aren't (and what's special about the kinds with minds)? In the version of the cognition-is-computation hypothesis that I know, the UTM version, not the TM version, mentation is a form of symbol manipulation and mental states are symbolic states, independent, like software, of the physical details of their implementation. Is there implementation-independence for TMs too (it seems to me you NEED this so someone cannot say it's just differential equations, i.e., physics)? If so, WHAT is independent of the implementation, if it is not the symbol system, as in the case of UTMs? Is there multiple realizability too (i.e., would EVERY implementation of whatever it is that is implementation-independent in a TM [it's not symbols and syntax any more, so what is it?] have a mind if we found the right TM?)? Would the UTM simulating that TM have a mind? And if not, why not?

These are the kinds of things you have to be explicit about, otherwise your reader is lost in an abstract hierarchy of formalisms, without any clear sense of what kinds of physical systems, and which of their properties, are at issue.

Stevan Harnad Cognitive Science Laboratory | Laboratoire Cognition et Mouvement Princeton University | URA CNRS 1166 I.B.H.O.P. 221 Nassau Street | Universite d'Aix Marseille II Princeton NJ 08544-2093 | 13388 Marseille cedex 13, France harnad@princeton.edu | harnad@riluminy.univ-mrs.fr 609-921-7771 | 33-91-66-00-69

------------------------------------------------------------------

From harnad Sat Jun 12 14:38:38 1993 To: sharnad@life.jsc.nasa.gov Subject: Re: Prima facie question

Date: Sat, 12 Jun 93 10:40:46 -0400 From: davism@turing.cs.nyu.edu (Martin Davis)

I haven't read Yee's paper, but it seems to me that he is badly confused.

I wrote a pair of technical papers a long time ago on precisely the question of when a TM can be regarded as being a UTM. The technical issue is how to separate the computational work in "decoding" the symbol string representing a TM being simulated from the work of that TM itself.

The conclusion is that any "sufficiently powerful" TM can be regarded as a UTM. Martin Davis

------------------------------------------------------------------

Date: Mon, 14 Jun 93 17:14 BST From: ronc@cogs.susx.ac.uk (Ron Chrisley)

Hello, Stevan.

A warning. I have not read Yee's paper. But some things you said while discussing it prompted me to make this quick comment (a similar point appears in my Minds & Machines draft, though perhaps not as explicitly as it should).

Date: Fri, 11 Jun 93 22:25:30 EDT From: "Stevan Harnad"

That all sounds fine, but one still needs the answers to a few questions:

(1) What kind of system is NOT a TM then, in this sense? Is a bridge, a furnace, a plane, a protein, an organism, an atom, a solar system? a brain? the universe?

(2) If the answer to all of the above is that they are all TMs, then OF COURSE systems with minds are TMs too, but then what does this tell us about the mind that isn't true about everything else under the sun (including the sun) as well? Saying it was a TM was supposed to tell us something more than that it was a physical system! We ALL accepted that in the first place. The question was, what KIND of physical system. If "TMs" does not pick out a subset, it's not informative, just as it would not be informative to tell us that the brain, like everything else, obeys differential equations.

I maintain that even if everything can be understood to be a TM (i.e., for everything, there is at least one TM description which applies to it) this does not make the idea that "cognition is computation" vacuous, on a suitably reasonable reading of that motto. The reasonable reading is not "anything that has a computational description is a cognizer"; that reading, combined with ubiquity of computation, would indeed lead to panpsychism, which many are loath to accept. Rather, the reasonable reading is something like "anything that has a TM description which falls in class C is a cognizer", for some *natural (i.e., non-disjunctive and non-question-begging)* class C of TM's. The claim that cognition is computation does not mean that *all* computation is cognition.

Even if you don't spell out what C is, if you subscribe to the motto, then you are saying something more than "cognition is physically realizable". You are committing yourself to the claim that there will be a natural division between cognizers and non-cognizers using the concepts of computational theory. Which is a substantial claim.

Of course you, Stevan, were well aware of this, but I wanted to clear up a possible misreading.

Ronald L. Chrisley (ronc@cogs.susx.ac.uk) School of Cognitive & Computing Sciences University of Sussex, Falmer, Brighton, BN1 9QH, UK

------------------------------------------------------------------

Date: Sun, 13 Jun 93 13:02:38 -0400 From: yee@envy.cs.umass.edu Subject: SG-list: Reply to M. Davis

Davis expresses grave doubts about whatever it is he imagines I have said about TM's and UTM's. Unfortunately, he cannot explain very well what the alleged problems with the paper are. Nevertheless, I'll take a stab at speculating on the possible relevance of his comments. I will try to briefly spell out my position on universal and non-universal Turing machines.

If the brain is essentially a TM, then surely it is an *online* one, meaning that it must respond in "real-time," producing sequences of outputs that correspond to sequences of inputs. Let us discretize time at some fine level of granularity, t = 0, 1, ..., and suppose that the brain at time t is the online machine Mt, where M0 is some TM, <xt, yt> is the input-output pair at time t, and for t > 0,

Mt(X) = M0(<x0, y0>, <x1, y1>, ..., <x(t-1), y(t-1)>; X).

In other words, the output of machine Mt may be a function of its entire input-output history from time 0 to t-1 (in addition to input xt). Under this view, then, instead of corresponding to a fixed TM, the brain may correspond to an evolving sequence of related but possibly slightly different TM's, which may be partially determined by specific input-output experiences. It seems extremely unlikely that the sequence of machines {Mt} would correspond to "the brain's being a UTM" in any interesting sense.
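A minimal sketch of such a history-dependent online machine, in illustrative Python (the class and function names are invented for this sketch, not taken from Yee's paper):

```python
# Illustrative sketch: an "online machine" M_t whose output at time t may
# depend on the whole input-output history <x_0,y_0>, ..., <x_(t-1),y_(t-1)>,
# not just on the current input x_t.

class OnlineMachine:
    def __init__(self, base):
        self.base = base      # plays the role of M_0: f(history, x) -> y
        self.history = []     # accumulated (x, y) pairs

    def step(self, x):
        # M_t(x_t) = M_0(<x_0,y_0>, ..., <x_(t-1),y_(t-1)>; x_t)
        y = self.base(self.history, x)
        self.history.append((x, y))
        return y

# A toy M_0 whose answer drifts with experience: identical inputs produce
# different outputs at different times, so each M_t is in effect a slightly
# different TM from M_(t-1).
def m0(history, x):
    return x + len(history)

m = OnlineMachine(m0)
print([m.step(10) for _ in range(3)])  # [10, 11, 12]
```

The point of the toy M_0 is only that the "same" machine queried three times with the same input behaves as three different fixed TM's would.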

Now, Davis has proved that if a given machine Mt is "sufficiently powerful" (in a precisely defined sense), then, as I understand it, there exist recursive *encoding functions* (which are not themselves universal), under which Mt could be interpreted as a UTM. Denote such encoding functions by Et. Given the undoubted complexity of the brain, it might be that for each Mt there would exist suitable encodings Et, which would then make it possible to interpret each Mt as a UTM.

Would this make the brain essentially a UTM? Given input xt at time t, the transition of machine Mt into M(t+1):

Mt(xt) = yt ---> M(t+1)(X) = Mt(<xt, yt>; X),

means that even if it were possible to interpret M(t+1) as a UTM, doing so might require new encoding functions E(t+1), not equal to Et. Hence, even if at each instant there were *some* interpretation under which the current state of the brain would correspond to a UTM, such interpretations might need to be determined anew at each time step.

My reaction is that viewing the brain as a UTM would only be interesting if there were *fixed* encoding functions under which for all t, Mt would be interpretable as a UTM. Such would be the case, for example, if M0 were the description of a common programmable computer because for all t, Mt would equal M(t+1), i.e., a computer's performance in running a program on an input is unaffected by any previous programs and inputs that it might have run. In contrast to this, if viewing the brain as a UTM required perpetual re-interpretation, then it would not be a coherent UTM but a sequence of *accidental UTM's* that would happen to arise due to the complexity of the machines {Mt}. In such a case, it would be more sensible to find a stable and continuous non-UTM account of brain processes (e.g., the presumably non-UTM machines {Mt}).
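The contrast can be made concrete: a programmable computer's behavior on a program and input is unaffected by earlier runs (Mt = M(t+1) for all t), whereas a history-sensitive machine offers no such fixed behavior to interpret once and for all. A toy illustration (hypothetical names, same conventions as the Mt notation above; not code from the paper):

```python
# Toy contrast: a "computer-like" base machine ignores its history, so
# M_t = M_(t+1) for all t and one fixed interpretation of it suffices;
# a "brain-like" base machine changes at every step, so any UTM-style
# interpretation would have to be redone at each t.

def computer_like(history, x):
    return 2 * x                     # independent of prior runs

def brain_like(history, x):
    return 2 * x + len(history)      # prior runs leak into every answer

def trace(base, inputs):
    history, outputs = [], []
    for x in inputs:
        y = base(history, x)
        history.append((x, y))
        outputs.append(y)
    return outputs

print(trace(computer_like, [3, 3, 3]))  # [6, 6, 6]: behavior is stable
print(trace(brain_like, [3, 3, 3]))     # [6, 7, 8]: behavior drifts with t
```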

My apologies for this somewhat involved response to Davis's simple remarks. I must admit that the paper itself does not explain these points in such detail. It also does not address Davis's results concerning UTM's. Perhaps it should. Doing so might make certain statements in the paper more precise, but I doubt very much that it would affect the substance of what is said.

Richard Yee

------------------------------------------------------------------

Date: Mon, 14 Jun 93 18:23:54 -0400 From: yee@envy.cs.umass.edu

Please note: for the next two to three weeks, I will be mostly unable to participate in these discussions. ry

Based on reading the final pages of my paper, Stevan Harnad summarizes one of its main assertions:

> Date: Fri, 11 Jun 93 22:25:30 EDT
> From: Stevan Harnad
> Subject: Prima facie questions
>
> Comments on Yee's Paper:
>
> I could only print out page 16 and upward of Yee's paper, but ...
> ... Yee seems to be saying that only UTMs (Universal Turing Machines,
> e.g. digital computers) are symbol-manipulators, and hence open to the
> objections against pure symbol manipulation. TMs (Turing Machines) are
> not. UTMs merely SIMULATE TMs, which are in turn OTHER kinds of
> machines that are NOT just symbol manipulators. Systems with minds are
> TMs, not UTMs.

This is essentially correct, but I would add a few comments:

(a) Obviously UTM's are TM's, hence *some* TM's perform a certain type of formal symbol processing.

(b) Technically, I only claim that NO ONE HAS PROVEN that all TM's are formal processors---a necessary result for establishing a Chinese room-like argument against all computational accounts of mind. However, I also try to show why such a proof does not exist.

(c) The main thesis of the paper is that the philosophy of mind needs to focus its attention on the complete TM model. Analyses of UTM's and symbol processing are used to show how failing to do so has led to problems, e.g., the Chinese room debate. Analysis of the Godelian debate provides further evidence.

Harnad continues:

> That all sounds fine, but one still needs the answers to a few questions:
>
> [*** PARAPHRASING:
> 1. What kind of physical systems would NOT be TM's?
>
> 2. If all physical systems are TM's, then what is learned by saying
> the mind [/brain?] is as well?
>
> 3. If NOT all systems are TM's, then what exactly is the TM/non-TM
> distinction for physical systems, particularly as it regards
> minds [/brains]? ***]
>
> These are the kinds of things you have to be explicit about, otherwise
> your reader is lost in an abstract hierarchy of formalisms, without any
> clear sense of what kinds of physical systems, and which of their
> properties, are at issue.

Harnad's questions focus on the relationship between the physical world and the mathematical theory of TM-computation. I agree that this is an important philosophical issue which should be addressed. However, the purpose of my paper is to examine what the theory of computation says about the Chinese room and Godelian arguments. The theory shows that neither argument can fully refute computationalism because neither fully addresses the TM model. The conclusion is that all sides would be better served by focusing on TM's proper (instead of, e.g., UTM-computers or formal systems).

In contrast, the theory of computation says almost nothing about the physical world. Except for the discreteness and finiteness of TM processing steps, it does not specify how TM's are or are not realized by physical systems. Thus, the questions raised by Harnad do not lie within the scope of either the theory or the current paper. As Ron Chrisley alluded to in his recent message, such questions *are* currently being addressed in the "what is computation?" discussion.

On the one hand, Harnad's questions raise the concern that computationalism might be *too broad*, being nothing more than a restatement of the view that minds arise from the actions of physical systems: brains. On the other hand, critiques such as the Chinese room and Godelian arguments claim that computation is *too limited*, specifically lacking the ability to process symbols semantically, in the same manner as minds. The concerns of the "too-broad" view are well-founded because TM-computation is quite general indeed, as the Church-Turing thesis attests. This makes the claims of the "too-limited" arguments seem all the more provocative. Furthermore, both the Chinese room (CR) and Godelian arguments are long-standing, widely debated, and widely accepted as definitive critiques of computationalism. Therefore, it is sufficient to refute them alone, without taking on the additional valid concerns raised by Harnad.

The paper tries to provide clear refutations of these two prominent critiques. Hopefully, the refutations remain mostly within the theory of computation, calling for understanding rather than belief in anyone's intuitions. The "what is computation?" issues raised by Harnad, besides being somewhat opposite in orientation, appear to lie mostly beyond the scope of the mathematical theory. Richard Yee

------------------------------------------------------------------

Date: Tue, 15 Jun 93 20:37:01 EDT From: "Stevan Harnad"

Date: Tue, 15 Jun 93 18:35 BST From: ronc@cogs.susx.ac.uk (Ron Chrisley)


>sh> Date: Mon, 14 Jun 93 14:50:15 EDT
>sh> From: "Stevan Harnad"


> rc> I maintain that even if everything can be understood to be a TM (i.e.,
> rc> for everything, there is at least one TM description which applies to
> rc> it) this does not make the idea that "cognition is computation"
> rc> vacuous, on a suitably reasonable reading of that motto. The
> rc> reasonable reading is not "anything that has a computational
> rc> description is a cognizer"; that reading, combined with ubiquity of
> rc> computation, would indeed lead to panpsychism, which many are loath
> rc> to accept. Rather, the reasonable reading is something like "anything
> rc> that has a TM description which falls in class C is a cognizer", for
> rc> some *natural (i.e., non-disjunctive and non-question-begging)* class
> rc> C of TM's. The claim that cognition is computation does not mean that
> rc> *all* computation is cognition.


> rc> Even if you don't spell out what C is, if you subscribe to the motto,
> rc> then you are saying something more than "cognition is physically
> rc> realizable". You are committing yourself to the claim that there will
> rc> be a natural division between cognizers and non-cognizers using the
> rc> concepts of computational theory. Which is a substantial claim.


>sh> Ron, of course I am aware that the hypothesis that All Cognition Is
>sh> Computation does not imply that All Computation Is Cognition, but that
>sh> still does not help. Sure if the brain and the sun are both doing
>sh> computations, they're still doing DIFFERENT computations, and you're
>sh> free to call the sun's kind "solar computation" and the brain's kind
>sh> "cognitive computation," but then you may as well have said that they
>sh> both obey differential equations (different ones), and the sun obeys
>sh> the solar kind and the brain the cognitive kind. That's obvious a
>sh> priori and does not help us one bit either.

Yes, that's why I said C has to be a *natural* and *non-question-begging* class of computation. Simply *defining* what the brain does as "cognitive computation" is not going to get one anywhere. One has to show that there is a class, naturally expressible in terms of computational concepts, that includes brains and all other cognizing physical systems, but leaves out stars, stones, etc. Only then will one be justified in claiming "cognition is computation" in any non-vacuous sense. If the best one can do, when using computational concepts, is find some wildly disjunctive statement of all the systems that are cognizers, then that would suggest that cognition is *not* computation. So the claim is not vacuous, but contentious.


>sh> In fact, computationalism was supposed to show us the DIFFERENCE
>sh> between physical systems like the sun and physical systems like the
>sh> brain (or other cognitive systems, if any), and that difference was
>sh> supposed to be that the brain's (and other cognitive systems')
>sh> cognitive function, UNLIKE the sun's solar function, was
>sh> implementation-independent -- i.e., differential-equation-independent
>sh> -- because cognition really WAS just (a kind of) computation; hence
>sh> every implementation of that computation would be cognitive, i.e.
>sh> MENTAL (I have no interest whatsoever in NON-mental "cognition," because
>sh> that opens the doors to a completely empty name-game in which we call
>sh> anything we like "cognitive").

That might have been how many people thought computation was relevant to cognitive science, but then one can take what I say here to be a different proposal. I think both the sun and the brain can be looked at as performing some computation. So what's special about cognition is not that it is realized in a system that can be looked at computationally.

Nor is the multiple realizability feature significant here. Once one admits that there is a computational description of what the sun is doing, then one ipso facto admits that in some sense, what the sun is doing is multiply realizable too. So that's not what is so computationally special about cognition.

What's special is this: there is no *natural*, *non-wildly-disjunctive* way to distinguish white dwarf stars from red giant stars by appealing to the computational systems they instantiate. The "cognition is computation" claim, however, is claiming that there *is* a natural way to distinguish cognizing systems from non-cognizing ones. And that natural way is not using the concepts of neurophysiology, or molecular chemistry, but the concepts of computational theory. This is a substantive claim; it could very well be false. It certainly isn't vacuously true (note that it is not true for the case of stars); and its bite is not threatened even if everything has a computational characterization.

Now if everything can have *every* computational characterization, then the claim might be in danger of being content-free. But that's why I took the time to rebut Putnam's (and in some sense, Searle's) arguments for that universal-realization view.


>sh> This is where Searle's argument came in, showing that he could himself
>sh> become an implementation of that "right" cognitive system (the
>sh> T2-passing symbol cruncher, totally indistinguishable from a life-long
>sh> pen-pal), but without having the requisite cognitive (= mental) state,
>sh> namely, understanding what the symbols were ABOUT.

I agree that Searle was trying to show that the "cognition is computation" claim is false. But his argument applies (although I don't feel it succeeds) to my construal of the "C is C" claim. He was trying to show that there was no such class of computation that characterizes cognizers, since he could instantiate one of the computations in any proposed class and not have the requisite cognitive properties.


>sh> Now the implementation-independence of the symbolic level of description
>sh> is clearly essential to the success of Searle's argument, but it seems
>sh> to me that it's equally essential for a SUBSTANTIVE version of the
>sh> "Cognition Is Computation" hypothesis (so-called "computationalism").
>sh> It is not panpsychism that would make the hypothesis vacuous if
>sh> everything were computation; it would be the failure of "computationalism"
>sh> to have picked out a natural kind (your "C").

As computational theory stands now, it is pretty implementation-independent. Such a purely formal theory may or may not (I go with the latter) be the best account of computation. But even if our notion of computation were to change to a more "implementation-dependent" notion (although I suspect we wouldn't think of it as more "implementation-dependent" once we accepted it), I don't see why the "C is C" claim would be in danger of vacuity. It would be even stronger than before, right? But perhaps you just meant that it would be false, since it would rule out cognizers made of different stuff? That's just a guess that the "C is C" claim is false: that the natural kinds of computation do not line up with cognition. That's not an argument.


>sh> For we could easily have had a "computationalist" thesis for solar heat
>sh> too, something that said that heat is really just a computational
>sh> property; the only thing that (fortunately) BLOCKS that, is the fact
>sh> that heat (reminder: that stuff that makes real ice melt) is NOT
>sh> implementation-independent, and hence a computational sun, a "virtual
>sh> sun" is not really hot.

Right, but the computation that any particular hot thing realizes *is* implementation-independent. The question is whether that computation is central to the phenomenon of heat, or whether it is accidental, irrelevant to the thing *qua* hot thing.

If you want to avoid begging the question "is heat computation?", then you have to allow that heat *might* be implementation-independent. Then you notice that there is no natural computational class into which all hot things fall, so you reject the notion of "heat is computation" and thereby the prospects for implementation-independence.


>sh> In contrast, computationalism was supposed to have picked out a natural
>sh> kind, C, in the case of cognition: UNLIKE HEAT ("thermal states"),
>sh> mental states were supposed to be implementation-independent symbolic
>sh> states. (Pylyshyn, for example, refers to this natural kind C as
>sh> functions that transpire above the level of the virtual architecture,
>sh> rather than at the level of the physical hardware of the
>sh> implementation; THAT's what's supposed to distinguish the cognitive
>sh> from the ordinary physical).

No. *All* formal computations are implementation-independent. So it cannot be implementation-independence that distinguishes the cognitive from the non-cognitive (now you see why I reminded us all that "all cognition is computation" does not mean "all computation is cognition"). There are many other phenomena whose best scientific account is on a computational level of abstraction (what goes on in IBM's, etc.; perhaps some economic phenomena, et al). So the fact that a phenomenon is best accounted for on the computational (implementation-independent) level does not mean that it is cognitive. It means that it is *computational*. The big claim is that cognition is a computational phenomenon in this sense.


>sh> And it was on THIS VERY VIRTUE, the one
>sh> that made computationalism nonvacuous as a hypothesis, that it
>sh> foundered (because of Searle's argument, and the symbol grounding
>sh> problem).

I accept that "implementation-dependent" (better: not purely formal) notions of computation are probably the way to go (I've argued as much on this list before), but I don't feel that Searle compels me to do this. The sociology of computer science certainly hasn't worked that way. It just so happens that embedded and embodied notions are essential to understand normal, ordinary computational systems like token rings, CPU's hooked to robot arms, etc. But there is a disagreement here: if computational theory goes embedded (rejects implementation-independence), that doesn't mean it is vacuous; just the opposite! It makes its range of application even more restricted.

My original claim (the negation of the one you made in your initial response to Yee) was that *even if* everything has a computational characterization, that does not make the "computation is cognition" claim vacuous. I have given the reasons above. Now if we have an implementation-dependent computational theory, that does not mean that not everything will have a computational characterization. It could just mean that tokens that were of the same computational type in the formal theory are now of distinct types in the embedded theory. Nevertheless, despite such ubiquity of computation, there might still be a natural computational kind which includes just those things which are cognizers. Or there might not.

Ronald L. Chrisley (ronc@cogs.susx.ac.uk)
School of Cognitive & Computing Sciences
University of Sussex

--------------------------------------------------------

From: harnad@clarity.princeton.edu (Stevan Harnad)
Date: Tue Jul 6 21:06:54 EDT 1993

ON IMPLEMENTATION-DEPENDENCE AND COMPUTATION-INDEPENDENCE

Ron Chrisley suggests that the thesis that "Cognition is Computation" is nonvacuous even if every physical process is computation, as long as COGNITIVE computation can be shown to be a special KIND of computation. (He does not, of course, suggest what that special kind of computation might actually be, for here he is only trying to establish that such a thesis is tenable.)

Ron does mention (and to a certain extent equivocates on) one difference that may indeed distinguish two different kinds of computation: the "implementation-DEpendent" kind (again, not defined or described, just alluded to) and the usual, implementation-INdependent kind. Ron thinks cognition may turn out to be implementation-DEpendent computation.

In writing about physical systems that science has NOT so far found it useful to describe, study or explain computationally (the sun, for example), Ron notes that they are nevertheless computational systems, and in some sense "multiply realizable" (I'm not sure whether he means this to be synonymous with implementation-independent -- I don't think there's any cat that can only be skinned ONE way, but I don't think that's quite what's ordinarily meant by "implementation-independence" in the computational context, otherwise that term too risks becoming so general as to be vacuous.)

I'll lay my own cards on the table, though: The only implementation-independence *I* think is relevant here is the kind that a computer program has from the computer that is implementing it. Is that the kind Ron thinks the sun has? (I mean the sun in our solar system here, not the workstation by that name, of course!) If so, then a computer simulation of the sun -- a physical symbol system implementing the sun's COMPUTATIONAL description, in other words, a computer running the sun's computer program and hence systematically interpretable as the sun, a virtual sun -- would have to be hot (VERY hot, and not just symbols systematically interpretable as if they were hot). We don't even need Searle to see that THAT's not likely to happen (because you can FEEL the absence of heat, but you can't be so sure about the absence of mind). So whatever kind of "implementation-independence" the sun may have, it's not the kind we need here, for the purposes of the "cognition is computation" thesis.

So suppose we give up on that kind of software/hardware implementation-independence and settle for "implementation-DEpendent" computation -- whatever that is, for it sounds as if spelling out the nature of that dependence will turn out to be as essential to a description of such a "computational" system as the computational description itself. Indeed, it sounds as if the dependence-story, unlike the computation-story, will turn out to be mostly physics in that case. I mean, I suppose flying is an implementation-dependent sort of computation too, and that a plane is, in a sense, just a highly implementation-dependent computer. The only trouble is that all the RELEVANT facts about planes and flying are in the physics (aeronautical engineering, actually) of that dependency, rather than the computation!

So if we open up the Pandora's box of implementation-dependence, is there not the risk that the "Cognition is (implementation-dependent) Computation" thesis would suffer the same fate as a "Flying is (implementation-dependent) Computation" thesis?

Now to the play-by-play:

rc> Date: Tue, 15 Jun 93 18:35 BST
rc> From: ronc@cogs.susx.ac.uk (Ron Chrisley)

rc> I said C has to be a *natural* and
rc> *non-question-begging* class of computation. Simply *defining* what
rc> the brain does as "cognitive computation" is not going to get one
rc> anywhere. One has to show that there is a class, naturally
rc> expressible in terms of computational concepts, that includes brains
rc> and all other cognizing physical systems, but leaves out stars, stones,
rc> etc. Only then will one be justified in claiming "cognition is
rc> computation" in any non-vacuous sense. If the best one can do, when
rc> using computational concepts, is to find some wildly disjunctive
rc> statement of all the systems that are cognizers, then that would
rc> suggest that cognition is *not* computation. So the claim is not
rc> vacuous, but contentious.

It's not coincidental that the "Is Cognition Computation?" and the "What is Computation?" discussion threads are linked, because it's critical to get it straight what it is that we are affirming or denying when we say Cognition Is/Isn't Computation. I was content to assume that what was at issue was implementation-independent symbol manipulation, but now, with the introduction of TMs (Turing Machines) in place of UTMs (Universal Turing Machines) it's being suggested that that's not the issue after all. It seems to me that although your burden is not to actually produce the RIGHT theory of what is special about that subset of computation that is cognitive, you do have to give us some idea of the KIND of thing it might be.

So let's look more closely at your implementation-dependent TM theory of mind. Here's an important question: Would the UTM simulation of the right TM (the one that had mental states) have mental states? If so, we're back to Searle. If not, I'd like to know why not, since computational equivalence is supposed to be the pertinent INVARIANT that holds all these computational descriptions together. I mean, without the computational equivalence, isn't it back to physics again?


>sh> In fact, computationalism was supposed to show us the DIFFERENCE
>sh> between physical systems like the sun and physical systems like the
>sh> brain (or other cognitive systems, if any), and that difference was
>sh> supposed to be that the brain's (and other cognitive systems')
>sh> cognitive function, UNLIKE the sun's solar function, was
>sh> implementation-independent -- i.e., differential-equation-independent
>sh> -- because cognition really WAS just (a kind of) computation; hence
>sh> every implementation of that computation would be cognitive, i.e.
>sh> MENTAL

rc> That might have been how many people thought computation was relevant
rc> to cognitive science, but then one can take what I say here to be a
rc> different proposal. I think both the sun and the brain can be looked
rc> at as performing some computation. So what's special about cognition
rc> is not that it is realized in a system that can be looked at
rc> computationally.

(I don't mean to pick on your phraseology, but that last sentence sounds like a denial of the Cog=Comp thesis right there...) But of course you are here adverting to the fact that it's going to turn out to be a special KIND of computation. Can you be more specific, perhaps give examples of other systems that have been taken to be natural kinds because they turned out to be special kinds of computational systems? And can you suggest what lines the specialness might take in the case of cognition?

rc> Nor is the multiple realizability feature significant here. Once one
rc> admits that there is a computational description of what the sun is
rc> doing, then one ipso facto admits that in some sense, what the sun is
rc> doing is multiply realizable too. So that's not what is so
rc> computationally special about cognition.

As I said, multiple-realizability is not quite the same as implementation-independence. There are, for example, many different ways to transduce light, some natural (the vertebrate retinal cone or the invertebrate ommatidium), some artificial (as in a bank-door's photosensitive cell), but NONE of them are computational, nor constitute a natural family of computationally equivalent systems -- or if they do, the computational story is trivial. It's the physics of light-transduction that's relevant.

By way of contrast, all the things you can reconfigure a computer to DO by changing its software DO share interesting properties, and the properties are computational ones: The same software can be run on radically different forms of hardware yet it would still be performing the same computation. THAT was the kind of multiple-realizability that I THOUGHT was relevant to what computation is and what Cog=Comp claims.
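The sense of "same computation on radically different hardware" at issue here can be made concrete with a toy sketch (mine, not from the discussion): one formal Turing-machine transition table, executed by two independently written interpreters that stand in for different hardware. The table, not the interpreter, individuates the computation.

```python
# Hypothetical illustration: a TM that flips every bit on its tape,
# then halts at the blank.  Transition table:
# (state, symbol) -> (state, symbol-to-write, head move).
TABLE = {
    ("s", "0"): ("s", "1", +1),
    ("s", "1"): ("s", "0", +1),
    ("s", "_"): ("halt", "_", 0),
}

def run_iterative(tape):
    # "Implementation" no. 1: an iterative interpreter.
    tape = list(tape) + ["_"]
    state, pos = "s", 0
    while state != "halt":
        state, tape[pos], move = TABLE[(state, tape[pos])]
        pos += move
    return "".join(tape).rstrip("_")

def run_recursive(tape, state="s", pos=0):
    # "Implementation" no. 2: a recursive interpreter, structured
    # quite differently, yet computing the very same function.
    tape = list(tape) + ["_"] if isinstance(tape, str) else tape
    if state == "halt":
        return "".join(tape).rstrip("_")
    state, tape[pos], move = TABLE[(state, tape[pos])]
    return run_recursive(tape, state, pos + move)

assert run_iterative("0110") == run_recursive("0110") == "1001"
```

Both interpreters agree on every input because they realize the same formal machine; that agreement is what computational equivalence across implementations amounts to.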

By way of contrast, note that the number of ways you can reconfigure a UTM like a digital computer to implement different programs does NOT include a way to reconfigure it into an optical tranducer, a plane, or a sun. For that, the "reconfiguring" would have to be more radical than merely computational: It would have to be physical. (And that's why I think implementation-DEpendent "computation" is a non-starter.)

rc> What's special is this: there is no *natural*, *non-wildly-disjunctive*
rc> way to distinguish white dwarf stars from red giant stars by appealing
rc> to the computational systems they instantiate. The "cognition is
rc> computation claim", however, is claiming that there *is* a natural way
rc> to distinguish cognizing systems from non-cognizing ones. And that
rc> natural way is not using the concepts of neurophysiology, or molecular
rc> chemistry, but the concepts of computational theory. This is a
rc> substantive claim; it could very well be false. It certainly isn't
rc> vacuously true (note that it is not true for the case of stars); and
rc> its bite is not threatened even if everything has a computational
rc> characterization.

But, as I said, I UNDERSTOOD the claim when I took computation to be implementation-independent symbol-manipulation. But with implementation-DEpendent TMs I no longer even know what's at issue... "Not wildly disjunctive" just isn't a positive enough characterization to give me an inkling. Do you have examples, or (relevant) analogies?

Let me put it even more simply: It's clear that some subset of computer programs is the subset that can do, say, addition. Let's suppose that this subset is "not wildly disjunctive." It is then an equivalence class, of which we can say, with confidence, that every implementation of those computer programs will be doing addition. Now all you need is a similar story to be told about thinking: Find the right ("nondisjunctive") subset of computer programs, and then every implementation of them will be thinking. But now you seem to be saying that NOT every implementation of them will be thinking, because the programs are implementation DEpendent. So what does that leave of the claim that there is a nontrivial COMPUTATIONAL equivalence there to speak of at all?
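The "addition" equivalence class invoked above can be illustrated with a toy sketch (my example, not from the discussion): three programs with very different internal structure, all of which any reasonable criterion would count as doing addition.

```python
# Hypothetical illustration of one "nondisjunctive" equivalence class:
# three structurally unlike programs, one function computed.

def add_builtin(a, b):
    # Addition delegated to the primitive operation.
    return a + b

def add_peano(a, b):
    # Addition by recursion on the second argument (Peano style).
    return a if b == 0 else add_peano(a, b - 1) + 1

def add_counting(a, b):
    # Addition by repeated increment.
    total = a
    for _ in range(b):
        total += 1
    return total

for a in range(5):
    for b in range(5):
        assert add_builtin(a, b) == add_peano(a, b) == add_counting(a, b)
```

On the old, implementation-independent reading, every implementation of any member of this class is "doing addition"; the question in the text is whether the analogous claim survives once implementation-DEpendence is allowed in.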

Remember, if we go back to the sun, the scientific story there is thermodynamics, electromagnetism, etc. It's not in any interesting sense a computational story. Solar physics is not a branch of computer science. No one is espousing a "Solar Dynamics = Computation" hypothesis. All of physics is (approximately) COMPUTABLE, but that does not mean that physical processes are COMPUTATIONAL. And as far as I can tell, the most direct CARRIER of that dissociation is the fact that physics is not implementation-independent. So computational equivalence has a hollow ring to it when you are trying to explain the physics. The burden is to show why exactly the same thing is not true when it comes to explaining thinking.

rc> Now if everything can have *every* computational characterization,
rc> then the claim might be in danger of being content-free. But that's
rc> why I took the time to rebut Putnam's (and in some sense, Searle's)
rc> arguments for that universal-realization view.

As it happens, I never accepted the "everything is EVERY computation" view (for the cryptographic reasons I adduced earlier in this discussion). But I think "everything is SOME [implementation-independent OR implementation-dependent] computation" is just as empty and unhelpful as a basis for a cognition-specific thesis, for that's just the Church-Turing Thesis, which is a formal thesis about what "computation" is and has NOTHING to do with mental states or what they are or aren't.

rc> I agree that Searle was trying to show that the "cognition is
rc> computation" claim is false. But his argument applies (although I
rc> don't feel it succeeds) to my construal of the "C is C" claim. He was
rc> trying to show that there was no such class of computation that
rc> characterizes cognizers, since he could instantiate one of the
rc> computations in any proposed class and not have the requisite
rc> cognitive properties.

Yes, but Searle's argument works only for computation construed as implementation-INdependent symbol manipulation. If some other sense of computation is at issue here, his argument may well fail, but then I don't know what would be at issue in its place.

Indeed, Searle's argument fails immediately if anyone wants to say (as, say, Pat Hayes does): Cognition has to be implemented the "right way" and Searle's implementation is not the right one. But to save THAT from amounting to just special pleading in the same sense that, say, a "wild disjunction" would be, one has to face the problem of how to distinguish the "right" from the "wrong" implementation without shifting the scientific substance of the explanation of cognition to the implementational details rather than the computation!

Again, to put it ever so briefly: Implementation-DEpendent "computation" would indeed be immune to Searle, but look at the price: Cognition is now not just the right computation, but the right implementation of that computation -- and then the rest is just an arbitrary squabble about proportions (computations/physics).


>sh> Now the implementation-independence of the symbolic level of description
>sh> is clearly essential to the success of Searle's argument, but it seems
>sh> to me that it's equally essential for a SUBSTANTIVE version of the
>sh> "Cognition Is Computation" hypothesis (so-called "computationalism").
>sh> It is not panpsychism that would make the hypothesis vacuous if
>sh> everything were computation; it would be the failure of
>sh> "computationalism" to have picked out a natural kind (your "C").

rc> As computational theory stands now, it is pretty
rc> implementation-independent. Such a purely formal theory may or may
rc> not (I go with the latter) be the best account of computation.

It's at this point that I sense some equivocation. We are now to envision not only a still unspecified "special" KIND of computation that is peculiar to (and sufficient for) mental states, but we must also imagine a new SENSE of computation, no longer the old implementation-independent kind on which the whole formal theory was built. My grip on this inchoate "computation" is loosening by the minute...

rc> But even if our notion of computation were to change to a more
rc> "implementation-dependent" notion (although I suspect we wouldn't
rc> think of it as more "implementation-dependent" once we accepted it), I
rc> don't see why the "C is C" claim would be in danger of vacuity. It
rc> would be even stronger than before, right? But perhaps you just meant
rc> that it would be false, since it would rule out cognizers made of
rc> different stuff? That's just a guess that the "C is C" claim is
rc> false: that the natural kinds of computation do not line up with
rc> cognition. That's not an argument.

I couldn't be suggesting that such a claim was false, since, as I said, I've lost my grip on what the claim is claiming!

"Stuff" had nothing to do with the old Cog=Comp thesis, since it was implementation-independent. And I could quite consistently manage to be an anticomputationalist toward this old form of computationalism (because of the symbol grounding problem) without for a minute denying that minds could be realized in multiple ways (just as optical transducers can); in fact, that's what my Total Turing Test (T3) banks on. But SYNTHETIC alternative realizations (made out of different, but T3-equivalent stuff) are not the same as VIRTUAL alternative realizations, which is what a pure symbol system would be. Besides, a symbol-cruncher alone could not pass T3 because it lacks sensorimotor transducers -- significantly, the only part of a robot that you can't have a virtual stand-in for.

But never mind that: Why would a Cog=Comp thesis involving "implementation-dependent computation" be stronger than one involving implementation-independent computation? It sounds more like a weaker one, for, as I keep hinting, there is the risk that in that case the relevant functions are in the DEPENDENCY, i.e., in the physics (e.g., the transduction) rather than the computation.


>sh> For we could easily have had a "computationalist" thesis for solar heat
>sh> too, something that said that heat is really just a computational
>sh> property; the only thing that (fortunately) BLOCKS that, is the fact
>sh> that heat (reminder: that stuff that makes real ice melt) is NOT
>sh> implementation-independent, and hence a computational sun, a "virtual
>sh> sun" is not really hot.

rc> Right, but the computation that any particular hot thing realizes *is*
rc> implementation-independent. The question is whether that computation
rc> is central to the phenomenon of heat, or whether it is accidental,
rc> irrelevant to the thing *qua* hot thing.

Well, apart from the fact that thermodynamics, the science of heat, shall we say, is not accustomed to thinking of itself as a computational science, surely the ESSENTIAL thing about heat is whatever "being hot" is, and that's exactly what virtual heat lacks. If this is not obvious, think of a virtual plane in a virtual world: It may be computationally equivalent to a real plane, but it can't fly! I wouldn't describe the computational equivalence between the real and virtual plane as accidental, just as insufficient -- if what we wanted was something that FLEW!

And EXACTLY the same is true in the case of wanting something that really has mental states -- at least with the old candidate: implementation-independent symbol manipulation (which could yield only a VIRTUAL mind, not a real one). But I have no idea how to get a grip on an implementation-DEPENDENT candidate. I mean, what am I to suppose as the TM in question? There's (1) real me. I have real mental states. There's (2) virtual "me" in a virtual world in the pure symbol cruncher. It's computationally equivalent to me, but for Searlean reasons and because of the symbol grounding problem, I don't believe for a minute that it has a mind.

But that's not what's at issue here. We should now think of a third entity: (3) A TM performing implementation-DEpendent computations. I can't imagine what to imagine! If it's a robot that's T3-indistinguishable, I'm already ready to accept it as a cognizing cousin, with or without a computational story (it could all be a transducer story -- or even a HEAT story, for that matter, in which case real heat could be essential in BOTH cases). But what am I to imagine wanting to DENY here, if I wanted to deny this new form of computationalism, with TMs and implementation-DEpendence instead of UTMs and implementation-INdependence?

rc> If you want to avoid begging the question "is heat computation?", then
rc> you have to allow that heat *might* be implementation-independent.
rc> Then you notice that there is no natural computational class into
rc> which all hot things fall, so you reject the notion of "heat is
rc> computation" and thereby the prospects for
rc> implementation-independence.

Ron, you've completely lost me. There's no sense of heat that I can conjure up in which heat is computation, no matter how many ways it can be realized. (Again: multiple-realizability is not the same as implementation-independence.) I have no problem with synthetic heat, but that still does not help me see heat as computation (the real problem is VIRTUAL heat). And even if there is a nice, crisp ("nondisjunctive") set of unique computational invariants that characterize hot things and no others, I still don't see what it would mean to say that heat was computation -- except if EVERY implementation of the heat program were hot -- which is decidedly not true (because virtual heat is not hot). (Also, in the above passage it sounds as if you are conceding that computationality DOES call for implementation-INdependence after all.)


>sh> In contrast, computationalism was supposed to have picked out a natural
>sh> kind, C, in the case of cognition: UNLIKE HEAT ("thermal states"),
>sh> mental states were supposed to be implementation-independent symbolic
>sh> states. (Pylyshyn, for example, refers to this natural kind C as
>sh> functions that transpire above the level of the virtual architecture,
>sh> rather than at the level of the physical hardware of the
>sh> implementation; THAT's what's supposed to distinguish the cognitive
>sh> from the ordinary physical).

rc> No. *All* formal computations are implementation-independent.

Again, there seems to be some equivocation here. I thought you had said you thought that cognition might be an implementation-DEpendent form of computation earlier. Or is there now "formal computation" and "computation simpliciter" to worry about? Entities seem to be multiplying and it sounds like it's all in the service of saving an increasingly vague if not vacuous thesis...

rc> So it
rc> cannot be implementation-independence that distinguishes the cognitive
rc> from the non-cognitive (now you see why I reminded us all that "all
rc> cognition is computation" does not mean "all computation is
rc> cognition"). There are many other phenomena whose best scientific
rc> account is on a computational level of abstraction (what goes on in
rc> IBM's, etc.; perhaps some economic phenomena, et al). So the fact
rc> that a phenomenon is best accounted for on the computational
rc> (implementation-independent) level does not mean that it is cognitive.
rc> It means that it is *computational*. The big claim is that cognition
rc> is a computational phenomenon in this sense.

No, just as I have indicated that I of course realize that "all cog is comp" does not imply "all comp is cog," I don't think that the emphasis on the implementation-independence of cognition was enough to uniquely characterize cognition, for there are of course plenty of other implementation-independent computational phenomena. It had more of a necessary- than a sufficient-condition flavor -- though of course necessity was not at issue at all. What Pylyshyn was saying was that mental states were unlike other kinds of physical states, and were like other kinds of computational states (including nonmental ones) in being IMPLEMENTATION-INDEPENDENT. That wasn't SUFFICIENT to make every computation mental, but it was sufficient to distance cognitive science from certain other forms of physicalism, in particular, the kind that looked for a hardware-level explanation of the mind.

Now I had asked for examples. IBM is a bad one, because it's a classical (approximation to a) UTM. Economic phenomena are bad examples too, for of course we can model economic phenomena (just as we can model solar systems), and all that means is that we can predict and explain them computationally. I would have conceded at once that you could predict and explain people and their thoughts computationally too. I just don't think the computational oracle actually THINKS in so doing, any more than the planetary oracle moves (or the virtual plane flies or the virtual sun is hot). "Economics" is such an abstract entity that I would not know what to do with that; it's not a concrete entity like a person or a set of planets. If you make it one, if you say you want to model "society" computationally, then I'll say it's the same as the solar system oracle. There are no people in the one, no planets in the other, because such things are not implementation-independent.


>sh> And it was on THIS VERY VIRTUE, the one
>sh> that made computationalism nonvacuous as a hypothesis, that it
>sh> foundered (because of Searle's argument, and the symbol grounding
>sh> problem).

rc> I accept that "implementation-dependent" (better: not purely formal)
rc> notions of computation are probably the way to go (I've argued as much
rc> on this list before), but I don't feel that Searle compels me to do
rc> this.

Fine, but I have no idea what I am committing myself to or denying if I accept or reject this new, "nonformal" form of computationalism. I doubt if Searle would know either. He has, for example, never denied the possibility of synthetic brains -- as long as they have the relevant "causal powers" of the real brain. A pure symbol-cruncher, he has shown, does NOT have the relevant causal powers. So if you tell him that all the systems that DO have that causal power are computationally equivalent, he'll just shrug, and say, fine, so they are, just as, perhaps, all stars are computationally equivalent in some way. The relevant thing is that they have the right causal powers AND IT'S NOT JUST COMPUTATION, otherwise the virtual version -- which is, don't forget, likewise computationally equivalent -- would have the causal powers too, and it doesn't. Now that you've disavowed UTMs, pure formal syntax and implementation-independence, this does not bother you, Ron; but just what, exactly, does it leave you with?

rc> The sociology of computer science certainly hasn't worked that
rc> way. It just so happens that embedded and embodied notions are
rc> essential to understand normal, ordinary computational systems like
rc> token rings, CPU's hooked to robot arms, etc. But there is a
rc> disagreement here: if computational theory goes embedded (rejects
rc> implementation-independence), that doesn't mean it is vacuous; just
rc> the opposite! It makes its range of application even more restricted.

Symbol grounding is not just "embedded" symbol crunchers with trivial add-on peripherals; but never mind. I do agree that T3 exerts constraints on ALL models (whether computational or, say, analog-transductive), constraints that T2, imagination, and virtual worlds alone do not. I've already declared that I'm ready to confer personhood on the winning T3 candidate, no matter WHAT's going on inside it. The only thing that has been ruled out, as far as I'm concerned, is a pure symbol cruncher.

But since you never advocated a pure symbol cruncher, you would have to say that Searle is right in advocating a full neuromolecular (T4) understanding of the brain, because the brain, like everything else, is a computational system, and if "C is C" is right, Searle will end up converging on the very same computational theory everyone else does.

My own guess is that going "embedded" or "implementation-dependent" amounts to conceding that the cognition is in the physics rather than just the computation. What shape that physics actually ends up taking -- whether it is just a matter of hooking up the right peripherals to a symbol cruncher in order to make the mental lights go on or (as I suspect) there's rather more to it than that -- is beside the point. The logical implication stands that without the (shall we call it) "computation-independent" physics -- the RIGHT (non-wildly disjunctive) physics -- there is no cognition, even if the computation is "right."

rc> My original claim (the negation of the one you made in your initial
rc> response to Yee) was that *even if* everything has a computational
rc> characterization, that does not make the "computation is cognition"
rc> claim vacuous. I have given the reasons above. Now if we have an
rc> implementation-dependent computational theory, that does not mean that
rc> not everything will have a computational characterization. It could
rc> just mean that tokens that were of the same computational type in the
rc> formal theory are now of distinct types in the embedded theory.
rc> Nevertheless, despite such ubiquity of computation, there might still
rc> be a natural computational kind which includes just those things which
rc> are cognizers. Or there might not.

Ron Chrisley

I don't know about types and tokens, but if there are two physical systems that are both implementations of the very same formal (computational) system and one of them is "right" and the other one is "wrong," then it sounds as if the formal (computational) story is either incorrect or incomplete. To resolve the ambiguity inherent in computationalism it is therefore not enough to point out, as Ron has done, that the "Cognition is Computation" thesis just claims that "Cognition is a KIND of Computation"; for what it really claims is that "Cognition is JUST a Kind of Computation." And it's that "JUST" that I think implementation-DEpendence then gives up. But once that's given up, of course, anything goes, including that the computational aspects of cognition, having already been conceded to be partial, turn out to be minimal or even irrelevant...

Stevan Harnad

--------------------------------------------------------------------

Date: Thu, 8 Jul 93 11:02:32 EDT
From: "Stevan Harnad"

rk> From: Robert.Kentridge@durham.ac.uk
rk> Subject: Re: "Is Cognition Computation?"
rk> Date: Wed, 7 Jul 1993 14:50:10 +0100 (BST)
rk>
rk> As I'm very interested in classifying physical systems according
rk> to their intrinsic computational properties I thought I'd offer a few
rk> comments on your recent exchange with Ron Chrisley.
rk>
rk> Given some criteria as to what constitutes a good computational
rk> description of systems (for example, ones in which the graph
rk> indeterminacy of the machine describing the computation is minimized)
rk> it is easy (in principle, although quite an effort in practice!) to
rk> produce computational descriptions of physical systems (e.g. the sun,
rk> a brain, a neuron, a neural network model). Crutchfield and Young
rk> describe an algorithm to do just this; from it we can produce
rk> stochastic symbolic computational descriptions of any system from
rk> which we can make a series of quantified observations over time. The
rk> details of the C&Y algorithm are irrelevant here; all we need to
rk> concentrate on is the fact that any dynamics can have a symbolic
rk> computational description.
rk>
rk> One problem we face in producing symbolic descriptions of systems is
rk> deciding what to measure when we prepare a time-series for our
rk> chosen algorithm. If I am producing computational descriptions of
rk> stars with the aim of discovering the common computational principles
rk> underlying starhood, should I measure time series of the luminosity of
rk> those stars or should I measure time series of the spatial positions
rk> of all of the elementary particles constituting those stars? If I
rk> choose the latter course then the computation inherent in luminosity
rk> dynamics might emerge as a feature of particle dynamics computation,
rk> but even so I might not recognize it. Luminosity dynamics may even be
rk> of so little predictive power in describing the long-term evolution of
rk> particle positions that its effects are omitted from the particle-
rk> dynamics-derived description. The problem with implementation-
rk> independent computation in the context of cognition is that it implies
rk> that a system has only one dynamics to measure and that this dynamics
rk> underlies cognition.
rk>
rk> A good reason to worry about the transduction of the external world in
rk> cognitive systems is that the nature of this transduction provides us
rk> with some clues as to which features of the system from which a
rk> computational description might be derived are of functional
rk> importance to cognition and which aren't. We might discover some
rk> relationship between cognition and computation if we investigate
rk> computational descriptions of the dynamics of those features. On the
rk> other hand, the intrinsic computation of functionally unimportant
rk> features of the system is unlikely to further our understanding of
rk> cognition. (If the system in which we believe cognition occurs is the
rk> head, then my bet would be that studying computational descriptions of
rk> hair-growth dynamics won't get us far!!)
rk>
rk> ps I've now got some data on symbolic machine reconstructions from
rk> biologically plausible network models; a tech report and/or preprints
rk> should be available soon.
rk>
rk> Dr. R.W. Kentridge          phone: +44 91 374 2621
rk> Psychology Dept.,           email: robert.kentridge@durham.ac.uk
rk> University of Durham,
rk> Durham DH1 3LE, U.K.
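[A toy sketch may make Kentridge's point concrete. This is NOT the Crutchfield & Young algorithm (which reconstructs epsilon-machines); it is only a minimal illustration, under my own simplifying assumptions, of how any quantified time series of observations can be turned into a stochastic symbolic description: bin the observations into discrete symbols, then estimate the transition probabilities between successive symbols. The function names and the binning scheme are hypothetical choices, not anything from the exchange above.]

```python
# Toy illustration (not the Crutchfield & Young algorithm itself):
# derive a stochastic symbolic description from a quantified time
# series by binning observations into symbols and estimating the
# probabilities of transitions between successive symbols.
from collections import Counter, defaultdict

def symbolize(series, n_bins=2):
    """Map each real-valued observation to a discrete symbol 0..n_bins-1."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0  # avoid division by zero on flat series
    return [min(int((x - lo) / width), n_bins - 1) for x in series]

def transition_probs(symbols):
    """Estimate P(next symbol | current symbol) from the symbol stream."""
    counts = defaultdict(Counter)
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}

# A noiseless alternating "dynamics": its symbolic description is a
# two-state machine that flips deterministically on every step.
series = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9]
syms = symbolize(series)
print(syms)                    # [0, 1, 0, 1, 0, 1]
print(transition_probs(syms))  # {0: {1: 1.0}, 1: {0: 1.0}}
```

Note how Kentridge's worry shows up even here: the description you recover depends entirely on what you chose to measure and how you chose to bin it.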

ON COMPUTATIONAL DESCRIBABILITY VS. ESSENTIAL COMPUTATIONALITY: Computationalism is not just the Church-Turing Thesis

What is at issue in the computationalist thesis that "Cognition IS (just a kind of) Computation" is not whether cognition is DESCRIBABLE by computation. That's conceded at once (by me, Searle, and anyone else who accepts some version of the Church-Turing Thesis). The question is whether it IS just computation. That's why implementation-independence is so critical.

When this is made perfectly explicit:

(1) Thinking IS just (implemented) computation

(2) Hence every physical implementation of the right computation will be thinking

then the troubles with this thesis become much clearer (as Searle's Argument and the Symbol Grounding Problem show). In a nutshell, the question becomes: Is a virtual mind really thinking? Is the Universal Turing Machine (UTM), the pure symbol manipulator, that LIKEWISE implements the same computation that describes, say, the brain, THINKING? For if it is not, then Cognition is NOT (just a kind of) Computation.

I think the problem lies squarely with the UNOBSERVABILITY of mental states, which are nevertheless real (see Harnad 1993). That's the only reason this question cannot be settled as trivially as it can in the case of flying and heat. No one would doubt that one could have a full computational DESCRIPTION of a plane or a sun, but no one would dream that the computer implementation of that description (a virtual plane or a virtual sun) actually flew or got hot. Hence no one would say something as absurd as that "Flying (Heating) IS (just a kind of) Computation." Observation alone shows that the critical property is completely missing from the UTM implementation, so it CAN'T be just computation.
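[The virtual-sun point can be made concrete with a sketch. The following, a minimal example of my own devising rather than anything from the discussion, is a complete (if crude) computational description of heat flow: an explicit finite-difference step for the 1D heat equation. The point at issue is precisely that running it yields only a VIRTUAL hot body: the description is perfectly good, but no part of the computer gets hot in virtue of executing it.]

```python
# A computational DESCRIPTION of heating: one explicit Euler update
# of the discretized 1D heat equation u_t = alpha * u_xx, with fixed
# (cold) boundaries. Executing it simulates heat spreading; nothing
# in the machine thereby heats up.
def heat_step(u, alpha=0.1):
    """Return the temperature profile after one finite-difference step."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

u = [0.0, 0.0, 100.0, 0.0, 0.0]   # a hot spot in the middle of a cold rod
for _ in range(3):
    u = heat_step(u)
print(u)  # the "virtual heat" spreads outward; the hardware stays cold
```

For flying and heating, observation settles the question trivially: the critical property is manifestly absent from the implementation of the description. The argument above is that only the unobservability of thinking blocks the same verdict there.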

With thinking (cognition) we equivocate on this, partly because (a) thinking is unobservable except to the thinker, and partly because we weasel out of even that one by helping ourselves to the possibility of (b) "unconscious thinking" -- which is then unobservable to ANYONE. (The trouble is that, until further notice, unconscious thoughts only occur in the heads of systems that are capable of conscious thoughts, so we are back to (a).) So in my view most of the tenacity of the "Cognition is Computation" thesis derives from the fact that it is not OBVIOUSLY false, as it is in the case of flying and heat, because cognition (or its absence) is unobservable -- to a 3rd person. Yet it IS observable to a 1st person, and this is where Searle's "periscope" comes in.

So, to summarize, computational DESCRIBABILITY is not at issue; essential computationality is. To put