From Aaron Mon Jan 29 00:25:10 GMT 1996
Newsgroups: comp.ai,comp.ai.philosophy
References: <4dtge9$566@spool.cs.wisc.edu> <4e2th9$lkm@cantaloupe.srv.cs.cmu.edu> <1996Jan23.184658.3864@media.mit.edu>
Subject: Re: who first used "scruffy" and "neat"?

[Some thoughts and reminiscences]

minsky@media.mit.edu (Marvin Minsky) writes:
> Date: Tue, 23 Jan 1996 18:46:58 GMT
> Organization: MIT Media Laboratory
>
> In article <4e2th9$lkm@cantaloupe.srv.cs.cmu.edu> Lonnie Chrisman writes:
> >so@brownie.cs.wisc.edu (Bryan So) wrote:

[BS]
> >>A question of curiosity. Who first used the terms "scruffy" and "neat"?

See below.

> >>And in what document? How about "strong" and "weak"?

I think it was John Searle who first described a (rather confused)
distinction between Strong AI and Weak AI in his 1980 paper:

    John R. Searle, 'Minds, Brains, and Programs', The Behavioral and
    Brain Sciences, 3(3), 1980,

though before that there had been (for a long time) discussion of the
differences between trying to produce *simulations* of human behaviour
and trying to produce machines with *their own* mental states. Before
Searle introduced those terms, people used to talk about the difference
between "simulation" and "replication" of mental processes, I seem to
recall. The general idea is pretty old.

It also marked a division between different kinds of AI researchers:

1. Those who merely wished to produce machines that could do useful
   things which previously only people (and other animals) could do,
   i.e. simulating intelligence. (AI as engineering)

2. Those who wished to replicate (and explain) mental states and
   processes, or at least wished to discover whether and how it might
   be done (which is NOT the same as wanting to do it). (AI as science
   and as philosophy)

There was a third group (strongest at CMU I think, led by Simon and
Newell):

3. Those who wanted to *model* and explain the internal cognitive
   processes (i.e. not just simulate the external behaviour) but
   without necessarily claiming that the processes thus modelled were
   replicated. (AI as psychology)

John Haugeland used different terminology for talking about some of the
options (e.g. in his introduction to the book Mind Design) when he
asked whether computers, or their contents, could have "original
intentionality", like the mental states of people and other animals, or
only "derivative intentionality", like books and records in filing
cabinets, which had meaning only because people interpreted them as
having meaning.

Dennett (see Brainstorms, 1978) appeared to claim that this was not a
question of FACT, but something to be settled by adopting "the
intentional stance" towards the machines in question and finding out
whether that proved more convenient than other stances. (What's
convenient can depend on your purposes.)

This later became (misleadingly) known as the "symbol grounding"
problem after Harnad introduced the latter phrase (misleading because
it starts by *presupposing* that meaning needs some sort of
"grounding", a view I have previously attacked in a posting to
comp.ai.philosophy, with which some people disagreed!).

Searle (in 1980, and in his BBC Reith lectures a few years later) gave
all this a confusing twist because of his insistence that Strong AI was
committed to getting semantics out of pure syntax, which came as a
great surprise to many people working in AI, e.g. those building robots
with TV cameras, motors, etc. I.e. they used a lot more than pure
syntax, and would not dream of relying only on syntax.
[LC, I think, wrote:]
> >Since I don't see a response yet, I'll take a stab. The earliest use of
> >"scruffy" and "neat" that comes to my mind was in David Chapman's "Planning
> >for Conjunctive Goals", Artificial Intelligence 32:333-377, 1987. "Weak"
> >evidence for this being the earliest use is that he does not cite any earlier
> >use of the terms, but perhaps someone else will correct me and give an
> >earlier citation.

[MM]
> I think it first appeared in a paper by Robert Abelson called
> something like "Constraint, Construal and Cognitive Science". A long
> time ago, but I don't recall the date.

I remember hearing Abelson give a talk about this at the 1981
conference of the Cognitive Science Society (at Berkeley, I think). I
think he then attributed the distinction to Roger Schank.

After an intellectual historian gets to work, many of these things
could turn out to have a much longer history than anyone remembers.
E.g. in the 50s (or earlier) there was much discussion of the
difference between mathematical proofs that were "perspicuous" and
those that were messy and hard to follow (see Wittgenstein's Remarks on
the Foundations of Mathematics). I don't know if the words "neat" and
"scruffy" were used in that context, but they might have been!
(Compare the recent proof of Fermat's last theorem?) I think similar
contrasts were often made regarding engineering designs with varying
degrees of economy, elegance and generality. I.e. the neat/scruffy
distinction may be much older than the labels.

E.g. during most of the 1970s there was an evident and conscious
difference of approach (neat vs scruffy) between most of the work done
at two of the leading AI labs: people at Stanford University (and SRI?)
(inspired by McCarthy and Nilsson, among others?) tended to make a lot
of use of logic, theorem provers and general-purpose methods (e.g.
logic-based planners), whereas work on AI at MIT (led by Minsky and
Papert in those days) tended to be characterised by the notion that
clean and general methods of representation and general-purpose
algorithms could not work, so that a lot of domain-specific knowledge
and know-how and representational apparatus was required. (There were
always exceptions at both places, I guess. E.g. Schank came from
Stanford (before he went to Yale) and he backed the scruffies. Marr, at
MIT in the late 70s, tried to make vision neat, but that turned out not
to work.)

The contrast took a strange twist in the 1980s with the rebirth of
connectionism, which made use of much neat mathematics as a basis for
designing neural networks, whose actual behaviour on non-trivial
problems turned out to be pretty scruffy and hard to understand.

A personal note: I like, and wish for, neat theories and neat systems,
but, alas, I think there's a neat theory to explain why ACTUAL working
intelligent systems will necessarily have a lot of scruffiness, because
of:

(a) the intractability of detecting inconsistencies in complex
    information stores,

(b) the inevitability of errors in a complex and changing and only
    partly perceived environment,

(c) the intractability of the task of working out exactly what to undo
    when mistakes or inconsistencies are discovered, etc.
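A minimal sketch of point (c), assuming a toy fact store of my own
invention (the names FactStore, derive_and_cache and retract are
hypothetical, chosen only for illustration, not taken from any of the
systems discussed above):

# A toy fact store that caches derived conclusions for speed.
# It illustrates, very schematically, why undoing one mistaken
# premise is hard once its consequences have been cached.

class FactStore:
    def __init__(self):
        self.premises = set()   # facts asserted directly (e.g. percepts)
        self.cache = {}         # derived fact -> premises it was derived from

    def assert_premise(self, fact):
        self.premises.add(fact)

    def derive_and_cache(self, new_fact, used_facts):
        # Cache a conclusion so it need not be re-derived later.
        # In a real-time system this happens constantly, for speed.
        self.cache[new_fact] = set(used_facts)

    def retract(self, premise):
        # Undoing one mistaken premise: only conclusions whose recorded
        # support mentions it directly can be found and removed.
        # Conclusions derived indirectly, via other cached conclusions,
        # keep no link back to the premise, so they silently survive as
        # stale items of misinformation scattered around the store.
        self.premises.discard(premise)
        for fact, support in list(self.cache.items()):
            if premise in support:
                del self.cache[fact]


store = FactStore()
store.assert_premise("door_is_open")
store.derive_and_cache("can_exit_via_door", {"door_is_open"})
# A second-level conclusion, cached in terms of the first one:
store.derive_and_cache("no_need_to_find_key", {"can_exit_via_door"})

store.retract("door_is_open")   # the percept turns out to be wrong
print(sorted(store.cache))      # ['no_need_to_find_key'] -- stale, yet still trusted

Recording full dependency chains (as truth-maintenance systems do)
would close this particular gap, but the bookkeeping itself grows with
everything ever derived, which is the kind of intractability at issue
here.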
(This partly follows from the fact that the general need for speed in a
real-time system leads to lots of caching of results of computations
and derivations, for future use, so that mistaken theories and percepts
can lead to LARGE numbers of incorrect items of information scattered
around the system, which are then difficult to detect and undo, except
in a piecemeal fashion as and when they cause trouble.)

The inevitable consequence of this, except in infinitely fast
brains/computers, is steadily growing scruffiness throughout much of
one's life. And the same will be true for intelligent robots. I think
that's one reason why we are not immortal: it's easier to make new
models with relatively clean sheets and start again every now and again
than to cope with patching an ever-increasing mess.

Alan Bundy and I had a debate about neat and scruffy AI at a conference
in 1989, published in the conference proceedings:

    Evolving Knowledge in Natural Science and Artificial Intelligence,
    eds. J. E. Tiles, G. T. McKee, G. C. Dean, London: Pitman, 1990.

Alan was more optimistic about the prospects for neatness than I was.
What may have been a nearly final draft of my contribution can be found
in:

    http://www.cs.bham.ac.uk/~axs/misc/scruffy.ai.text

Aaron

---

From article: 28146 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy
Message-ID: <4elob8$dqs@usenet.srv.cis.pitt.edu>
References: <4dtge9$566@spool.cs.wisc.edu> <4e2th9$lkm@cantaloupe.srv.cs.cmu.edu> <1996Jan23.184658.3864@media.mit.edu> <4eh44k$jf3@percy.cs.bham.ac.uk>
Date: 30 Jan 1996 18:34:16 GMT
Organization: University of Pittsburgh
Subject: Re: who first used "scruffy" and "neat"?
From: andersw+@pitt.edu (Anders N Weinstein)

In article <4eh44k$jf3@percy.cs.bham.ac.uk>, Aaron Sloman wrote:
>
>Searle (in 1980, and in his BBC Reith lectures a few years later)
>gave all this a confusing twist because of his insistence that
>Strong AI was committed to getting semantics out of pure syntax,
>which was a great surprise to many people working in AI, e.g.
>building robots with TV cameras, motors, etc. I.e. they used a lot
>more than pure syntax, and would not dream of relying only on
>syntax.

Um, isn't this just "syntax" (computation) plus "grounding" (causal
embedding at the input and output sides)? If so, I don't understand
your point against Searle, for two reasons.

(1) You pooh-poohed the need for grounding earlier in your note. Why?
Although I do not myself accept any form of computationalism as a
theory of the nature of ordinary-language mental states (belief,
intention, etc.), I do think that the only version worth talking about
would be a grounded version. Only causal embedding can determine any
reference to anything in the world, which is what original
intentionality requires. I had thought from the prominence of the
"control system" concept in your writing that you would be sympathetic
to this line of thought, since precisely the difference between a
"control system" and an arbitrary computer program is that the former
is causally embedded, i.e. actually controlling something non-symbolic.

(2) In any case, Searle's argument does squarely address the idea of
tacking on real causal embedding at the periphery of the computation
(the Robot Reply). It seems to me that if your surprised AI workers
believe any version of computationalism as a theory of the nature of
intentional states, then the Searle argument is relevant.
Of course if they don't, then I suppose they are, like most AI workers, practicing "weak" (merely technological) AI as Searle understands it. Obviously Searle is only interested in the philosophical thesis -- that thought and computation are "radically the same" (Haugeland) -- which he took to be the official party line of a certain movement. I think, in fairness to Searle, that many philosophically inclined AI workers and cognitive scientists really do believe this.