Posted: 8 Jun 1997
Newsgroups: comp.ai,comp.ai.philosophy,sci.cognitive
References: <51626.robin073@maroon.tc.umn.edu>
From: A.Sloman@cs.bham.ac.uk (Aaron Sloman)
Subject: Re: "Stuck" research (AI isn't stuck)

"Alan J. Robinson" writes:

[Original was posted in comp.ai. I've added comp.ai.philosophy and sci.cognitive.]

> Date: Fri, 30 May 97 10:39:57 CST
> Organization: University of Minnesota
>
> Somewhat of an aside, but if it is any consolation to AI researchers,
> theirs is not the only field in the general area of the basic and
> applied behavioral and brain sciences that is "stuck".
>
> A couple of the best examples are the psychiatric disorders, in
> particular schizophrenia and manic depression.
> ....
> .... [lots of stuff snipped]
> ....
> Alan J. Robinson
> robin073@maroon.tc.umn.edu
> Golden Hind International
> Artificial Intelligence Research

I don't think research on schizophrenia is stuck. I have recently attended some lectures and a workshop from which I get the impression that knowledge is growing rapidly, and that far more is known now than 20 or 30 years ago, e.g. about how the disease progresses from early childhood (where there now appear to be quite good predictors) through various manifestations at different ages.

It seems clear that it involves many different aspects of the whole information-processing architecture, since the symptoms can affect a wide variety of behavioural and mental phenomena, e.g. the mode of play in early childhood, the syntactic forms used by older children in their essays, bizarre thought processes and experiences later in life, etc. (Please don't ask me for further details: I am merely recounting what I remember from lectures I've attended as a sideline over the last nine months or so.)

Clues are also beginning to emerge from fMRI data showing functional differences in the brains of schizophrenics performing various tasks, though such data are still necessarily pretty crude.
A lot of the mechanisms seem to involve the pharmacology of the brain: a fast-growing research field. It seems that more and more types of chemicals and chemical processes are being discovered that play an important role in "higher level" brain functioning. (This should not be surprising if you think about the effects of alcohol, hallucinogenic drugs, pain killers, hormones, etc.)

When a problem is extremely difficult, the fact that it has not been solved after a hundred years of research, or even a thousand years, does not mean that the research is stuck. (But I note that Alan used quotation marks.)

Likewise, I don't for a minute agree with anyone who says that AI is stuck. I suspect these comments mainly come from people who don't go to AI conferences, who don't read AI journals, who don't visit AI labs, and who don't try to do AI, or only try to do it in a very limited way. (In addition to the bigots who simply want the whole world to switch to THEIR theory or technique, and who waste time and energy disparaging everything else instead of trying to synthesise what's good in different approaches and techniques.)

There is a huge amount of work going on in AI chipping away, with varying degrees of success, at large numbers of sub-problems to do with vision, learning, memory, robotics, problem solving, planning, communication, cooperative problem solving, pattern recognition, motor control, rule induction, data mining, etc. (Talk to the companies that are successfully selling services or products based on these techniques.)

Some of the work is not labelled "AI", but who cares about that? Some of it is never announced because it simply gets adopted into larger systems, e.g. plant control systems, configuration management tools, office management systems, interfaces to software systems or machines, etc. (This is stuff I've picked up from talking to people in industry.
To name one example: Integral Solutions Ltd in the UK has a data-mining product combining a variety of AI techniques, which is now selling all round the world.) Algebraic and other mathematical software tools which are widely used as a matter of course can be traced back to AI research (e.g. at MIT) in the 60s and 70s.

One *real*, often-noted problem is that AI has become fragmented. This is partly a result of the success of the work in the various fragments (vision, learning, NLP, planning, neural nets, alife, evolutionary computation, etc., etc.). There is so much to absorb in each of the sub-fields that hardly anyone has the time, the educational background or the breadth of vision to take on the task of integrating the information, a task we are addressing in a limited way (using the "broad and shallow" approach) here in Birmingham.

Another problem is that there are different views of the goals of AI. It's clear that the vision of the main founders of AI (including people like Turing, Minsky, McCarthy, Simon, and others) was much broader than the vision of most people currently working in AI labs (who are often under terrible pressure to publish or perish, and who often have only a very narrow educational background, e.g. mathematics and computer science).

I characterise the broader vision of AI as the exploration of design space and niche space, the mappings between them, and their dynamics (i.e. the study of trajectories in niche space and design space). A similar view of AI was presented by Randy Davis (from MIT) in his presidential address to the AAAI conference in Portland, August 1996, so this isn't just my quirky vision.

The spaces referred to include designs and niches of both natural and artificial systems (i.e.
the "artificial" in AI is misleading, and always has been: even the inventor of the name, John McCarthy, spends a lot of time thinking about and trying to understand natural intelligence as part of his work on AI, as do, or did, Turing, Minsky, Simon and others).

There's no sharp boundary between designs for systems that are intelligent and systems that are not. So the "intelligent" in "AI" is also misleading. E.g. people in AI labs have tried to model various kinds of insect abilities (a spider's ability to build a web, an ant's ability to climb over an obstacle), and who cares whether these should be called "intelligent" or not?

On this view AI (when done properly) synthesises a lot of psychology, theoretical biology, ethology, brain science, software engineering, computer science, and philosophy. Among its applications should be not only the creation of useful machines but also the provision of powerful explanatory ideas that can help us understand schizophrenia, emotional and motivational disorders, why educational programmes fail, etc. Cognitive science is the subset of AI, thus construed, that studies human (or human and other animal) designs and niches.

To see why AI is not stuck you have to see how progress is being made in the whole picture: it's slow but real, with spurts coming in different places at different times.

Finally, I should note that it is just silly to restrict AI to using *computational* processes and mechanisms, for two reasons:

(a) The notion of computation is very unclear. To be more precise: the formal, mathematical notion of computation is very clear, but it is a notion concerned only with structures, not with mechanisms and causation; the notion of what counts as a computational *mechanism* is very unclear.

(b) If new, more powerful mechanisms turn out to be useful (e.g. quantum computers, chemical mechanisms, and even mechanical devices in robots), then AI will use them.
Restricting AI to what we now understand by computation would be as silly as restricting physics 200 years ago to what could be expressed using the mathematics available at that time. Computation is just a tool, and tools can change. It is silly to define a scientific activity by the tools it happens to use at a particular time, even if a tool provided a tremendous increase in power when it became available.

In fact all sorts of additional tools have been used in AI robotics labs for as long as I can remember, e.g. various kinds of transducers, hardware/software tradeoffs in compliant wrists and other mechanical designs, etc. No doubt future AI robotics research will look at quantum computers, chemical mechanisms, etc.

Likewise, restricting AI to what can be done using logic is silly: there are many forms of representation, with different strengths and weaknesses for different purposes. (For more on this see my paper "Beyond Turing Equivalence", in P.J.R. Millican and A. Clark (eds), Machines and Thought: The Legacy of Alan Turing (Vol. I), The Clarendon Press, Oxford, 1996, pp. 179--219. It was originally presented at the 1990 Turing colloquium, and is also available in the Birmingham ftp directory; see below.)

Cheers.
Aaron

Papers expanding the ideas presented here are in the Cognition and Affect FTP directory at the University of Birmingham:
    ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/
The files 0-INDEX and 0-INDEX.html give a full list, with abstracts. A recent addition, still in draft form, presents some ideas about the evolution of consciousness, consistent with the above framework.
===