At the ceremony Ron Chrisley introduced me and my work with some kind words and ended with a reference to the claim on my website that I tend to upset vice chancellors and other superior beings. After Ron, I had to make a short speech. I had prepared a few bullet points to be projected on the screen to remind me of what I wanted to say, but for some reason they never appeared, so I talked from memory. I remembered all the points except one, about computing education. Since that is a very important point, I thought I would retrospectively write out an expanded version of the acceptance speech here, with the omitted point included, below, as point 3, and other parts expanded and corrected, thanks to reminders from Sussex colleagues who have read and commented on this.
Sussex was committed to interdisciplinarity and gave me opportunities to meet and learn from many others including physicists, e.g. Tony Leggett (with whom I taught an 'Arts/Science' course for a mixed collection of first year students, and who recently won a Nobel Prize for physics), biologists (e.g. John Maynard Smith and Brian Goodwin), psychologists of several kinds (including Marie Jahoda, Stuart Sutherland and Keith Oatley), social scientists, e.g. Jennifer Platt and Donald Winch (who, by chance, was also awarded an honorary degree, the day before me), mathematicians, e.g. John Kingman who helped me formulate a new theory of the meaning of 'better', and many others.
In those days there was no pressure on young lecturers to get grants and publish journal articles. (My first grant did not come till about 13 years after my first job started. I did not start writing an article until I had a good idea, and I did not publish anything until the work had been suitably rounded off. And nobody hassled me to publish or get grants.)
So I had enormous freedom to continue learning by reading, meeting people, attending lectures and seminars outside my field, including the inspiring lectures by Max Clowes (pronounced 'clues') who arrived in 1969 as a reader in Artificial Intelligence and persuaded me to start thinking about Philosophy in a new way, by thinking about how to design a working mind (or fragments thereof) instead of merely arguing in the abstract about necessary and sufficient conditions as philosophers normally do. This led to my spending a year in Edinburgh in 1972-3 (thanks to Bernard Meltzer who obtained a grant to bring me to his Department of Computational Logic).
As a result of all that, my way of doing philosophy was completely transformed, as later reported in a book mentioned by Ron: The Computer Revolution in Philosophy: Philosophy, science and models of mind.
It is an international problem, as I found, for example, when I gave an invited talk in Bremen in June 2006. There is a dreadful situation world-wide: the mechanisms many governments use in order to decide how to allocate research funding now produce tremendous pressure on everyone to keep on publishing and getting grants. This makes it very difficult for a young academic to spend as much time learning as I did, after doing a PhD and getting a job: so people have to remain narrow. It would be too risky for a young lecturer to start reading and thinking about topics that may not lead to publishable articles in the near future.
Allocating funding on the basis of measurable targets produces pressure on people to meet the targets, instead of performing the services the nation needs, and doing the work humanity can most benefit from, as we are finding in schools, the National Health Service, universities, and probably other public service organisations. It also shifts motivation away from collaboration to competition, which is often highly counter-productive.
This is partly a consequence of using performance metrics to evaluate individuals and determine funding allocations -- as if doing research were like selling cars. A better model for choosing researchers to support is choosing a wife or a husband: deciding what is worth finding out is more like deciding whom to marry -- and opinions can justifiably differ.
You certainly would not wish to select a spouse on the basis of some government list of desirable features.
Those of you who are graduating today who become leading politicians, captains of industry, and senior university managers in future should do everything in your power to reverse this disastrous process, so as to allow deep and creative research to flourish on its own time scales. Here are some alternative ways of doing things:
The analysis is presented in the form of an open letter to my MP Lynne Jones, along with a collection of news items about the NHS/iSoft fiasco, and a collection of comments from leading academics in computer science and others with practical experience, all accessible from here.
I sincerely hope that future Prime Ministers and other national leaders will understand some of these arguments.
Another book on how computers are going to change our lives? Yes, but this is more about computing than about computers, and it is more about how our thoughts may be changed than about how housework and factory chores will be taken over by a new breed of slaves.

This sort of vision led us to develop new kinds of teaching that allowed students to explore ways of giving computers human-like capabilities, in order to deepen their understanding of those capabilities and to teach them to think creatively and analytically about complex structures and processes and how they interact. Our work led to the development of the Poplog system, a multi-language development environment for teaching and research, whose successful marketing helped to fund the growth of COGS in the early years (thanks to the genius of its chief architect, John Gibson, building on earlier work by Steve Hardy and Chris Mellish).
Thoughts can be changed in many ways. The invention of painting and drawing permitted new thoughts in the processes of creating and interpreting pictures. The invention of speaking and writing also permitted profound extensions of our abilities to think and communicate. Computing is a bit like the invention of paper (a new medium of expression) and the invention of writing (new symbolisms to be embedded in the medium) combined. But the writing is more important than the paper. And computing is more important than computers: programming languages, computational theories and concepts -- these are what computing is about, not transistors, logic gates or flashing lights. Computers are pieces of machinery which permit the development of computing as pencil and paper permit the development of writing. In both cases the physical form of the medium used is not very important, provided that it can perform the required functions.
Computing can change our ways of thinking about many things, mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper, insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions.
[Note added 11 Aug 2006:
A summary of some of what we did to support student-driven learning first in the Pop-11 system then later in Poplog is now available here as part of a contribution to opposition to patents for ideas about e-learning.
The teaching and research tools we developed are now freely available online at the Free Poplog web site. Now, as then, they can be used to help many people, including school children, learn to design, implement, test, debug, analyse, explain, compare and criticise working systems, instead of merely copying and rearranging what others have created, which is what many people use computers for.
This new mode of education also began to flourish in some schools with the spread of BBC micros. Many highly creative teachers inspired new adventurous and disciplined forms of learning in their pupils -- though many teachers had no idea what to do with computers because they had no suitable training.
ALAS THE DREAM COLLAPSED.
Politicians, parents, school teachers, and industrialists all started claiming that computers should be used to teach schoolkids how to use the tools that were being used in industry. This was a world-wide folly.
So instead of learning how to THINK, children all round the world now use the potentially most powerful educational medium that has ever existed merely for the mundane task of learning how to USE the packages that run on Windows on a PC, such as word processors, browsers, email tools, databases and spreadsheets --- most of which will be out of date by the time their own careers are launched.
As a result, many intelligent school leavers who have never encountered programming or artificial intelligence now don't see how computing could possibly be a university degree subject: they think it's like cooking -- you learn to use a computer as you learn to use an oven. I hope to show how wrong that is. But it will not be easy. Most people are now brainwashed into thinking that a computer by definition comes with Microsoft Windows on it, and the idea that people, including people like them, can actually design and modify the tools and packages that run on computers never enters their heads.
I was intrigued to hear a senior Microsoft person on the radio a couple of weeks ago lamenting the fact that there are so few people coming out of schools wanting to study computing, because they think it is cool to use computer systems but don't realise it is cool to create new ones. He claimed this was seriously damaging the economy. He did not mention why this is happening.
If some of the people now graduating can be made to understand this message, then perhaps when they are teachers or politicians or parents they will not make the same drastic mistake as was made by the previous generation.
Alas, this may now be irreversible, world-wide: a great tragedy of our time. Even if politicians recognise the mistake, it will take decades to produce enough teachers who have the competence to teach people to create working systems instead of merely using them.
I have one small hope regarding a way of reversing this trend. If it makes progress I'll add a note here later. But I am not very hopeful.
I've elaborated a little on these points in a paper for a conference on Grand Challenges in Computing Education in 2004. The paper is here.
Added June 2014
The Computing At School movement, which started in the UK in 2009, is now making a huge difference, but at present the potential for teaching AI in the spirit of Sussex in the 1970s and 1980s seems to have been mostly ignored.
This work, begun at Sussex, and continued since I moved to Birmingham, involves learning from psychologists, neuroscientists, biologists, computer scientists, software engineers, AI researchers and philosophers.
Recently I have realised that we can learn enormous amounts by looking at children with the mindset of an engineer asking:
Could I design something that achieved that?

Often it helps to look at videos rather than live children, because real life moves too fast, whereas a video can be viewed several times. Often you'll notice something important only on the third viewing, and that will generate questions that cause you to go on noticing new things on subsequent viewings of the same video and others.
An example video of an 11 month old child feeding his belly, his legs, the carpet and his mind by eating and playing with yogurt using a spoon is discussed so as to illustrate the point in this recent poster presentation (20 slides PDF) at an AI conference in Boston. For example at a certain stage he does not realise that if you wish to transfer yogurt from the tub to your leg, it is not enough to load the spoon and then press it on your leg. At a certain age the child's ontology does not include the idea that the bowl of the spoon prevents the transfer unless the spoon is rotated. There are many other examples. A major challenge in AI, which, for all I know, may take decades to solve, is explaining those learning processes in sufficient detail to allow us to design robots that can learn in similar ways through creative play and exploration. (People who are worried about what robots might do to us may find these notes on Asimov's Laws of Robotics useful.)
Many people are doing similar work with the practical goal of trying to make smart useful new machines. My personal main goal is finding out more about humans and other animals and especially about the complex tradeoffs between knowledge and skills produced by biological evolution and those produced through individual learning. To understand more about what we are, we need to understand a lot better how we actually work. Apart from its intrinsic interest, many practical benefits could follow including much better forms of teaching which do not turn bright kids off mathematics. There could also be applications in counselling and therapy. But that's just sugar on the strawberries: the value of deeper understanding of how we work does not need to be based on practical applications.
Alas, many very bright potential contributors to such research have been steered away by current forms of computing education in schools (and some universities), though fortunately a few do realise the mistake and use the fleeting opportunities provided by conversion master's degrees and interdisciplinary research projects to compensate partly for the failures of their schooling.
The results of such processes are not merely products of evolution, for they depend on the environment, but the processes that produce those products depend heavily on mechanisms provided by evolution.
People sometimes ask:
What percentage of us comes from the genes and what percentage from the environment?

That's a silly question, because we are not made of separate measurable bits of stuff of two kinds. The important question is
How do the environment and genetically determined mechanisms interact with each other within an individual in such a way as to produce complex patterns of learning and development?

Asking how to divide up the credit for the results is just silly. Asking what the various contributions are to the processes producing those results, and asking how they work, is not silly: it is a deep and mostly unsolved problem. Solving it requires deep new explanatory theories of development and learning.
All of this is an illustration of my main final point: biological processes involve vast amounts of information-processing of many different kinds, including the processes controlling the development of an oak tree from an acorn or a giraffe from a fertilised egg, the digestion of food and distribution of components to where they are needed in the body, the detection and repair of injured or malfunctioning components all round the body, the operations of the immune system, the control processes in ecosystems.
These processes involve many different sorts of information-processing mechanisms, all produced over millions of years by biological evolution, and most of them still not understood.
KINDS OF MACHINES
Up till the last century people mostly thought that machines were things that manipulate matter and energy, e.g. diggers and cranes that move large amounts of earth or pre-constructed parts of buildings, or vehicles that move people; and many kinds of engines that convert chemical energy, wind energy, water energy, into mechanical energy, or which convert mechanical energy or chemical energy into electrical energy, and so on.
During the last century we gradually started to understand a third kind of machine: a machine that manipulates information, by acquiring, storing, transforming, analysing, combining, matching and using information. As with matter-manipulation and energy-manipulation, we learnt how to build new information-manipulating machines, namely computers of many types, and those machines were soon used repeatedly to help us produce the next generation of information-processing machines, so that the process accelerated unbelievably.
Despite all those technical advances, we currently understand only a tiny subset of the kinds of information-processing machines that exist on earth, because the vast majority are not ones we designed and built: they were 'designed' and built by evolution long before we existed. We mostly know very little about them, including human brains, despite all the advances in new techniques for peering into them, which mostly don't show us what brains do or how they do it, but merely tell us where some of the action is.
On a different scale there are information-processing systems that exist in collections of organisms, including swarming insects, various kinds of symbiotic systems, human societies and ecosystems. The processes studied by economists, historians, sociologists, anthropologists and ecologists may include some rearrangements of matter and energy, and even money, but above all else they involve the acquisition, transfer and use of information of many kinds, including the latest tunes downloaded by technologically extended kids.
So my prediction for 3006 is that by then Informatics, the science that studies information-processing systems of all sorts, will have expanded far beyond the study and use of computers and will have discovered far more about biological and social information-processing systems, though there could still be many unsolved problems about how they work even 1000 years from now.
If all that is correct, then in 3006 (and maybe even by 2106) Informatics departments will subsume most of biology, neuroscience, psychology, ecology, and the social sciences.
Maybe some of that new understanding will trickle down to schools, and maybe by then schools will no longer confuse the processes of stretching young minds to the full with the processes of training industry fodder.
Outside the Dome: