Some uses of information are very simple, such as the use of information by a thermostat turning a heater on or off, or a Watt governor controlling the speed of a steam engine. Note the subtle difference between the uses of energy and the uses of information in such devices: the information is used to control how much energy is used at any time. Other uses are far more complex, such as the use of information in a fertilised egg to produce an ant or an antelope. Use of information about how to add numbers, built into a calculator, and information about which numbers to add, provided by a user, is closer to the simple cases, whereas the uses of information in brains when humans add numbers mentally are much more complex.
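The thermostat case can be sketched in a few lines of code. This is a hypothetical illustration (the function name, the numbers and the hysteresis band are all invented), but it shows the key point: a small amount of information (one temperature reading and a target) controls how much energy is used.

```python
def thermostat_step(current_temp, target_temp, heater_on, hysteresis=0.5):
    """Decide whether the heater should be on, given one temperature reading.

    The only 'information' used is the reading and the target; the decision
    controls how much energy the heater consumes. The hysteresis band stops
    the heater switching rapidly on and off near the target.
    """
    if current_temp < target_temp - hysteresis:
        return True          # too cold: use energy
    if current_temp > target_temp + hysteresis:
        return False         # warm enough: save energy
    return heater_on         # inside the band: keep the previous state

# A short simulated run: the room warms while the heater is on, cools otherwise.
temp, heater = 17.0, False
for _ in range(6):
    heater = thermostat_step(temp, target_temp=20.0, heater_on=heater)
    temp += 0.8 if heater else -0.3
```

Note how little information the device needs: unlike the fertilised egg, its entire repertoire is one comparison repeated forever.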
Some of the mechanisms are products of recent technology, for instance, the physical computers, the computer networks, and the virtual machines that run on physical computers (like operating systems such as Android, Linux, MacOs and Windows). Others are ancient products of biological evolution including brains, minds, ecosystems and socio-economic systems. All of these are concerned with information -- either as part of what controls the processes, or as products of the processes. The information may be used immediately, stored for later use, communicated to other information users, or some combination of these.
Because of the broad applicability of its concepts, theories, and techniques, computer science increasingly overlaps with other disciplines, in the same sort of way as mathematics has always done.
Information, expressed as computer programs and in other forms, can be used to control behaviours of man-made machines. Information also controls living systems or parts of living systems, on many scales, including regulation of processes within individual cells, and planning a journey to a feeding site. Information can be available for use without ever actually being used by anyone or anything and without being intended as a source of information. For example, every structured object, whether a pebble on a beach, a portion of the night sky, a tree, or an abstract mathematical structure, such as a perfect cube or a collection of numbers, can be a source of information about the object; and computer science includes the study of possible ways in which such information can be acquired, represented, manipulated and used.
Computer science includes the study and use of both artificially created information structures and naturally available information. In particular it investigates how machines can acquire and use such information and, in collaboration with other disciplines, investigates how organisms (including humans) use information, for example in perception, formation of goals, making plans to achieve goals, controlling behaviours in accordance with such plans, and communicating with others.
In both man-made machines and organisms there are structures (simple or complex, physical or abstract, entities) that are constructed, examined, manipulated, or transformed and collections of instructions specifying what to do. This has led some to propose slogans similar to:
Computation = Data + Control, or Data + Algorithms

However, individual instructions and whole programs are themselves structures that can be operated on, e.g. when they are constructed, or when they are investigated in order to understand design flaws, or when they are modified to remove errors or to extend what the programs can do. So computer science includes investigation of programs that create, debug or modify programs. This can include altering the algorithms, altering how data-structures are used to encode information, and altering how different parts of a complex system are put together (the information-processing architecture).
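The point that programs are themselves structures that other programs can operate on can be sketched in Python, whose standard ast module exposes a program as a data structure. The example program and the mechanical 'repair' performed on it are invented purely for illustration:

```python
import ast

# A tiny program, held as a piece of data (a string).
source = "def double(x):\n    return x + x\n"

# Parse it into a tree structure that other code can inspect and modify.
tree = ast.parse(source)

# Modify the program mechanically: replace every addition with multiplication.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()

# Turn the modified structure back into a runnable program and execute it.
namespace = {}
exec(compile(tree, filename="<edited>", mode="exec"), namespace)
```

After the transformation the function computes x * x instead of x + x: one program has examined and altered another, exactly the capability the slogan "Data + Algorithms" leaves out.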
When did it start?
Computer Science as we now know it was born in the first half of the 20th century. But it has much older roots: in ancient mathematics (including algorithms), in logic (including powerful forms of representation and reasoning), in philosophy (addressing hard questions about knowledge, reasoning, mind and life), and in various forms of engineering, including mechanical calculators, musical boxes, automated weaving machinery such as the Jacquard loom (1801), and machines for sorting statistical information encoded on 'punched cards', used for the United States census in 1890 and for many business applications before computers as we now know them existed, or were even thought of. Unlike modern general purpose computers, those machines were restricted to following instructions for particular sorts of tasks.
The ideas of Charles Babbage (1791 - 1871), who designed and partly built what he called his "Analytical Engine", came very close to what we now understand by a general purpose stored program computer. Moreover his collaborator Ada Lovelace, who understood many of the implications of the design, seems to have been the first programmer, if we discount the ancient mathematicians who developed algorithms to be executed by human computers! Although earlier philosophers had asked questions about whether machines can have minds, she seems to have been the first to raise such philosophical questions about what programmable computers can do.
Who started it?
In the 20th Century, computer science ramped up with increasing speed, building on theoretical work by Gottlob Frege, Bertrand Russell, Jacques Herbrand, Kurt Gödel, Alonzo Church, Emil Post, Alan Turing, Konrad Zuse, John von Neumann, Claude Shannon, and many others. Much of that work was concerned with development of mathematical models of computation and investigation of properties of the models. For example, several of the models that were defined differently turned out to have equivalent problem-solving powers. This raised the question whether new, possibly still unknown mechanisms, for example biological brain mechanisms, might have greater powers.
Comparing powers of natural and artificial computers
A recent variant of that question is whether the chemical computational mechanisms that are essential for life, including construction of brains, and were used by biological evolution to produce millions of different life forms, including microbes and humans, have important features that digital computers and neural computers both lack.
Most of computer science is now concerned with "general purpose" computers, machines that differ in the details of their construction but can all, if suitably programmed, perform the same range of computations (although they may differ in their speed, their storage capacity, and their attached peripheral devices, e.g. screens, video cameras, microphones, speakers, joysticks, touch-pads, storage devices, etc.).
There are also special purpose computers that perform restricted tasks and in some cases do so more cheaply or efficiently than the general purpose computers.
Digital (discrete, general purpose) and analog (continuous, special purpose)
General purpose computers are all discrete, or digital, i.e. based on switches that can be in discrete states, with discontinuous changes (e.g. between "on" and "off" states). Different combinations of switch states can be interpreted by a central processor as expressing different instructions. The generality comes from the ability of one physical machine to be given many different sets of instructions that, when run, perform a wide variety of tasks. Not all digital computers are general purpose, however, as some are hard-wired to perform a fixed set of tasks (e.g. interpreting programs in a fixed 'machine language').
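How a processor can read combinations of switch states as instructions can be illustrated with a toy interpreter. The 4-bit 'machine language' below is entirely invented for this sketch and corresponds to no real processor:

```python
# A toy processor: each instruction is a 4-bit pattern. The first two bits
# select an operation, the last two bits give a small operand.
OPS = {0b00: "LOAD", 0b01: "ADD", 0b10: "MUL", 0b11: "HALT"}

def run(program):
    """Interpret a list of 4-bit instruction words, returning the accumulator."""
    acc = 0
    for word in program:
        op, arg = OPS[(word >> 2) & 0b11], word & 0b11
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "HALT":
            break
    return acc

# The same switch patterns mean different things depending on position:
# 0b0011 is LOAD 3, 0b0110 is ADD 2, 0b1010 is MUL 2, 0b1100 is HALT.
result = run([0b0011, 0b0110, 0b1010, 0b1100])   # computes (3 + 2) * 2
```

Generality comes from the interpreter, not the switches: feeding the same machine a different list of bit patterns makes it perform a different task.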
In contrast, many special purpose analog computers are based on continuously varying physical states, such as voltages, currents, electrical resistance, or pressures and flow of fluids. Discreteness can ensure total predictability of a computer (while it works), and that predictability can be of great importance for users. But there are significant costs associated with synchronising large numbers of switches, and research on analog computers continues, although they are not included in all computer science courses.
From theory to practical engineering
Early computer science research was mainly theoretical, but during and after World War 2 electronic computers became increasingly important and made many new practical applications possible, including the famous code-breaking work in Bletchley Park. Advances in physics, materials science, and electrical engineering, and later on computer-controlled design and manufacturing, made it possible to develop ever smaller, cheaper and more powerful computing machines, with ever-increasing complexity, and with much lower energy requirements than their predecessors.
Challenging requirements of large businesses, military organisations and research laboratories also led to development of networks linking many computers together, performing a variety of different functions collaboratively, often distributed across continents. Geographical separation of components of an integrated system raises new theoretical and practical problems for computer science.
While theoretical computer science and computer systems engineering, along with their applications, all expanded, more and more sciences and industries used computing machines and computing techniques for doing calculations, making predictions, designing and controlling complex systems, interpreting complex data, solving mathematical problems, finding mathematical proofs, and developing new engineering applications -- including the internet and world-wide web. Not all of them used digital computers. For some applications, electrical or mechanical continuous, or hybrid, control mechanisms can be effective, like the Watt centrifugal governor, many temperature controllers, and self-aligning windmills, e.g. with large vanes to capture physical energy from wind, and small vanes to acquire and use information about wind direction.
Influence on other sciences
As knowledge about computation spread, more and more sciences that had been established earlier began to absorb ideas from computer science, including mathematics, philosophy, psychology, neuroscience, biology, physics and chemistry. At first these applications were restricted to processing records of data from experiments or surveys, and in some cases controlling machines performing scientific experiments, but gradually it was realised that computer programs could be used to formulate powerful explanatory theories about the workings of mechanisms that could not be directly observed or measured, for example theories about how brains understand language or process visual information, or theories about unobservable processes inside stars or theories about how tornadoes form.
As a result, there have been increased opportunities for students in many scientific and engineering disciplines to learn to use computers, and many of them learn to use a programming language (or program development environment) to express algorithms composed of instructions telling a computer what to do with some starting information -- which may include a problem statement, a collection of numbers, a collection of symbolic records, a collection of images, speech recordings, or other information structures.
One of the major scientific successes was the Human Genome Project, which used computer controlled laboratory procedures and computer-based analysis of the results: a use of artificial information processing systems to help us understand natural information processing systems: human reproductive mechanisms.
Unfortunately the kind of programming education offered to students in non-computing disciplines often produces narrowly focused technical expertise without providing a deep understanding of the scope of computational systems and problems, or of the variety of types of programming languages and the reasons for their existence, topics that are part of a balanced computer science education.
Computational models of living systems
Relations between inputs and outputs (sensory and motor signals) cannot summarise the complex internal computations involved in animal perception, learning, reasoning, planning, problem-solving, language learning, and socialisation. We need to build working models to test psychological theories with deep explanatory power.
Biological theories about how organisms reproduce, grow, acquire new capabilities, fight infections, repair damage, etc. are all concerned with information and the use of information in control mechanisms of many kinds.
For these reasons, computational ideas have become increasingly important in the 21st century, in sciences that study living things, since varieties of types of information, forms of representation of information, and mechanisms for manipulating and using information are essential to understanding living things. So life scientists increasingly use ideas from computer science and have begun to contribute new ideas.
It is now commonplace to regard mechanisms of evolution, reproductive mechanisms, mechanisms controlling growth of plants and animals, the mechanisms involved in cancer and other diseases, the operations of animal nervous systems, and the workings of human minds as all involving information-processing mechanisms that cannot be understood using older research methodologies based on looking for patterns in observational and experimental data -- for example, records of human responses to stimuli in laboratories: there is far too much going on between the stimuli and the responses that needs to be understood.
The processes by which complex structures grow, change their form, interact with other structures, compete for resources, and in some cases interfere with the operation of other mechanisms, cannot be understood using the ideas in older scientific theories that link a fixed number of variables in a mathematical formula, like Newton's F=M×A, or Ohm's law: Voltage=Current×Resistance.
Why are there so many different programming languages?
The diversity of practical and theoretical applications of computing has inspired the development of a large variety of programming languages and programming methodologies that differ in important ways. This diversity is not served well by an educational system that teaches a narrow subset of languages. Allowing different teachers to focus on different languages and uses of computation could produce a much richer and more varied community of scientists, engineers and other thinkers using computers.
Varieties of programming tasks
In the simplest programming task, an algorithm (an organised collection of instructions) is applied to some pre-existing information structure, which could be a single number whose square root is to be computed, or millions of data items to be analysed statistically, or a mathematical question to be answered by a calculation, or an algebraic expression whose correctness is to be checked, or some measurements and control signals in a machine that is to be controlled by the computer (e.g. a thermostatic temperature controller that sets different target temperatures at different times each week).
The instructions in the algorithm may start by inspecting the information given and performing operations, often creating new temporary information structures used to obtain a final result. The process may or may not terminate, and if it terminates it may or may not give the right answer. The cleverer the programmer, the more likely the answer is to be correct.
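The square-root case mentioned above can be made concrete with Newton's method, a standard technique (the function and its tolerance are illustrative choices): the algorithm inspects the given number, creates temporary structures (successive estimates), and this particular loop happens to terminate, though loops of this general shape need not.

```python
def sqrt_newton(n, tolerance=1e-10):
    """Approximate the square root of a non-negative number by Newton's method.

    Repeatedly replace the current estimate by the average of the estimate
    and n / estimate. The loop stops when two successive estimates are close
    enough -- but nothing in the loop's shape guarantees termination in
    general; that has to be argued separately for each algorithm.
    """
    if n < 0:
        raise ValueError("square roots of negative numbers are not handled")
    if n == 0:
        return 0.0
    estimate = n if n >= 1 else 1.0   # a crude but workable starting point
    while True:
        better = (estimate + n / estimate) / 2
        if abs(better - estimate) < tolerance:
            return better
        estimate = better
```

Here the successive estimates are the "temporary information structures": they exist only during the computation and are discarded once the final result is produced.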
Other applications don't start with a fixed collection of information or a fixed information structure, but continually acquire more information, and use that for some purpose, which might be learning patterns in the information stream, or might simply be continuously controlling some machine in the light of the data -- for example, keeping an airliner flying at a specified altitude, in a specified direction, at a specified speed, despite continuous changes in surrounding atmospheric conditions, or maintaining the temperature and humidity of air in a building.
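Such never-ending control tasks can be sketched as a loop around a simple proportional controller. All the numbers, and the toy model of the controlled system, are invented for illustration; real autopilots are far more elaborate:

```python
def control_step(measured, target, gain=0.1):
    """One step of a proportional controller: output a correction
    proportional to the current error between target and measurement."""
    return gain * (target - measured)

# In a real system this loop would run forever, continually acquiring new
# information (the latest measurement) and acting on it. Here it runs 50
# steps against a toy model of an aircraft subject to disturbances.
altitude = 9500.0
for step in range(50):
    correction = control_step(altitude, target=10000.0)
    disturbance = 2.0 if step % 2 == 0 else -2.0  # stand-in for the atmosphere
    altitude += correction + disturbance          # toy response to the command
```

Unlike the square-root example, there is no final answer here: the "result" is the ongoing behaviour of the controlled system, which stays near the target despite the continual disturbances.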
The diversity of applied computing
Because there are many different sorts of computational problem, different kinds of programming language and computational architecture are best suited to different problems. For example, some problems break down into sub-problems that in turn break down into still-smaller sub-problems, so that if the original problem is complex its solution may involve solution of very large sets of sub-problems.
A computer scientist may discover that in some cases the same sub-problem is repeatedly generated when solving a big problem. So if the system is able to detect when a new sub-problem is the same as one already solved, and to re-use the previously found solution, then that could in some cases enormously reduce the number of steps required to solve problems. This requires the programming language used to be able to compare specifications of problems, to decide whether a new one is a variant of an old one, whose solution can be re-used, possibly in a slightly different form.
Expressive power of programming languages
For some programming languages and some machine architectures it may be very difficult to use that strategy for simplifying problems, whereas other programming languages and computer architectures (e.g. with support for 'memo-functions') may provide excellent support for such techniques, allowing quick answers to ever more complex questions of the form "have I previously encountered and solved problem (......) and if so what was the result?".
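The memo-function idea can be sketched in Python, where the standard library's functools.lru_cache provides exactly this "have I solved this problem before?" table; naive recursive Fibonacci is used here purely as the illustrative problem:

```python
from functools import lru_cache

calls = 0  # counts how many distinct sub-problems are actually computed

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, with every solved sub-problem remembered.

    Without the cache this definition takes an exponential number of calls,
    because the same sub-problems (fib(n-2), fib(n-3), ...) are generated
    again and again; with it, each sub-problem is solved exactly once and
    the stored result is re-used thereafter.
    """
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(30)   # only 31 distinct sub-problems (n = 0..30) are computed
```

The cache works here because Python can compare problem specifications cheaply (the argument n is hashable); for richer problem descriptions, deciding whether a new problem is a variant of an old one is itself a hard computational task.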
Identifying types of problem, and the types of machine or programming language that are good for different problem types, is a very different task from designing particular algorithms for particular problems, which is what is emphasised in some answers to the question 'What is computer science?' There is no standard algorithm for developing new algorithms to solve novel problems. Finding a good solution often requires a deep understanding not only of computing but also of the application domain, and a great deal of programmer creativity. There is work in Artificial Intelligence aimed at giving computers that kind of creativity, though so far this can be done only for special types of problem that human designers understand well.
There are also cases where 'self-modifying' programs can study their own successes and failures and learn which sorts of change improve a program and which sorts make it worse. But it may take much more research to produce machines that are as good as the best human programmers at creating and modifying programs.
The stature of computer science
Partly as a result of its significance for many other sciences, Computer Science is now comparable in stature to all the other major sciences and mathematics -- attracting some of the leading thinkers of our time.
The physical sciences study forms of matter and energy in our universe, and their interactions. The life sciences study forms and mechanisms of life, including individual organisms, societies and ecosystems, and their interactions with living and non-living matter. The science of computation studies, among other things, the bridge between the two: namely information, and the ways in which it can be represented, derived, transformed, combined, created, transmitted and above all used -- whether in the simplest organisms as they reproduce and control their reactions to their immediate environment, or in the most complex organisms as they perceive, learn, act and communicate, or in advanced, increasingly intelligent, forms of technology that now dominate our industries, our social interactions, the latest medical advances, and powerful tools of thought.
This vision of computation was already clear to one of the founders of computer science, Alan Turing, in the early 1950s, when computers were still in their infancy. He even foresaw the development of artificial intelligence (AI) which now plays a crucial role in many widely used systems, and also raises deep philosophical questions about the nature of mind. He also made important contributions to computational modelling of biological processes of development (morphogenesis).
More recently, growing appreciation of the importance of computer science and computational thinking in many fields has led to a major revision of the content of computing education in UK schools, and a rapidly growing community of teachers and others concerned with the role of computing as a school subject. Thousands of teachers and many others have joined an organisation that aims to ensure that far more future school leavers, and later on university graduates, will be equipped to contribute to computer science research, or to development and use of sophisticated computational ideas and techniques in other fields: http://computingatschool.org.uk/
What do computer scientists do?
A partial answer is given in a separate file.
The future of computer science
A semi-serious discussion of the future of computer science can be found here:
Computing is a Natural Science

Information processes and computation continue to be found abundantly in the deep structures of many fields. Computing is not -- in fact, never was -- a science only of the artificial.

Communications of the ACM, July 2007, Vol. 50, No. 7, pp. 13-18.
Some useful links (to be expanded):