As in any major branch of science, there are many sub-fields of research in Computer Science. Here is a sample of the topics studied.
What forms of computation are there, and how do they differ? For example, some forms are entirely numerical, others entirely non-numerical, and some a mixture. Some use programs composed mainly of complex instructions executed in relatively fixed sequences, whereas others use programs composed of rules with relevance tests, whose instructions are executed when they become relevant (e.g. "if it rains, open an umbrella").
Some involve instructions that are executed one at a time, whereas others have many "streams" of instructions being obeyed in parallel, with the results of instructions often affecting other streams.
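To make the first contrast concrete, here is a minimal sketch (in Python, purely illustrative and not drawn from any particular system) of the difference between a program whose instructions run in a fixed sequence and one made of condition-action rules whose instructions run only when their relevance tests succeed:

    # Fixed sequence: the same instructions always run, in the same order.
    def fixed_sequence(tasks):
        results = []
        for task in tasks:           # the order is decided in advance
            results.append(task())
        return results

    # Rule-based: each rule has a relevance test; its action runs only
    # when the test succeeds against the current situation.
    rules = [
        (lambda world: world.get("raining"), lambda world: "open umbrella"),
        (lambda world: world.get("dark"),    lambda world: "switch on light"),
    ]

    def rule_based(world):
        actions = []
        for test, action in rules:   # which actions run depends on the world
            if test(world):
                actions.append(action(world))
        return actions

    print(rule_based({"raining": True, "dark": False}))   # ['open umbrella']

Parallel "streams" of instructions are a further variation again, since the rules or instructions in one stream can be affected by results produced concurrently in another.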
Some computer scientists use and study programming languages whose basic steps are concerned with making something happen in the computer, while others use logic-based languages whose basic steps involve specifying that something is true, checking whether something is true, and performing inferences to decide what is true.
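The difference is roughly between telling the machine what to do and telling it what is true, then letting it work out what follows. The sketch below imitates the second style in ordinary Python (it is an illustration only, not any particular logic programming language):

    # Procedural style: each step makes something happen.
    total = 0
    for x in [1, 2, 3]:
        total += x            # an explicit sequence of state changes

    # Logic-based style, sketched in Python: state facts, then ask
    # whether something follows from them.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def grandparent(x, z):
        # x is a grandparent of z if some y is a child of x and a parent of z
        return any(("parent", x, y) in facts and ("parent", y, z) in facts
                   for (_, _, y) in facts)

    print(grandparent("alice", "carol"))   # True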
Some computer scientists study the strengths and weaknesses of various programming languages and try to design new languages that overcome those weaknesses.
Thousands of different programming languages have been proposed by computer scientists and software engineers since the 1950s. An important aspect of computer science is the study of the properties of those languages: for example, what they can and cannot express (their representational power), and the problems of getting them to run properly on different computers with different basic instruction sets and memory structures.
New requirements for computer languages can arise out of new problems and applications. For example, languages for specifying algorithmic processes are different from languages for specifying the "architectures" of complex systems: what the parts are, what they can do, how they need to interact, and whether (and in what ways) the architecture may need to change. A sketch of the second kind of description follows.
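An architecture description is closer to structured data than to a sequence of steps. The following toy sketch (component names and fields invented purely for illustration) describes parts and their interactions, and checks the description for consistency:

    # A toy architecture description: what the parts are and how they interact.
    architecture = {
        "components": {
            "sensor":   {"provides": ["readings"]},
            "planner":  {"requires": ["readings"], "provides": ["plans"]},
            "actuator": {"requires": ["plans"]},
        },
        "connections": [
            ("sensor", "planner", "readings"),
            ("planner", "actuator", "plans"),
        ],
    }

    # A simple consistency check: every required input must be supplied
    # by some connection in the description.
    def check(arch):
        supplied = {(dst, item) for (_, dst, item) in arch["connections"]}
        for name, spec in arch["components"].items():
            for item in spec.get("requires", []):
                assert (name, item) in supplied, f"{name} is missing {item}"

    check(architecture)   # raises AssertionError if the description is inconsistent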
When information is shared between different systems, the programmers developing those systems need to agree on the forms in which the various kinds of information are stored on computers and how those forms are to be interpreted.
There has been a great deal of work on how to specify different forms of information -- including image information, recorded speech, social and historical records, information about chemical structures and processes, economic information, information about the structures and functions of new forms of machinery, or new buildings constructed with new building materials, information about the weather and climate change, information about problems of mental patients, and many more.
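When two separately developed systems must exchange such information, the agreement usually takes the form of a shared storage format plus rules for interpreting it. A minimal sketch using JSON (the field names and units are assumptions chosen for illustration):

    import json

    # System A stores a measurement in an agreed format.
    record = {"patient_id": "p123", "measurement": "temperature",
              "value": 37.2, "unit": "celsius",
              "taken_at": "2016-08-12T10:30:00Z"}

    encoded = json.dumps(record)      # the form in which it is stored or transmitted

    # System B, written by different programmers, can recover the same
    # information only because both sides agreed on the field names,
    # types and units, and on how they are to be interpreted.
    decoded = json.loads(encoded)
    assert decoded["unit"] == "celsius"
    print(decoded["value"])           # 37.2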
Many computer scientists are concerned with developing languages and methods to specify what a computer or computer-based machine is needed for -- requirements specifications.
For example, if the National Health Service decides to transfer all patient records to computers, it is important to decide in what ways that would be useful, for whom it would be useful, who might need the information in the records, and what they would do with it. It is also important to decide how their right to access the information could be checked, what records of use of the information might be desirable and how they could be produced, how new information can be combined with old information when patients or treatments change or when errors or omissions are detected, how the information can be kept secure and accessed only for the intended purposes, how robust the information needs to be against computer failures, and many more.
Getting the requirements into a usable form may depend on the development of new languages and tools for expressing and checking requirements. But once requirements have been collected and fully specified, how can they be used in the development of the new systems? How will it be possible to tell whether the system actually meets the requirements?
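One partial answer is to express some requirements in a form that can be checked mechanically against the running system, for example as executable tests. A hypothetical sketch (the record structure and access functions are invented for illustration, not taken from any real system):

    # Requirement (informal): a patient record may be read only by staff
    # with an explicit right of access, and every access must be logged.
    access_log = []

    def may_read(user, record):
        # invented policy for illustration: only listed readers may access the record
        return user in record["authorised_readers"]

    def read_record(user, record):
        if not may_read(user, record):
            raise PermissionError(f"{user} may not read this record")
        access_log.append((user, record["patient_id"]))
        return record["contents"]

    # Checks derived from the requirement, runnable against the system:
    record = {"patient_id": "p123", "authorised_readers": {"dr_jones"},
              "contents": "..."}
    read_record("dr_jones", record)
    assert ("dr_jones", "p123") in access_log          # the access was logged
    try:
        read_record("visitor", record)
        assert False, "unauthorised access should have been refused"
    except PermissionError:
        pass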
One of the criteria for assessing programming languages is concerned with the humans who will use the languages for designing new systems, for debugging or modifying existing systems, and for explaining to colleagues what their programs do and how they work.
Often a programming language that is good for experienced programmers is not good for beginners who are encountering computing concepts for the first time -- and vice versa.
Languages that are good for designing systems that need to be rigorously checked and carefully maintained may not be good for users whose work is mainly exploring, testing, and modifying new, incomplete, ideas and communicating those ideas to fellow researchers.
And languages that are excellent for computers running one process at a time may be seriously inadequate for designing programs that are divided into parts that run in parallel.
A separate task is developing technology that allows different users to have their programs "time-shared" on a single computer. This began to be used in the late 1950s, to allow a computer to be shared between users who alternated between running their programs and thinking about what to do next, the machine serving other users in between. Time-shared systems allowed them to interleave thinking, programming and testing without constantly having to hand the computer over to others and restart later.
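The essential idea can be sketched with a toy round-robin scheduler: each user's program runs for a short slice, then control passes to the next, so each user appears to have the machine to themselves. Real time-sharing systems are far more elaborate; this sketch merely uses Python generators to stand in for user programs:

    from collections import deque

    def user_program(name, steps):
        # A stand-in for a user's program: each yield is one unit of work done.
        for i in range(steps):
            yield f"{name}: step {i}"

    def round_robin(programs):
        # Give each program one time slice in turn until all have finished.
        queue = deque(programs)
        while queue:
            prog = queue.popleft()
            try:
                print(next(prog))      # run one slice of this program
                queue.append(prog)     # then send it to the back of the queue
            except StopIteration:
                pass                   # this program has finished

    round_robin([user_program("alice", 3), user_program("bob", 2)])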
A great deal of work in theoretical computer science has been concerned with issues of complexity and efficiency: whether changing the way a program performs some task can speed up its running time or reduce its memory requirements, for example.
A typical case is sorting a collection of information items into an order that may depend on something as simple as numerical measures or the alphabetical order of descriptive labels, or on something outside the computer that can change, e.g. stock-market values.
For computers handling very large numbers of operations, differences in the efficiency of sorting and searching can have huge effects on the cost of the computer memory required, the number of processors required, and so on.
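For instance, searching a sorted collection can be done by inspecting every item in turn or by repeatedly halving the range to be examined; the difference is roughly linear versus logarithmic growth in the number of comparisons, and it becomes dramatic for large collections. A small illustrative sketch:

    import bisect

    data = list(range(1_000_000))        # an already sorted collection

    def linear_search(items, target):
        # looks at items one by one: time grows in proportion to the length
        for i, x in enumerate(items):
            if x == target:
                return i
        return -1

    def binary_search(items, target):
        # repeatedly halves the range: time grows with the logarithm of the length
        i = bisect.bisect_left(items, target)
        return i if i < len(items) and items[i] == target else -1

    # Both give the same answer, but on a million items the first may make
    # hundreds of thousands of comparisons where the second makes about twenty.
    assert linear_search(data, 987_654) == binary_search(data, 987_654) == 987_654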
This is by no means a complete list of the types of task that computer scientists are engaged in, and in any case the list keeps changing as new problems, new opportunities, and new task domains are discovered.
One of the most challenging long term tasks is finding computer models of biological structures and processes, including the processes of biological evolution, which can itself be seen as a form of computation -- and one that produces new forms of computation as new organisms evolve with new powers of perception, learning, action, communication, and cooperation, including new symbiotic relationships or new forms of selective breeding!
Moreover, since a vast amount of biological information processing is chemistry-based (as almost all of it was before neural mechanisms evolved), there seems to be a great deal still to be learnt about the possible forms of information processing used in evolution, development, learning, and behaviour -- and much of it may be chemical information processing whose features are very different from those found on current computers. Current computers, with multiple processing units constantly interacting with other computers across local, national and international networks, are themselves very different from the earliest computers, which could only run one program at a time, presented to the machine as a sequence of cards or holes punched in a paper tape.
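Even a toy "evolutionary" program hints at how variation plus selection can itself be viewed as a computation that searches a space of possibilities. A deliberately minimal sketch (the target string, fitness measure and parameters are arbitrary choices for illustration, not a model of real evolution):

    import random

    TARGET = "computation"

    def fitness(candidate):
        # number of characters matching the target string (toy measure)
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # a copy with one randomly chosen character changed
        i = random.randrange(len(candidate))
        letters = "abcdefghijklmnopqrstuvwxyz"
        return candidate[:i] + random.choice(letters) + candidate[i + 1:]

    # Start from a random string and repeatedly keep the fitter variant.
    current = "".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                      for _ in range(len(TARGET)))
    for _ in range(10_000):
        variant = mutate(current)
        if fitness(variant) >= fitness(current):
            current = variant

    print(current)    # usually close to, or equal to, "computation"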
Video recording of invited lecture at Association for Learning Technology (ALT)
Conference, 2012:
https://www.youtube.com/watch?v=QXAFz3L2Qpo
Maintained by
Aaron Sloman
http://www.cs.bham.ac.uk/~axs
Last updated: 12 Aug 2016