This file is http://www.cs.bham.ac.uk/research/projects/cogaff/misc/research-impact.html
A partial index of discussion notes is in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
I am not interested in impact, only quality of research, which does not always correlate
with impact, since the latter is often subject to fashion and transient funding policies, etc.
If you want a study of impact you would do better to consult a social scientist, or better
still, commission an expert to do a systematic investigation of the proposer's current
impact, since what academics know about impact will tend to be biased by their interests
and restricted to the community they interact with: they may be totally unaware of deep
impact outside their own field -- especially when the work is interdisciplinary in content.
Moreover, high impact as measured by citations is often a consequence of making mistakes
that many other people comment on, of choosing a topic that just happens to be fashionable
at the time, or of putting forward an idea that is slightly different from other well-known
ideas yet not so original that people have to think really hard in order to comment or criticise.
Many great research achievements of the past could not possibly have been assessed for their
impact until many years later, in some cases long after the death of the researcher (e.g. Gregor Mendel).
When I do agree to write reviews I say what I think about the research quality: things
like the depth, difficulty, and importance of the questions addressed; the originality,
clarity, precision and explanatory power of the theories developed; how well they fit
known facts, as far as I can tell; and whether they make interesting new predictions worth testing.
If there are engineering products I may have some comments on their contribution to
research. I shall NOT be able to evaluate their contribution to wealth or happiness --
and I doubt the reliability of people who think they can, however sincere they may be.
NOTE:
I am aware that estimates of "importance" can be both subjective (what's important
for X may be regarded as trivial or frivolous by Y) and highly conjectural (e.g. if it
depends on how well the research outcomes eventually impact on problems and
theories that are currently agreed to be important).
So estimates of importance require judgment, and judgment can vary between
sincere, well informed, highly able judges.
But there is no means of guaranteeing that funding decisions are correct!
Fortunately, seriously mistaken rejection of a research topic by one agency may
be compensated for by acceptance by another. But that is unlikely to happen if all
funding agencies stop using intelligent judgment and instead try to evaluate
researchers and proposals by "objective" metrics. That is guaranteed to
lead to rejection of some superb proposals -- whose real worth will never be known.
If I have heard the applicant give presentations or have other sources of information
about the quality of their teaching and communication skills I am willing to comment on
those abilities, though I know that a good teacher for one group of students could be a
poor one for a different group. Sometimes the best teachers for the outstanding students
constantly challenge those students with hard, but not impossible, problems and
tasks, and that can earn such teachers poor ratings from the majority of students.
Researchers and teachers should not allow themselves to be dictated
to by managers following the latest managerial fashions.
It never ceases to amaze me that neither senior academics in universities, nor senior
administrators in funding agencies, nor senior politicians, can see the huge wastefulness
in trying to make major service organisations funded by the nation put substantial
resources into behaviour that is comparable to a group of monkeys all struggling to be
near the food at the top of a greasy pole.
Instead of contributing to all that wasted effort, they should be cooperating to produce a
national system of research and education that, among other things, provides the best
possible opportunities for all young minds with academic potential to be stretched to
their limits, and (with international collaboration) pushes research frontiers in as many
directions as possible: for we never know where new knowledge gems lie in wait.
If funds available nationally for research can't support all the academics needed for
teaching then perhaps we should return to the pre-1992 system, in which higher education
institutions included a significant proportion of polytechnics, which focused mainly on
teaching, including providing 'update' courses for local commerce and industry. They
performed a major national service, which was seriously distorted by making them all
universities; in the process the university system was also seriously damaged, perhaps
irretrievably, by the need to provide teaching suited to students with a lower general
level of prior knowledge and ability, many of whom could have done very well in polytechnics.
For allocation of very scarce research funding we should not pretend that we can evaluate
research in advance. Instead we should provide a reasonable base level of funding for all
university departments (after reducing their numbers) and, for the more expensive research
requiring extra resources, use a weighted, dynamically adjusted lottery to allocate
funding, perhaps as described here (a first draft).
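The cited draft is not reproduced here, but one minimal reading of a "weighted, dynamically adjusted lottery" can be sketched as follows. The department names, the weight values, and the rule of halving a winner's weight after each award are purely illustrative assumptions, not part of the original proposal:

```python
import random

def run_lottery(weights, num_awards, decay=0.5):
    """Allocate `num_awards` grants one at a time by weighted random draw.

    `weights` maps department -> weight (e.g. reflecting need or capacity
    for the more expensive research).  After a department wins, its weight
    is multiplied by `decay` so awards spread out over time -- one possible
    interpretation of "dynamically adjusted" (an assumption, not the
    author's specification).
    """
    weights = dict(weights)  # work on a copy; don't mutate the caller's dict
    awards = []
    for _ in range(num_awards):
        depts = list(weights)
        # random.choices performs a single weighted draw with replacement
        picked = random.choices(depts, weights=[weights[d] for d in depts])[0]
        awards.append(picked)
        weights[picked] *= decay  # reduce the winner's chance next round
    return awards
```

The point of such a scheme is that every proposal above a basic quality threshold retains a non-zero chance of funding, avoiding the pretence that reviewers can rank superb proposals in advance.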
The current system, which causes publicly funded public service organisations to fight for
high positions in league tables because that's the only way to get adequate funding, can be
compared with looking after a bridge over a wide river by regularly examining the underwater
supports and putting resources into maintaining the pillars that are in the best condition.
School of Computer Science
The University of Birmingham