
Scientists ‘bad at judging peers’ published work,’ says new study

Are scientists any good at judging the importance of the scientific work of others?

According to a study in the journal PLOS Biology (08 October 2013)[1], scientists are unreliable judges of the importance[2] of fellow researchers’ published papers.

The paper’s lead author, Professor Adam Eyre-Walker of the University of Sussex, says: “Scientists are probably the best judges of science, but they are pretty bad at it.”

Professor Eyre-Walker and colleagues studied three methods of assessing published scientific papers, using two sets of peer-reviewed articles.[3]

The three assessment methods the researchers looked at were:

  • Peer review: subjective post-publication peer review where other scientists give their opinion of a published work;
  • Number of citations: the number of times a paper is referenced as a recognised source of information in another publication;
  • Impact factor: a measure of a journal’s importance, determined by the average number of times papers in a journal are cited by other scientific papers (a rough calculation is sketched after this list).
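
Purely as an illustration (it is not taken from the study or the press release), the short Python sketch below shows the arithmetic behind the impact-factor idea: a journal-level average of citations per paper, computed here from made-up numbers. It also hints at why such an average need not say much about the merit of any individual paper in the journal.

    # Illustrative sketch with made-up numbers: the impact-factor idea,
    # i.e. the average number of citations received per paper published in a journal.
    def impact_factor(citations_per_paper):
        """Return the mean number of citations per paper (0.0 if there are no papers)."""
        if not citations_per_paper:
            return 0.0
        return sum(citations_per_paper) / len(citations_per_paper)

    # Hypothetical journal: five papers cited 0, 2, 3, 40 and 5 times respectively.
    print(impact_factor([0, 2, 3, 40, 5]))  # 10.0 -- one highly cited paper dominates the average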

The findings, say the authors, show that scientists are unreliable judges of the importance of a scientific publication: they rarely agree on the importance of a particular paper and are strongly influenced by where the paper is published, over-rating science published in high-profile scientific journals.

Furthermore, the authors show that the number of times a paper is subsequently referred to by other scientists bears little relation to the underlying merit of the science.

The authors argue that the study’s findings could have major implications for any future assessment of scientific output, such as that currently being carried out for the Government’s forthcoming Research Excellence Framework (REF).

Professor Eyre-Walker says: “The three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased and expensive method by which to assess merit.

“While the impact factor may be the most satisfactory of the methods considered, since it is a form of pre-publication review, it is likely to be a poor measure of merit because it depends on subjective assessment.”

Professor Eyre-Walker adds: “The assessments generated during the REF are likely to be of very poor quality, which calls into question whether the REF in its current format is a suitable method by which to assess scientific output.”


Notes for Editors

[1] ‘The assessment of science: the relative merits of post-publication review, the impact factor and the number of citations’, A. Eyre-Walker and N. Stoletzki, PLOS Biology, 08 October 2013.

Public Library of Science (PLOS) Biology is an open-access peer-reviewed scientific journal covering all aspects of biology.

[2] Importance is employed here to mean science that has landmark significance, as defined in the Wellcome Trust data set described below.

[3] Data sets: one set comprised 716 papers, published in 2005, on research funded at least in part by the Wellcome Trust, each of which had been scored by two assessors. The other set comprised 5,811 papers, also published in 2005, from the Faculty of 1000 (F1000) database, 1,328 of which had been assessed by more than one assessor.

For each paper the researchers collated citation information for up to six years after publication. They also obtained the impact factor of the journal in which the paper had been published.

University of Sussex Press office contacts: Maggie Clune and Jacqui Bealing. Tel: 01273 678 888. Email: press@sussex.ac.uk or contact Professor Eyre-Walker on 07805 873841.

View press releases online at: http://www.sussex.ac.uk/newsandevents/


By: Maggie Clune
Last updated: Tuesday, 8 October 2013
