Peer Review Is Broken

Introduction 

Children are taught that science is a systematic way to discover truths about how the world works. That view is too simplistic for a non-obvious reason: scientific peer review is broken. Its problems include too many false results, p-hacking, a lack of experimental reproducibility, the exclusion of null results, an eminence culture that silences new voices, publication bias, and a growing number of fake scientific journals.

Statistical Manipulation 

John Ioannidis, one of the most cited scientists in the world, is a professor of medicine and statistics at Stanford. He is the founder of the Meta-Research Innovation Center at Stanford (METRICS). In “Why Most Published Research Findings Are False,” Ioannidis found that “a research finding is less likely to be true when”:

    • the studies conducted in a field are smaller

    • there is a greater number of tested relationships

    • there is greater flexibility in designs, definitions, outcomes, and analytical modes

    • there is greater financial prejudice

    • more teams are involved in a scientific field

His study found that, “in modern research, false findings may be the majority or even the vast majority of published research claims” (Ioannidis, 2005). In a later study published in JAMA, Ioannidis documented widespread “p-hacking”: scientists reworking their analyses until irrelevant findings appear statistically significant (Belluz, 2016).
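
Two of the factors above, more tested relationships and more analytical flexibility, compound in a simple way: at the conventional significance threshold of 0.05, the chance of at least one false positive grows quickly with the number of tests run. A toy calculation (the test counts are illustrative assumptions, not figures from Ioannidis):

```python
# Probability of at least one false positive when testing multiple
# independent relationships that are all truly null, at alpha = 0.05.
def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Chance that at least one of n_tests true-null tests comes out 'significant'."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20, 100):
    print(f"{n:3d} tests -> {familywise_error_rate(n):.1%} chance of a false positive")
```

With 20 tested relationships the chance of at least one spurious “significant” result is about 64%; with 100 it is nearly certain. A p-hacker who keeps slicing the data until something crosses the threshold is, in effect, running many such tests while reporting only one.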

Lack of Reproducibility

Reproducibility is one of the key tenets of the scientific method, yet many experiments cannot be reproduced. Nature has acknowledged that “[t]here is growing alarm about results that cannot be reproduced,” and the journal now tracks reproducibility under a dedicated section of its website (Nature, 2016). It surveyed over 1,500 scientists and found that “[m]ore than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments” (Baker, 2016).

Eminence Culture

Eminence is a title used by clergy and nobility; an eminent person is above question or reproach. Science all too often confers similar eminence on prestigious scientists, who then hold the power to determine what the scientific community accepts as truth. Economists measured the relationship between eminent scientists and publication counts by “examin[ing] entry rates into the fields of 452 academic life scientists who pass[ed] away while at the peak of their scientific abilities.” They found that “the flow of articles by collaborators into affected fields decreases precipitously after the death of a star scientist. In contrast, we find that the flow of articles by non-collaborators increases by 8% on average.”

After the loss of a leading light, insiders published fewer papers while a wave of new scientists managed to get theirs into print. Not only were new scientists able to publish, but their papers were “disproportionately likely to be highly cited” and “more likely to be authored by scientists who were not previously active in the deceased superstar's field.” The authors conclude that “outsiders are reluctant to challenge leadership within a field when the star is alive. [T]hese results paint a picture of scientific fields as scholarly guilds to which elite scientists can regulate access, providing them with outsized opportunities to shape the direction of scientific advance in that space” (Azoulay, 2015). Further evidence of eminence culture comes from the growing acknowledgement that young scientists have a hard time winning research grants (Mulhere, 2015).

Publication Bias

Publication bias occurs when the outcome of a study affects the likelihood that it will be published. When an experiment finds no statistically significant effect, the outcome is called a null result, and null results are rarely published, even though these negative results are just as valid under the scientific method as positive ones (Song, 2013). Publication bias matters more than ever with the rise of meta-analysis, which is only as good as the data it aggregates. Without null results, meta-analyses cannot paint a true picture of the evidence (Kicinski, 2015).

Publication bias also produces the decline effect, first observed in the 1930s: “[m]any scientifically discovered effects published in the literature seem to diminish with time.” The absence of unpublished null results is believed to be one of the key reasons the decline is observed (Schooler, 2011).
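
The mechanism can be shown with a minimal simulation, assuming a small true effect measured with noise across many studies and a filter that "publishes" only statistically significant estimates (the effect size and standard error below are illustrative assumptions):

```python
import random
import statistics

# Toy model of publication bias: many noisy studies of one small true
# effect, where only "significant" estimates reach the literature.
random.seed(0)
true_effect, se = 0.2, 0.15  # hypothetical effect size and standard error

estimates = [random.gauss(true_effect, se) for _ in range(10_000)]
published = [e for e in estimates if e / se > 1.96]  # one-sided p < .05

print(f"true effect:            {true_effect:.2f}")
print(f"mean of all studies:    {statistics.mean(estimates):.2f}")
print(f"mean of published only: {statistics.mean(published):.2f}")
```

The published-only mean lands well above the true effect, because only the studies that overestimated it clear the significance bar. Later, larger replications regress toward the true value, and the effect appears to "decline" even though nothing about the underlying phenomenon changed.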

Fake Scientific Journals

In a publish-or-perish world, many scientists will pay several hundred dollars to get an article published. These fee-based journals offer little or no peer review and fast publication timelines. They are known as “predatory publishers” because they often exploit sincere scientists who submit to them unwittingly. A site that tracks these journals shows that their number has been growing exponentially, from 23 in 2012 to 923 in 2016 (Beall, 2016). Predatory journals published 420,000 articles in 2014 (Straumsheim, 2015).

Conclusion

It is now possible to claim almost anything in a scientific paper, so scientific papers can no longer be accepted as meaningful at face value. The public’s rejection of climate change, embrace of a flat earth, and denial of evolution become harder to counter when science fails to police itself.

References

Azoulay, Pierre, Christian Fons-Rosen, and Joshua S. Graff Zivin, “Does Science Advance One Funeral at a Time?,” National Bureau of Economic Research, 2015, http://www.nber.org/papers/w21788

Baker, Monya, “1,500 scientists lift the lid on reproducibility,” Nature, 2016, http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970

Beall, Jeffrey, “Beall’s List of Predatory Publishers 2016,” 2016, https://scholarlyoa.com/2016/01/05/bealls-list-of-predatory-publishers-2016/

Belluz, Julia, “An unhealthy obsession with p-values is ruining science,” Vox, 2016, http://www.vox.com/2016/3/15/11225162/p-value-simple-definition-hacking

“Challenges in irreproducible research,” Nature, retrieved 12/7/2016, http://www.nature.com/news/reproducibility-1.17552

Ioannidis, John, “Why Most Published Research Findings Are False,” PLOS Medicine, 2005, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/

Kicinski, Michal, et al., “Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews,” Wiley Online Library, 2015, https://onlinelibrary.wiley.com/doi/abs/10.1002/sim.6525

Mulhere, Kaitlin, “Freezing Out Young Scientists,” Inside Higher Ed, 2015, https://www.insidehighered.com/news/2015/01/07/share-research-funding-going-young-scientists-declining

Song, F., L. Hooper, and Y. K. Loke, “Publication bias: what is it? How do we measure it? How do we avoid it?,” Dovepress, 2013, https://www.dovepress.com/publication-bias-what-is-it-how-do-we-measure-it-how-do-we-avoid-it-peer-reviewed-article-OAJCT

Schooler, Jonathan, “Unpublished results hide the decline effect,” Nature, 2011, http://www.nature.com/news/2011/110223/full/470437a.html

Straumsheim, Carl, “‘Predatory’ Publishing Up,” Inside Higher Ed, 2015, https://www.insidehighered.com/news/2015/10/01/study-finds-huge-increase-articles-published-predatory-journals