
The Reproducibility Crisis in Science

In science, reproducibility is the extent to which different experimentalists at different institutions, following the same experimental methodology, achieve the same results. It is one of the key tenets of the scientific method, and a high degree of reproducibility is a necessary condition for any finding to be considered a scientific fact. To illustrate the importance of this mechanism, remember that it was a single scientific article in The Lancet medical journal that led to the unsubstantiated belief that vaccines cause autism. Subsequent studies failed to replicate the result and the original claim was later exposed as fraudulent, but because the media had treated a single, irreproducible result as scientific fact, the anti-vaccination movement gained support, directly risking the health of thousands. This is obviously an extreme example, but it highlights the importance of reproducibility in science, which is why the current crisis is so concerning.

A survey published in Nature in the summer of 2016 revealed that more than 70% of researchers had tried and failed to reproduce another scientist's experiments. Additionally, the Reproducibility Project at the University of Virginia attempted to replicate 97 psychological studies and found that only 35 of them produced results resembling those of the original studies. A 2018 article suggested that around $28 billion of US medical research per year is non-reproducible, and a separate analysis found that 85% of recent biomedical research carried out worldwide cannot be reproduced.

For the last two decades, this problem has plagued multiple scientific fields, often because experiments were not designed to a sufficient standard to prevent researchers' biases from interfering with their interpretation of results. The pressure on scientists to publish quickly and the tendency of journals to favour sensational findings have both been blamed for motivating scientists to be less careful in their work. More recently, the use of machine learning systems in data analysis has accelerated the crisis. According to Dr Genevera Allen of Rice University, these algorithms are designed to sift through huge datasets and highlight interesting patterns, regardless of whether those patterns make sense in the real world. Often such findings are only shown to be false when the next large dataset is produced (an expensive and time-consuming process) and applying the same algorithm yields results that are irrelevant to, or contradict, the initial findings.
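To see how this can happen, here is a minimal sketch (purely illustrative, not drawn from any of the studies mentioned above) in which a data-mining pass over a large dataset of pure noise still "discovers" a striking correlation that then vanishes in a fresh dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 100, 2000

def strongest_pair(data):
    """Return the most correlated pair of columns and their correlation."""
    corr = np.corrcoef(data, rowvar=False)
    np.fill_diagonal(corr, 0.0)                    # ignore self-correlation
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    return i, j, corr[i, j]

# "Discovery" dataset: pure noise, so there are no real relationships to find.
discovery = rng.standard_normal((n_samples, n_features))
i, j, r_discovery = strongest_pair(discovery)
print(f"Discovery data: variables {i} and {j} correlate at r = {r_discovery:.2f}")

# Fresh dataset (the expensive follow-up experiment): check the same pair again.
replication = rng.standard_normal((n_samples, n_features))
r_replication = np.corrcoef(replication[:, i], replication[:, j])[0, 1]
print(f"Replication data: the same pair correlates at r = {r_replication:.2f}")
```

With thousands of variables there are millions of pairs to compare, so the strongest correlation in the noise looks impressive even though nothing real underlies it, and the follow-up dataset duly fails to reproduce it.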

These errors waste both time and resources across many scientific fields, so precautions need to be taken to avoid them. In the UK, for example, it has been suggested that the Research Excellence Framework should better fund and credit the publication of less revolutionary findings. Additionally, scientists including Dr Genevera Allen are working on the next generation of machine learning algorithms, which will hopefully be able to assess the uncertainty of the results they produce. The reproducibility crisis poses a significant threat to the credibility of scientific institutions, but if the flaws of the current system are adequately addressed, the scientific method will emerge stronger than ever.
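One simple way to attach that kind of uncertainty estimate to a data-mined finding (a generic bootstrap sketch, not Dr Allen's actual method) is to ask how stable the result remains when the analysis is repeated on resampled versions of the same data:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_stability(x, y, n_boot=1000):
    """Re-estimate a correlation on many bootstrap resamples to gauge how
    stable the 'finding' is before anyone spends money trying to replicate it."""
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample rows with replacement
        estimates.append(np.corrcoef(x[idx], y[idx])[0, 1])
    estimates = np.array(estimates)
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Hypothetical 'discovered' relationship: a weak true effect buried in noise.
x = rng.standard_normal(100)
y = 0.1 * x + rng.standard_normal(100)
mean_r, (lo, hi) = bootstrap_stability(x, y)
print(f"Estimated correlation {mean_r:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

If the resampled estimates swing wildly or straddle zero, the "discovery" is probably not worth an expensive replication attempt.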
