
Science learns from its mistakes too

Date:
September 26, 2018
Source:
BfR Federal Institute for Risk Assessment
Summary:
A mathematical model shows that even seemingly inconclusive studies speed up the gain in knowledge.

Scientific studies should always be published irrespective of their result. That is one of the conclusions of a research project conducted by the German Centre for the Protection of Laboratory Animals at the German Federal Institute for Risk Assessment (BfR), the results of which have now been published in the journal PLOS ONE.

Using a mathematical model, the scientists examined how individual parameters set when studies are designed influence further research. "The research community should do everything possible to maintain social trust in science," says BfR President Professor Dr. Dr. Andreas Hensel. "This also means that results have to be understandable and reproducible so that false conclusions can be refuted easily. Our study shows that we achieve better results when seemingly inconclusive studies are published."

Investigations show that scientific studies have a better chance of being published if they have a desired "positive" result, such as measuring an expected effect, detecting a substance, or confirming a hypothesis. "Negative" or "null" results, which show none of these, have a lower chance of publication.

It goes without saying that scientists also have a strong interest in achieving meaningful results that are worthy of publication and thus advance research. The weight that journal publications carry for reputation and future funding intensifies this interest further. One consequence, however, can be that studies are published whose results are not reproducible and which therefore only appear to be "positive."

These seemingly positive results then lead to further studies that build on the supposedly proven effect. The well-established practice among publishers of mainly publishing studies with positive results thus favours studies that do not stand up to scrutiny and in turn triggers further unnecessary studies.

The mathematical model presented in the publication shows how this cycle of "false positive" results can be broken. If all studies -- irrespective of their results -- were published, provided they comply with good scientific practice, a false result could be disproven more quickly.
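The basic mechanism can be illustrated with a toy simulation (a minimal sketch, not the model from the PLOS ONE paper). It pools estimates of an effect that does not in fact exist; all parameter values, variable names and the use of Python are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(seed=1)

TRUE_EFFECT = 0.0        # the effect under study does not actually exist
SIGMA = 1.0              # within-group standard deviation
N_PER_GROUP = 10         # animals per group in each study (illustrative)
N_STUDIES = 200          # studies carried out by the research community

def run_study():
    """Simulate one two-group study; return the estimated effect and whether it is 'positive'."""
    treated = rng.normal(TRUE_EFFECT, SIGMA, N_PER_GROUP)
    control = rng.normal(0.0, SIGMA, N_PER_GROUP)
    estimate = treated.mean() - control.mean()
    standard_error = SIGMA * np.sqrt(2.0 / N_PER_GROUP)
    is_positive = estimate / standard_error > 1.645   # one-sided test, alpha = 0.05
    return estimate, is_positive

all_results, positive_only = [], []
for _ in range(N_STUDIES):
    estimate, is_positive = run_study()
    all_results.append(estimate)          # every study is published
    if is_positive:
        positive_only.append(estimate)    # only "positive" studies are published

print(f"pooled effect, all studies published:      {np.mean(all_results):+.3f}")
print(f"pooled effect, only positives published:   {np.mean(positive_only):+.3f}")

In this toy setting the complete record pools to an effect close to zero, while a literature that only admits "positive" studies keeps reporting a sizeable effect that does not exist.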

This means that a seemingly negative result is not a drawback but a gain in knowledge in its own right. An animal test that cannot demonstrate the efficacy of a new drug, for example, would then not be a failure in the eyes of science but a valuable result that prevents unnecessary follow-on studies (and further animal tests) and speeds up the development of new therapies.

An additional criterion, it turned out, also aids the gain in knowledge at the study-design stage: in biomedical studies, a scientifically justified, sufficiently large number of test animals in a single experiment increases the likelihood of obtaining correct and reproducible results at the first attempt. In the long run, unnecessary follow-on animal tests based on false assumptions can be avoided in this way. Using more test animals in a single experiment can therefore ultimately reduce the total number of animals used.
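The sample-size argument can be illustrated with a standard power and positive-predictive-value calculation (a textbook approximation in the spirit of the argument, not the calculation from the paper). The effect size, the assumed share of true hypotheses and the group sizes below are illustrative assumptions.

from scipy.stats import norm

ALPHA = 0.05           # one-sided false-positive rate
PRIOR_TRUE = 0.1       # assumed share of tested hypotheses that are actually true
EFFECT = 0.5           # assumed standardized effect size when a hypothesis is true
Z_ALPHA = norm.ppf(1 - ALPHA)

def power(n_per_group):
    """Approximate one-sided power of a two-group comparison with n animals per group."""
    standard_error = (2.0 / n_per_group) ** 0.5
    return 1 - norm.cdf(Z_ALPHA - EFFECT / standard_error)

for n in (5, 10, 20, 40):
    pw = power(n)
    # Positive predictive value: share of "positive" findings that reflect a real effect.
    ppv = pw * PRIOR_TRUE / (pw * PRIOR_TRUE + ALPHA * (1 - PRIOR_TRUE))
    print(f"animals per group = {n:2d}   power = {pw:.2f}   share of positives that are real = {ppv:.2f}")

Under these assumptions, going from 5 to 40 animals per group raises the chance of detecting a real effect at the first attempt from about 20 to about 70 percent and roughly doubles the share of published "positive" findings that hold up, which is the sense in which larger single experiments can reduce the total number of animals used.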

The calculations of the BfR research team are based on biomedical research with laboratory animals, but the results are broadly applicable to the life sciences.

The background to the study is the reproducibility crisis lamented in the life sciences and in psychological research. Depending on the meta-analysis, between 51 and 89 percent of the results published in bioscience studies cannot be reproduced by other researchers. Neuroscientific studies indicate that shortcomings in the statistical evaluation of experiments are often a reason why results cannot be reproduced.


Story Source:

Materials provided by BfR Federal Institute for Risk Assessment. Note: Content may be edited for style and length.


Journal Reference:

  1. Matthias Steinfath, Silvia Vogl, Norman Violet, Franziska Schwarz, Hans Mielke, Thomas Selhorst, Matthias Greiner, Gilbert Schönfelder. Simple changes of individual studies can improve the reproducibility of the biomedical scientific process as a whole. PLOS ONE, 2018; 13 (9): e0202762 DOI: 10.1371/journal.pone.0202762

