Reproducibility in Behavioral Neuroscience: Methods Matter

Is there a reproducibility crisis?

In a recent survey on reproducibility in research, Nature polled 1,576 researchers (Baker, 2016). More than 70% of them had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own experiments.

Figure 1: Reproducibility statistics.

Although 52% of those surveyed agree that there is a significant ‘crisis’ of reproducibility (Figure 1), less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

What underlies this crisis?

The surveyed scientists identified two main groups of underlying reasons that presumably lead to problems with reproducibility (Figure 2).

  1. Competition for grants and positions
  2. Lack of appropriate experimental design, data collection and statistical analysis


More than 60% of respondents said that pressure to publish and selective reporting always or often contributed. More than half pointed to insufficient replication in the lab, poor oversight or low statistical power. A smaller proportion pointed to obstacles such as variability in reagents or the use of specialized techniques that are difficult to repeat.

Figure 2: What factors contribute to irreproducible research?

Judith Kimble, a developmental biologist at the University of Wisconsin–Madison, even sees an overarching driver behind these reasons, namely the “[…] competition for grants and positions, and a growing burden of bureaucracy that takes away from time spent doing and designing research.”

In practice, it is not feasible for individual researchers to change the way grant proposals are evaluated or how reward systems and promotions are decided. In the next section we therefore focus on ways to improve reproducibility that are within researchers’ own control, or at least open to their influence.

How to improve reproducibility in practice?

The surveyed scientists were asked to rate 11 different approaches to improving reproducibility in research. Nearly 90% chose “more robust experimental design”, “better statistics” and “better mentorship”.
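
To give one concrete example of what “better statistics” can mean in practice: an a priori power analysis helps avoid the low statistical power mentioned above. The following minimal sketch uses Python’s open-source statsmodels package; the effect size, significance level and target power are placeholder assumptions, not values from the survey.

    # A priori power analysis for a two-group comparison (e.g. treated vs.
    # control animals). effect_size, alpha and power are placeholder
    # assumptions; substitute estimates from pilot data or the literature.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.8,          # assumed standardized effect (Cohen's d)
        alpha=0.05,               # significance level
        power=0.8,                # desired chance of detecting a true effect
        alternative="two-sided",
    )
    print(f"Animals needed per group: {n_per_group:.0f}")  # about 26

Running such a calculation before the experiment, rather than justifying the sample size afterwards, is exactly the kind of planning that pre-registration (below) locks in.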

To prevent cherry-picking statistically significant results in the later stages of studies, one of the best-publicized approaches to boosting reproducibility is pre-registration, where scientists submit hypotheses and plans for data analysis to a third party before performing experiments.

Obviously, improved statistical methods will be of no value if the experimental design and its execution are flawed and therefore yield faulty or unreliable data.

For example, handling laboratory animals during test procedures is an important source of stress that may impair reliability of test responses (Gouveia & Hurst, 2017). One approach to address this is the use of the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments), which are intended to improve the reporting of research using animals (Kilkenny et al., 2010; see https://www.nc3rs.org.uk/arrive-guidelines).
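
One design safeguard that ARRIVE asks authors to report is how animals were allocated to experimental groups. The sketch below, with hypothetical animal IDs and blinded group codes, shows a seeded randomization so that the allocation itself can be reproduced and the experimenter scoring the behavior stays blind to treatment identity.

    # Seeded, blinded allocation of animals to groups (hypothetical IDs).
    # The fixed seed makes the allocation reproducible; the key mapping the
    # blinded codes ("A"/"B") to treatments is stored separately so that the
    # experimenter remains blind during testing and scoring.
    import random

    animal_ids = [f"mouse_{i:02d}" for i in range(1, 21)]  # hypothetical cohort
    group_codes = ["A", "B"]

    rng = random.Random(42)       # fixed seed for a reproducible allocation
    shuffled = animal_ids[:]
    rng.shuffle(shuffled)

    half = len(shuffled) // 2
    allocation = {aid: group_codes[i // half] for i, aid in enumerate(shuffled)}
    for animal, code in sorted(allocation.items()):
        print(animal, "->", code)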

However, even when ARRIVE is followed, most current behavioral assays for rodents still have short test durations, use novel test environments and require human interference, all of which introduce coercion (Hager et al., 2014).

How to improve reproducibility in behavioral neuroscience?

Biobserve has developed a range of fully automated, home-cage-based experiments that reduce or eliminate ambiguity of interpretation in behavioral research and can thus improve reproducibility in behavioral neuroscience.

We can help you design and establish robust, reproducible behavioral experiments, and we offer validated software and support for gathering and analyzing your research data. Biobserve in short:

  • Staffed with scientists only
  • Covering all common behavioral tests
  • Ability to support novel approaches and experiments


Please do not hesitate to contact us regarding your next behavioral study!

References:

Baker M (2016) 1,500 scientists lift the lid on reproducibility. Nature 533(7604):452–454. doi: 10.1038/533452a.

Gouveia K, Hurst JL (2017) Optimising reliability of mouse performance in behavioural testing: the major role of non-aversive handling. Sci Rep 7:44999. doi: 10.1038/srep44999.

Hager T, Jansen RF, Pieneman AW, Manivannan SN, Golani I, van der Sluis S, Smit AB, Verhage M, Stiedl O (2014) Display of individuality in avoidance behavior and risk assessment of inbred mice. Front Behav Neurosci 8:314. doi: 10.3389/fnbeh.2014.00314.

Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG (2010) Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol 8(6):e1000412. doi: 10.1371/journal.pbio.1000412.