
Reprodutibilidade em resultados de pesquisa (Olavo Bohrer Amaral)


Presentation by Prof. Dr. Olavo Bohrer Amaral at the CRICS10 Scientific Editors' Meeting (Reunião de Editores Científicos do CRICS10), December 4, 2018

Published in: Education

  1. Research reproducibility and scientific publication. Olavo B. Amaral, CRICS10 Scientific Editors' Meeting, December 2018
  2. An established paradigm • We are used to hearing that scientific articles are the most reliable source of evidence to which we have access. • That said, we have surprisingly little empirical evidence for this claim.
  3.–7. Some numbers... [figure slides]
  8. Some numbers • 11 published studies: 4 mostly replicated, 3 partially replicated, 2 non-replicated, 2 not interpretable.
  9. An empirical conclusion... • In the few areas of biomedical science studied, we cannot presume that most published findings are true.
  10. The rotten apple fallacy • Retractions for misconduct have become more common, but their frequency does not begin to explain the problem.
  11. The standardization fallacy • An opposite approach is to try to explain irreproducibility by purely methodological questions that can't be controlled across laboratories. (that said...)
  12. Hell is in the grey zone • The most useful way to approach irreproducibility is to look at the grey zone between methodological difficulties and misconduct.
  13. Chance and bias • With loose statistical standards, the sum of our biases can easily turn chance into scientific 'truth'.
  14. The tragedy of p < 0.05 • The definition of 'significance' used by most life scientists means that, with enough data and ways to analyze it, one can prove basically anything.
  15. • With sufficient experiments, 'significant' results will occur in 1 out of 20 experiments by definition, even if nothing is true.
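The 1-in-20 figure can be checked with a quick simulation (a hypothetical sketch, not part of the talk): draw two groups from the same distribution many times and count how often a significance test reaches p < 0.05 even though no real effect exists.

```python
import math
import random

def two_sample_p(n=30):
    """Two-sided p-value of a two-sample z-test on groups drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    # two-sided p-value from the normal CDF (a good approximation at n = 30)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 5000
false_positives = sum(two_sample_p() < 0.05 for _ in range(trials))
# the fraction of 'significant' results hovers around 0.05 by construction
print(f"{false_positives / trials:.3f}")
```

The normal approximation slightly inflates the rate relative to a proper t-test, but the point stands: roughly 1 in 20 null experiments comes out 'significant' by definition.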
  16. • Analyzing the same experiment in 20 different ways, it is usually not hard to find a 'significant' result either.
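The '20 different ways' claim follows from simple probability: if each analysis carries a 5% false-positive rate and the analyses were independent (a simplifying assumption; real analysis options are correlated, so the true figure is lower but still large), the chance of at least one 'significant' result is already close to two-thirds.

```python
alpha = 0.05      # conventional significance threshold
analyses = 20     # number of independent ways to analyze the same null experiment

# probability that at least one of the 20 analyses comes out 'significant'
p_at_least_one = 1 - (1 - alpha) ** analyses
print(f"{p_at_least_one:.2f}")  # 0.64
```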
  17. Publication/reporting bias
  18. A basic flaw • When reading a scientific article, we do not know the original hypotheses, the number of comparisons, or the number of analysis options. [Figure: what an article hides: replication, citations, impact, changed plans, data exploration, analysis options, obstacles, negative results, failed experiments, criticism]
  19. Like a Tinder profile • This is not all that different from other 'non-scientific' syntheses of reality. [Figure: the same hidden elements as above]
  20. Unmasking 'impact' • Traditional peer review conflates the evaluation of methods and results (and of data and stories). • Beautiful stories with lots of shaky data tend to trump solid, well-performed experiments. [Figure: Volume and Novelty/Impact (number of experiments, methods, etc.; coherence of results; novelty of theory; potential impact) vs. Rigor/Reproducibility (experimental design; protocol registration; data availability; statistical rigor)]
  21. Doomed to fail • Irreproducibility is the natural outcome in a system in which impact and novelty count but truth does not.
  22. Wrong time, wrong place • Even when peer review does work, it comes at the wrong time (after results are in) and behind closed doors. (Registered Reports)
  23. The death of prepublication peer review? • It is time to discuss whether prepublication peer review should not give way to different forms of quality control.
  24. Time to experiment • Do we lose that much by giving up on prepublication peer review? [Figure: quality of reporting of methods/results in preprints vs. peer-reviewed articles, scored as % of applicable items: bioRxiv 59.4% vs. PubMed 63.9%; p = 0.019, r² = 0.036. Carneiro et al., in prep.]
  25. Actual peer review • We urgently need better data on reproducibility in various scientific environments. • A 71-lab initiative to reproduce 50-100 experiments from Brazilian life-science articles published over the last 20 years. • Each experiment is reproduced in 3 labs to estimate interlab variability.
  26. No need to fear change • Unlike with democracy, we have not really tried out anything different for the last century.
  27. Barriers are cultural, not scientific • Peer review has become more important for science as a form of branding than as a form of actual quality control.
  28. Conclusions • The current model of peer review as a barrier to publication is more harmful than helpful for science. • Changing the way we publish science to make it more reproducible and accessible is an ethical imperative. • These changes must be accompanied by changes in how we evaluate science. • The barriers to change are cultural rather than financial or technological.
  29. The tide is changing • It is time to decide whether to cling to a sinking ship or to help build new ones.
  30. Come aboard! @BrRepInitiative