Presentation given at Organization for Human Brain Mapping Annual Meeting in Singapore 2018
Video recording: https://www.pathlms.com/ohbm/courses/8246/sections/12538/video_presentations/116214
2. Reproducibility vs replicability
• Reproducibility – can you recreate the same result using the original data and code?
• Replicability – can you recreate the same result using new data but the same experimental design?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5778115/
3. Why is reproducibility useful?
1. You can come back to your own analysis after a break (think peer review) or on a new machine
2. You can verify and extend other people's analyses
5. Data
• Access to data is necessary, but not sufficient for reproducibility!
• Publish your raw data:
• OpenNeuro
• FCP/INDI
• Protected institutional archive
6. Code: automation
• Automate your data analysis!
• If you do something twice, write code for it.
• Use containerized and versioned preprocessing tools:
• aa - http://automaticanalysis.org
• C-PAC - https://fcp-indi.github.io
• FMRIPREP – http://fmriprep.org
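The "if you do something twice, write code for it" rule can be sketched as a loop over subjects (a hypothetical illustration, not code from the talk; `preprocess` stands in for any real tool or container invocation):

```python
from pathlib import Path

def preprocess(subject_dir: Path) -> str:
    """Placeholder for a real preprocessing step (e.g. a containerized tool call)."""
    return f"preprocessed {subject_dir.name}"

def run_all(bids_root: Path) -> list[str]:
    # BIDS datasets keep one sub-* directory per participant, so the same
    # scripted step is applied identically to every subject and can be rerun
    # later, on a new machine, or by someone verifying the analysis.
    return [preprocess(d) for d in sorted(bids_root.glob("sub-*"))]
```

Because the whole loop is code, rerunning the analysis after peer review is one command instead of a sequence of remembered manual steps.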
10. Code: version control
• Git and GitHub are useful for everyday work
• They also provide a way for you to share your code
• Use tags/releases
• Zenodo.org for archival
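The tag/release workflow above can look like this (a minimal local sketch; the repository contents and tag name are hypothetical, and pushing assumes a GitHub remote):

```shell
# Minimal sketch of tagging a release (repository contents are hypothetical).
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.org"
git config user.name "Your Name"
echo "print('analysis')" > run.py
git add run.py && git commit -qm "Initial analysis code"

# Mark the exact code state used for the submitted manuscript:
git tag -a v1.0.0 -m "Code version used for the submitted manuscript"
git tag

# To share, push the tag (assumes a remote named 'origin' on GitHub):
#   git push origin v1.0.0
# A GitHub release created from this tag can then be archived on Zenodo
# to obtain a citable DOI.
```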
18. What makes a good replication?
Dimension 1: The need for replicating the original finding (1-5).
• Is the original finding used in policy making?
• Does the original finding inform clinical practice?
• Did the original finding open up a new subfield of research?
https://figshare.com/articles/Replication_Award_Creation_Kit/5567083
19. What makes a good replication?
• Is there a debate about the original finding?
• Are there studies undermining the original finding?
• Are there studies confirming the original finding?
20. What makes a good replication?
Dimension 2: Quality of the replication attempt (1-5).
• Was the replication study preregistered?
• Was the study protocol discussed with the original researchers prior to acquiring data and/or performing analysis?
• Was the replication performed by an independent team of researchers or was it done by the same people?
21. What makes a good replication?
• Was the sample size sufficient considering the originally reported effect size?
• Were the methods used in the replication attempt in accordance with current academic standards?
• Would the departures from the original protocol in the replication attempt change the conclusion of the original study if they were applied originally?
22. What makes a good replication?
[Figure: replication studies plotted along two axes – Importance (niche topic to important topic) and Quality (poor quality to great execution) – with OHBM Replication Award winners marked.]
23. Publishing replication studies
1. Most neuroimaging studies are underpowered
2. Appropriately powered replications might yield null results
3. Null results are hard to publish
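Point 1 can be made concrete with a quick power calculation (my own normal-approximation sketch, not from the talk): the per-group sample size needed to detect the originally reported effect in a two-sided, two-sample comparison at a given power.

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided
    two-sample test detecting an effect of size Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A "medium" original effect (d = 0.5) already needs ~63 subjects per group;
# smaller, more realistic effects need far more.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```

Since published effect sizes in underpowered studies tend to be inflated, a replication planned from the original d is, if anything, an underestimate of the required sample.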
25. Summary
• A high level of reproducibility can be achieved with:
• Data sharing
• Code version control
• Software containers
• Replication studies
• Require careful planning
• Are a great fit for Registered Reports
26. Repository for Women in Neuroscience – www.winrepo.org
• over 700 profiles
• easy search
• recommendations
Support the project: sign up, spread the word, submit recommendations.