
The Reproducibility Crisis in Psychological Science: One Year Later

Slides for my talk at the "Psychologist in the Pub" event, Stoke-on-Trent, 2nd November 2016.


  1. The Reproducibility Crisis in Psychological Science. Jim Grange, www.jimgrange.wordpress.com
  2. 2011 – “A Year of Horrors”
  3. “Derailed” (http://bit.ly/2eNL05d)
  4. “…the field of psychology currently uses methodological and statistical strategies that are too weak, too malleable, and offer too many opportunities for researchers to befuddle themselves and their peers”
  5. Each case raised unique questions about how science is conducted in psychology, from how research is planned right through to how data are analysed and published
  6. All cases pertain to growing concern over the number of false positives in the literature
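
The concern about "malleable" methods can be made concrete. Below is a minimal sketch in Python (assuming NumPy and SciPy are available; this code is not from the talk) of one flexible practice, optional stopping: peeking at the data every few participants and stopping as soon as p < .05. Even when there is no true effect, the false-positive rate climbs well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeks_until_significant(n_start=10, n_max=50, step=5, alpha=0.05):
    """One two-group study with NO true effect, re-testing after every
    `step` extra participants per group and stopping at p < alpha."""
    a = list(rng.normal(0, 1, n_start))
    b = list(rng.normal(0, 1, n_start))
    while len(a) <= n_max:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True   # "significant", even though H0 is true
        a.extend(rng.normal(0, 1, step))
        b.extend(rng.normal(0, 1, step))
    return False

n_sims = 5000
hits = sum(peeks_until_significant() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {hits / n_sims:.3f}")
# Typically two to three times the nominal 0.05
```
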
  7. “How reproducible are psychology findings?”
  8. Open Science Collaboration: 270+ researchers from across the globe
  9. Open Science Collaboration: performed close replications of 100 psychology studies
  10. Only 36% of studies replicated!!
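
Why might a replication rate this low arise even in an honest literature? A back-of-the-envelope calculation, in the spirit of positive-predictive-value arguments, is sketched below. All numbers are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope expected replication rate.
# All numbers are illustrative assumptions, not data from the talk.
alpha = 0.05        # false-positive rate of a single test
power_orig = 0.35   # assumed average power of original studies
power_rep = 0.80    # assumed power of the larger replication attempts
prior_true = 0.30   # assumed share of tested hypotheses that are true

# Positive predictive value: of the published "significant" findings,
# how many reflect real effects?
ppv = (power_orig * prior_true) / (
    power_orig * prior_true + alpha * (1 - prior_true)
)

# True effects replicate at power_rep; false positives only at alpha.
replication_rate = ppv * power_rep + (1 - ppv) * alpha
print(f"PPV = {ppv:.2f}, expected replication rate = {replication_rate:.2f}")
# With these assumptions: PPV = 0.75, expected replication rate ~ 0.61,
# so low replication rates don't require fraud -- only weak studies.
```
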
  11. A Year Later
  12. Recommendations:
      1. Replicate, replicate, replicate…
      2. Know your statistics
      3. Open your science
      4. Incorporate open science practices in teaching
      5. Reward open science practices
  14. VERIFICATION!!!
  15. Devoting resources to verification is irrational if the original findings are valid
  16. Devoting resources to verification is rational if the original findings are invalid
  17. We have a professional responsibility to ensure the findings we are reporting are robust and replicable
  18. Recommendations (recap)
  20. 2. Know Your Statistics
  22. False positives propagate because most researchers don’t understand the p-value (Cumming, 2012)
  23. Quiz: which of these is the p-value?
      a) the probability that the results are due to chance
      b) the probability that the results are not due to chance
      c) the probability of observing results as extreme (or more) as obtained if there is no effect in reality
      d) the probability that the results would be replicated if the experiment was conducted a second time
  25. True or False? The p-value tells us something about the size of an effect
  26. True or False? The p-value tells us something about the importance of an effect
  27. True or False? The p-value tells us something about the probability of our hypothesis
  28. Answer: (c) the probability of observing results as extreme (or more) as obtained if there is no effect in reality
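
One way to internalise this definition: when there is no real effect, the p-value is uniformly distributed, so p < .05 occurs about 5% of the time by construction, and the p-value carries no information about effect size, importance, or the probability of the hypothesis (all three "True or False?" statements above are false). A minimal simulation, in Python with NumPy/SciPy (not part of the original slides):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Many two-group experiments in which H0 is TRUE (no effect at all)
n_sims, n = 10_000, 30
p_values = np.array([
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(n_sims)
])

# The p-value is P(data at least this extreme | no real effect),
# so under H0 it is uniform: ~5% of runs land below .05 by construction
print(f"P(p < .05 | H0 true) = {np.mean(p_values < 0.05):.3f}")
```
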
  29. 2. Know Your Statistics: p(D|H)
  30. 2. Know Your Statistics: is p(D|H) the same as p(H|D)?
  31. p(Dead | Murdered) = 1, yet p(Murdered | Dead) is tiny (~ .001 or less)
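
The gap between these two conditional probabilities is Bayes' theorem in action. A minimal sketch with rough, assumed base rates (the exact figures are illustrative, not from the talk; with these numbers the posterior comes out near .005, making the same point as the slide's "< .001"):

```python
# Bayes' theorem: p(H|D) = p(D|H) * p(H) / p(D)
# Base rates below are rough, assumed annual figures for illustration only.
p_dead_given_murdered = 1.0    # everyone who is murdered is dead
p_murdered = 1 / 20_000        # assumed murder base rate
p_dead = 1 / 100               # assumed death base rate

p_murdered_given_dead = p_dead_given_murdered * p_murdered / p_dead
print(f"p(Dead | Murdered) = {p_dead_given_murdered:.0f}")    # 1
print(f"p(Murdered | Dead) = {p_murdered_given_dead:.4f}")    # 0.0050

# The same trap drives p-value misreadings: a significance test reports
# p(D|H0), but researchers usually want p(H0|D) -- and they can differ
# by orders of magnitude.
```
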
  35. Recommendations (recap)
  37. Science works by verification
  38. What percentage of researchers shared their data? Only 27% (!!)
  39. Researchers should NOT keep their data private if they have published from it
  40. Nice Bonus (1): It’s going public, so I make sure the data & analysis are of high quality
  41. Nice Bonus (2): I can find my data easily if asked for it
  42. As of January 1, 2017, signatories of the Peer Reviewers’ Openness Initiative (as reviewers and/or editors) make open practices a pre-condition for more comprehensive review
  43. EXPLORATORY vs. CONFIRMATORY
  44. “I appreciate your results were unexpected, but in order to tell a nicer story, you should rewrite your introduction as if you expected these results”
  45. HARKing: Hypothesising After the Results are Known
  46. HARKing: 92% of psychology articles report confirmed hypotheses (Fanelli, 2010)
  47. www.osf.io
  48. Recommendations (recap)
  52. Strong Incentives to Pursue New Ideas: Publications, Grant Income ($$), Employment, Promotion, Fame…?
  58. 5. Reward Open Science Practices. Good for Science: truth seeking, rigour, quality, reproducibility. Good for Individuals/Institutions: publishable, quantity, novelty, impact
  59. “…the solution requires making the incentives for getting it right competitive with the incentives for getting it published” (Nosek et al., 2012)
  60. Individual Reputations Are At Stake
  61. University Reputations Are At Stake
  62. (Utopian) Ideas for Institutions: Doing research right takes longer
  63. (Utopian) Ideas for Institutions: Be tolerant of lower output (if doing it right)
  64. (Utopian) Ideas for Institutions: Limit the “Publish or Perish” mentality
  65. (Utopian) Ideas for Institutions: Replace it with “Rigour or Rot” or “Rigour or Perish”
  66. (Utopian) Ideas for Institutions: Reward those doing it right
  67. Universe A & B: • Investigating embodiment of political extremism • Participants (N = 1,979!) from the political left, right, and center • Moderates perceived shades of grey more accurately than left or right (p < .01)
  68. Universe A & B: Moderates perceived shades of grey more accurately than left or right (p < .01)
  69. Universe A: • Moderates perceived shades of grey more accurately than left or right (p < .01) • Publishes the result without attempting a replication
  72. Universe B: • Moderates perceived shades of grey more accurately than left or right (p < .01) • Surprised by the effect, so tries to replicate the result before publishing, using an even larger sample size than Study 1 • Replication fails to reproduce the effect: no publication
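
The two universes can be simulated. The sketch below (Python with NumPy/SciPy; all parameters are assumed for illustration) supposes the grey-shades effect is not real: a small fraction of labs will still obtain p < .01 in Study 1 by chance. Universe B's replication step exposes these flukes; Universe A's immediate publication does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def study_pvalue(n_per_group, true_effect=0.0):
    """One two-group study; returns the t-test p-value."""
    a = rng.normal(true_effect, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    return stats.ttest_ind(a, b).pvalue

# Suppose the grey-shades effect is NOT real. Of many labs running
# Study 1 (~1,000 per group), about 1% still hit p < .01 by chance:
n_labs = 10_000
lucky_labs = sum(study_pvalue(1000) < 0.01 for _ in range(n_labs))
print(f"Labs with a 'significant' Study 1: {lucky_labs}")   # ~100

# Universe A: those labs publish immediately.
# Universe B: they first rerun the study with an even larger sample,
# and under H0 the replication "succeeds" only ~1% of the time:
replicated = sum(study_pvalue(2000) < 0.01 for _ in range(lucky_labs))
print(f"Universe B labs whose replication succeeds: {replicated}")  # ~1
```
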
  73. In which universe will this student most likely receive a lectureship position?
  74. There is something wrong with hiring decisions if “getting it published” is rewarded more than “getting it right”
  75. (Utopian) Ideas for Hiring Committees: Look for evidence of open science practice
  76. (Utopian) Ideas for Hiring Committees: Have open science practice as a “desired” (or “essential”!) item on the job specification
  77. (Utopian) Ideas for Hiring Committees: Judge publication quality rather than quantity
  78. Recommendations:
      1. Replicate, replicate, replicate…
      2. Know your statistics
      3. Open your science
      4. Incorporate open science practices in teaching
      5. Reward open science practices
  79. Thank You! www.jimgrange.wordpress.com
