IMS.BrownBagSeminar.QRP

Seminar on questionable research practices.
Note: some of the slides are from Greg Francis' presentation on February 5, 2013 in Brussels.


Transcript

  • 1. From questionable research practices to questions about research. IMS Brown Bag Seminar, April 9, 2013. @TimSmitsTim
  • 2. Why this seminar? Why me?
  • 3. I am of a social psychological breed, so I feel a bit of a spotlight shining on me, considering the recent fraud cases…
  • 4. But also:
    - I hate the spotlight on experimental lab studies
    - I experienced a fair share of “too good to be true” moments
    - Sometimes, I am just like a child when it comes to ethical standards, and then …
  • 5. What is NOT at stake?
  • 6. IN MY OPINION…
    … it is not a problem of particular researchers. Frauds will always be there, but the major threat is dark-grey research practices.
    … it is not a problem of one single discipline (social psychology) or one single research method (experimental research).
  • 7. What is at stake?
  • 8. The evolution of a whole lot of research disciplines depends on how we deal with the situation right now.
    - Singular fraud vs. systemic questionable practices?
    And I fear the day this will inspire policy makers to cut down on our resources (there is a crisis, you know) or business people deciding to design their own educational system (or, even worse, their own flawed research).
  • 9. [Slide diagram: a continuum from outright fraud to greyer questionable practices]
    FRAUD: Diederik Stapel (50) – data fabrication, paper duplication; Yoshitaka Fujii (183) – plagiarism; Dirk Smeesters
    Other questionable research practices: lack of IRB approval; p-hacking; file drawer; one-sided lit. review; biased content analysis; biased interviews
    We? (communication sciences; KU Leuven; IMS)
  • 10. Institutional and big-scheme interventions
  • 11. - Retractions of flawed or fraudulent research papers; mega-corrections to published articles
    - Research on fraud and questionable research practices (and fierce discussions among protagonists)
    - Calls for replication studies; publication based on reviewed study design
    - Open science networks: e.g. mere publication/replication repositories – Open Science Framework
    - Post-publication review: less intrusive than a letter to the editor; more open access; closer to true academic discussion
    - Judicial sanctions for busted researchers
    - …
  • 12. From retractionwatch.wordpress.com
  • 13. YOUR interventions!!
  • 14. - You ARE a scientist. So trust your feelings when they say “too good to be true…”
    An extreme example: Greg Francis’ research (though criticized by, among others, Uri Simonsohn)
    *** For the following slides: ALL CREDITS to Greg’s presentation on February 5, 2013 in Brussels ***
  • 15. Experimental methods
    • Suppose you hear about two sets of experiments that investigate phenomena A and B
    • Which effect is more believable?

                                            Effect A   Effect B
    Number of experiments                   10         19
    Number of experiments that reject H0    9          10
    Replication rate                        0.9        0.53
  • 16.
    • Effect A is Bem’s (2011) precognition study that reported evidence of people’s ability to get information from the future
      – I do not know any scientist who believes this effect is real
    • Effect B is from a meta-analysis of a version of the bystander effect, where people tend not to help someone in need if others are around
      – I do not know any scientist who does not believe this is a real effect
    • So why are we running experiments?

                                            Effect A   Effect B
    Number of experiments                   10         19
    Number of experiments that reject H0    9          10
    Replication rate                        0.9        0.53
  • 17. Hypothesis testing (for means)
    • We start with a null hypothesis: no effect, H0
    • Identify a sampling distribution that describes variability in a test statistic:
      t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}
  • 18. Hypothesis testing (for two means)
    • We can identify rare test statistic values as those in the tail of the sampling distribution
    • If we get a test statistic in either tail, we say it is so rare (usually 0.05) that we should consider the null hypothesis to be unlikely
    • We reject the null H0:
      t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}
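As a concrete illustration of this test statistic, here is a minimal sketch in Python (the language, the simulated data, and the α = .05 threshold are illustrative choices of mine, not from the slides):

```python
# Two-sample t-test: compare the means of two independent groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group1 = rng.normal(loc=0.2, scale=1.0, size=100)  # small true effect
group2 = rng.normal(loc=0.0, scale=1.0, size=100)  # no effect

# t = (mean1 - mean2) / standard error of the difference
t_stat, p_value = stats.ttest_ind(group1, group2)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value < 0.05 else "Fail to reject H0")
```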
  • 19. Alternative hypothesis
    • If the null hypothesis is not true, then the data came from some other sampling distribution (Ha)
    [Figure: two overlapping sampling distributions, labelled H0 and Ha]
  • 20. Power
    • If the alternative hypothesis is true, power is the probability you will reject H0
    • If you repeated the experiment many times, you would expect to reject H0 with a proportion that reflects the power
    [Figure: the H0 and Ha sampling distributions, with power shown as the area of Ha beyond the critical value]
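To make “a proportion that reflects the power” concrete, here is a small simulation sketch (the setup – effect size d = 0.5, n = 30 per group, α = .05, two-sided – is hypothetical and mine, not from the slides):

```python
# Estimate power by simulation: the fraction of repeated experiments
# (with a true effect) in which the t-test rejects H0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, n_reps = 0.5, 30, 0.05, 10_000

rejections = 0
for _ in range(n_reps):
    treated = rng.normal(d, 1.0, n)    # Ha is true: mean shifted by d
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    rejections += p < alpha

print(f"Simulated power ≈ {rejections / n_reps:.2f}")  # ≈ 0.48 for these settings
```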
  • 21. Power
    • Use the pooled effect size to compute the pooled power of each experiment (the probability that this experiment would reject the null hypothesis)
    • Pooled effect size: g* = 0.1855
    • The sum of the power values (E = 6.27) is the expected number of times these experiments would reject the null hypothesis (Ioannidis & Trikalinos, 2007)

                         Sample size   Effect size (g)   Power
    Exp. 1               100           0.249             0.578
    Exp. 2               150           0.194             0.731
    Exp. 3               97            0.248             0.567
    Exp. 4               99            0.202             0.575
    Exp. 5               100           0.221             0.578
    Exp. 6 (Negative)    150           0.146             0.731
    Exp. 6 (Erotic)      150           0.144             0.731
    Exp. 7               200           0.092             0.834
    Exp. 8               100           0.191             0.578
    Exp. 9               50            0.412             0.363
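The power column and E can be reproduced analytically. A sketch (assuming, as the numbers suggest, one-sided one-sample t-tests at α = .05, each evaluated at the pooled effect size g* = 0.1855; the statsmodels choice is mine):

```python
# Recompute each experiment's power at the pooled effect size, then sum
# the power values to get the expected number of rejections (E).
from statsmodels.stats.power import TTestPower

g_pooled = 0.1855
sample_sizes = [100, 150, 97, 99, 100, 150, 150, 200, 100, 50]

analysis = TTestPower()  # one-sample / paired t-test power
powers = [analysis.power(effect_size=g_pooled, nobs=n, alpha=0.05,
                         alternative="larger")
          for n in sample_sizes]

print([round(p, 3) for p in powers])  # ≈ [0.578, 0.731, ..., 0.363]
print(f"E = {sum(powers):.2f}")       # ≈ 6.27, versus 9 observed rejections
```

With only about 6.27 rejections expected but 9 reported, the published record looks “too good to be true”, which is exactly Francis’ point.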
  • 22. Take-home message of Greg’s studies
    - The file drawer phenomenon might be immense. Don’t put your money on published studies.
    - Think not only about the p of your failed studies, but also about their power.
    - For most studies in our discipline, there is about a 50% chance to discover a true phenomenon (since many studies are underpowered).
    - Increase your N per hypothesis! It increases your “power” to discover an effect (Ha = true) and (a bit) to refute an effect’s existence (H0 = true).
    Note: to “detect” that men weigh more than women at an adequate power of .8, you need n = 46!!! (Simmons et al., 2013).
    Are we studying effects that are stronger than men outweighing women?
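A sketch of where such an n comes from (assuming a two-sample t-test at α = .05, two-sided, with equal group sizes, and an effect size of d ≈ 0.59 for the sex difference in weight – roughly the value implied by n = 46 per group at power .8; these assumptions are mine, not the slide’s):

```python
# Solve for the per-group sample size needed to reach a target power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.59, power=0.8, alpha=0.05,
                                   ratio=1.0, alternative="two-sided")
print(f"n per group ≈ {n_per_group:.0f}")  # ≈ 46 participants per group
```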
  • 23.
    - You ARE a scientist. So trust your feelings when they say “too good to be true…”
    - Engage in post-publication reviewing: do some active blogging about your own studies; engage in discussions about others’ research.
    - Replicate! Or make others replicate. That is, investigate what others have done already. Use all available data for your insights and do not take any single study’s results for granted. Go ahead and p-hack your own data, but replicate your own results.
    - Document your studies in a good way. Genuinely ask yourself: is this really everything one should need to know in order to replicate my study?
    - Openness in reporting and reviewing. Be honest, and confront reviewers if they fetishize immaculate papers.
    - Preferably, collaborate with other researchers and use shared repositories to store data, analyses, notes, etc.
  • 24. IRONICALLY, the net result will be that more papers will be published rather than fewer, I guess.
    Standards for what is good enough to be published should go down. As a result, more will be published, and meta-analysis, rather than single studies, will become the true catalyst of scientific progress.