
Note: part of the slides are from Greg Francis' presentation on February 5th, 2013, in Brussels.


- 1. From questionable research practices to questions about research. IMS Brown Bag Seminar, April 9th, 2013. @TimSmitsTim
- 2. Why this seminar? Why me?
- 3. I am of a social psychological breed, so I feel a bit of a spotlight shining on me, considering the recent fraud cases…
- 4. But also: I hate the spotlight on experimental lab studies; I experienced a fair share of “too good to be true” moments; and sometimes I am just like a child when it comes to ethical standards, and then …
- 5. What is NOT at stake?
- 6. IN MY OPINION… it is not a problem of particular researchers. Frauds will always be there, but the major threat is dark-grey research practices. And it is not a problem of one single discipline (social psychology) or one single research method (experimental research).
- 7. What is at stake?
- 8. The evolution of a whole lot of research disciplines depends on how we deal with the situation right now. Singular fraud vs. systemic questionable practices? And I fear the day this will inspire policy makers to cut down on our resources (there is a crisis, you know) or business people deciding to design their own educational system (or, even worse, their own flawed research).
- 9. FRAUD: Diederik Stapel (50), Yoshitaka Fujii (183), Dirk Smeesters — data fabrication, paper duplication, plagiarism, lack of IRB approval. Other questionable research practices: p-hacking, the file drawer, one-sided literature review, biased content analysis, biased interviews. We? (communication sciences; KU Leuven; IMS)
- 10. Institutional and big-scheme interventions
- 11. Retractions of flawed or fraudulent research papers; mega-corrections to published articles. Research on fraud and questionable research practices (and fierce discussions among protagonists). Calls for replication studies; publication based on reviewed study design. Open science networks: e.g. mere publication/replication depositories — the Open Science Framework. Post-publication review: less intrusive than a letter to the editor; more open access; closer to true academic discussion. Judicial sanctions for busted researchers. …
- 12. From retractionwatch.wordpress.com
- 13. YOUR interventions!!
- 14. You ARE a scientist. So trust your feelings when they say “too good to be true…” An extreme example: Greg Francis’ research (though criticized by, among others, Uri Simonsohn). ***For the following slides: ALL CREDITS to Greg’s presentation on February 5th, 2013, in Brussels***
- 15. Experimental methods. Suppose you hear about two sets of experiments that investigate phenomena A and B. Which effect is more believable?

  | | Effect A | Effect B |
  |---|---|---|
  | Number of experiments | 10 | 19 |
  | Experiments that reject H0 | 9 | 10 |
  | Replication rate | 0.9 | 0.53 |
- 16. Effect A is Bem’s (2011) precognition study, which reported evidence of people’s ability to get information from the future; I do not know any scientist who believes this effect is real. Effect B is from a meta-analysis of a version of the bystander effect, where people tend not to help someone in need if others are around; I do not know any scientist who does not believe this is a real effect. So why are we running experiments? (Same table as the previous slide.)
- 17. Hypothesis testing (for means). We start with a null hypothesis of no effect, H0, and identify a sampling distribution that describes variability in a test statistic: t = (X̄₁ − X̄₂) / s(X̄₁ − X̄₂), where s(X̄₁ − X̄₂) is the standard error of the difference in means.
- 18. Hypothesis testing (for two means). We can identify rare test-statistic values as those in the tails of the sampling distribution. If we get a test statistic in either tail, we say it is so rare (usually 0.05) that we should consider the null hypothesis unlikely, and we reject H0: t = (X̄₁ − X̄₂) / s(X̄₁ − X̄₂)
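The t statistic on these two slides can be computed by hand. A minimal sketch in Python (the data and the pooled-variance version of the standard error are illustrative assumptions, not from the slides):

```python
import math
import statistics

def two_sample_t(x, y):
    """Pooled-variance two-sample t statistic: t = (mean1 - mean2) / SE."""
    n1, n2 = len(x), len(y)
    m1, m2 = statistics.mean(x), statistics.mean(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)  # sample variances
    # Pooled variance, then the standard error of the difference in means
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# Illustrative (made-up) group scores
control = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9]
treatment = [5.2, 4.9, 5.8, 5.5, 4.8, 5.6]
t = two_sample_t(treatment, control)
```

If |t| falls beyond the critical value of the t distribution with n1 + n2 − 2 degrees of freedom, H0 is rejected, exactly as the slide describes.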
- 19. Alternative hypothesis. If the null hypothesis is not true, then the data came from some other sampling distribution, Ha.
- 20. Power. If the alternative hypothesis is true, power is the probability you will reject H0. If you repeated the experiment many times, you would expect to reject H0 with a proportion that reflects the power.
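Power as “the long-run proportion of rejections under Ha” can be made concrete by simulation. A sketch (the effect size d = 0.5 and n = 100 per group are assumed values; it uses a z-approximation with a 1.96 cutoff rather than an exact t-test, which is fine at this sample size):

```python
import random

def simulated_power(n, effect, crit=1.96, sims=2000, seed=0):
    """Proportion of simulated experiments that reject H0 when Ha is true."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(sims):
        # Under Ha: control ~ N(0, 1), treatment ~ N(effect, 1)
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(effect, 1.0) for _ in range(n)]
        mx, my = sum(x) / n, sum(y) / n
        vx = sum((v - mx) ** 2 for v in x) / (n - 1)
        vy = sum((v - my) ** 2 for v in y) / (n - 1)
        se = ((vx + vy) / n) ** 0.5
        if abs((my - mx) / se) > crit:  # two-tailed rejection at alpha = .05
            rejections += 1
    return rejections / sims

power = simulated_power(n=100, effect=0.5)  # long-run rejection rate under Ha
```

With d = 0.5 and n = 100 per group the analytic power is roughly .94, and the simulated proportion lands close to that, illustrating the slide’s point: repeat the experiment many times and the rejection rate converges on the power.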
- 21. Power. Use the pooled effect size (g* = 0.1855) to compute the pooled power of each experiment (the probability that that experiment would reject the null hypothesis). The sum of the power values (E = 6.27) is the expected number of times these experiments would reject the null hypothesis (Ioannidis & Trikalinos, 2007).

  | | Sample size | Effect size (g) | Power |
  |---|---|---|---|
  | Exp. 1 | 100 | 0.249 | 0.578 |
  | Exp. 2 | 150 | 0.194 | 0.731 |
  | Exp. 3 | 97 | 0.248 | 0.567 |
  | Exp. 4 | 99 | 0.202 | 0.575 |
  | Exp. 5 | 100 | 0.221 | 0.578 |
  | Exp. 6 Negative | 150 | 0.146 | 0.731 |
  | Exp. 6 Erotic | 150 | 0.144 | 0.731 |
  | Exp. 7 | 200 | 0.092 | 0.834 |
  | Exp. 8 | 100 | 0.191 | 0.578 |
  | Exp. 9 | 50 | 0.412 | 0.363 |
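The “expected number of rejections” logic above can be checked directly: with the per-experiment power values from the slide, the expected count is simply their sum, and the probability of observing at least 9 rejections out of 10 follows from treating each experiment as an independent Bernoulli trial. A sketch of this excess-success check in the spirit of Ioannidis & Trikalinos (the dynamic program computing the Poisson-binomial distribution is my own framing, not code from the slides):

```python
def success_count_dist(powers):
    """Poisson-binomial distribution: dist[k] = P(exactly k rejections)."""
    dist = [1.0]  # zero experiments: P(0 successes) = 1
    for p in powers:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)   # this experiment fails to reject H0
            new[k + 1] += prob * p     # this experiment rejects H0
        dist = new
    return dist

# Per-experiment power values from the slide's table
powers = [0.578, 0.731, 0.567, 0.575, 0.578,
          0.731, 0.731, 0.834, 0.578, 0.363]
expected = sum(powers)          # expected number of rejections, ~6.27
dist = success_count_dist(powers)
p_at_least_9 = sum(dist[9:])    # chance of 9+ rejections out of 10
```

The expected count reproduces the slide’s E = 6.27, and the small probability of seeing 9 or more rejections is what makes a 9/10 success record look “too good to be true” given these power levels.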
- 22. Take-home message of Greg’s studies. The file drawer phenomenon might be immense: don’t put your money on published studies. Think not only about the p of your failed studies, but also their power. For most studies in our discipline, there is about a 50% chance to discover a true phenomenon (since many studies are underpowered). Increase your N per hypothesis! It increases your “power” to discover an effect (Ha = true) and (a bit) to refute an effect’s existence (H0 = true). Note: to “detect” that men weigh more than women at an adequate power of .8, you need n = 46 (Simmons et al., 2013). Are we studying effects that are stronger than men outweighing women?
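The n = 46 figure can be reproduced with a standard normal-approximation sample-size formula. A sketch, reading the slide’s n as a per-group sample size and assuming a standardized effect size of roughly d ≈ 0.59 for the sex difference in weight (that d value is my assumption, chosen because it recovers the quoted n; the slide itself gives only n = 46):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison (normal approximation):
    n = 2 * (z_{alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided critical value, ~1.96
    z_b = z.inv_cdf(power)          # ~0.84 for 80% power
    return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

n = n_per_group(0.59)  # assumed d for the men-vs-women weight comparison
```

The same formula makes the slide’s underpowering point vivid: a “small” effect of d = 0.2 needs almost 400 participants per group at the same power, far beyond many published studies.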
- 23. You ARE a scientist. So trust your feelings when they say “too good to be true…” Engage in post-publication reviewing: do some active blogging about your own studies; engage in discussions about others’ research. Replicate! Or make others replicate. That is, investigate what others have done already, use all available data for your insights, and do not take any single study’s results for granted. Go ahead and p-hack your own data, but replicate your own results. Document your studies well, and genuinely ask yourself: is this really everything one should need to know in order to replicate my study? Openness in reporting and reviewing: be honest, and confront reviewers if they fetishize immaculate papers. Preferably, collaborate with other researchers and use shared repositories to store data, analyses, notes, etc.
- 24. IRONICALLY, the net result will be that more papers are published rather than fewer, I guess. Standards for what is good enough to be published should go down. As a result, more will be published, and meta-analysis, rather than single studies, will become the true catalyst of scientific progress.
