Cullen: Why most published research is wrong
Question everything! Louise Cullen examines the minefield of published research and the importance of reading around topics, not articles.

  • When Chris asked me to do this I thought: are you for real, Chris? "Why all published research is wrong" – do you really think I can cover all of this in 20 minutes?
  • Sounds easy? Right? Great – but how…
  • Where this all came from: an open-access article by John Ioannidis in 2005. In it, Ioannidis began the discussion about problems with studies and their analyses, publication and reporting.
  • Ioannidis randomly selected 50 ingredients and ran PubMed queries for recent studies that evaluated their relationship with cancer.
  • Effect sizes shrank with meta-analyses.
  • Ronald Fisher (a UK statistician) introduced the P value in the 1920s. He did not mean it to be a definitive test; he intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense, i.e. worth a second look. The idea was to run an experiment, then see whether the results were consistent with what random chance might produce. Researchers would first set up a "null hypothesis" that they wanted to disprove, such as there being no correlation or no difference between two groups. Next they would play devil's advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as those actually observed. This probability is the P value.
  • Most look at this and say there is a 1% chance of the findings being wrong. But this is wrong. The P value cannot say this; you also need to know the odds that the real effect was there in the first place. The P value actually means there is a 1% chance that results as extreme as these would occur when there is really no difference in the experiment, e.g. when a drug has no effect. (The two statements are written out after these notes.)
  • Consider 1000 hypotheses, of which only 10% are true.
  • Random error can make a hypothesis that is really false look true. These are false positives (FPs). Medicine accepts that this happens about 1 in 20 times, so among the 900 hypotheses that are really false there are 45 FPs.
  • If there are 100 true positives and 45 FPs, then almost a third of the results that look positive would be wrong.
  • But it is worse than that. There is another type of error: false negatives, where there is a true effect but it is misinterpreted as a false one. Say 20% of the true findings fail to be detected (and this figure is difficult to quantify); here that is 20 cases. Now researchers see 125 hypotheses as true, of which 45 are not. (The arithmetic is worked through in the short script after these notes.)
  • A growing number of studies cannot be replicated, because many may not have found a real result in the first place. Perhaps we should only be looking at P values < 0.005.
  • Start at the beginning: the studies themselves.
  • Interesting, but not really relevant?
  • Steroids for traumatic brain injury. Another medical example: investigations into H2 blockers in peptic ulcer disease, before it was realised that PUD is related to Helicobacter pylori.
  • Selective: in ACS, patients over 75 are excluded from most studies. Underpowered.
  • The National Registry of Myocardial Infarction (NRMI): a large US observational registry that collects baseline, procedural, therapeutic and outcome data on discharge. >1 million NSTEACS patients.
  • More likely that the findings are false with small sample size.
  • Not clinically relevant endpoints (adrenaline / vasopressin / high-dose adrenaline).
  • Another: milrinone in congestive heart failure – increased cardiac contractility, but increased mortality.
  • Since publication in 1990, results from the National Acute Spinal Cord Injury Study II (NASCIS II) trial have changed the way patients suffering an acute spinal cord injury (SCI) are treated. Though well designed and well executed, both NASCIS II and III failed to demonstrate improvement in primary outcome measures with the administration of methylprednisolone. Post-hoc comparisons, although interesting, did not provide compelling data to establish a new standard of care in the treatment of patients with acute SCI. Evidence of the drug's efficacy and impact is weak and may represent only random events.
  • Renamed.
  • In the late 1940s, before there was a polio vaccine, health authorities noted that polio cases increased with ice cream and soft-drink consumption. Eliminating such treats was part of the advice given to combat the spread of the disease. Polio was more common in summer, when people eat more ice cream. Hence: correlation vs causation.
  • Spin.
  • Incentives: egos, kudos. For academics: KPIs, output, promotions, grants. Tied to remuneration – QH contracts, KPIs.
  • Pharma – all bad? 75% of all US research is funded by pharma. Surgeon – cannot recommend surgery; interventional cardiologist – stenting.
  • UQ rort: one of Australia's leading universities is investigating new concerns of possible academic misconduct by two former academics. The University of Queensland's Dr Caroline Barwood and Professor Bruce Murdoch published a peer-reviewed paper in the prestigious European Journal of Neurology heralding a major breakthrough in the treatment of Parkinson's disease. The university made the unusual admission that it could find no data or evidence that the research was ever conducted. Before the article was retracted, the study's apparent success led to a number of grants. Ten months after the allegations of academic misconduct were first raised, and one month after the investigation was referred to the CMC, the university accepted part of a $300,000, five-year research fellowship on behalf of Dr Barwood.
  • Don't believe open access, where if I pay the money, many journals will simply accept my paper!
  • Negative trials are hard to publish. High-impact journals want the "first" of something – often exaggerated.
  • Where we end up is this: what are we doing now that is harmful to our patients?
  • Don't just skim the article – not just the abstract and conclusions. Learn a little about stats. Don't be fooled by a high-quality journal. Know and review the literature on the topic, not just the article.
  • I don't recommend that you go home tonight and try a "booty call" with your partner. Perhaps read the article and find out the details.
  • Consistent and transparent.
  • Be aware of your own biases, especially confirmation bias. Stop searching for information to confirm your own views; read broadly. Don't believe a single article's findings – look for bodies of work around a topic.
  • Who do you believe, and what do you believe? Do you believe whoever speaks loudest?
  • Sit and contemplate your position… Put the patient (not the patient's leg) at the forefront of your focus. Be sceptical!
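
The two statements about the P value, written out in standard notation (a sketch of the textbook definitions, not text from the slides). What the P value is:

    p = P(\text{results at least as extreme as observed} \mid H_0 \text{ is true})

The misreading "a 1% chance the finding is wrong" is the reverse conditional, which by Bayes' rule also needs the prior odds that the effect was real in the first place:

    P(H_0 \mid \text{results}) = \frac{P(\text{results} \mid H_0)\, P(H_0)}{P(\text{results})}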
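
The 1000-hypotheses arithmetic from the notes, as a short script (a minimal sketch in Python; the inputs – 10% of hypotheses true, a 1-in-20 false-positive rate, 20% of true effects missed – are the figures used in the notes, and 0.005 is the stricter threshold floated at the end):

    # Worked version of the 1000-hypotheses example from the notes.
    total = 1000
    true_effects = int(total * 0.10)      # 100 hypotheses are really true
    false_effects = total - true_effects  # 900 are really false
    power = 0.80                          # 20% of true effects go undetected

    for alpha in (0.05, 0.005):
        true_positives = true_effects * power         # 80 detected
        false_positives = false_effects * alpha       # 45 at alpha = 0.05
        positives = true_positives + false_positives  # what "looks true"
        print(f"alpha={alpha}: {positives:.1f} positives, "
              f"{false_positives:.1f} false ({false_positives / positives:.0%})")

At alpha = 0.05 this reproduces the notes' figures: 125 results look positive and 45 of them (36%, roughly a third) are wrong. Tightening the threshold to 0.005 drops the false share to about 5%.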

Cullen: Why most published research is wrong Presentation Transcript

  • 1. “Why most published research is wrong.” Louise Cullen (Clinician researcher)
  • 2. Disclosure Information
  • 3. “It is everyone’s responsibility to find out how to ask questions systematically, find answers from searching the literature, critically appraise the literature and apply the results to practice.” Rinaldo Bellomo
  • 4. “It is everyone’s responsibility to find out how to ask questions systematically, find answers from searching the literature, critically appraise the literature and apply the results to practice.” Rinaldo Bellomo
  • 5. 40 ingredients associated with cancer. Most single studies showed implausibly large effects.
  • 6. The p value
  • 7. The p value Observed size of Effect
  • 8. p=0.01
  • 9. p=0.01 There is a 1% chance that results as extreme as these would occur when there is really no difference in the experiment.
  • 10. 1000 hypotheses
  • 11. Replication of studies
  • 12. Replication of studies
  • 13. Problems with the study itself.
  • 14. Wrong question
  • 15. Wrong Theory
  • 16. Wrong population studied
  • 17. 2 ACS populations: trial and community. Circulation. 115(19):2549-69, 2007 May 15.
  • 18. n=2
  • 19. Wrong design
  • 20. • Greater the flexibility in – designs – definitions – outcomes – analytical modes
  • 21. • Greater the flexibility in – designs – definitions – outcomes – analytical modes • Hotter a scientific field with more teams involved.
  • 22. Wrong Endpoints
  • 23. Ad and high-dose Ad; Ca++ in cardiac arrest; COX-2 inhibitors; milrinone
  • 24. Methodology Statistical hypothesis inference testing
  • 25. Problems with reporting
  • 26. Interpretation
  • 27. • “a little significance” • “a definite trend is evident” • “a clear tendency” • “almost achieved significance”
  • 28. • “a little significance” • “a definite trend is evident” • “a clear tendency” • “almost achieved significance” The data is practically meaningless
  • 29. • “In my experience” • “In case after case” • “In a series of cases” • “It is generally believed that..” • “A highly significant area for exploratory study” • Once • Twice • Three times • A couple of others think so too • A totally useless topic in my underpowered study…….
  • 30. Omitting facts deliberately ….!
  • 31. Why? Incentives
  • 32. Pharma
  • 33. Pharma
  • 34. Why? Ethical practice of researchers
  • 35. Problems with publishing
  • 36. Don’t believe in the review process
  • 37. Journal publishing practices
  • 38. • 2004 “original articles” in NEJM – 363 tested an established therapy – 146 (40%) reversed that practice – 138 (38%) reaffirmed it
  • 39. What can you do about it?
  • 40. Read more than the title!
  • 41. Reporting frameworks: CONSORT (http://bit.ly/14qUNEF) – standards for the reporting of trials; STARD – standards for the reporting of diagnostic accuracy studies
  • 42. Biases
  • 43. Be sceptical!
  • 44. Thank you @louiseacullen