Question everything! Louise Cullen examines the minefield of published research and the importance of reading around topics, not articles.


Open-access article by John Ioannidis, published in 2005.

In it, Ioannidis began the discussion about issues relating to problems with studies, their analyses, and their publication and reporting.

PubMed queries for recent studies that evaluated a relationship with cancer.

Its originator, Ronald Fisher, did not mean the p value to be a definitive test.

He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: i.e. that it was worth a second look.

The idea was to run an experiment, then see whether the results were consistent with what random chance might produce.

Researchers would first set up a ‘null hypothesis’ that they wanted to disprove – such as there being no correlation, or no difference between two groups.

Next they would play devil’s advocate and, assuming that this null hypothesis was in fact true, calculate the chances of getting results at least as extreme as those actually observed.

This probability is the P value.

A p value of 0.01 is commonly read as meaning there is only a 1% chance the result is a fluke – i.e. a 99% chance the effect is real. But this is wrong.

The P value cannot say this – you also need to know the odds that a real effect was there in the first place.

The P value actually means that there is a 1% chance that results at least as extreme as these would occur when there is really no difference in the experiment – e.g. when a drug has no effect.

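That definition can be made concrete with a small simulation. The sketch below is illustrative only – the sample size and observed difference are hypothetical numbers, not from the talk. It estimates a p value by playing devil’s advocate: assume the null hypothesis of no real difference is true, and count how often chance alone produces a difference in group means at least as extreme as the one ‘observed’.

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical experiment: two groups of n = 30 each, with an
# observed difference in means of 0.5 (made-up numbers).
observed_diff = 0.5
n = 30

# Assume the null hypothesis is true: both groups are draws from
# the same distribution (here, a standard normal).
trials = 10_000
extreme = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Count results at least as extreme as the one observed.
    if abs(mean(a) - mean(b)) >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(p_value)  # roughly 0.05 for these particular numbers
```

Note that this p value says nothing about the probability that a real effect exists – that is exactly the misreading described above.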

These are false positives (FPs).

Medicine accepts that this happens on the order of 1 in 20 times (p < 0.05), so among 1000 hypotheses (where 100 reflect true effects), the 900 null hypotheses yield about 45 FPs.

There is another type of error: false negatives, where there is a true effect but it is mistaken for a null result.

Say 20% of the true findings fail to be detected (and this figure is difficult to quantify). In this case, that is 20 of the 100 true effects.

Now researchers see 125 hypotheses as true (where 45 are not true).

Perhaps we should only be looking at P values < 0.005

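The 1000-hypotheses arithmetic above can be checked in a few lines. This is a minimal sketch of the same bookkeeping, assuming a 5% false-positive rate and a 20% false-negative rate as in the notes:

```python
hypotheses = 1000
true_effects = 100                        # hypotheses describing a real effect
null_effects = hypotheses - true_effects  # 900 with no real effect

alpha = 0.05                              # accepted false-positive rate (1 in 20)
false_negative_rate = 0.20                # say 20% of true effects go undetected

false_positives = round(null_effects * alpha)                     # 45
true_positives = round(true_effects * (1 - false_negative_rate))  # 80
claimed_true = true_positives + false_positives                   # 125

print(false_positives, true_positives, claimed_true)  # 45 80 125
# Of the 125 "positive" findings, 45 - more than a third - are wrong.
print(round(false_positives / claimed_true, 2))       # 0.36

# With a stricter threshold of p < 0.005:
print(null_effects * 0.005)               # only ~4.5 false positives
```

So even with every study performed honestly and correctly, over a third of the ‘significant’ findings in this scenario are false.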

Studies themselves

Another medical example – investigations into H2 blockers in peptic ulcer disease (PUD),

before it was realised that PUD was related to Helicobacter pylori.

ACS – patients over 75 excluded from most studies

Underpowered

Large, US observational registry

Collects baseline data, procedural, therapeutic and outcome data on discharge

>1 million NSTEACS

(Adrenaline / vasopressin / high-dose adrenaline)

Though well-designed and well-executed, both NASCIS II and III failed to demonstrate improvement in primary outcome measures as a result of the administration of methylprednisolone. Post-hoc comparisons, although interesting, did not provide compelling data to establish a new standard of care in the treatment of patients with acute SCI.

Evidence of the drug's efficacy and impact is weak and may only represent random events.

Polio was more common in summer, when people eat more ice cream.

Eliminating such treats was part of the advice given to combat the spread of the disease.

Hence: correlation vs causation.

Egos, kudos.

Academics – KPIs, output, promotions, grants

Tied to remuneration – Queensland Health (QH) contracts – KPIs

75% of all US research funded by Pharma


Surgeon – cannot recommend surgery

Interventional cardiologist - stenting

One of Australia's leading universities is investigating new concerns of possible academic misconduct by two former academics.

The University of Queensland's Dr Caroline Barwood and Professor Bruce Murdoch published a peer-reviewed paper in the prestigious European Journal of Neurology, heralding a major breakthrough in the treatment of Parkinson's disease.

The university made the unusual admission that it could find no data or evidence that the research was ever conducted.

Before the article was retracted, the study's apparent success led to a number of grants.

Ten months after the allegations of academic misconduct were first raised, and one month after the investigation was referred to the CMC, the university accepted part of a $300,000, five-year research fellowship on behalf of Dr Barwood.


High-impact journals want the ‘1st’ of something – findings often exaggerated.

What are we doing now that is harmful to our patients?

Not just abstract and conclusions

Learn a little about stats

Don’t be fooled by a high-quality journal.

Know/review the literature and the topic, not just the article.

Perhaps read the article and find out the details.

Stop searching for information to confirm your own views. Read broadly.

Don’t believe a single article’s findings – look for bodies of work around a topic.

Do you believe who speaks loudest?

Be sceptical!

- 1. “Why most published research is wrong.” Louise Cullen (Clinician researcher)
- 2. Disclosure Information
- 3. “It is everyone’s responsibility to find out how to ask questions systematically, find answers from searching the literature, critically appraise the literature and apply the results to practice.” Rinaldo Bellomo
- 4. “It is everyone’s responsibility to find out how to ask questions systematically, find answers from searching the literature, critically appraise the literature and apply the results to practice.” Rinaldo Bellomo
- 5. 40 ingredients associate with cancer Most single studies showed implausibly large effects.
- 6. The p value
- 7. The p value Observed size of Effect
- 8. p=0.01
- 9. p=0.01 There is a 1% chance of results as extreme as these would occur when there is really no difference occurring in the experiment.
- 10. 1000 hypotheses
- 11. Replication of studies
- 12. Replication of studies
- 13. Problems with the study itself.
- 14. Wrong question
- 15. Wrong Theory
- 16. Wrong population studied
- 17. 2 ACS: Trial and community populations Circulation. 115(19):2549-69, 2007 May 15.
- 18. n=2
- 19. Wrong design
- 20. • Greater the flexibility in – designs – definitions – outcomes – analytical modes
- 21. • Greater the flexibility in – designs – definitions – outcomes – analytical modes • Hotter a scientific field with more teams involved.
- 22. Wrong Endpoints
- 23. Ad and high-dose Ad; Ca++ in cardiac arrest; COX-2 inhibitors; Milrinone
- 24. Methodology Statistical hypothesis inference testing
- 25. Problems with reporting
- 26. Interpretation
- 27. • “a little significance” • “a definite trend is evident” • “a clear tendency” • “almost achieved significance”
- 28. • “a little significance” • “a definite trend is evident” • “a clear tendency” • “almost achieved significance” The data is practically meaningless
- 29. • “In my experience” • “In case after case” • “In a series of cases” • “It is generally believed that..” • “A highly significant area for exploratory study” • Once • Twice • Three times • A couple of others think so too • A totally useless topic in my underpowered study…….
- 30. Omitting facts deliberately ….!
- 31. Why? Incentives
- 32. Pharma
- 33. Pharma
- 34. Why? Ethical practice of researchers
- 35. Problems with publishing
- 36. Don’t believe in the review process
- 37. Journal publishing practices
- 38. • 2004 “original articles” in NEJM – 363 tested an established therapy – 146 (40%) reversed that practice – 138 (38%) reaffirmed it
- 39. What can you do about it?
- 40. Read more than the title!
- 41. Reporting Framework CONSORT (http://bit.ly/14qUNEF) – Standards for reporting of trials STARD – Standards for the Reporting of diagnostic accuracy studies
- 42. Biases
- 43. Be sceptical!
- 44. Thank you @louiseacullen
