Doug Altman 15 Jan09 V4


1. How much confidence can we have in published medical research?
   Doug Altman, Centre for Statistics in Medicine, University of Oxford
2. Richard Horton, Editor, Lancet
   • “Can you trust the research published in medical journals? Mostly I believe you can. But not always … some of the most newsworthy reports published by our leading journals spontaneously combust under the gentlest of scrutiny.”
   • Guardian, 16 Sept 2004
3. John Ioannidis (epidemiologist, Greece)
   • “Most research findings are false for most research designs and for most fields”
   • [PLoS Medicine 2005]
4. Research and publication
   • Medical research should advance scientific knowledge and – directly or indirectly – lead to improvements in treatment or prevention of disease
   • A research report is usually the only tangible evidence that the study was done, how it was done, and what the findings were
   • Scientific manuscripts should present sufficient data so that the reader can fully evaluate the information and reach his or her own conclusions about results
     - Assess reliability and relevance
5. The purpose of a research article
   • Clinicians might read it to learn how to treat their patients better
     - “Editors, reviewers and authors are often tempted to pander to this group, by sexing up the results with unjustified clinical messages – sometimes augmented by an even more unbalanced press release.” [Buckley, Emerg Med Australas 2005]
   • Researchers might read it to help plan a similar study or as part of a systematic review
     - Need a clear understanding of exactly what was done
6. Some major causes of bias
   • Incorrect methods
   • Poor reporting
     - Failure to disclose key aspects of how the study was done
     - Selective reporting of only some of a large number of results
       - Groups, outcomes, subgroups, etc.
   • Errors of interpretation
7. 3 underlying major problems in medical research
   • Unclear main study question
   • Multiplicity
   • Over-reliance on significance tests and the holy grail of P < 0.05
   • … very often in combination
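The multiplicity problem is largely arithmetic: if several independent comparisons are each tested at the 5% level when no true effects exist, the chance of at least one spuriously “significant” result grows quickly. A minimal Python sketch of that calculation (illustrative numbers only, not taken from the talk):

```python
# Chance of at least one false-positive result when k independent comparisons
# are each tested at alpha = 0.05 and there are no true effects.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k   # roughly 0.05, 0.23, 0.40, 0.64
    print(f"{k:2d} comparisons: P(at least one P < 0.05) = {p_any:.2f}")
```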
8. Composite outcomes
   • Lotrafiban vs placebo in coronary and cerebrovascular disease [Topol et al, Circulation 2003]
9. Post hoc specification
   • Choosing the analysis after looking at the data
     - A specific form of incomplete reporting
     - Many other analyses are implied but not carried out
   • Data-derived analyses are misleading
     - Can have a huge effect
10. Vasopressin vs epinephrine for out-of-hospital cardiopulmonary resuscitation [Wenzel et al, NEJM 2004]
   • Primary outcome: hospital admission (alive)
   • The overall comparison gives OR = 1.26 (95% CI 0.98 to 1.62) [P = 0.06]
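To show how an odds ratio, its confidence interval and the P value hang together, here is a minimal Python sketch using the standard log odds ratio (Wald) approximation. The 2x2 counts are hypothetical, since the slide quotes only the summary figures, not the underlying table; the point is simply that a 95% CI that includes 1 goes with a two-sided P above 0.05, as in the OR = 1.26 (0.98 to 1.62), P = 0.06 result above.

```python
import math

def two_by_two_or(a, b, c, d):
    """Odds ratio, Wald 95% CI and two-sided P from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(log_or - 1.96 * se)
    hi = math.exp(log_or + 1.96 * se)
    z = abs(log_or) / se
    p = 1 - math.erf(z / math.sqrt(2))  # two-sided P, normal approximation
    return math.exp(log_or), lo, hi, p

# Hypothetical counts (not the trial's data), chosen only to show the pattern:
# a 95% CI that includes 1 corresponds to a two-sided P above 0.05.
or_est, lo, hi, p = two_by_two_or(214, 386, 186, 414)
print(f"OR = {or_est:.2f}, 95% CI {lo:.2f} to {hi:.2f}, P = {p:.2f}")
```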
11. Wenzel et al, NEJM 2004
12. Wenzel et al, NEJM 2004
   • No mention in Methods of planning to look at subgroups
   • Comparison of P values is wrong
   • No significant difference between the 3 groups (interaction test)
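A test of interaction asks directly whether the treatment effect differs between subgroups, rather than noting that one subgroup’s P value is below 0.05 while another’s is not. A minimal Python sketch for the two-subgroup case under the usual normal approximation; the subgroup estimates are hypothetical, not the trial’s data.

```python
import math

def interaction_test(log_or1, se1, log_or2, se2):
    """z-test for whether a treatment effect (log odds ratio) differs
    between two subgroups, i.e. a simple test of interaction."""
    diff = log_or1 - log_or2
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = diff / se_diff
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # two-sided P, normal approximation
    return diff, z, p

# Hypothetical subgroup estimates (not the trial's data):
# subgroup A: OR 1.8 with SE(log OR) 0.30 -> on its own, P is about 0.05
# subgroup B: OR 1.1 with SE(log OR) 0.25 -> on its own, P is about 0.70
diff, z, p = interaction_test(math.log(1.8), 0.30, math.log(1.1), 0.25)
print(f"difference in log OR = {diff:.2f}, z = {z:.2f}, interaction P = {p:.2f}")
```

In this made-up example one subgroup alone gives P of about 0.05 and the other about 0.70, yet the interaction P is roughly 0.21: the data do not show that the effect genuinely differs between the subgroups.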
13. Comparing trial publications with protocols
   • Comparison of protocols and publications
     - 102 RCTs submitted to a Danish ethics committee in 1994-95
     - Protocols and subsequent journal articles (122 articles)
     - Questionnaire sent to all authors
   • Frequent discrepancies between the protocol and the published report
     - Specification of primary outcomes: 51/82 (62%)
   • [Chan et al, JAMA 2004]
14. [Chan et al, JAMA 2004]
15. Example
   • Neurology trial
   • Surgical intervention
   [Slide diagram comparing the trial protocol and the publication. Elements shown: “Primary outcome: % with Score < 3 at 1 year”; “Primary outcome: % dead/dependent at 1 year”; P ≥ 0.05; P < 0.05; labels “Protocol” and “Publication”. The primary outcome named in the publication differs from the one specified in the protocol.]
16. Comparing trial publications with protocols
   • Unacknowledged discrepancies between protocols and publications
     - Sample size calculations: 18/34 trials
     - Methods of handling protocol deviations: 19/43
     - Missing data: 39/49
     - Primary outcome analyses: 25/42
     - Subgroup analyses: 25/25
     - Adjusted analyses: 23/28
   • Interim analyses were described in 13 protocols but mentioned in only five corresponding publications
   • [Chan et al, BMJ 2008]
17. Biased interpretation
   • Authors often interpret nonsignificant results as evidence of no effect, regardless of sample size
   • “Conclusions in trials funded by for-profit organizations may be more positive due to biased interpretation of trial results” [Als-Nielsen et al, JAMA 2003]
19. Doyle et al, Arch Surg 2008
20. “Patients who undergo liver transplantation at age 60 or above have 1-year and 5-year survival rates similar to those of younger patients …”
   • No mention that the study was of hepatitis C positive patients
21. Doyle et al: effect of age of donor
   • Age > 60 was not statistically significant in the multivariate analysis
   • HR = 3.03 (95% CI 0.70 to 12.20)
   • P = 0.12
   • “There are at least 25 other studies that have demonstrated that older donors are associated with worse outcomes in recipients with hepatitis C …”
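Using only the figures quoted on the slide, the P value can be back-calculated from the hazard ratio and its confidence interval assuming normality on the log scale; the sketch below uses that standard approximation, which is not necessarily the exact analysis the authors ran.

```python
import math

def p_from_ratio_and_ci(est, lo, hi, z_crit=1.96):
    """Back-calculate a two-sided P value from a ratio estimate (e.g. a hazard
    ratio) and its 95% CI, assuming normality on the log scale."""
    se = (math.log(hi) - math.log(lo)) / (2 * z_crit)
    z = abs(math.log(est)) / se
    return 1 - math.erf(z / math.sqrt(2))  # two-sided P, normal approximation

# Figures quoted on the slide: HR = 3.03, 95% CI 0.70 to 12.20
print(round(p_from_ratio_and_ci(3.03, 0.70, 12.20), 2))  # about 0.13
```

The result, roughly 0.13, is consistent with the reported P = 0.12 allowing for rounding of the published HR and CI, and an interval running from 0.70 to 12.2 shows how far a “nonsignificant” result can be from demonstrating no effect.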
22. Importance of transparent reporting of research
   • Scientific manuscripts should present sufficient data so that the reader can fully evaluate the information and reach his or her own conclusions about results
     - Reliability and relevance
   • There is much evidence that this does not happen
   • Assessment of the reliability of published articles is seriously impeded by inadequate reporting
   • Authors (and journals) have an obligation to ensure that research is reported adequately
     - i.e. transparently and completely
23. Evidence of poor reporting
   • Poor reporting: key information is missing or ambiguous
   • There is considerable evidence that many published articles do not contain the necessary information
     - We cannot tell exactly how the research was done
24. 519 randomised trials published in December 2000
   • Failure to report key aspects of trial conduct:
     - 73% sample size calculation
     - 55% defined primary outcome(s)
     - 60% whether blinded
     - 79% method of random sequence generation
     - 82% method of allocation concealment
   • [Chan & Altman, Lancet 2005]
25. Poor reporting is a serious problem for systematic reviews and clinical guidelines
   • “The biggest problem was the quality of reporting, which did not allow us to judge the important methodological items ...”
   • “Data reporting was poor. 15 trials met the inclusion criteria for this review but only 4 could be included as data were impossible to use in the other 11.”
   • (Reviews on the Cochrane Library, accessed 18 Sept 07)
26. Other study types
   • There is most evidence for randomised trials, but similar concerns apply to all types of research
   • Systematic reviews
   • Phase II trials
   • Diagnostic accuracy
   • Epidemiology
   • Genetic epidemiology
   • Prognostic markers
   • etc.
27. Case-control studies
   • “The reporting of methods in the 408 identified papers was generally poor, with basic information about recruitment of participants often absent …”
   • “Poor reporting of recruitment strategies threatens the validity of reported results and reduces the generalisability of studies.”
   • [Lee et al, Br J Psychiatry 2007]
28. Selective reporting
   • In addition, there is accumulating evidence of two major threats to the medical literature
   • Study publication bias – studies with less interesting findings are less likely to be published
   • Outcome reporting bias – results included within published reports are selected to favour those with statistically significant results
29. Impact of poor reporting
   • Cumulative published evidence is misleading
     - Biased results
     - Methodological weaknesses may not be apparent
   • Adverse effects on
     - Other researchers
     - Clinicians
     - Patients
30. Whose fault is poor reporting?
   • Poor reporting indicates a collective failure of authors, peer reviewers, and editors
     - … on a massive scale
   • Researchers (authors) may not know what information to include in a report of research
   • Editors may not know what information should be included
   • What help can be given to authors?
31. What can be done to improve research reports?
   [Diagram: the pathway from research conduct through publication to knowledge dissemination, with existing supports shown under each stage: research guidance (conduct); scientific writing guidance and journals’ Instructions to Authors (publication).]
32. What can be done to improve research reports?
   [Same diagram, with “Reporting guidelines” added as the means of closing the gap between research conduct, publication, and knowledge dissemination.]
33. Transparency and reproducibility
   • All key aspects of how the study was done
     - Allow repetition (in principle) if desired
   • “Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results.” [International Committee of Medical Journal Editors]
     - The same principle should extend to all study aspects
     - Only 49% of 80 consecutive reports accepted for publication in Evidence-Based Medicine (2005-06) gave sufficient details of the treatment studied to allow clinicians to reproduce it [Glasziou et al 2007]
   • Reporting of results should not be misleading
34. An early call for guidance on reporting clinical trials (1950)
   • “This leads one to consider if it is possible, in planning a trial, in reporting the results, or in assessing the published reports of trials, to apply criteria which must be satisfied if the analysis is to be entirely acceptable….”
   • “A basic principle can be set up that … it is at least as important to describe the techniques employed and the conditions in which the experiment was conducted, as to give the detailed statistical analysis of results.”
   • [Daniels M. Scientific appraisement of new drugs in tuberculosis. Am Rev Tuberc 1950;61:751-6.]
35. The CONSORT Statement for reporting RCTs [Moher et al, JAMA/Annals/Lancet 2001]
   • Minimum set of essential items necessary to evaluate the study
   • 22 items that should be reported in a paper
     - Based on empirical evidence where possible
   • Also a flow diagram describing patient progress through the trial
   • Long explanatory paper (E&E)
   • www.consort-statement.org
36. Goals of CONSORT
   • Main objective: to facilitate critical appraisal and interpretation of RCTs by providing guidance to authors about how to improve the reporting of their trials
   • Secondary objective: to encourage and provide incentives for researchers to conduct high-quality, unbiased randomized trials
37. Impact of CONSORT
   • All leading general medical journals and hundreds of specialist journals support CONSORT
     - Not necessarily enforced
   • Adoption of CONSORT by journals is associated with improved reporting [Plint et al, Med J Aust 2006]
   • Official extensions
     - Cluster trials, non-inferiority and equivalence trials, harms, non-pharmacological treatments, pragmatic trials, abstracts
   • Unofficial extensions
     - e.g. acupuncture (STRICTA), nonrandomised public health interventions (TREND)
38. Other guidelines
   • Other study types – CONSORT as a model
     - QUOROM (meta-analyses of RCTs) (now PRISMA)
     - STARD (diagnostic studies)
     - STROBE (observational studies)
     - REMARK (tumour marker prognostic studies)
     - …
   • Such guidelines are still not widely supported by medical journals or adhered to by researchers
     - Their potential impact is blunted
39. STROBE
40. Key aspects of reporting guidelines
   • Guidance, not requirements
     - Journals may enforce adherence
   • For authors, editors, and readers
   • Not methodological quality
   • “Accurate and transparent reporting is like turning the light on before you clean up a room: it doesn’t clean it for you but does tell you where the problems are.” [Frank Davidoff, Ann Intern Med 2000]
   • Adherence does not guarantee a high quality study!
41. Perrone et al, Eur J Cancer 2008
42. Perrone et al, Eur J Cancer 2008
43. EQUATOR: Enhancing the QUAlity and Transparency Of health Research
   • EQUATOR grew out of the work of CONSORT and other guidelines groups
   • Guidelines are available but not widely used
   • Recognised the need to actively promote guidelines
   • EQUATOR Network
     - Editors of general and specialty journals, researchers, guideline developers, medical writers
   • The goal: better reporting, better reviewing, better editing
44. www.equator-network.org
45. EQUATOR Core Programme: Objectives
   • Provide resources enabling the improvement of health research reporting
     - Website
     - Courses
   • Monitor progress in the improvement of health research reporting
   • Achieving funding for such work is a major headache!
46. What does the poor quality of published studies tell us about peer review?
   • Peer review is difficult and only partly successful
   • Reviewers (and editorial staff) are unable to eliminate errors in methodology and interpretation
   • Readers should not assume that papers published in peer reviewed journals are scientifically sound
     - But many do!
47. Some (partial) solutions
   • Monitoring/regulation
     - Ethics committees, data monitoring committees, funders
   • Trial registration
   • Publication of protocols
   • Availability of raw data
   • Journal restrictions
48. Reproducible research
49. How much confidence can we have in published medical research?
   • Can’t rely on authors
     - Poor methodological quality
     - Poor reporting
   • Can’t rely on peer review
     - Fallible
   • In general, must rely on one’s own training, experience and judgement
     - Read with caution!
     - Be aware of what may be missing
