Case study: Methodology Reviews

UCSF researcher Lisa Bero, PhD, presents. View more related presentations and resources at http://accelerate.ucsf.edu/research/cer

  1. Cochrane Methodology Reviews
     Lisa Bero, San Francisco Branch, US Cochrane Center
     UCSF CER Symposium, January 2012
  2. Methodology Reviews
     • A “methodology study” is a study of the methods used in randomized trials, other healthcare evaluations, or systematic reviews.
       – Consent procedures
       – Recruitment methods
       – Association of allocation concealment with estimates of treatment effect
     • A “methodology review” is a systematic review of methodology studies
  3. Study Designs for CER/PCOR
     • Head-to-head randomized trials
     • Cluster randomized trials
     • Adaptive designs
     • Practice / pragmatic trials
     • PBE-CPI (“practice based evidence for clinical practice improvement”)
     • Natural experiments
     • Observational or cross-sectional studies of registries and databases, including electronic medical records
     • Meta-analysis
     • Network meta-analysis
     • Modelling and simulation
     • Observational study analysis approaches employing so-called causal inference techniques, which can include instrumental variables, marginal structural models, and propensity scores, among others
     • “NEW” design terms such as “observational randomization study”
     Source: IOM National Priorities Committee, 2009
  4. “Best” study design for CER?
     • A number of reviews comparing the effect sizes and biases in randomized and non-randomized studies have been conducted.
       – Most compared randomized to non-randomized trials.
       – Most often limited the comparison with observational studies to cohort studies, or did not specify the types of observational designs included.
       – Most were published between 1982 and 2003.
     • Compared to RCTs, observational designs have been found to overestimate treatment effects, underestimate treatment effects, or show no difference (a worked ratio-of-risk-ratios example follows the slide transcript).
  5. Protocol (in press, Cochrane Library): Health care outcomes assessed with non-experimental designs compared with those assessed in randomized trials
     Lisa Bero, Andrew Anglemyer, Tara Horvath
     San Francisco Branch of the US Cochrane Center; HIV/AIDS Cochrane Review Group; UCSF
     Funding: Clinical and Translational Sciences Institute (CTSI), University of California, San Francisco (UCSF), USA
  6. Objectives
     • To assess the impact of study design (RCTs vs observational study designs, different types of observational studies, and/or choice of analytic techniques) on the effect measures estimated in observational and randomized studies
     • To explore methodological variables that might explain any differences identified
     • To identify gaps in the existing research comparing study designs
  7. Inclusion criteria
     • Systematic or non-systematic reviews designed as methodological studies to compare study designs
     • Clinical outcomes: efficacy or harms of alternative interventions to prevent or treat a clinical condition or improve the delivery of care
  8. A priori subgroup analyses
     • Comparisons of drug interventions
     • Clinical topic
     • Heterogeneity of included methodological studies
  9. PRELIMINARY DATA: Included studies – RCT vs. observational
  10. PRELIMINARY DATA: 9 studies in meta-analysis
      • Included 19 to 276 studies each
      • Evaluated a mix of interventions
        – Lower back pain
        – Digestive surgery
        – Various interventions
      • One focused on drug–drug comparisons (Naudet)
      • One focused on adverse events from (mostly) pharmacological treatments (Golder)
  11. PRELIMINARY DATA: risk ratio meta-analysis (IV, Random, 95% CI)
      Columns: Study or Subgroup | Weight | Risk Ratio, IV, Random, 95% CI
      1.1.1 RCT vs All Observational
        Concato 2000     15.3%   1.07 [0.95, 1.21]
        Benson 2000       7.2%   0.95 [0.58, 1.56]
        Bhandari 2004    11.1%   0.70 [0.52, 0.95]
        Shikata 2006     12.9%   0.97 [0.77, 1.22]
        Furlan 2008       4.3%   1.94 [0.93, 4.05]
        Beynon 2008      14.9%   0.87 [0.75, 1.00]
        Mueller 2010     13.7%   1.48 [1.22, 1.80]
        Golder 2011      15.1%   1.08 [0.95, 1.23]
        Naudet 2011       5.6%   3.55 [1.94, 6.50]
        Subtotal (95% CI)        1.11 [0.93, 1.33]
        Heterogeneity: Tau² = 0.05; Chi² = 44.79, df = 8 (P < 0.00001); I² = 82%
        Test for overall effect: Z = 1.19 (P = 0.23)
      1.1.2 RCT vs Cohort
        Benson 2000      11.1%   1.52 [0.87, 2.64]
        Concato 2000     32.9%   1.04 [0.91, 1.18]
        Bhandari 2004    21.9%   0.70 [0.52, 0.95]
        Furlan 2008       7.2%   1.94 [0.93, 4.05]
        Golder 2011      26.9%   1.02 [0.82, 1.27]
        Subtotal (95% CI)        1.03 [0.83, 1.29]
        Heterogeneity: Tau² = 0.03; Chi² = 10.95, df = 4 (P = 0.03); I² = 63%
        Test for overall effect: Z = 0.30 (P = 0.76)
      1.1.3 RCT vs Case Control
        Concato 2000     58.9%   1.20 [0.94, 1.54]
        Golder 2011      41.1%   0.84 [0.57, 1.23]
        Subtotal (95% CI)        1.04 [0.73, 1.46]
        Heterogeneity: Tau² = 0.04; Chi² = 2.34, df = 1 (P = 0.13); I² = 57%
        Test for overall effect: Z = 0.20 (P = 0.84)
      Test for subgroup differences: Chi² = 0.49, df = 2 (P = 0.78), I² = 0%
      Forest plot axis (0.2 to 5): Obs Reflect Greater Risk | RCTs Reflect Greater Risk
      (A random-effects pooling sketch for the first subtotal follows the slide transcript.)
  12. Preliminary Findings
      • Differences in effect measures were not associated with study design; explore other reasons for the differences
      • Conduct subgroup analyses
      • Methodological studies are needed that compare trials with other observational designs (not just cohort and case-control studies) and with different analytic methods for observational data
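
The comparisons summarized on slide 4 and pooled on slide 11 are typically expressed as a ratio of the effect estimate from observational studies to the effect estimate from RCTs for the same question. The following is a minimal sketch of that arithmetic using made-up risk ratios, not numbers from any of the cited reviews:

```python
import math

# Hypothetical pooled risk ratios for the same intervention and outcome,
# one from observational studies and one from RCTs (illustrative values only).
rr_observational = 0.60
rr_rct = 0.80

# Ratio of risk ratios. For a protective intervention (RR < 1), a ratio
# below 1 means the observational studies suggest a larger benefit than the
# trials, i.e. they overestimate the treatment effect; a ratio above 1 means
# they underestimate it; a ratio near 1 means the designs agree.
ratio_of_rr = rr_observational / rr_rct
print(f"Ratio of risk ratios = {ratio_of_rr:.2f}")                  # 0.75
print(f"Log ratio of risk ratios = {math.log(ratio_of_rr):+.2f}")   # -0.29
```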
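Slide 11's subtotals are inverse-variance, random-effects pooled estimates of these design-comparison ratios. Below is a minimal Python sketch of that calculation, assuming the DerSimonian-Laird estimator of between-study variance and reading each study's ratio and 95% CI off the slide; with those rounded inputs it closely reproduces the "RCT vs All Observational" subtotal.

```python
import math

# Per-study ratios (95% CI) from slide 11, subgroup 1.1.1 "RCT vs All Observational".
studies = [
    ("Concato 2000",  1.07, 0.95, 1.21),
    ("Benson 2000",   0.95, 0.58, 1.56),
    ("Bhandari 2004", 0.70, 0.52, 0.95),
    ("Shikata 2006",  0.97, 0.77, 1.22),
    ("Furlan 2008",   1.94, 0.93, 4.05),
    ("Beynon 2008",   0.87, 0.75, 1.00),
    ("Mueller 2010",  1.48, 1.22, 1.80),
    ("Golder 2011",   1.08, 0.95, 1.23),
    ("Naudet 2011",   3.55, 1.94, 6.50),
]

# Work on the log scale; recover each study's variance from its 95% CI width.
y = [math.log(rr) for _, rr, _, _ in studies]
v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for _, _, lo, hi in studies]

# Fixed-effect (inverse-variance) weights and Cochran's Q heterogeneity statistic.
w = [1.0 / vi for vi in v]
fixed_mean = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2, and I^2.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100.0

# Random-effects weights, pooled estimate, 95% CI, and test for overall effect.
w_re = [1.0 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = 1.0 / math.sqrt(sum(w_re))
ci_lo, ci_hi = pooled - 1.96 * se, pooled + 1.96 * se
z = pooled / se

print(f"Tau² = {tau2:.2f}; Chi² = {q:.2f}, df = {df}; I² = {i2:.0f}%")
print(f"Pooled ratio = {math.exp(pooled):.2f} "
      f"[{math.exp(ci_lo):.2f}, {math.exp(ci_hi):.2f}]; Z = {z:.2f}")
# With these rounded inputs this prints approximately:
#   Tau² = 0.05; Chi² = 44.62, df = 8; I² = 82%
#   Pooled ratio = 1.11 [0.93, 1.33]; Z = 1.19
```

The per-study weights implied by this calculation also match the weight column on the slide (e.g. about 15.3% for Concato 2000 and 5.6% for Naudet 2011), which is consistent with an inverse-variance random-effects model having been used.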
