Introduction to Meta-Analysis


An introduction to conducting a meta-analysis, primarily designed for people with a non-statistical background. Heavily borrows from the Cochrane Handbook for Systematic Reviews of Interventions.



  1. Introduction to Meta-Analysis. Dr Santam Chakraborty, Associate Professor, Radiation Oncology, Malabar Cancer Center
  2. Disclaimer: Have not personally performed a meta-analysis ... however, have read quite a few.
  3. Acknowledgements: 1. Wikipedia 2. Cochrane Handbook for Systematic Reviews of Interventions* 3. How to Conduct a Meta-Analysis: Dr Arindam Basu 4. PRISMA statement (*most material referred)
  4. Warning (Cochrane Handbook, Section 9 "Analyzing data", 9.1.1 "Do not start here!"): It can be tempting to jump prematurely into a statistical analysis when undertaking a systematic review. The production of a diamond at the bottom of a plot is an exciting moment for many authors, but results of meta-analyses can be very misleading if suitable attention has not been given to formulating the review question; specifying eligibility criteria; identifying, selecting and critically appraising studies; collecting appropriate data; and deciding what would be meaningful to analyse.
  5. A Historical Perspective: ● Key paper by Karl Pearson ● Published 1904 ● Correlation between typhoid inoculation and mortality ● In soldiers serving the British Empire
  6. Historical Perspective: ● Noted that individual studies showed a consistent correlation ● Noted the high probability of selection bias in the studies ● However, the values of the correlation varied between studies ○ An example of heterogeneity ● Overall found that the correlation coefficients had low values ○ Failed to convince him of the value of routine inoculations. “Assuming that the inoculation is not more than a temporary inconvenience, it would seem to be possible to call for volunteers... [and] only to inoculate every second volunteer... with a view to ascertaining whether any inoculation is likely to prove useful... In other words, the ‘experiment’ might demonstrate that this first step to a reasonably effective prevention was not a false one.”
  7. Origins: ● Efforts to synthesize an increasing volume of research led to various methods of quantitative synthesis ● Gene Glass (1976) coined the term meta-analysis ○ “The statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings”
  8. Origins: ● The first randomized controlled trial of aspirin for preventing heart attacks showed a weak effect ● Subsequent trial results were assembled and synthesized (meta-analysis)
  9. Definitions: ● Meta-analysis: the statistical synthesis of the data from separate but similar (i.e. comparable) studies, leading to a quantitative summary of the pooled results ● This is different from a systematic review: ○ The application of strategies that limit bias in the assembly, critical appraisal, and synthesis of all relevant studies on a specific topic ○ Meta-analysis may be a part of the process of a systematic review
  10. Why Meta-Analysis: Conceptually, it offers the following major advantages: 1. Improved power and precision due to larger sample size 2. Quantifies inconsistency in results between studies 3. Generalization of results 4. Hypothesis testing can be applied to summary statistics 5. Investigates the presence of publication bias. Also forms an important part of the “research synthesis” process.
  11. How to Conduct: Formulate the problem → Literature search → Study selection → Decide which variables are allowed → Select the model
  12. Formulating the Problem
  13. Formulating the Problem: Should be a well-framed question that defines the boundaries. Specify: ● Population (P) ● Intervention (I) ● Comparisons (C) ● Outcomes (O)
  14. Defining the Population: ● Define the disease/condition of interest: ○ Have explicit criteria ○ But take care not to be too exclusive ● Identify the important demographic characteristics ● Identify the setting of the studies ● Decide how to handle studies that involve a “subset” of the above participants
  15. Defining Interventions: ● What are the intervention and control? ● Does the intervention have variations? ○ Dose / mode of delivery / frequency of delivery / who delivers ● Which variations should be included? ● How will you handle studies where only part of the intervention is delivered? ● How will trials that combine the intervention with another intervention of interest be handled?
  16. Defining Types of Outcomes: 1. Reporting of an outcome does not usually determine eligibility for inclusion 2. Include outcomes that are meaningful to: a. Patients b. Clinicians c. Society 3. Address outcomes that relate to beneficial effects as well as adverse effects 4. Define how the outcomes will be measured 5. Limit primary outcomes to 2–3 that have an impact on patient care
  17. Define Study Types: ● An a priori decision regarding study type is needed ● Randomized trials are favoured: ○ Best way to eliminate bias and confounding ○ Better and more “conservative” estimate of the effect compared to retrospective studies ○ May eliminate “publication bias” to some extent
  18. Searching for Studies: ● It is good practice to have a trial search coordinator ● Characteristics of a good search: ○ Thorough ○ Objective ○ Reproducible ○ Searches diverse sources ● Don't restrict your search to a single database, e.g. MEDLINE ● Searching across multiple databases also limits selection bias in the studies that are found
  19. Sources to Search: ● Bibliographic databases: MEDLINE, EMBASE, CENTRAL ○ Fast and easy ○ Searched electronically ○ Various search methodologies ○ Indexing terms available ● Citation indexes: SciSearch, SCOPUS ○ Check for articles citing a given article ● Dissertations and theses can be searched in specific databases: ProQuest, Theses (UK), DissOnline (German) ● Grey literature: conference abstracts
  20. Sources to Search: ● Handsearching: ○ Not all trial reports are included in bibliographic databases ○ May not contain relevant search terms ● Full-text search of journals on the internet (not indexed) ● Conference abstracts: ○ 50% of all reported trials don't reach full publication! ● Trial registers: national (CTRI) and international
  21. Planning the Search: ● Develop a search strategy ○ Develop a set of keywords ● Document the search strategy: important if the search is to be reproducible ● Understand the use of filters in databases ● Understand the use of “Boolean operators”
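The Boolean operators mentioned above can be illustrated with a short sketch that assembles a PubMed-style search string from keyword lists. The terms below are invented placeholders for a hypothetical question, not a validated search strategy:

```python
# Minimal sketch: composing a Boolean search string from concept keyword lists.
# The terms are illustrative placeholders, not a validated strategy.

population = ["typhoid", "enteric fever"]
intervention = ["inoculation", "vaccination", "vaccine"]

def or_block(terms):
    """Join synonyms for one concept with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# Synonyms within a concept are joined with OR; concepts are combined with AND.
query = " AND ".join([or_block(population), or_block(intervention)])
print(query)
# (typhoid OR "enteric fever") AND (inoculation OR vaccination OR vaccine)
```

Documenting the final query string verbatim, as the slide recommends, is what makes the search reproducible later.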
  22. Managing References: ● Use a reference manager: ○ Zotero ○ Refman ○ EndNote
  23. Study Selection: ● Most important step in the process ● Multiple heads are better than one ● Define selection criteria and have a form ○ May pilot-test the selection criteria on 10–12 randomly selected studies ● Using a reference manager can help sort out “studies” from “reports” ● Also define a list of excluded studies
  24. What Data to Collect: ● Study design data ● Participant data ● Intervention data ● Outcome measure data ● Results. Reference: the checklist of data to collect in the Cochrane Handbook
  25. Sources of Data: ● Original report ● Correspondence with investigators ● Individual patient data: perhaps the most robust form of data collection
  26. Assessing Bias: ● Bias is a systematic deviation/error from the truth in results/inferences ● Can lead to underestimation/overestimation ● Can vary in magnitude ● Can vary in direction: the same bias can have different effects on the estimate ● However, the actual bias is difficult to quantify ○ Hence it is better to consider a “risk of bias” ● Important to distinguish bias from imprecision: bias is systematic, meaning that if the same study is repeated again and again the same error will recur; imprecision is a random error that can be overcome by larger samples
  27. Sources of Bias: ● Selection bias (non-randomized) ● Performance bias (non-blinded: patients and caregivers) ● Detection bias (non-blinded: assessors) ● Attrition bias ● Reporting bias: may be one of the most important; can exist within a study
  28. Types of Reporting Biases: ● Publication bias: results determine whether a study is published ● Time-lag bias: results determine the time taken to publish ● Multiple publication bias ● Location bias ● Citation bias: results determine citations ● Language bias ● Outcome reporting bias: selective reporting of only some outcomes
  29. Planning the Analysis: ● The objective of the analysis is to get an idea of the “effect” ● Effect = difference between two groups ● 4 questions need to be answered: ○ Direction (+ve/−ve) ○ Size ○ Consistency ○ Strength of evidence for the effect ● Meta-analysis can answer questions 1–3
  30. Effect Measures: ● Dichotomous data: risk ratio, risk difference, NNT and odds ratio ● Continuous data: mean difference ● Ordinal data: proportional odds ratio ● Counts and rates of events: rate ratio ● Time-to-event (e.g. survival) data, which are censored: hazard ratio
  31. Risk versus Odds: ● Risk: the probability of getting the event; ranges between 0 and 1 ○ Risk ratio: ratio of the risks in the two groups ○ Risk difference: difference between the risks in the two groups ● Odds: ratio of the probability that an event occurs to the probability that it does not; can range between 0 and infinity ○ Odds ratio: ratio of the odds of an event in the two groups. Thus Odds = Risk / (1 − Risk). Hence when events are common, the odds are much larger than the risk; when events are rare, odds ≈ risk.
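The dichotomous effect measures above can be made concrete with a small worked example; the 2×2 counts below are invented for illustration:

```python
# Worked example: risk, odds, and their ratios from a 2x2 table.
# The counts are invented for illustration only.
events_trt, total_trt = 10, 100   # treated arm: 10 events in 100 patients
events_ctl, total_ctl = 20, 100   # control arm: 20 events in 100 patients

risk_trt = events_trt / total_trt          # 0.10
risk_ctl = events_ctl / total_ctl          # 0.20
risk_ratio = risk_trt / risk_ctl           # 0.5
risk_diff = risk_trt - risk_ctl            # -0.10
nnt = 1 / abs(risk_diff)                   # treat 10 patients to prevent 1 event

# Odds = Risk / (1 - Risk)
odds_trt = risk_trt / (1 - risk_trt)       # 0.111...
odds_ctl = risk_ctl / (1 - risk_ctl)       # 0.25
odds_ratio = odds_trt / odds_ctl           # 0.444..., further from 1 than the RR
```

Note how the odds ratio (0.44) is more extreme than the risk ratio (0.50): the two diverge as events become more common, and converge when events are rare.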
  32. Mean Difference: ● Measures the absolute difference between the mean values of two groups ● Can be used when the summary statistics of ALL studies are measured on the SAME scale ● Standardized mean difference: used when studies measure the same outcome but do so in different ways
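A minimal sketch of one common standardized mean difference, Cohen's d with a pooled standard deviation (the group summaries below are invented for illustration):

```python
import math

# Standardized mean difference (Cohen's d with pooled SD).
# Group summaries are invented for illustration.
mean1, sd1, n1 = 52.0, 10.0, 40   # intervention arm
mean2, sd2, n2 = 48.0, 12.0, 40   # control arm

# Pooled standard deviation across the two groups
sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

# SMD expresses the difference in SD units, making scales comparable
smd = (mean1 - mean2) / sd_pooled   # about 0.36 here
```

Because the SMD is unit-free, studies that measured the same outcome on different scales can be pooled on a common footing.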
  33. Hazard Ratio: ● Conceptually similar to the risk ratio: ○ However, calculated on censored data ○ Based on the instantaneous event rate, which can change over time
  34. Summary of the Estimate: ● After the summary estimate or “effect” for each trial is calculated, meta-analytic methods allow a summary of the effect to be obtained ● A weighted average is calculated ● The weights account for the information in each study ● May optionally incorporate the assumption that the studies are not estimating the SAME underlying effect ○ This is the basis of random-effects meta-analysis ○ Fixed effect: the assumption is that each study is estimating the same quantity ● The standard error and confidence interval of the weighted average effect are also reported: the precision of the estimate
  35. Heterogeneity: ● Diversity can be of different types: ○ Clinical diversity: arising from the participants/interventions/outcomes studied ○ Methodological diversity: arising from variation in study methods ○ Statistical diversity: variability in the intervention effects being studied in the meta-analysis ● Statistical diversity (heterogeneity) arises from clinical and methodological diversity ● Thus the observed differences in effect between studies become MORE than what would be expected by chance alone
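Statistical heterogeneity is commonly quantified with Cochran's Q and the I² statistic (not named on the slide, but standard in Cochrane reviews). A sketch using invented per-study estimates:

```python
# Quantifying statistical heterogeneity with Cochran's Q and I^2.
# Per-study effect estimates (log scale) and standard errors are invented.
estimates = [-0.65, -0.40, -0.05]
ses = [0.30, 0.20, 0.25]

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Q: weighted sum of squared deviations of study effects from the pooled effect
q = sum(w * (e - pooled)**2 for w, e in zip(weights, estimates))
df = len(estimates) - 1

# I^2: percentage of total variability due to heterogeneity rather than chance
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
```

If Q is no larger than its degrees of freedom, I² is 0 and the observed spread is compatible with chance alone; larger I² values indicate increasing heterogeneity.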
  36. Reporting a Meta-Analysis: ● The following information should be presented: ○ Characteristics of included studies ■ Risk of bias table ○ Data and analysis ○ Figures: study flow, forest plots, funnel plots, risk of bias plots ○ Summary of findings ● The PRISMA statement has a complete checklist to be followed when reporting such studies
  37. Problems with Meta-Analysis: ● Results are susceptible to diversity in studies; the outcomes studied should be similar ● Bias in each study influences the results ● Publication bias is a serious source of problems ● Agenda-driven bias: perhaps the most serious!
  38. Conclusion: ❖ Meta-analysis is a powerful technique for synthesizing study results ❖ Allows improved power as well as precision ❖ Can make the results more generalizable ❖ However, it is very prone to bias ❖ A proper methodology must be followed
  39. Thank You