Assessment of Bias


Cochrane Review author training workshop, January 22-23, 2009 at the University of Calgary Health Sciences Centre


    1. Risk of Bias Assessment
      • Handbook, Chapter 8
    2. Why assess study quality? We now assess risk of bias
      • If poor-quality trials are the building blocks of the review, the review may follow high-quality methods, but the quality of the evidence may still be poor
    3. What is bias?
      • A systematic error, or deviation from the truth, in results or inferences
      • Can occur in either direction, or vary in direction
      • Can vary in magnitude, from small to large
      • The results of a study may be unbiased despite a methodological flaw…therefore, we consider the risk of bias
    4. What is bias? (continued)
      • Differences in risk of bias can help explain variation in the results of studies
      • Important to assess in all studies, regardless of the anticipated variability in the results or validity of the included studies
      • Used to help judge the quality of the evidence
    5. What is bias? (continued)
      • The Cochrane Bias Methods Group provides the methodological guidance for assessing and addressing bias in Cochrane reviews
      • Researches the empirical evidence behind various biases
      • Details on the empirical evidence are in Chapter 8
    6. What is bias? (continued)
      • Aren't we supposed to talk about quality?
      • A study may be conducted to the highest possible standards but still have an important risk of bias
        • For some interventions, investigators or participants cannot be blinded -> acceptable given the nature of the intervention, but not free of bias
      • Other markers of 'quality' are unlikely to have direct implications for risk of bias, eg reporting a study according to the CONSORT guidelines
      • 'Risk of bias' overcomes the ambiguity between the quality of reporting and the quality of the research that was conducted
    7. What is bias? (continued)
      • How it differs from precision
      • Bias: systematic error
        • Repeating the study multiple times would reach the wrong answer on average
      • Imprecision: random error
        • Different effect estimates because of sampling variation
        • Smaller studies…greater sampling variation…less precise
        • Reflected in the confidence interval
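The distinction drawn on this slide can be illustrated with a small simulation (an illustrative sketch, not part of the Handbook; the trial setup and bias size are invented for the example): random error shrinks when results are averaged over many repetitions, while a built-in systematic error does not.

```python
import random

def simulate(n_participants, n_repeats, measurement_bias=0.0, seed=1):
    """Repeat a simple two-arm trial many times and average the
    estimated treatment effects. The true effect is 1.0."""
    rng = random.Random(seed)
    true_effect = 1.0
    estimates = []
    for _ in range(n_repeats):
        # Each participant's outcome has random (sampling) variation;
        # measurement_bias adds a systematic error to the treated arm.
        treated = [true_effect + rng.gauss(0, 1) + measurement_bias
                   for _ in range(n_participants)]
        control = [rng.gauss(0, 1) for _ in range(n_participants)]
        estimates.append(sum(treated) / n_participants
                         - sum(control) / n_participants)
    return sum(estimates) / n_repeats

# Random error (imprecision) averages out over many repetitions...
print(round(simulate(50, 2000), 1))                        # close to 1.0
# ...but systematic error (bias) does not: still wrong on average.
print(round(simulate(50, 2000, measurement_bias=0.5), 1))  # close to 1.5
```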
    8. What tool do we use?
      • Use of scales is explicitly discouraged
        • Not supported by empirical evidence…difficult to justify the weights used for summary scores
        • Often based on whether something was reported rather than whether it was done appropriately
      • We no longer use the Jadad scale
    9. Collecting information
      • The focus is at the individual study level
      • Include in the data extraction form
      • Distinguish reporting from conduct
        • If not reported, you can't determine whether it was done
      • Incomplete reporting is an issue
        • Use open-ended questions when asking trial authors for information…may help to reduce overly positive answers
    10. Sources of bias in clinical trials
      • The focus of this session is on RCTs
    11. Risk of Bias tool
      • The recommended tool for assessing risk of bias in Cochrane reviews
      • Not a scale or a checklist, but a domain-based evaluation
      • Two parts: (1) describe what was reported; (2) a judgement based on that information
      • Questions are always framed so that:
        • Yes = low risk of bias
        • No = high risk of bias
        • Unclear = unclear or unknown risk of bias
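The two-part, domain-based structure of the tool can be sketched as a small data model (a minimal illustration, not RevMan's actual data model; the domain names follow the slides that come later in this session):

```python
# Each domain gets a free-text description plus a Yes/No/Unclear
# judgement, following the answering convention on the slide above.
DOMAINS = [
    "Sequence generation",
    "Allocation concealment",
    "Blinding",
    "Incomplete outcome data",
    "Selective outcome reporting",
    "Other sources of bias",
]

RISK = {"Yes": "Low risk of bias",
        "No": "High risk of bias",
        "Unclear": "Unclear or unknown risk of bias"}

def assess(domain, description, judgement):
    """Record one domain-level entry for a study."""
    assert domain in DOMAINS and judgement in RISK
    return {"domain": domain,
            "description": description,   # eg a verbatim quote
            "judgement": judgement,
            "risk": RISK[judgement]}

entry = assess("Sequence generation",
               '"...allocated using a computer random number generator"',
               "Yes")
print(entry["risk"])  # Low risk of bias
```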
    12. Risk of Bias tool (continued)
      • Description
        • Provides transparency in how judgements were made
        • Should include verbatim quotes from reports or correspondence
        • May include a summary of known facts or a comment from the review authors
        • Should include any other information that influences the judgement
        • When no information is available, state so explicitly
    13. Risk of Bias tool (continued): Examples of descriptions
    14. Risk of Bias tool (continued)
      • 'Unclear' judgements
        • If insufficient detail is reported
        • If what happened in the study is known but the risk of bias is unknown
        • If an outcome was not measured in a study
        • RevMan 5: if the text box is left empty, it will be omitted from the published version
      • Process: collect information -> make judgements of risk -> make summary assessments -> incorporate into analyses (Chapter 8)
    15. Sequence generation
      • The mechanism for allocating interventions to participants
      • Adequate methods (random)
        • eg, random number table, computer random number generator, coin toss, throwing dice
      • Inadequate methods (non-random)
        • eg, date of birth, alternation, allocation by judgement of the investigator
      • Unclear
        • eg, 'we randomly allocated', 'using a randomized design'
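One adequate method from the slide, a computer random number generator, can be sketched as follows (a minimal illustration; the permuted-block variant shown here is one common way such generators are used in practice, not a method the slide itself prescribes):

```python
import random

def block_randomize(n_blocks, seed=None):
    """Permuted-block randomization: shuffle each block of 4
    allocations so the two arms stay balanced as the sequence grows.
    The whole sequence is generated before any participant enrols."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["Intervention", "Intervention", "Control", "Control"]
        rng.shuffle(block)          # random order within each block
        sequence.extend(block)
    return sequence

seq = block_randomize(3, seed=7)
# 12 allocations, perfectly balanced across arms
print(seq.count("Intervention"), seq.count("Control"))  # 6 6
```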
    16. Selection bias
      • Systematic differences in participant characteristics at the start of a trial (between the intervention group and the control group)
    17. Allocation concealment
      • Preventing foreknowledge of the upcoming allocations
      • Concerns what is used to implement the sequence
      • Don't confuse with blinding of participants, personnel, etc
    18. Allocation concealment (continued)
      • Adequate methods
        • eg, central allocation; sequentially numbered, opaque, sealed envelopes
      • Inadequate methods
        • eg, a posted list of random numbers, alternation, date of birth, envelopes meeting only 2 of the 3 criteria above
      • Unclear
        • Insufficient information to make a judgement, eg use of envelopes described but without the other components
    19. Blinding
      • Emphasis should be placed on participants, providers, and outcome assessors
      • Lack of blinding could bias the actual outcomes (eg, differential cross-over) or the assessment of outcomes
      • All outcome assessment can be influenced, but especially for subjective outcomes
      • In some situations blinding is impossible (eg, oral vs intravenous medications)
    20. Blinding (continued)
      • Use of terms like 'double-blinded' is problematic
        • You don't know exactly who was blinded!
      • What to consider when assessing:
        • Who was and was not blinded
        • Risk of bias in the actual outcomes due to lack of blinding during the study (eg, co-intervention or differential behaviour)
        • Risk of bias in the outcome assessments (subjective vs objective)
      • Assessments of risk may need to be made separately for different (groups of) outcomes
    21. Blinding (continued)
      • Adequate blinding
        • eg, no blinding, but the review authors judge that the outcome is not likely to be influenced by lack of blinding
        • eg, blinding of participants and key study personnel ensured, and unlikely to have been broken
      • Inadequate blinding
        • No blinding or incomplete blinding, and the outcome is likely to be influenced by lack of blinding
      • Unclear risk of bias
        • Insufficient information
        • The study did not address this outcome
    22. Allocation concealment vs. blinding
      • [Timeline figure] Concealment of allocation operates up to the point of randomisation and protects against selection bias; blinding operates after randomisation and protects against performance bias
    23. Performance bias
      • Systematic differences, other than the intervention being investigated, in the treatment of the two groups
      • Occurs at the time of performing the intervention
      • Avoid performance bias by:
        • blinding the care provider
        • blinding the participant
    24. Another form of performance bias is inadequate delivery of the intervention
      • Assess whether the study used a process analysis to ensure all participants in the trial received the entire intervention according to the trial protocol, eg by following a manual
      • eg, did the researchers visit every classroom and observe to ensure that all students received the entire intervention?
      • eg, did the researchers ask each participant to assess the quality of the presentation of the intervention?
    25. Incomplete outcome data
      • Missing outcome data
      • Incomplete outcome data: drop-outs or exclusions
      • 'Missing': a participant's outcome is not available
      • Some exclusions may be justifiable and should not be considered as leading to missing outcome data
      • When possible and appropriate, a participant can be re-included in an analysis (ie, when the exclusion was inappropriate and the data are available)
    26. Attrition bias
      • Systematic differences between groups in the loss of participants to follow-up
      • Occurs over the duration of follow-up
      • Avoid attrition bias by:
        • describing the proportion of participants lost to follow-up
        • using intention-to-treat analyses
      • Completeness of follow-up
        • Participants lost to follow-up, or not included in the outcome assessment, could be different from those who remained in the trial
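The first safeguard above, describing the proportion lost to follow-up, is simple arithmetic; a quick sketch (the counts are invented for illustration) shows why it is reported per arm, since differential loss between arms is what signals possible attrition bias:

```python
def lost_proportion(n_randomized, n_analysed):
    """Proportion of randomized participants missing from the analysis."""
    return (n_randomized - n_analysed) / n_randomized

# eg, 100 randomized per arm; 95 analysed in the intervention arm
# but only 80 in the control arm: the imbalance itself is the warning sign
print(lost_proportion(100, 95))  # 0.05
print(lost_proportion(100, 80))  # 0.2
```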
    27. Incomplete outcome data (continued)
      • Risk of bias depends on several factors, including:
        • The amount of missing data and its distribution across intervention groups
        • The reasons for the missing outcomes
        • The likely difference in outcome between participants with and without data
        • What the study authors have done to address the problem in their reported analyses
        • The clinical context
    28. Incomplete outcome data (continued)
      • Low risk of bias
      • No missing outcome data
        • Confident that the participants included in the analysis are exactly those who were randomized into the trial
        • If the numbers randomized are not clearly reported, the risk of bias is unclear
        • True intention-to-treat analyses are rare; take care with the understanding and use of the term
    29. Incomplete outcome data (continued)
      • Low risk of bias (continued)
      • Acceptable reasons for missing data
        • eg, moving away
        • eg, for survival data, censoring done with the reason for censoring unrelated to prognosis
        • eg, reasons are reported and balanced across groups (may not be assessable, though, due to incomplete reporting)
    30. Incomplete outcome data (continued)
      • Impact of missing data on effect estimates
      • For dichotomous data, depends on the amount of missing data relative to the number of participants with events
        • The higher the ratio, the greater the potential for bias
      • For continuous data, the impact increases with the proportion of participants with missing data
      • Imputation
        • Common, but potentially dangerous
        • Can lead to serious bias
        • Consult a statistician if you encounter it in your trials
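The dichotomous-data point above is a simple ratio; a worked example (with invented counts, purely for illustration) shows why the comparison is against events rather than total participants:

```python
def missing_to_event_ratio(n_missing, n_events):
    """Missing participants relative to observed events: the higher
    the ratio, the greater the potential for bias."""
    return n_missing / n_events

# 30 missing against only 20 observed events: the missing outcomes
# could overturn the result entirely...
print(missing_to_event_ratio(30, 20))   # 1.5
# ...whereas 5 missing against 200 events can barely move it,
# even if the total sample size were the same in both trials.
print(missing_to_event_ratio(5, 200))   # 0.025
```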
    31. Incomplete outcome data (continued)
      • High risk of bias
      • Important to consider the reasons for incomplete outcome data…often unavailable, but this is likely to improve through use of the CONSORT statement
      • 'As treated' (per-protocol) analyses
    32. Selective outcome reporting
      • Selection, on the basis of the results, of a subset of the original variables recorded for inclusion in the publication of trials
      • Concern: statistically non-significant results might be selectively excluded from publication
      • Bias resulting from selective reporting of different measurements of an outcome seems likely, eg published vs unpublished rating scales for schizophrenia
      • Need to consider whether an outcome was collected but not reported, or simply not collected
    33. Selective outcome reporting (continued)
      • Bias can occur through selective…:
      • Omission of outcomes from reports: if based on statistical significance
      • Choice of data for an outcome: if the choice of timepoints or measurement scales is based on the results
      • Reporting of analyses using the same data: eg, choice of continuous vs dichotomous analysis, final value vs change from baseline
      • Reporting of subsets of the data: eg, selecting subsets of events
      • Under-reporting of data: eg, inadequate data for use in meta-analysis ('not significant' or p>0.05)
    34. Selective outcome reporting (continued)
      • Other items to consider:
      • Comparing the trial report with its published protocol, if available
      • Checking abstracts of subsequently published trials for outcomes not in the published version
      • Occurrences of missing data that seem certain to have been collected
      • If there is suspicion of, or direct evidence of, selective outcome reporting, it is desirable to ask the study authors for more information
    35. Selective outcome reporting (continued)
      • When completing the RoB tool, the assessment for selective outcome reporting is made for the study as a whole, even if the bias does not apply to all outcomes
      • In the 'Description' part of the tool, list the outcomes for which there is evidence of selective reporting
    36. Other sources of bias
      • Potential sources of bias should not be included here if they are more appropriately covered by the previous domains
      • Use for other sources that are important to consider in your review, for example:
        • inappropriate influence of funders
        • inappropriate co-intervention
        • contamination
        • selective reporting of subgroups
        • baseline imbalance in important factors
    37. Other sources of bias (continued)
      • Reminder! Use the answering convention:
        • Yes = low risk; No = high risk
    38. Risk of bias in RevMan 5: RoB for each study
    39. RoB in RevMan 5 – a closer look
      • One line per table (ie, per study)
    40. RoB in RevMan 5 – a closer look
      • 2 or more entries allowed: assessments for different outcomes
    41. RoB in RevMan 5 – a closer look
      • ‡ Entries depend on the item, ie whether it applies at the study or the outcome level
    42. Example Risk of Bias table
    43. RoB optional figure (RevMan 5)
    44. RoB optional figure (RevMan 5)
    45. Summary assessments
      • Need to make judgements about which domains are important
      • Judgements for an outcome within a single study and across studies (for Summary of Findings tables)
      • How judgements are reached should be made explicit, and should be informed by:
        • Empirical evidence of bias (Sections 8.5 to 8.14)
        • The likely direction of bias
        • The likely magnitude of bias
    46. Summary assessments (continued)
      • Next few slides -> a possible approach for summary assessments of risk of bias for each important outcome (across domains), within and across studies
    47. Possible approach
      • Risk of bias: Low
      • Interpretation: Plausible bias unlikely to seriously alter the results
      • Within a study: Low risk of bias for all key domains
      • Across studies: Most information is from studies at low risk of bias
    48. Possible approach (continued)
      • Risk of bias: Unclear
      • Interpretation: Plausible bias that raises some doubts about the results
      • Within a study: Unclear risk of bias for one or more key domains
      • Across studies: Most information is from studies at low or unclear risk of bias
    49. Possible approach (continued)
      • Risk of bias: High
      • Interpretation: Plausible bias that seriously weakens confidence in the results
      • Within a study: High risk of bias for one or more key domains
      • Across studies: The proportion of information from studies at high risk of bias is sufficient to affect the interpretation of the results
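The 'within a study' column of the approach above reduces to a simple rule over the key domains, which can be sketched as follows (an illustration of these three slides, not Handbook-mandated code; which domains count as 'key' remains the review authors' judgement):

```python
def summarize_within_study(key_domain_judgements):
    """Summarize risk-of-bias judgements ('Low'/'Unclear'/'High')
    across the key domains of a single study, for one outcome."""
    if "High" in key_domain_judgements:
        return "High"       # high risk for one or more key domains
    if "Unclear" in key_domain_judgements:
        return "Unclear"    # unclear risk for one or more key domains
    return "Low"            # low risk for all key domains

print(summarize_within_study(["Low", "Low", "Low"]))      # Low
print(summarize_within_study(["Low", "Unclear", "Low"]))  # Unclear
print(summarize_within_study(["Unclear", "High", "Low"])) # High
```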
    50. Reporting biases
      • Occur when the dissemination of research findings is influenced by the nature and direction of the results
      • Chapter 10: how to address these in a Cochrane review
    51. Reporting biases (continued)
    52. Risk of bias exercise