2. What is Bias?
• Systematic error
• Deviation from the truth (in results and inferences)
• May be
– Trivial
– Substantial
• Can lead to
– Underestimation of effect
– Overestimation of effect
3. Significance of assessing bias in systematic review
• Differences in risk of bias are related to variation in the results of the studies included in a systematic review.
• All results may be consistent among themselves, yet all the studies could be flawed due to bias.
5. Sources of bias
• Selection bias
• Performance bias
• Detection bias
• Attrition bias
• Reporting bias
Selection bias
• Systematic difference in baseline characteristics between the two groups
• E.g. selection of participants with differing age characteristics
6. Sources of bias: Performance bias
• Systematic difference in attention and treatment (other than the intervention) between the two groups
• E.g. participants who are aware that they are not receiving any treatment (intervention) may seek other forms of care
7. Sources of bias: Attrition bias
• Systematic difference in loss to follow-up (or withdrawals) between the two groups
8. Sources of bias: Detection bias
• Systematic difference between the two groups in how outcomes are determined
• E.g. if the researcher is aware of the intervention, the outcome might be assessed with bias
9. Sources of bias: Reporting bias
• Systematic differences between reported and unreported findings
• E.g. publishing only favorable outcomes, leaving readers with an incomplete and skewed understanding of the results
10. Assessing risk of bias
• Domain-based assessment: the Cochrane recommendation
• It is impossible to know the true risk of bias
• Involves subjectivity: e.g. lack of blinding of participants may have affected ……..
11. Domains for assessing risk of bias
• Random sequence generation
• Allocation concealment
• Blinding of participants and personnel
• Blinding of outcome assessment
• Incomplete outcome data
• Selective outcome reporting
• Other sources of bias
13. How to assess risk of bias
• Reviewing the authors' judgment in the article
• Searching for clues to bias
• Looking for missing information
– Reviewing the study protocol
– Contacting the authors
14. Random Sequence Generation
• Random assignment of people into intervention and control groups
• Avoids selection bias and confounders (both known and unknown)
Fundamental questions:
• Was the allocation sequence adequately generated?
• What methods were used to generate the allocation sequence?
15. Judgment: Random Sequence Generation
• Low risk of bias if allocation was done by
– Referring to a random number table
– Using a computer random number generator
– Coin tossing
– Shuffling cards or envelopes
– Throwing dice
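As an illustration of the low-risk "computer random number generator" method, a minimal sketch of generating a 1:1 allocation sequence (the seed, group labels and function name are illustrative assumptions, not part of the slides):

```python
import random

def generate_allocation_sequence(n_participants, seed=2024):
    """Simple 1:1 randomization: each participant is independently
    assigned to 'intervention' or 'control' -- the computerized
    equivalent of a fair coin toss."""
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    return [rng.choice(["intervention", "control"])
            for _ in range(n_participants)]

sequence = generate_allocation_sequence(10)
print(sequence)
```

Because the assignment depends only on the random number stream, not on dates of birth, visit days or clinician judgment, it avoids the high-risk methods listed on the next slide.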
16. Judgment: Random Sequence Generation
• High risk of bias if
– Sequence generated by odd or even date of birth, day of visit, etc.
– Allocation by judgment of the clinician or the participant
– Allocation based on the results of a laboratory test
17. Allocation Concealment
• Concealing the allocation sequence from those assigning participants to intervention groups, until the moment of assignment
• Ensures that patients enroll into a study without knowing which group they will be assigned to
• If the researcher knows the next patient will be allocated to the intervention, he/she may try to help a certain patient who he/she thinks will benefit more from the intervention
18. Allocation Concealment
Fundamental questions
• Was the allocation adequately concealed?
• What methods were used to conceal the allocation sequence?
• Could intervention allocations have been foreseen in advance of, or during, enrolment?
19. Judgment: Allocation Concealment
• Low risk of bias if
– Central allocation (including telephone, web-based and pharmacy-controlled randomization)
– Sequentially numbered drug containers of identical appearance
– Sequentially numbered, opaque, sealed envelopes
20. Judgment: Allocation Concealment
• High risk of bias if
– Open random allocation schedule (e.g. a list of random numbers)
– Unsealed or non-opaque envelopes
– Date of birth
– Case record number
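To illustrate why central allocation is low risk, a hypothetical sketch in which the pre-generated sequence stays private and a site can only request the next assignment at the moment of enrollment (the class and method names are invented for illustration):

```python
import random

class CentralAllocator:
    """Toy model of central allocation: the sequence is generated up
    front but kept private; a recruiting site can only ask for the
    *next* assignment, for a named participant, once enrollment is
    committed -- so no upcoming allocation can be foreseen."""

    def __init__(self, n, seed=7):
        rng = random.Random(seed)
        # Private sequence, e.g. held by a central office or pharmacy
        self._sequence = [rng.choice(["intervention", "control"])
                          for _ in range(n)]
        self._next = 0
        self.log = []  # audit trail: (participant, arm) in order assigned

    def assign(self, participant_id):
        """Reveal exactly one assignment, only at enrollment."""
        arm = self._sequence[self._next]
        self._next += 1
        self.log.append((participant_id, arm))
        return arm

allocator = CentralAllocator(n=4)
print(allocator.assign("P001"))
```

An open schedule, by contrast, would expose `_sequence` in full, letting a recruiter steer particular patients toward a preferred arm.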
21. Blinding of participants, personnel & outcome assessors
• Masking participants and assessors from knowledge of which intervention was received
• Ensures the control group receives a similar amount of attention, ancillary treatment and investigations
• Avoids performance and detection bias
22. Blinding of participants, personnel & outcome assessors
Fundamental questions
• Was knowledge of the allocated intervention adequately prevented during the study?
• What measures were used for blinding?
• Is it likely that the blinding was broken?
23. Judgment: Blinding
• Low risk of bias if
– No blinding or incomplete blinding, but the outcome is unlikely to be influenced
– Blinding done, and unlikely that the blinding could have been broken
– No blinding, but outcome assessment was blinded and non-blinding is unlikely to introduce bias
24. Judgment: Blinding
• High risk of bias if
– No blinding or incomplete blinding, and the outcome is likely to be influenced
– Blinding done, but likely that the blinding could have been broken
25. Incomplete Outcome Data
• Unavailability of complete outcome data for review
– Attrition: loss to follow-up, withdrawals
– Exclusions: available data not included in the analysis and report
• Can lead to attrition bias
• A high proportion of missing outcomes, or a large difference in proportions between the two groups, is the main cause for concern over bias
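The check this implies can be sketched as a small calculation: the proportion of missing outcomes per group and the gap between groups (the counts and function name are illustrative assumptions; no judgment threshold is defined on the slides):

```python
def missing_summary(randomized, analyzed):
    """Proportion of missing outcome data per group, plus the
    between-group difference in those proportions.

    randomized / analyzed: dicts mapping group name -> participant count.
    """
    missing = {g: (randomized[g] - analyzed[g]) / randomized[g]
               for g in randomized}
    gap = abs(max(missing.values()) - min(missing.values()))
    return missing, gap

# e.g. 100 randomized per arm; 95 analyzed in intervention, 70 in control
missing, gap = missing_summary(
    {"intervention": 100, "control": 100},
    {"intervention": 95, "control": 70},
)
print(missing)        # {'intervention': 0.05, 'control': 0.3}
print(round(gap, 2))  # 0.25 -- a large imbalance, a cause for concern
```

Both the absolute proportions and the imbalance matter: even similar proportions can bias results if the *reasons* for missingness differ between arms (next slides).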
26. Incomplete Outcome Data
Fundamental questions
• How much data is missing from each group?
• Why is it missing?
• How were the data analyzed?
• Were incomplete outcome data adequately addressed?
27. Judgment: Incomplete Outcome Data
• Low risk of bias if
– No missing outcome data
– Reasons for missing data are not related to the outcome
– Missing data are balanced across groups and the reasons are similar
– Proportion of missing outcomes is not enough to have a clinically relevant effect
– Plausible effect size (difference in means or standardized difference in means) is not enough to have a clinically relevant effect
28. Judgment: Incomplete Outcome Data
• High risk of bias if
– Reasons for missing data are related to the outcome
– Imbalance in numbers or reasons for missing data across groups
– Proportion of missing outcomes is enough to have a clinically relevant effect
– Plausible effect size (difference in means or standardized difference in means) is enough to have a clinically relevant effect
29. Selective Outcome Reporting
• Statistically significant results are more likely to be reported as planned and in detail
• Can lead to reporting bias
• Difficult to assess. Possible ways:
1. Comparing the methods with the results to look for
– Outcomes measured (or likely to have been measured) but not reported
– Outcomes added, statistical analyses changed, or only subgroups reported
– Reporting that cannot be used in a review (e.g. non-significance stated without numerical results)
2. Referring to the study protocol or trial register
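The protocol-comparison step amounts to a set comparison between pre-specified and reported outcomes. A minimal sketch (the outcome names are invented for illustration):

```python
def compare_outcomes(prespecified, reported):
    """Flag outcomes that were pre-specified but never reported, and
    outcomes that appear in the report without being pre-specified."""
    prespecified, reported = set(prespecified), set(reported)
    return {
        "missing_from_report": sorted(prespecified - reported),
        "added_post_hoc": sorted(reported - prespecified),
    }

flags = compare_outcomes(
    prespecified=["mortality", "blood pressure", "adverse events"],
    reported=["blood pressure", "quality of life"],
)
print(flags["missing_from_report"])  # ['adverse events', 'mortality']
print(flags["added_post_hoc"])       # ['quality of life']
```

Either list being non-empty is only a clue, not a verdict: the judgment criteria on the next slides still apply.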
31. Judgment: Selective Outcome Reporting
• Low risk of bias if
– The protocol is available and all pre-specified outcomes of interest to the review are reported in the pre-specified way
– The protocol is not available, but it is clear that all pre-specified and expected outcomes of interest are reported
32. Judgment: Selective Outcome Reporting
• High risk of bias if
– Primary outcomes have not been reported as pre-specified or expected
– Outcomes are reported incompletely, so that they cannot be entered in a meta-analysis
33. Other threats to Validity
• Bias due to other problems not covered by the domains above
34. Other threats to Validity
Fundamental question
• Was the study prone to problems that could put it at a high risk of bias?
35. Judgment: Other threats to validity
• Low risk of bias if
– The study appears to be free of other sources of bias
• High risk of bias if
– The study has a potential source of bias related to the specific design used
– The study had an extreme baseline imbalance
– The study has been claimed to be fraudulent
36. Tool for Assessment
Risk of bias table (RevMan)
• Categorizes each domain as
– Low risk
– High risk
– Unclear
• Support for judgment
– Direct quotes from the article
– Additional comments with a rationale
• One table for each study reviewed
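As a rough illustration of the table's structure (not RevMan's actual format; the field names and quote are invented), one row could be modeled as a domain, a judgement, and the supporting text:

```python
# Domain names follow the slides; order matches slides 14-35
DOMAINS = [
    "Random sequence generation",
    "Allocation concealment",
    "Blinding of participants and personnel",
    "Blinding of outcome assessment",
    "Incomplete outcome data",
    "Selective outcome reporting",
    "Other sources of bias",
]

JUDGEMENTS = {"Low risk", "High risk", "Unclear"}

def make_entry(domain, judgement, support):
    """One row of a risk-of-bias table: the judgement plus the direct
    quote or rationale that supports it."""
    assert domain in DOMAINS and judgement in JUDGEMENTS
    return {"domain": domain, "judgement": judgement, "support": support}

row = make_entry(
    "Allocation concealment",
    "Low risk",
    'Quote: "sequentially numbered, opaque, sealed envelopes were used"',
)
print(row["judgement"])  # Low risk
```

One such table per included study then feeds the summary figure and graph shown on the next slides.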
38. Risk of Bias Summary
[Figure: risk-of-bias summary]
Figure source: Dorniak-Wall T, Grivell RM, Dekker GA, Hague W, Dodd JM. The role of L-arginine in the prevention and treatment of pre-eclampsia: a systematic review of randomised trials. Journal of Human Hypertension. 2014;28(4):230.
39. Risk of Bias Graph
[Figure: risk-of-bias graph]
Figure source: Dorniak-Wall T, et al. Journal of Human Hypertension. 2014;28(4):230 (as above).
40. Overall conclusion
• Bias can lead to overestimation or underestimation of results
• Bias can be assessed across seven domains
• Risk should be judged by
– Searching for clues
– Reviewing the authors' discussion
• Caution should be applied when making judgments
42. References
• Higgins JP, Green S, editors. Cochrane handbook for systematic reviews of interventions. John Wiley & Sons; 2011.
• http://methods.cochrane.org/bias/assessing-risk-bias-included-studies
• Eble A, Boone P. Risk and evidence of bias in randomized controlled trials; 2012.
• Cochrane training slides: Assessing risk of bias in included studies.
• Dorniak-Wall T, Grivell RM, Dekker GA, Hague W, Dodd JM. The role of L-arginine in the prevention and treatment of pre-eclampsia: a systematic review of randomised trials. Journal of Human Hypertension. 2014;28(4):230.