Assessing Quality of Individual Studies
  • Assessing the Quality of Individual Studies
  • Systematic Review Process Overview This slide illustrates the steps in the systematic review process. This module focuses on assessing the quality of individual studies.
  • Learning Objectives
  • What Is Quality Assessment? Different institutions may use the word “quality” for varied purposes. In this context, quality assessment is synonymous with internal validity, is relevant for individual studies, and is distinct from the evaluation of the risk of bias for a body of evidence. Reference: Lohr KN, Carey TS. Assessing “best evidence”: issues in grading the quality of studies for systematic reviews. Jt Comm J Qual Improv 1999;25:470-9. http://www.ncbi.nlm.nih.gov/pubmed/10481816
  • What Are the Components of Quality Assessment? The most comprehensive assessment of quality includes considerations of systematic errors, nonsystematic errors, and inferential errors. Reference: Lohr KN, Carey TS. Assessing “best evidence”: issues in grading the quality of studies for systematic reviews. Jt Comm J Qual Improv 1999;25:470-9. http://www.ncbi.nlm.nih.gov/pubmed/10481816
  • Consider the Contribution of an Individual Study to the Body of Evidence The quality or internal validity of a study (i.e., limitations of study design and conduct) is one of several ways in which an individual study can contribute to the body of evidence. When evaluating the strength of the entire body of evidence (see the module, “Grading the Strength of Evidence”), the quality of the study and the type of design influence the evaluation of the risk of systematic bias and inferential error for the entire body of evidence. The size of the study could result in nonsystematic or random error; it influences considerations of precision of the body of evidence. The direction and magnitude of the results influence considerations of consistency of the body of evidence. The use of indirect comparisons rather than direct comparisons, or surrogate outcomes rather than health outcomes, can influence the directness of the body of evidence. The relevance of the results to the key questions influences considerations of applicability or generalizability of the body of evidence. Reference: Owens DK, Lohr KN, Atkins D, et al. Grading the strength of a body of evidence when comparing medical interventions. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted July 2009. Available at: http://effectivehealthcare.ahrq.gov/ehc/products/60/318/2009_0805_grading.pdf.
  • Reasons for Quality Assessment Quality assessment is an integral element of the systematic review process. Systematic reviews incorporate the quality of individual studies in interpreting their results and in grading the body of evidence. Decisions made at the time of quality assessment may have significant consequences for the review, depending on how the systematic review treats poor-quality studies. Depending upon the review topic and the volume of evidence, poor-quality studies may be excluded from meta-analyses or from the review itself.
  • What Are the Steps in Quality Assessment of Each Individual Study (I)? Before quality assessment can occur, reviewers should have finalized selection criteria, including eligible study designs when relevant. When assessing the quality of an individual study, the first step is to evaluate the study design and classify studies as randomized or nonrandomized as a minimal cut. Depending on the topic, further distinctions based on study design may be necessary. The second step is to evaluate design-specific sources of bias. Other considerations include study conduct and discrepancies between study design and study conduct. In the absence of adequate information on study design and conduct, the quality of reporting often influences quality assessment. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • What Are the Steps in Quality Assessment of Each Individual Study (II)? The third step is to use the information from the assessment across multiple sources of bias to arrive at a summary judgment of the study’s overall quality. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
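The three steps above (classify the design, evaluate design-specific sources of bias, then summarize) can be sketched in code. This is a hypothetical illustration only: the design labels and the criterion lists below are assumptions for the sketch, not an official AHRQ checklist.

```python
# Hypothetical sketch of the first two quality-assessment steps:
# classify the study design at a minimal cut (randomized vs.
# nonrandomized), then select the applicable bias criteria.
# Criterion names here are illustrative assumptions.

COMMON_CRITERIA = ["selection bias", "attrition", "outcome measurement"]

DESIGN_SPECIFIC = {
    "randomized": ["randomization method", "allocation concealment", "blinding"],
    "nonrandomized": ["confounding control", "exposure measurement", "recall bias"],
}

def classify_design(randomized: bool) -> str:
    # Step 1: the minimal cut between randomized and nonrandomized designs
    return "randomized" if randomized else "nonrandomized"

def criteria_for(design: str) -> list[str]:
    # Step 2: criteria common to all designs plus design-specific sources of bias
    return COMMON_CRITERIA + DESIGN_SPECIFIC[design]

print(criteria_for(classify_design(randomized=False)))
```

In a real review, further design distinctions (cohort, case-control, diagnostic accuracy) would add their own entries, as the slides on additional criteria describe.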
  • Questions To Consider When Classifying Study Design This slide shows questions to consider when classifying study design. Consider using a tool that uses these or similar questions to arrive at design classifications (e.g., Hartling et al., 2009). Reference: Viswanathan M. A tool for the classification of study designs in systematic reviews of interventions and exposures [slide presentation]. Agency for Healthcare Research and Quality 2009 Annual Conference; 2009 Sept 13-16; Bethesda, MD. Available at: http://www.ahrq.gov/about/annualconf09/viswanathan.htm.
  • Apply Predefined Criteria for All Study Types (I) Commonly used tools can be found in the following publications: Randomized controlled trials quality rating tools West S, King V, Carey TS, et al. Systems to Rate the Strength of Scientific Evidence, Evidence Report/Technology Assessment No. 47 (Prepared by the Research Triangle Institute–University of North Carolina Evidence-based Practice Center under Contract No. 290-97-0011). Rockville, MD: Agency for Healthcare Research and Quality, March 2002. AHRQ Publication No. 02-E015. Available at: http://www.ahrq.gov/clinic/epcsums/strengthsum.pdf. Observational intervention quality rating tools Deeks JJ, Dinnes J, D’Amico R, et al, for the International Stroke Trial Collaborative Group and the European Carotid Surgery Trial Collaborative Group. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7(27):iii-x, 1-173. Observational epidemiology quality rating tools Sanderson S, Tatt ID, Higgins JP. Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: a systematic review and annotated bibliography. Int J Epidemiol 2007;36:666-76. Diagnostic accuracy tools Whiting P, Rutjes AWS, Dinnes J, et al. Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technol Assess 2004;8(25):iii, 1-234. Each tool may have somewhat different instructions for how to rate these criteria. These instructions must be interpreted in the context of the specific systematic review. After selecting the most appropriate tool for the review, the principal investigator of the systematic review should augment these instructions with specific examples related to the topic of the review. Baseline characteristics, prognostic factors, and primary outcomes relevant to the review should be identified a priori when possible.
  • Apply Predefined Criteria for All Study Types (II) The predefined criteria listed here have been demonstrated to influence study outcomes and should be routinely evaluated for all study types when applicable to that design.
  • Additional Criteria for Trials Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Additional Criteria for Observational Studies (I) Nonrandomized studies have additional sources of bias that need to be evaluated. This list may be modified or expanded to account for other design-specific sources of bias. As with other criteria, this list needs to be interpreted in the context of the specific systematic review. The principal investigator for the review should specify acceptable methods for measuring exposure variables for the topic of the review. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Additional Criteria for Observational Studies (II) The principal investigator should specify acceptable methods for dealing with selection bias, recall bias, and confounding. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Arrive at a Comprehensive Judgment of Quality The “fair” category is broad, covering a range of analytic concerns, and many studies are likely to receive this rating. Overall judgments of quality should be accompanied by text explaining the reason for the rating, particularly for studies rated “fair” or “poor.” The evaluation of overall quality should not be based on a raw quantitative score that may equate grave errors in design with minor errors. A transparent system that clearly indicates how individual elements were considered in the overall rating is less likely to suffer from false precision. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
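The contrast between a transparent categorical judgment and a raw quantitative score can be made concrete. The sketch below is a hypothetical illustration (the class, criterion names, and rating rule are all assumptions, not any published tool): each criterion judgment is stored with its rationale, and the overall rating is categorical, so a single grave flaw cannot be averaged away by minor strengths.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: record each criterion judgment and its
# rationale explicitly instead of summing points, so readers can trace
# how the overall rating was reached (avoiding false precision).

@dataclass
class QualityAssessment:
    study_id: str
    # criterion -> (judgment, free-text rationale)
    criteria: dict = field(default_factory=dict)

    def rate(self, criterion: str, judgment: str, rationale: str) -> None:
        assert judgment in {"good", "fair", "poor"}
        self.criteria[criterion] = (judgment, rationale)

    def overall(self) -> str:
        """Categorical summary: any fatal flaw -> poor; any concern -> fair."""
        judgments = [j for j, _ in self.criteria.values()]
        if "poor" in judgments:
            return "poor"
        if "fair" in judgments:
            return "fair"
        return "good"

qa = QualityAssessment("Trial-001")
qa.rate("allocation concealment", "good", "central pharmacy randomization")
qa.rate("attrition", "fair", "22% drop-out, unbalanced across arms")
print(qa.overall())  # -> fair
```

The "worst criterion dominates" rule here is one possible assumption; the slides only require that the path from individual elements to the overall rating be transparent and explained in text.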
  • Attributes of Good-Quality Studies Standards for good-quality studies must be set in the context of each topic and key question. Principal investigators should identify an appropriate measurement of outcomes, appropriate statistical and analytical methods, and ideal drop-out rates for the topic a priori. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Attributes of Fair-Quality Studies Fair-quality studies may have a range of deficiencies. Often, because criteria are interrelated, a flaw in one criterion will result in a fair rating in another criterion. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Attributes of Poor-Quality Studies As with fair-quality studies, criteria may be interrelated, so a major flaw in one criterion may influence another criterion. For example, unclear randomization and allocation concealment may result in important differences at baseline. Reference: Helfand M, Balshem H. Principles in developing and applying guidance. In: Methods guide for comparative effectiveness reviews. Rockville, MD: Agency for Healthcare Research and Quality, Posted August 2009. Available at: http://www.effectivehealthcare.ahrq.gov/ehc/products/60/294/2009_0805_principles1.pdf.
  • Treatment of Poor-Quality Studies in the Review
  • Reporting Quality Ratings
  • Key Messages (I): Definition of Quality Assessment In acknowledging the central role of judgment in the process of quality assessment, the key principle is transparency.
  • Key Messages (II): Rationale of and Steps in Quality Assessment
  • Key Messages (III): Reporting
  • References (I)
  • References (II)
  • References (III)
  • Author
