Literature Evaluation
Literature evaluation is a skill that healthcare professionals develop with
practice; it requires knowledge in several areas, including clinical trial
design, outcome measures, and statistical techniques.
Reasons to read clinical literature
• Improve patient care
• Learn about research
• Educate peers and students about clinical care
References: Tertiary references
• These include:
• General textbooks
• Formularies
• Computer databases, e.g. MICROMEDEX (for rare adverse effects reported
during clinical trials)
• Tertiary sources provide detailed background and quick
information
Secondary References
• When information in tertiary sources is outdated, secondary
sources are used.
• These include indexing, citing, and abstracting services
• Examples: MEDLINE and International Pharmaceutical Abstracts
Primary References
• These consist of original studies or reports published in biomedical
journals.
• They provide the most recent information
• Critical evaluation of primary references is required.
Forming Answerable questions:
1. Background questions:
Understand the problem in general
2. Foreground questions:
Decision making questions
The PICO format for foreground questions
• P = Patient and problem (population: kids, women, men, patients)
• I = Intervention (test or treatment)
• C = Comparison intervention (control group)
• O = Outcomes
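An illustrative foreground question in PICO form: in adults with type 2 diabetes (P), does metformin (I), compared with placebo (C), lower HbA1c (O)?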
Systematic approach
• To avoid poor-quality literature and sifting through irrelevant
information, we follow a systematic approach, which includes the four
steps below (a retrieval sketch in code follows the list):
1. Retrieve: Collect broad range of articles from reliable databases
(PubMed, Cochrane, Embase).
2. Review: Scan titles/abstracts to check relevance & study design.
3. Reject: Exclude irrelevant, outdated, or poor-quality studies.
4. Read: Critically appraise full-text for validity, results, and
applicability.
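As a minimal sketch of the Retrieve step, the snippet below queries PubMed through Biopython's Entrez interface. It assumes Biopython is installed; the contact email and the PICO-style search term are illustrative, not prescribed by this text.

```python
from Bio import Entrez

# NCBI requires a contact email with every Entrez request (illustrative address)
Entrez.email = "you@example.com"

# Illustrative PICO-style query: intervention, population, and a study-design filter
query = "metformin AND type 2 diabetes AND randomized controlled trial[pt]"

# Search PubMed and fetch up to 20 matching record identifiers
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records match the query")
print("PMIDs to screen in the Review step:", record["IdList"])
```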
Selecting an article: filtering process
1. Primary survey (initial evaluation and brief overview)
i. Analyze the title
ii. Review the list of authors
iii. Read the summary or abstract, beginning with the conclusion:
• Is the conclusion valid and important to me?
• If the results are true, how useful are they?
• Do the interventions make sense?
• Can the information be generalized to my patients?
Secondary survey
1. Introduction:
• Problem under study, context of the study, and reasons for conducting it
• Importance of the topic, what is known and what is unknown about the topic
• Specific questions (objective, goal of the study, and hypothesis) to be evaluated
• Study sample, primary outcome, and intervention being evaluated
• Method design
• Conclusions should not extend beyond the stated objective
2. Methods:
• Research design: descriptive or comparative study
3. Study sample:
• How are the subjects and controls selected?
• Are the inclusion and exclusion criteria sufficiently clear to
describe the target population?
4. Treatment Allocation:
• Randomization
• Masking (blinding)
5. Outcomes:
• Primary outcome: defined in every study
• Secondary outcomes: reported in some studies
• How was it measured?
• Was the measurement free of bias?
• How reproducible were the results?
• How were measurements standardized to minimize inter-observer
variability? (A kappa sketch follows below.)
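One common way to quantify inter-observer variability is Cohen's kappa (not named in this text, so treat it as one option among several). A minimal sketch, assuming scikit-learn is available and using made-up ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary ratings (e.g. lesion present = 1 / absent = 0)
# given by two observers to the same ten patients
observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# kappa = 1.0 means perfect agreement; 0 means agreement no better than chance
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"Cohen's kappa: {kappa:.2f}")
```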
6. Statistical Analysis:
• Were appropriate statistical tests used? (An illustrative test follows.)
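For instance, a continuous outcome compared between two groups is often analyzed with an independent-samples t-test. A minimal sketch with SciPy, using made-up data:

```python
import numpy as np
from scipy import stats

# Hypothetical change in systolic blood pressure (mmHg) per study arm
treatment = np.array([-12, -9, -15, -8, -11, -10, -14, -7])
control = np.array([-4, -6, -2, -5, -3, -7, -1, -4])

# Independent-samples t-test; p < 0.05 is the conventional cutoff
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```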
7. Results:
• Tables and figures
• How many patients were eligible for the study?
• How many enrolled?
• How many completed?
8. Discussion:
• Comparison with other studies: similarities and differences
• Limitations of the study
• Suggested new directions for further study
9. Conclusion:
• Must be consistent with the study objective
• Justified by the study results
• Should not overgeneralize the results of the study
Types of study by content
1. Evaluation of a new therapy
2. Evaluation of new diagnostic tests
3. Determination of the etiology of a condition
4. Prediction of outcomes
5. Natural course of a condition
Once trials have been found that seem to answer the questions, check three things:
• Is the study valid?
• What are the results?
• Will the results help the patient?
1. Is the study valid?
• Did the authors answer the questions?
• What were the characteristics of the group?
• Is it clear how the test was carried out?
• Are the test results reproducible?
• Was the reference standard appropriate?
• Was the reference standard applied to all patients?
• Was the test evaluated on an appropriate spectrum of patients?
2. What were the results?
• Are sensitivity and specificity reported? (A worked calculation follows the list.)
• Could the results have occurred by chance?
• Are confidence limits given?
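As a worked illustration of sensitivity, specificity, and confidence limits, the sketch below uses a hypothetical 2x2 table and the normal (Wald) approximation for 95% intervals:

```python
import math

# Hypothetical 2x2 table for a diagnostic test vs. the reference standard
TP, FN = 90, 10   # diseased patients: test positive / test negative
FP, TN = 20, 80   # healthy patients: test positive / test negative

sensitivity = TP / (TP + FN)   # proportion of diseased correctly detected
specificity = TN / (TN + FP)   # proportion of healthy correctly ruled out

def wald_ci(p, n):
    """Approximate 95% confidence interval for a proportion (Wald method)."""
    se = math.sqrt(p * (1 - p) / n)
    return p - 1.96 * se, p + 1.96 * se

print(f"Sensitivity: {sensitivity:.2f}, 95% CI: {wald_ci(sensitivity, TP + FN)}")
print(f"Specificity: {specificity:.2f}, 95% CI: {wald_ci(specificity, TN + FP)}")
```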
3. Will the results help the patient?
• Is the diagnostic test available, affordable, accurate, and precise?
• Are the results applicable to my patient? Do my patients have a
similar mix of disease severity and competing conditions?
• Will the results change case management?
• Will the information gained be sufficient to change a clinical decision?
• Will patients be better off as a result of performing the test?
Bias
• Bias is a systematic error that can distort measurements and/or affect
investigations and their results
Bias can be reduced by:
• Randomization (a simple sketch follows below)
• Control group
• Blinding
• Use of objective outcome measures
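A minimal sketch of simple randomization into treatment and control arms, using hypothetical subject IDs (the fixed seed is only so this example's allocation is reproducible):

```python
import random

random.seed(42)  # fixed seed so the example allocation is reproducible

# Hypothetical subject identifiers
subjects = [f"S{i:03d}" for i in range(1, 21)]

# Shuffle, then split 1:1 into the two study arms
random.shuffle(subjects)
treatment_arm = subjects[:10]
control_arm = subjects[10:]

print("Treatment:", treatment_arm)
print("Control:  ", control_arm)
```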
Validity
Validity refers to how accurately a method measures what it is intended
to measure.
Internal validity
Internal validity is defined as the extent to which the observed results
represent the truth in the population we are studying
External validity
External validity refers to whether the study results apply to similar
patients in a different setting