EVALUATION OF LARGE-SCALE HEALTH PROGRAMS
By: Adam F. Izzeldin; BPEH, MPH, PhD candidate.
Department of International Health, TMDU
Based on: Cesar G. Victora et al., "Evaluation of Large-Scale Health Programs," in Michael H. Merson, Robert E. Black, and Anne J. Mills (eds.), Global Health: Diseases, Programs, Systems and Policies, 2011.
Contents
Planning the Evaluation
Impact models
Types of inference and choice of design
Defining the indicators and obtaining the data
Carrying out the evaluation
Disseminating evaluation findings
Working in large-scale evaluations
Why Do We Need Large-Scale Evaluation?
• In spite of large investments aimed at improving health outcomes in low- and middle-income countries, few programs have been properly evaluated ("Evaluation," 2011; Evaluation Gap Working Group, 2006; Oxman et al., 2010).
• Each year billions of dollars are spent on thousands of programs to improve health, education, and other social-sector outcomes in the developing world, but very few programs benefit from studies that could determine whether or not they actually made a difference (Evaluation Gap Working Group, 2006).
Types of evaluations
• External evaluation:
  – Independent
  – Carried out by researchers not involved in implementation
  – Funded by a third party
• Internal evaluation:
  – Dependent
  – Carried out by the implementing institutions
  – Funded by the implementers themselves
• Evaluations also fall into two categories: formative and summative.
Examples of large-scale evaluations
• The Multi-Country IMCI Evaluation
• The Accelerated Child Survival and Development (ACSD) Initiative
• The Tanzanian National Voucher Scheme for Insecticide-Treated Nets
1. Planning the evaluation
• Who Will Carry Out the Evaluation?
• What Are the Evaluation Objectives?
• When to Plan the Evaluation?
• How Long Will the Evaluation Take?
• Where Will the Evaluation Be Carried Out?
Who Will Carry Out the Evaluation?
• For internal evaluation: the implementing institutions themselves, sometimes with the help of external consultants for specific tasks.
• For external evaluation: a national or international research institution is recruited (for example, UNICEF commissioned the Bloomberg School of Public Health at Johns Hopkins University to conduct an independent retrospective evaluation of ACSD in Benin, Ghana, and Mali).
What Are the Evaluation Objectives?
• To review the available documentation on program objectives and goals, and to turn these items into evaluation objectives.
• The ultimate objective of an evaluation is to influence decisions.
• Funders are interested in impact outcomes: their decisions concern whether to continue funding or whether the strategy needs to be reformulated.
• Local implementers are interested in quality of service and population coverage: their decisions relate to improving the program through specific actions.
When to Plan the Evaluation?
• Before implementation, at the time the program is being designed.
• Early-onset, prospective evaluations allow collection of baseline data.
• Early planning allows thorough, continuing documentation of program inputs and of the contextual variables that may affect the program's impact.
• Early planning may enable the evaluation team to influence how the program is rolled out, thereby improving the validity of future comparisons.
• A disadvantage of prospective evaluations is that program implementation may change over time for reasons outside the evaluators' control.
How Long Will the Evaluation Take?
• The answer depends on whether the evaluation is retrospective, prospective, or a mixture of both approaches.
• Fully prospective evaluations include sequential steps (a rough timeline sketch follows this list):
  1. Collect baseline information
  2. Wait until the large-scale program is fully implemented and reaches high population coverage
  3. Allow time for a biological effect to take place in participating individuals
  4. Wait until such an effect can be measured in an endline survey
  5. Clean the data and conduct the analysis
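A rough, purely illustrative timeline (all durations are invented, not taken from the chapter) shows why summing these steps often pushes a fully prospective evaluation to several years:

    # Hypothetical durations, in months, for the sequential steps listed above.
    # All numbers are invented for illustration only.
    steps = {
        "collect baseline information": 6,
        "program scale-up to high coverage": 24,
        "lag for a biological effect": 12,
        "endline survey": 6,
        "data cleaning and analysis": 6,
    }

    total_months = sum(steps.values())
    print(f"Illustrative total: {total_months} months (about {total_months / 12:.1f} years)")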
Where Will the Evaluation Be Carried Out?
• Many large-scale programs are implemented simultaneously in more than one country.
• The decision is usually taken in agreement with the implementing agencies.
• Selection criteria should cover characteristics that matter in all participating countries (geography, health system strength, epidemiological profiles, etc.).
• The rationale for selecting some countries and not others should be made explicit, because it will affect the external validity (generalizability) of the evaluation findings.
2. Developing an Impact Model
• The model helps to clarify the expectations of program planners and implementers.
• It contributes to the development of the evaluation proposal.
• It helps guide the analyses and the attribution of results.
• It can help track changes in assumptions as these evolve in response to early evaluation findings.
• It helps implementers and evaluators stay honest about what was expected.
Common framework for evaluating impacts (diagram; a small organizing sketch follows this list):
• Inputs: staff, drugs, equipment, teaching materials
• Process: training, logistics, and management
• Outputs: health services, such as attendance rates or mosquito nets provided
• Outcomes: the percentage of women giving birth at a healthcare facility, or the proportion of children sleeping under an insecticide-treated mosquito net
• Impacts: reduced mortality or improved nutrition
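As a minimal sketch of how this inputs-process-outputs-outcomes-impact chain can be used to organize an evaluation plan, the snippet below groups example indicators by stage; the stage names come from the framework above, while the specific indicators are illustrative assumptions rather than a list from the chapter:

    # Hypothetical example: organizing evaluation indicators by framework stage.
    # The stage names follow the inputs-process-outputs-outcomes-impact chain above;
    # the specific indicators are illustrative only.
    evaluation_framework = {
        "inputs":   ["staff trained and deployed", "drugs and equipment in stock"],
        "process":  ["training sessions held", "supervision visits completed"],
        "outputs":  ["facility attendance rate", "insecticide-treated nets distributed"],
        "outcomes": ["% of births in a health facility", "% of children sleeping under an ITN"],
        "impact":   ["under-five mortality rate", "prevalence of stunting"],
    }

    for stage, indicators in evaluation_framework.items():
        print(f"{stage}: {', '.join(indicators)}")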
The IMCI Impact Model (diagram)
• Introduction of IMCI works through three components: training of health workers, health system improvements, and family and community interventions.
• These lead to improved quality of care in health facilities, improved careseeking and utilization, improved household compliance and care, and improved preventive practices.
• Together, these increase coverage of curative and preventive interventions, resulting in improved health and nutrition and reduced mortality.
Development of an Impact Model
Steps in the development of an impact model:
1. Learn about the program: read documents; interview planners and implementers; carry out field visits; use special techniques as needed (e.g., card-sorting exercises).
2. Develop drafts of the model: focus on intentions and assumptions; document responses from implementers; record iterations and changes as the model develops.
3. Quantify and check assumptions: review existing evidence and literature; identify early results from the evaluation (documentation: what was actually done? outcomes: are the assumptions confirmed?).
4. Use and evaluate the model: develop an evaluation design, testing each assumption if possible; plan for analysis, including contextual factors; analyze; interpret results with participation by implementers.
A Stepwise Approach to Impact Evaluations
1. Policies and results-based planning: Are the interventions and plans for delivery technically sound and appropriate for the epidemiological and health system context?
2. Provision: Are adequate services being provided at health facility and community levels?
3. Utilization: Are these services being used by the population?
4. Effective coverage: Have adequate levels of effective coverage been reached in the population?
5. Impact: Is there an impact on health and nutrition?
6. Cost-effectiveness: Is the program cost-effective?
3. Types of inference and choice of design
• Adequacy evaluations (converge)
• Plausibility evaluations (comparison group)
• Before-and-after study in program and comparison areas
• The ecological dose-response design
• Randomized (probability) evaluation designs
• Stepped wedge design (illustrated in the sketch below)
[Figure 16-3: Simplified conceptual framework of factors affecting health, from the standpoint of evaluation design]
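For the stepped wedge design listed above, clusters cross over from control to intervention at staggered, randomly ordered time points, so that all clusters receive the program by the final period. The sketch below uses hypothetical numbers of districts and periods to print one such rollout schedule:

    # Stepped wedge sketch: clusters are randomized to crossover "steps";
    # each step starts the intervention one period later than the previous one.
    # The numbers of clusters and periods are hypothetical.
    import random

    clusters = [f"district_{i}" for i in range(1, 7)]  # 6 clusters (invented)
    n_periods = 4                                      # 1 baseline period + 3 steps

    random.seed(1)
    random.shuffle(clusters)                           # random order of crossover
    step_of = {c: 1 + i // 2 for i, c in enumerate(clusters)}  # 2 clusters per step

    for cluster in sorted(step_of):
        schedule = ["control" if t < step_of[cluster] else "intervention"
                    for t in range(n_periods)]
        print(cluster, schedule)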
4. Defining the indicators and obtaining the data
• Documentation of program implementation
• Measuring coverage (household surveys) — see the sketch after this list
• Measuring or modeling impact
• Describing contextual factors
• Measuring costs (unit costs, operations, utilization)
• Patient-level costs (severity of illness)
• Facility-level characteristics (quality, scope of services)
• Contextual variables (transport, supervision, patients' ability to access care)
• Data collection methods (costs) and allocation methods
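To make the coverage measurement concrete, here is a minimal sketch (with made-up survey counts) that estimates a coverage indicator and a simple 95% confidence interval from household survey data; a real analysis would also account for cluster sampling and survey weights:

    # Hypothetical household survey: estimate coverage of an intervention,
    # e.g., children under five who slept under an insecticide-treated net.
    import math

    n_sampled = 1200   # children surveyed (made-up number)
    n_covered = 816    # children reported as covered (made-up number)

    coverage = n_covered / n_sampled
    se = math.sqrt(coverage * (1 - coverage) / n_sampled)  # simple binomial standard error
    ci_low, ci_high = coverage - 1.96 * se, coverage + 1.96 * se

    print(f"Coverage: {coverage:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")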
5. Carrying Out the Evaluation
• Starting the evaluation clock
• Feedback to implementers and midstream corrections
• Linking the independent evaluation to routine monitoring and evaluation
• Data analyses
• Analyzing costs and cost-effectiveness (process, intermediate, and outcome indicators)
• Interpretation and attribution
Types of process, intermediate, and outcome indicators and data needed
• Process — cost-effectiveness: expected costs and value for money; additional data: budget projections, work plans, coverage
• Process — total cost per person treated: services provided; additional data: utilization rates
• Process — total cost per preventive item: services provided; additional data: utilization rates
• Process — cost per capita: services provided, program effort; additional data: population
• Intermediate — cost of quality improvement: treatment leading to health gains; additional data: utilization rates adjusted by quality
• Outcome — cost per death averted: mortality reduction; additional data: mortality rates (see the worked sketch after this table)
• Outcome — cost per life-year gained: mortality reduction; additional data: mortality rates and age at death (life expectancy)
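To show how the outcome indicators in this table are computed, the sketch below derives cost per death averted and cost per life-year gained from program totals; all figures are invented for illustration and are not taken from the chapter:

    # Hypothetical cost-effectiveness calculation for the outcome indicators above.
    total_program_cost = 4_500_000.0  # USD spent by the program (invented figure)
    deaths_averted = 1_500            # estimated from the mortality reduction (invented)
    life_years_per_death = 30.0       # assumed life expectancy remaining at the ages of the deaths averted

    cost_per_death_averted = total_program_cost / deaths_averted
    cost_per_life_year_gained = total_program_cost / (deaths_averted * life_years_per_death)

    print(f"Cost per death averted:    ${cost_per_death_averted:,.0f}")
    print(f"Cost per life-year gained: ${cost_per_life_year_gained:,.0f}")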
Joint interpretation of findings from adequacy and plausibility analyses
(Rows: how did program areas fare relative to nonprogram areas? — plausibility assessment. Cells: how did impact indicators change over time in the program areas? — adequacy assessment. The lookup logic is sketched below.)
Program areas fared better:
• Improved: both areas improved, but the program led to faster improvement
• No change: program provided a safety net
• Worsened: program provided a partial safety net
Program areas fared the same:
• Improved: both areas improved; no evidence of an additional program impact
• No change: no change in either area; no evidence of program impact
• Worsened: indicators worsened in both areas; no evidence of a safety net
Program areas fared worse:
• Improved: both areas improved; the presence of the program may have precluded the deployment of a more effective strategy
• No change: program precluded progress; its presence may have hindered the deployment of more effective strategies
• Worsened: program was detrimental; its presence may have hindered the deployment of more effective strategies
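The matrix above can be read as a lookup from two findings — the adequacy trend in program areas and the plausibility comparison against nonprogram areas — to a joint interpretation. The sketch below encodes that logic, with the cell wording abbreviated from the table:

    # Joint interpretation of adequacy and plausibility findings, following the matrix above.
    INTERPRETATION = {
        ("improved", "better"):  "Both areas improved, but the program led to faster improvement",
        ("improved", "same"):    "Both areas improved; no evidence of an additional program impact",
        ("improved", "worse"):   "Both areas improved; the program may have precluded a more effective strategy",
        ("no change", "better"): "Program provided a safety net",
        ("no change", "same"):   "No change in either area; no evidence of program impact",
        ("no change", "worse"):  "Program precluded progress; it may have hindered a more effective strategy",
        ("worsened", "better"):  "Program provided a partial safety net",
        ("worsened", "same"):    "Indicators worsened in both areas; no evidence of a safety net",
        ("worsened", "worse"):   "Program was detrimental; it may have hindered a more effective strategy",
    }

    def interpret(adequacy_trend: str, relative_to_nonprogram: str) -> str:
        """Return the joint interpretation for an (adequacy, plausibility) pair of findings."""
        return INTERPRETATION[(adequacy_trend, relative_to_nonprogram)]

    print(interpret("no change", "better"))  # -> Program provided a safety net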
6. Disseminating Evaluation Findings and Promoting Their Uptake
• Policy makers and program implementers at country level
• The global scientific and public health communities
7. Working in Large-Scale Evaluations
• First, good evaluations require effective communications.
• Second, good evaluations require a broad range of skills and techniques, as well as an interdisciplinary approach.
• Third, good evaluations require patience and flexibility.
8. Conclusion
• Conducting large-scale evaluations is not for the fainthearted. This chapter has focused on the technical aspects of designing and conducting an evaluation, mentioning only in passing some of the political and personal challenges involved.
Take-home message
• Ideal designs (based on textbooks like this one) must often be modified to reflect what is possible and affordable in specific country contexts.
Thank you for
listening
Slide notes
• The mission of public health is to "fulfill society's interest in assuring conditions in which people can be healthy." The three core public health functions are assessment, policy development, and assurance.
• Such retrospective evaluations have important limitations: the resulting information is often incomplete, inconsistent, and difficult to verify. Baseline data are often unavailable, and even where they exist, they may be of poor quality or based on sample sizes too small to address the evaluation questions.
• The stepwise approach to impact evaluation:
  1. Assess the technical soundness of implementation plans in light of local epidemiological and health services characteristics.
  2. Investigate whether the quantity and quality of the program being provided are compatible with a potential impact.
  3. Assess whether data on outputs or utilization suggest that an impact is likely.
  4. Check whether adequate coverage has been reached.
  5. Assess the impact on health.
  6. If there is evidence of an impact, measure cost-effectiveness.
• There is no single "best" design for evaluations of large-scale programs. Different types of decisions require different degrees of certainty. Whereas some decisions require randomized trials, others may be taken adequately on the basis of observational studies.
• Dissemination activities should be planned and carried out with several audiences in mind: policy makers and program implementers at country level, and the global scientific and public health communities.