Evaluation of Large-Scale Health Programs

Planning the Evaluation
Impact models
Types of inference and choice of design
Defining the indicators and obtaining the data
Carrying out the evaluation
Disseminating evaluation findings
Working in large-scale evaluations

Slide notes
  • The mission of public health is to "fulfill society's interest in assuring conditions in which people can be healthy." The three core public health functions are assessment, policy development, and assurance.
  • Such retrospective evaluations have important limitations: the resulting information is often incomplete, inconsistent, and difficult to verify. Baseline data are often unavailable, and even where they exist, they may be of poor quality or based on sample sizes that are too small to address the evaluation questions.
  • 1. Assess the technical soundness of implementation plans in light of local epidemiological and health services characteristics. 2. Investigate whether the quantity and quality of the program being provided are compatible with a potential impact. 3. Assess whether data on outputs or utilization suggest that an impact is likely. 4. Check whether adequate coverage has been reached. 5. Assess the impact on health. 6. Measure cost-effectiveness, if there is evidence of an impact.
  • There is no single "best" design for evaluations of large-scale programs. Different types of decisions require different degrees of certainty: whereas some decisions require randomized trials, others may be adequately supported by observational studies.
  • Dissemination activities should be planned and carried out with several audiences in mind.

Transcript

  • 1. EVALUATION OF LARGE SCALE HEALTH PROGRAMS By: Adam F. Izzeldin; BPEH, MPH, PhD candidate, Department of International Health, TMDU. Source: Cesar G. Victora et al., "Evaluation of Large-Scale Health Programs," in Michael H. Merson, Robert E. Black, Anne J. Mills (eds.), Global Health: Diseases, Programs, Systems and Policies, 2011.
  • 2. Contents: Planning the Evaluation; Impact models; Types of inference and choice of design; Defining the indicators and obtaining the data; Carrying out the evaluation; Disseminating evaluation findings; Working in large-scale evaluations.
  • 3. Why Do We Need Large-Scale Evaluation? • In spite of large investments aimed at improving health outcomes in low- and middle-income countries, few programs have been properly evaluated ("Evaluation," 2011; Evaluation Gap Working Group, 2006; Oxman et al., 2010). • Each year billions of dollars are spent on thousands of programs to improve health, education, and other social sector outcomes in the developing world, but very few programs benefit from studies that could determine whether or not they actually made a difference (Evaluation Gap Working Group, 2006).
  • 4. Types of evaluations • External evaluation: independent; carried out by researchers not involved in implementation; funded by a third party • Internal evaluation: dependent; carried out by implementing institutions; funded by the implementers themselves • Two categories of evaluation: formative and summative.
  • 5. Examples for large scale evaluations • The Multi-Country IMCI Evaluation • Accelerated Child Survival and Development Initiative • Tanzanian National Voucher Scheme for Insecticide-Treated Nets
  • 6. 1. Planning the evaluation • Who Will Carry Out the Evaluation? • What Are the Evaluation Objectives? • When to Plan the Evaluation? • How Long Will the Evaluation Take? • Where Will the Evaluation Be Carried Out?
  • 7. Who Will Carry Out the Evaluation? • For internal evaluation: the implementing institutions themselves, sometimes with the help of external consultants for specific tasks. • For external evaluation: a national or international research institution is recruited (e.g., UNICEF commissioned the Bloomberg School of Public Health at Johns Hopkins University to conduct an independent retrospective evaluation of ACSD in Benin, Ghana, and Mali).
  • 8. What Are the Evaluation Objectives? • To review the available documentation on program objectives and goals, and to turn these items into evaluation objectives. • The ultimate objective of an evaluation is to influence decisions. • Funders are interested in impact outcomes: their decisions will be whether to continue funding or whether the strategy needs to be reformulated. • Local implementers are interested in quality of service and population coverage: their decisions are related to improving the program through specific actions.
  • 9. When to Plan the Evaluation? • Before implementation, at the time the program is being designed. • Early-onset, prospective evaluations allow collection of baseline data. • Early planning allows thorough, continuing documentation of program inputs and the contextual variables that may affect the program's impact. • Early planning may also enable the evaluation team to influence how the program is rolled out, thereby improving the validity of future comparisons. • A disadvantage of prospective evaluations is that program implementation may change over time for reasons that are outside the evaluators' control.
  • 10. How Long Will the Evaluation Take? • The answer depends on whether the evaluation is retrospective, prospective, or a mixture of both techniques. Fully prospective evaluations include sequential steps: 1. Collect baseline information 2. Wait until the large-scale program is fully implemented and reaches high population coverage 3. Allow time for a biological effect to take place in participating individuals 4. Wait until such effect can be measured in an endline survey 5. Clean the data and conduct the analysis
  • 11. Where Will the Evaluation Be Carried Out? • Many large-scale programs are implemented simultaneously in more than one country. • This decision is usually taken in agreement with the implementation agencies. • Selection criteria should include characteristics that are desirable in all participating countries (geography, health system strength, epidemiological profiles, etc.). • The rationale for selecting some countries and not others should be made explicit, because it will affect the external validity or generalizability of the evaluation findings.
  • 12. 2. Developing an Impact Model…. • The model helps to clarify the expectations of program planners and implementers • Contributes to the development of the evaluation proposal • Helps guide the analyses and attribution of the results • Can help track changes in assumptions as these evolve in response to early evaluation findings. • Helps implementers and evaluators stay honest about what was expected
  • 13. Common framework for evaluation: Inputs (staff, drugs, equipment, teaching materials) → Process (training, logistics, and management) → Outputs (health services, e.g., attendance rates or mosquito nets distributed) → Outcomes (e.g., the percentage of women giving birth at a healthcare facility or the proportion of children sleeping under an insecticide-treated mosquito net) → Impacts (reduced mortality or improved nutrition).
  • 14. The IMCI Impact Model: The introduction of IMCI works through three components (health system improvements, training of health workers, and family and community interventions), which lead to improved quality of care in health facilities, improved household compliance/care, improved care-seeking & utilization, and improved preventive practices; these in turn increase coverage of curative & preventive interventions, ultimately improving health/nutrition and reducing mortality.
  • 15. Development of an Impact Model. Steps in the development of an impact model: (1) Learn about the program: read documents; interview planners and implementers; carry out field visits; use special techniques as needed (cards, sorting exercises). (2) Develop drafts of the model: focus on intentions and assumptions; document responses from implementers; record iterations and changes as the model develops. (3) Quantify and check assumptions: review existing evidence and literature; identify early results from the evaluation (documentation: what was actually done? outcomes: are the assumptions confirmed?). (4) Use and evaluate the model: develop an evaluation design, testing each assumption if possible; plan for analysis, including contextual factors; analyze; interpret results with participation by implementers.
  • 16. A Stepwise Approach to Impact Evaluations: 1. Policies and results-based planning: Are the interventions and plans for delivery technically sound and appropriate for the epidemiological and health system context? 2. Provision: Are adequate services being provided at health facility/community levels? 3. Utilization: Are these services being used by the population? 4. Effective coverage: Have adequate levels of effective coverage been reached in the population? 5. Impact: Is there an impact on health and nutrition? 6. Cost-effectiveness: Is the program cost-effective?
  • 17. 3. Types of inference and choice of design • Adequacy Evaluations (converge) • Plausibility Evaluations (comparison group) • Before-and-After Study in Program and Comparison Areas • The Ecological Dose-Response Design • Randomized (Probability) Evaluation Designs • Stepped Wedge Design (Figure 16-3: Simplified Conceptual Framework of Factors Affecting Health, from the Standpoint of Evaluation)
  • 18. 4. Defining the indicators and obtaining the data • Documentation of Program Implementation • Measuring Coverage (household surveys; see the illustrative sketch below) • Measuring or Modeling Impact • Describing Contextual Factors • Measuring Costs (unit cost, operations, utilization) • Patient-Level Costs (severity of illness) • Facility-Level Characteristics (quality, scope of service) • Contextual Variables (transport, supervision, patients' ability to access care) • Data Collection Methods (cost) and Allocation
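As a purely illustrative aside on measuring coverage from household surveys, the minimal sketch below summarizes one coverage indicator with a point estimate and an approximate 95% confidence interval. The numbers, function name, and the simple-random-sampling assumption are hypothetical and not from the source; real coverage surveys are usually cluster samples and need design-adjusted standard errors.

```python
import math

def coverage_estimate(covered: int, sampled: int, z: float = 1.96):
    """Coverage point estimate with a normal-approximation confidence interval.

    Assumes simple random sampling; cluster surveys need design effects.
    """
    p = covered / sampled
    se = math.sqrt(p * (1 - p) / sampled)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: children under five reported to have slept under an
# insecticide-treated net (ITN) the night before the survey.
p, lo, hi = coverage_estimate(covered=312, sampled=600)
print(f"ITN coverage: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```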
  • 19. 5. Carrying Out the Evaluation • Starting the evaluation clock • Feedback to implementers and midstream corrections • Linking the independent evaluation to routine monitoring and evaluation • Data Analyses • Analyzing Costs and Cost-Effectiveness (process, intermediate, and outcome indicators) • Interpretation and Attribution
  • 20. Types of process, intermediate, and outcome indicators and the data needed (type of indicator | indicator | what is measured | additional data needed):
    Process | cost-effectiveness | expected costs and value for money | budget projections, work plans, coverage
    Process | total cost per person treated | services provided | utilization rates
    Process | total cost per preventive item | services provided | utilization rates
    Process | cost per capita | services provided, program effort | population
    Intermediate | cost of quality improvement | treatment leading to health gains | utilization rates adjusted by quality
    Outcome | cost per death averted | mortality reduction | mortality rates
    Outcome | cost per life year gained | mortality reduction | mortality rates and age at death (and life expectancy)
    (The two outcome-level ratios are illustrated in the sketch below.)
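For concreteness, here is a minimal sketch of how the two outcome-level ratios in the table are commonly computed; the symbols are illustrative notation, not from the source:

$$\text{cost per death averted} = \frac{C}{D}, \qquad \text{cost per life year gained} = \frac{C}{\sum_{i=1}^{D} e_i}$$

where $C$ is the total program cost, $D$ is the estimated number of deaths averted (derived from the measured mortality reduction and mortality rates), and $e_i$ is the remaining life expectancy at the age of the $i$-th averted death.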
  • 21. Joint interpretation of findings from adequacy and plausibility analyses. Rows: how did program areas fare relative to nonprogram areas (plausibility assessment)? Columns: how did impact indicators change over time in the program areas (adequacy assessment)?
    Better / Improved: both areas improved, but the program led to faster improvement. Better / No change: program provided a safety net. Better / Worsened: program provided a partial safety net.
    Same / Improved: both areas improved; no evidence of an additional program impact. Same / No change: no change in either area; no evidence of program impact. Same / Worsened: indicators worsened in both areas; no evidence of a safety net.
    Worse / Improved: both areas improved; presence of the program may have precluded the deployment of a more effective strategy. Worse / No change: program precluded progress; presence of the program may have hindered the deployment of more effective strategies. Worse / Worsened: program was detrimental; presence of the program may have hindered the deployment of more effective strategies.
    (A minimal lookup-table sketch of this matrix follows below.)
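Purely as an illustration, the matrix above can be read mechanically as a lookup keyed by the plausibility and adequacy findings; the function and key names below are hypothetical, with the cell wording taken from the slide:

```python
# Joint interpretation of plausibility (program areas relative to comparison
# areas) and adequacy (change over time in program areas) findings.
INTERPRETATION = {
    ("better", "improved"): "Both areas improved, but the program led to faster improvement",
    ("better", "no change"): "Program provided a safety net",
    ("better", "worsened"): "Program provided a partial safety net",
    ("same", "improved"): "Both areas improved; no evidence of an additional program impact",
    ("same", "no change"): "No change in either area; no evidence of program impact",
    ("same", "worsened"): "Indicators worsened in both areas; no evidence of a safety net",
    ("worse", "improved"): "Both areas improved; the program may have precluded a more effective strategy",
    ("worse", "no change"): "Program precluded progress and may have hindered a more effective strategy",
    ("worse", "worsened"): "Program was detrimental and may have hindered a more effective strategy",
}

def interpret(plausibility: str, adequacy: str) -> str:
    """Return the joint interpretation for one cell of the matrix."""
    return INTERPRETATION[(plausibility, adequacy)]

print(interpret("better", "no change"))  # -> "Program provided a safety net"
```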
  • 22. 6. Disseminating Evaluation Findings and Promoting Their Uptake • Policy makers and program implementers at country level. • The global scientific and public health communities.
  • 23. 7.Working in Large-Scale Evaluations • First, good evaluations require effective communications • Second, good evaluations require a broad range of skills and techniques, as well as an interdisciplinary approach. • Third, good evaluations require patience and flexibility.
  • 24. 8. Conclusion • Conducting large-scale evaluations is not for the fainthearted. This chapter has focused on the technical aspects of designing and conducting an evaluation, mentioning only in passing some of the political and personal challenges involved.
  • 25. Take-home message • Ideal designs (based on textbooks like this one) must often be modified to reflect what is possible and affordable in specific country contexts.
  • 26. Thank you for listening