Conclusion (Module 7)

The goal of this course is to provide policy analysts and project managers with the tools for evaluating the impact of a project, program or policy. This course provides information on the methods that can be used to measure the impact of a project, program or policy on the well-being of individuals and households. The course addresses the ways in which the results of an impact evaluation may be put to use – such as, to improve the design of projects and programs, as an input into cost-benefit analysis, and as a basis for policy decisions.

Published in: Government & Nonprofit

Summary
WHAT CAN WE CONCLUDE?
SHAHID KHANDKER
INTERNATIONAL FOOD POLICY RESEARCH INSTITUTE (IFPRI)
What Can We Conclude?
IE seeks to identify program effects – effects caused by the program only, not by something else
The purpose of IE is to construct a counterfactual – what would have happened to participants had the program not existed; however, you cannot observe the same person with and without a program at the same time
So you construct a counterfactual, or comparison group, based on some assumptions
An RCT randomly assigns units to treated and non-treated groups so that the groups have similar traits, and the non-treated group serves as the counterfactual
In principle an RCT is desirable, but it has its own pitfalls and is not always applicable
Equivalent Group Comparison
(Y2 – Y1) is the program effect
Y0 is the income level for treated and non-treated groups before the program intervention
[Figure: income over time for program and control groups; Impact = (Y2 – Y1)]
What Can We Conclude?
The problem with non-experimental methods is resolving sample selection bias – the program is not randomly placed, and individuals are not randomly selected
Non-experimental methods always rely on statistical assumptions to resolve sample selection bias
PSM assumes that selection bias is based on observed characteristics only, not on unobserved characteristics
DD assumes that the observed changes over time for non-participants provide the counterfactual for participants, where repeated observations are available for both groups and unobserved heterogeneity is fixed over time
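The PSM idea above – pairing each participant with the non-participant most similar on observed characteristics – can be sketched as nearest-neighbor matching on a propensity score. This is a minimal illustration, not the slides' implementation; the function name and the (unit_id, score) data layout are assumptions for the example.

```python
def nearest_neighbor_match(treated, controls):
    """Match each treated unit to the control unit with the closest
    propensity score. Matching is on observables only, so bias from
    unobserved characteristics remains (the key PSM assumption).

    `treated` and `controls` are lists of (unit_id, propensity_score).
    Returns {treated_id: matched_control_id}.
    """
    matches = {}
    for t_id, t_score in treated:
        # Pick the control whose score is nearest to this treated unit's score
        best = min(controls, key=lambda c: abs(c[1] - t_score))
        matches[t_id] = best[0]
    return matches

# Hypothetical propensity scores for two participants and three non-participants
pairs = nearest_neighbor_match(
    treated=[("t1", 0.82), ("t2", 0.40)],
    controls=[("c1", 0.35), ("c2", 0.78), ("c3", 0.10)],
)
# pairs == {"t1": "c2", "t2": "c1"}
```

After matching, the program effect would be estimated by comparing outcomes within each matched pair; real applications also estimate the propensity score itself (e.g. by logit) and enforce common support.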
DD and the Missing Data Problem
Resolves the problem of the counterfactual by observing participants and non-participants in pre- and post-intervention periods
[Figure: income over time for program and control groups; DD = (Y4 – Y0) – (Y3 – Y1)]
Main assumption to identify the program effect: time-invariant unobserved heterogeneity
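The double-difference formula on this slide is simple enough to compute directly. A minimal sketch, using the slide's notation (participants move from Y0 to Y4, non-participants from Y1 to Y3); the numbers in the usage example are made up:

```python
def diff_in_diff(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Double difference: the change for participants minus the change
    for non-participants. Identifies the program effect only if
    unobserved heterogeneity is fixed over time (parallel trends)."""
    return (y_treat_post - y_treat_pre) - (y_ctrl_post - y_ctrl_pre)

# DD = (Y4 - Y0) - (Y3 - Y1) with illustrative income levels
impact = diff_in_diff(y_treat_pre=100, y_treat_post=180,
                      y_ctrl_pre=90, y_ctrl_post=120)
# impact == (180 - 100) - (120 - 90) == 50
```

In practice the same estimate comes from a regression of the outcome on treatment, time, and their interaction, which also allows standard errors and covariates.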
What Can We Conclude?
The IV method identifies exogenous variation in treatment by using a third variable that affects only the treatment, not the outcomes of interest; it allows for both time-invariant and time-varying heterogeneity causing sample selection bias; however, finding a valid instrument is always a challenge
RD and pipeline methods are extensions of the IV method
RD exploits exogenous eligibility rules to compare participants and non-participants around the cut-off point
The pipeline method constructs a comparison group from subjects who are eligible for the program but have not yet received it
In reality, no single evaluation method is perfect, so verifying results using alternative methods is wise
An ex ante design of an ex post evaluation is always good practice
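For a binary instrument, the IV logic on this slide reduces to the Wald estimator: the instrument's effect on the outcome divided by its effect on treatment take-up. A minimal sketch with made-up data (the function name is an assumption; real work would use a 2SLS routine from an econometrics library):

```python
def wald_iv(outcome, treatment, instrument):
    """Wald/IV estimator for a binary instrument.

    Valid only if the instrument shifts treatment take-up but affects
    the outcome solely through treatment (the exclusion restriction).
    """
    def mean(xs):
        return sum(xs) / len(xs)

    # Difference in mean outcome between instrument groups (reduced form)
    y1 = mean([y for y, z in zip(outcome, instrument) if z == 1])
    y0 = mean([y for y, z in zip(outcome, instrument) if z == 0])
    # Difference in mean treatment take-up between instrument groups (first stage)
    d1 = mean([d for d, z in zip(treatment, instrument) if z == 1])
    d0 = mean([d for d, z in zip(treatment, instrument) if z == 0])
    return (y1 - y0) / (d1 - d0)

# Illustrative data: instrument perfectly predicts take-up here,
# so the IV estimate equals the simple outcome difference, 11 - 6 = 5
effect = wald_iv(outcome=[10, 12, 5, 7],
                 treatment=[1, 1, 0, 0],
                 instrument=[1, 1, 0, 0])
```

With imperfect take-up the denominator shrinks below one and the estimate scales up accordingly, which is why a weak first stage makes IV estimates unstable.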
All the Solutions for Constructing the Counterfactual Involve…
Knowing how the data are generated
Randomization
o Give all units an equal chance of being in the control or treatment group
o Guarantees that all factors/characteristics will be, on average, equal between groups
o Only difference is the intervention
If not, need transparent & observable criteria for who is offered the program
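The randomization step described above can be sketched in a few lines: shuffle the eligible units and split them, so every unit has the same chance of treatment. A minimal illustration with a hypothetical helper name; real trials would also stratify and document the seed:

```python
import random

def randomize(units, seed=0):
    """Randomly split units into treatment and control groups of equal
    size. Equal assignment probability is what balances observed and
    unobserved characteristics between groups on average."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = randomize(range(100))
```

Because assignment depends only on the draw, the control group's later outcomes stand in for the counterfactual, and the impact estimate is simply the difference in group means.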
Implementation Issues
Policy relevance
Political economy
Finding a good control group
o Retrospective versus prospective designs
o Making the design compatible with operations
o Ethical issues
Relationship to “results” monitoring
The Policy Context
IE needs to answer policy questions:
o What policy questions need to be answered?
o What outcomes answer those questions?
o What indicators measure those outcomes?
o How much of a change in the outcomes would determine success?
Example: student performance-based stipend program
o Scale up the pilot?
o Criterion: need at least a 10% increase in test scores with no change in unit cost
Political Economy
Is IE needed for some policy purpose?
Is an ex ante design built into implementing institutions?
Stakeholders: collaboration between stakeholders & the evaluation team
o How will negative results affect program managers, policy makers & stakeholders?
o Job performance vs. knowledge generation
o Reward for using IE to change/close weak programs
Two Forms of Comparator Groups
Retrospective (or ex post):
o Try to evaluate after the program is implemented
o Statistically model how providers of programs & individuals made participation choices
o Cannot alter treatment or control group
Prospective (or ex ante):
o Can introduce some reasons for participation that are uncorrelated with outcomes
o Complements operational objectives
o Easier and more robust
Easier in Prospective Design…
Use the rollout to generate good control groups
Most interventions cannot immediately deliver benefits to all those eligible
o Budgetary limitations
o Logistical limitations
Typically phased in
o Those who go first are potential treatments
o Those who go later are potential controls
Retrospective Designs
Hard to find good control groups
o Must live with arbitrary allocation rules
o Many times rules are not transparent
Administrative data must:
o Be good enough to confirm the program was implemented as described
o Identify beneficiaries; otherwise surveys will be costly
Unless originally randomized, need a pre-intervention baseline survey:
o Both controls and treatments
Retrospective Evaluation
Need to control for differences between control & treatment groups
Without a baseline, it is difficult to use a quasi-experimental design
Sometimes can do it with a baseline if:
o Know why beneficiaries are beneficiaries
o Observable criteria exist for the program rollout
IE and Monitoring Systems
Projects/programs regularly collect data for management purposes
Typical content
o Lists of beneficiaries
o Distribution of benefits
o Expenditures
o Outcomes
o Ongoing process evaluation
Key for impact evaluation
Monitoring Systems Determine
Who is a beneficiary
When they started
What benefits were actually delivered
Compliance with any conditionality
Necessary conditions for a program to have an impact:
o Benefits need to get to targeted beneficiaries
o Program implemented as designed
Use Monitoring Data for IE
Program monitoring data are usually collected only in areas where the program is active
If collection starts in control areas at the same time as in treatment areas, there is a baseline for both
Add a couple of outcome indicators
Very cost-effective, as there is little need for additional special surveys
Most IEs use only monitoring data
Overall Messages
Impact evaluation is useful for
o Validating program design
o Adjusting program structure
o Communicating to policymakers & civil society
A good evaluation design requires estimating the counterfactual
o What would have happened to beneficiaries if they had not received the program?
o Need to know all the reasons why beneficiaries got the program & others did not
IE Design Messages
• Address policy questions
o Institutional use of resources
• Stakeholder buy-in
• Easiest to use prospective designs
o Take advantage of phased rollout
o Transparency & accountability: use quantitative and public criteria
o Equity: give all eligible an equal chance of going first
• Good monitoring systems & administrative data can improve IE design and lower costs
