Impact Evaluation, Policy Making and Academic Research: Some Reflections and Examples - Professor Orazio Attanasio, University College London
1. Impact evaluation, policy making and academic research:
Some reflections and examples
Orazio P. Attanasio
EDePo @ IFS & UCL
o.attanasio@ucl.ac.uk
Toward an Evidence Based Development Policy
October 11th 2010
2. 1 Impact evaluations & Academia.
Some history.
Recent developments.
Some criticism of small and narrow.
2 The political economy of impact evaluation.
The policy process.
From policy to academia and back.
3 Large and structural.
4 An Example: CCT evaluations
Orazio P. Attanasio (EDePo @ IFS) Impact evaluation October 11th 2010 2 / 14
3. Impact evaluations & Academia. Some history.
Some history.
Impact evaluations have played a big role in applied economics for a long time.
From a scientific point of view, policies introduce potentially exogenous variation in the environment that allows one to estimate structural parameters.
This is particularly true for randomized experiments, where the variation in incentives is controlled.
Negative income tax experiments in the 1960s and 1970s (SIME/DIME):
Urban areas in New Jersey and Pennsylvania from 1968-1972 (1,375 families).
Rural areas in Iowa and North Carolina from 1969-1973 (809 families).
Gary, Indiana from 1971-1974 (1,800 families).
Seattle and Denver, from 1971-1982 (4,800 families).
These experiments were interesting because they combined policy interest (welfare policies) with academic interest (estimates of labour supply elasticities).
7. Impact evaluations & Academia. Recent developments.
Recent developments.
More recently, Randomized Control Trials (RCTs) have received renewed interest.
The proponents of RCTs have been extremely influential and have changed the way development policy is approached in many contexts.
Overall the development has been positive in many dimensions:
Renewed interest.
Accountability and transparency.
Wealth of new evidence and data.
10. Impact evaluations & Academia. Some criticism of small and narrow.
Some criticism of RCTs.
Often the new RCTs are focused on a small policy experiment.
They allow one to estimate a very narrowly defined parameter.
... and sometimes researchers have been willing to tailor the parameter of interest to the feasibility of a randomization.
How useful is this approach?
External validity.
Effectiveness vs Efficacy.
Extrapolation, scalability and policy design: what are the mechanisms at play?
Sometimes (but not always) the proponents of RCTs have characterized their approach as 'theory-free'.
...but is that a virtue?
Identifying the mechanisms behind certain impacts is crucial to policy design.
16. Impact evaluations & Academia. Some criticism of small and narrow.
Some criticism of RCTs.
But the new wave of RCTs has been an extremely positive phenomenon.
The criticism of models identified exclusively by strong functional form assumptions is healthy and important.
RCTs introduce variation that is controlled by the researcher/evaluator and therefore exogenous by construction.
In addition to estimating the impact of an intervention in a credible fashion, data from RCTs allow the estimation of more credible and less restrictive models.
The field is extremely vibrant and evolving.
Before sharing some thoughts on future directions, a small detour on the political economy of evaluation.
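The point about controlled, exogenous-by-construction variation can be illustrated with a minimal sketch (hypothetical simulated data, not from any study mentioned in this talk): under randomization, treatment assignment is independent of unobserved characteristics, so a simple difference in means recovers the average treatment effect even when an important confounder is unmeasured.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: outcomes depend on an
# unobserved trait ("ability") that the evaluator never measures.
ability = rng.normal(size=n)
treated = rng.integers(0, 2, size=n).astype(bool)  # randomized assignment
true_effect = 2.0
outcome = 1.0 + 0.5 * ability + true_effect * treated + rng.normal(size=n)

# Because assignment is randomized, it is independent of ability,
# so the raw difference in means is an unbiased estimate of the effect.
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(round(ate_hat, 2))
```

If assignment instead depended on ability (self-selection), the same difference in means would be biased; that is the identification problem randomization removes.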
22. The political economy of impact evaluation. The policy process.
The political economy of evaluations.
Evaluations are not politically profitable:
You do not win an election with evaluations.
The horizon may be very different.
Control groups can be politically very sensitive.
The relationship between evaluators and policy makers can be delicate and difficult.
For this reason many 'large' evaluations were started from the outside (international financial institutions etc.).
26. The political economy of impact evaluation. From policy to academia and back.
From policy to academia and back.
For these reasons academia (and international institutions) might have an important role to play:
It may be easier to maintain independence.
Incentives might be different for local consultants.
Interaction with local policy makers and researchers is crucial.
Given the political difficulties of evaluations, it is advisable to run evaluations early, when policies are at the design stage:
When there is more flexibility on design.
When the program does not have an established constituency.
When limited resources can be used as an argument to justify experimentation.
In this context the role of an institution such as 3ie can be crucial.
30. Large and structural.
An agenda for impact evaluations in developing countries.
More emphasis on the identification of the mechanisms behind impacts.
More emphasis on the distinction between efficacy and effectiveness, and on scalability.
More emphasis on large and ambitious programs.
Data and measurement are important:
It is essential not to limit surveys to the measurement of the outcomes of interest.
To identify the determinants of behaviour, comprehensive surveys are needed.
There is much work to be done on measurement issues.
32. Large and structural.
An agenda for impact evaluations in developing countries.
Many exciting questions need answers:
The determinants of human capital accumulation:
The production function.
The information people act upon.
Incentives and the interaction between demand and supply.
Imperfections in credit and insurance markets.
Institution building and social capital.
33. An Example: CCT evaluations
The evaluation of Conditional Cash Transfers.
CCTs, starting with PROGRESA in Mexico, have been extensively evaluated.
The original PROGRESA evaluation constitutes a sterling example of an evaluation of a very large program based on an RCT.
That evaluation has generated a very large literature.
The evaluation is also illustrative of many interesting issues:
Given the impacts, how does one change the program?
Are impacts homogeneous?
Can impacts estimated in one context be generalized?
What are the unintended consequences (positive and negative) and spillovers?
Can CCTs be used as a platform for new additional interventions?
40. An Example: CCT evaluations
How does one change the program?
To answer this question it is necessary to estimate a structural model of individual behaviour.
This was done by Todd and Wolpin (2006) and Attanasio, Meghir and Santiago (2005).
The availability of the evaluation data allows the estimation of more flexible models.
Having estimated a structural model, one can simulate changes to the program and predict their impacts.
These estimates and simulations have inspired recent innovations to the program in Mexico:
Oportunidades is piloting new versions of the grant structure in Puebla and Ecatepec.
The new grant structure eliminates the primary school subsidies and increases the secondary school ones.
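The simulate-and-predict step can be sketched with a deliberately stylized schooling-choice model (this is a toy illustration under invented parameters, NOT the Todd-Wolpin or Attanasio-Meghir-Santiago models): once preferences and constraints are parameterized, one can evaluate enrolment under the existing grant schedule and under a counterfactual one, such as shifting subsidies from primary to secondary school.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical primitives: a child enrols if the school grant plus the
# perceived return to schooling exceeds the forgone child wage.
wage = rng.uniform(0.0, 3.0, size=n)      # opportunity cost of attending
ret = rng.normal(1.0, 0.5, size=n)        # heterogeneous perceived return
is_secondary = rng.random(n) < 0.4        # share of secondary-age children

def enrolment_rate(primary_grant, secondary_grant):
    """Predicted enrolment share under a given grant schedule."""
    grant = np.where(is_secondary, secondary_grant, primary_grant)
    return (grant + ret > wage).mean()

baseline = enrolment_rate(primary_grant=0.5, secondary_grant=0.5)
# Counterfactual in the spirit of the pilot described above:
# eliminate the primary subsidy, raise the secondary one.
reform = enrolment_rate(primary_grant=0.0, secondary_grant=1.0)
print(baseline, reform)
```

The estimated structural models play the role of `enrolment_rate` here: they map any proposed grant schedule into predicted behaviour, which is what lets one compare program variants before piloting them.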
45. An Example: CCT evaluations
Impact heterogeneity
There is much evidence of impact heterogeneity:
rural vs urban.
different states.
different levels of infrastructure.
What is the effect in urban areas?
What are the spillover effects?
What is the interaction with quality?
47. An Example: CCT evaluations
Other challenges
To model behaviour we need information on its determinants:
Information and expectations.
Beliefs.
Access to (quality) services and markets.
Much work is needed on measurement:
The example of cognitive development in the early years.
Can CCTs be used for interventions on nutrition or cognitive development?
Food prices?